**arXiv:** 2302.02895
**Title:** Flexible and Probabilistic Topology Tracking with Partial Optimal Transport
**Authors:** Mingzhe Li, Xinyuan Yan, Lin Yan, Tom Needham, Bei Wang
**Published:** 2023-02-06T16:02:32Z
**Link:** http://arxiv.org/abs/2302.02895v1
# Flexible and Probabilistic Topology Tracking with Partial Optimal Transport

###### Abstract

In this paper, we present a flexible and probabilistic framework for tracking topological features in time-varying scalar fields using merge trees and partial optimal transport. Merge trees are topological descriptors that record the evolution of connected components in the sublevel sets of scalar fields. We present a new technique for modeling and comparing merge trees using tools from partial optimal transport. In particular, we model a merge tree as a measure network, that is, a network equipped with a probability distribution, and define a notion of distance on the space of merge trees inspired by partial optimal transport. Such a distance offers a new and flexible perspective for encoding intrinsic and extrinsic information in the comparative measures of merge trees. More importantly, it gives rise to a partial matching between topological features in time-varying data, thus enabling flexible topology tracking for scientific simulations. Furthermore, such partial matching may be interpreted as probabilistic coupling between features at adjacent time steps, which gives rise to probabilistic tracking graphs. We derive a stability result for our distance and provide numerous experiments indicating the efficacy of the distance in extracting meaningful feature tracks.

**Index terms:** Merge trees, optimal transport, topological data analysis, topology in visualization

## 1 Introduction

Feature extraction and tracking for time-varying data play an important role in scientific visualization. Over the past two decades, topology-based techniques have been successfully applied to study the evolution of features of interest, which is at the core of many scientific applications, including combustion [1], climatology [2], and astronomy [3].
In particular, topology-based techniques utilize topological descriptors such as persistence diagrams and merge trees for feature extraction and tracking in scalar field data; see [4, 5] for surveys. In this paper, we present a novel and flexible framework for tracking features (i.e., critical points) in time-varying scalar fields by combining merge trees with partial optimal transport. Merge trees are topological descriptors that record the evolution of connected components in the sublevel sets of scalar fields. Our contributions include:

* We present a new technique for modeling and comparing merge trees using tools from partial optimal transport. In particular, we model a merge tree as a measure network (that is, a network equipped with a probability distribution) and define a partial fused Gromov-Wasserstein distance between a pair of merge trees.
* We show that such a distance comes with good theoretical justifications and offers a new and flexible way to encode intrinsic and extrinsic information in the comparative measures of merge trees.
* Most importantly, we demonstrate via extensive experiments that such a distance gives rise to a partial matching between topological features in time-varying data, thus enabling flexible topology tracking for scientific simulations.
* Finally, the partial optimal transport provides a probabilistic coupling between features at adjacent time steps, which is then visualized by weighted tracks in probabilistic tracking graphs.

Furthermore, our implementation is open source, available at https://github.com/tdavislab/GWMT, including a video that demonstrates the probabilistic tracking graph.

**Overview.** After reviewing related work on partial optimal transport and topology-based feature tracking in Sect. 2, we review the technical background of merge trees, measure networks, and various distances used in (partial) optimal transport in Sect. 3. We then describe our feature-tracking framework in Sect. 4.
In particular, we introduce a new distance - the _partial fused Gromov-Wasserstein distance_ - in Sect. 4.1 and describe its theoretical properties (Sect. 5). We demonstrate the utility of our framework with extensive experiments and comparisons with the state of the art (Sect. 6). A direct consequence of our framework is that it enables richer representations of tracking graphs, referred to as _probabilistic tracking graphs_, for which we give a visual demonstration in Sect. 7.

## 2 Related Work

**Optimal transport and Gromov-Wasserstein distance.** This paper builds upon the _Gromov-Wasserstein (GW) distance_, a tool from optimal transport for deriving probabilistic registrations/correspondences between nodes of different networks. Specifically, we use the GW distance to study _merge trees_, which are topological descriptors of scalar fields; see Sect. 3 for formal definitions. The GW distance was introduced by Mémoli [6, 7] as a way to compare metric measure spaces (i.e., compact metric spaces endowed with probability measures), with a view toward shape analysis applications. More recently, this framework was extended to allow comparisons between networks endowed with kernel functions that are not necessarily metrics [8, 9]. As a flexible tool for registering complex datasets, the GW distance has become an important tool in machine learning applications, such as graph matching and partitioning [10, 11, 12], natural language processing [13], and the alignment of single-cell multi-omics data [14]. A number of recent works have focused specifically on applications of the GW distance to merge trees. Combining a Riemannian interpretation of the GW distance developed in [15, 16] with matrix sketching techniques, Li et al. [17] introduced a pipeline for finding structural representatives among a set of merge trees. In [18], GW techniques were combined with theory developed in [19] in order to give an estimate of an _interleaving distance_ on the space of merge trees.
Theoretical properties of a refined generalization of the GW distance between merge tree-like objects called _ultra dissimilarity spaces_ were studied in [20]. In this paper, we present a novel distance between merge trees, called the _partial fused Gromov-Wasserstein (pFGW) distance_, which is built upon variants of the GW pipeline, including the _fused Gromov-Wasserstein distance_ [21] and _partial optimal transport_ [22].

**Topology-based feature tracking.** Topological techniques have been used for feature extraction and tracking in scalar fields [5] and vector fields [23, 24]. Topology has been used to track features in time-varying scalar fields by solving an explicit correspondence problem. A number of topological descriptors have been used for feature tracking, including persistence diagrams, merge trees, contour trees, Reeb graphs, extremum graphs, and Morse complexes; see [5, Section 7.1] for a survey. Recently, persistence diagrams and an extension of the Wasserstein metric have been used to perform topology tracking [25, 26]. A metric on the space of merge trees was recently introduced [27] based on the \(L_{2}\)-Wasserstein distance between extremum persistence diagrams. Yan et al. [28] performed geometry-aware comparisons of merge trees using labeled interleaving distances. Their framework uses a labeling step to find a correspondence between the critical points of two merge trees, and integrates geometric information of the data domain in the labeling process [28]. Instead, our distance computation utilizes information from the data domain within the distances themselves. Our pFGW distance applies to any task involving merge tree comparisons, but we focus in this paper on feature tracking in time-varying scalar fields using merge trees. Saikia et al. [29, 30] presented a strategy for topological feature tracking with merge trees called Global Feature Tracking (GFT).
Their strategy determines the similarity of subregions segmented by merge trees at adjacent time steps, based on the overlap size between two regions and the similarity between histograms of scalar values within each region. In GFT, the information of a critical point includes its subtree, whereas our work considers the relation between every pair of critical points in the merge tree. Furthermore, GFT uses the segmentation of scalar fields to compare the overlapping subtree regions, which can be memory-consuming. Recent works [25, 26, 27] have utilized persistence diagrams for feature tracking. They can be considered as solving an assignment problem using branch decompositions of merge trees. Such assignment problems are closely related to (partial) optimal transport [31]. In comparison, our approach is to solve an assignment problem (a) in a _probabilistic setting_, and (b) using more topological constraints encoded by entire merge trees. Another interesting feature of our approach is that we are able to derive a stability result (Theorem 2), which has so far not been established for some of the other methods (e.g., [27]) in the literature. Although this paper focuses on feature tracking in scalar fields, we briefly review feature tracking in vector fields, which also aims to associate features from one time step to the next and to detect topological events. Helman and Hesselink [32, 33] tracked critical points in vector fields over time, and Wischgoll et al. [34] tracked closed streamlines and detected bifurcations. Tricoche et al. [35, 36] provided critical point tracking using spacetime grids. Theisel and Seidel [37] introduced Feature Flow Fields (FFF), followed by stable [38] and combinatorial [39] variants. See [24, Section 4.1] for a survey.

## 3 Technical Background

We combine ingredients from diverse areas: topology in visualization, optimal transport, and measure theory.
For the technical background, we begin by reviewing the notion of a merge tree that arises from a scalar field in topology-based visualization (Sect. 3.1), and then we introduce concepts from optimal transport (Sect. 3.2). In particular, we review the notion of measure networks within the Gromov-Wasserstein (GW) framework of Chowdhury and Mémoli [9]. We then discuss the fused Gromov-Wasserstein (FGW) framework of Vayer et al. [40], which offers additional flexibility in modeling and comparing merge trees (Sect. 3.3).

### _Merge Trees_

Let \(f:\mathbb{M}\to\mathbb{R}\) be a scalar field defined on the domain of interest \(\mathbb{M}\), where \(\mathbb{M}\) can be a manifold or a subset of \(\mathbb{R}^{d}\). For our experiments, \(\mathbb{M}\subset\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\). Merge trees capture the connectivity among the _sublevel sets_ of \(f\), i.e., \(\mathbb{M}_{a}=f^{-1}(-\infty,a]\). Formally, two points \(x,y\in\mathbb{M}\) are considered to be _equivalent_, denoted by \(x\sim y\), if \(f(x)=f(y)=a\), and \(x\) and \(y\) belong to the same connected component of a sublevel set \(\mathbb{M}_{a}\). The _merge tree_, \(T(\mathbb{M},f)=\mathbb{M}/\sim\), is the quotient space obtained by gluing together points in \(\mathbb{M}\) that are equivalent under the relation \(\sim\); see Fig. 1 for an example. The construction of a merge tree for a given \(f:\mathbb{M}\to\mathbb{R}\) is described procedurally as follows: we sweep the function value \(a\) from \(-\infty\) to \(\infty\), and we create a new branch originating at a leaf node for each local minimum of \(f\). As \(a\) increases, such a branch is extended as its corresponding component in \(\mathbb{M}_{a}\) grows until it merges with another branch at a saddle point. Assuming \(\mathbb{M}\) is connected and \(f\) achieves a unique global maximum, all branches eventually merge into a single component, which corresponds to the root of the tree.
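The sweep described above can be sketched with a union-find structure. Below is a minimal, illustrative Python sketch (not the paper's implementation, and restricted to a scalar field sampled on a 1D path graph): vertices are processed by increasing function value, each local minimum starts a leaf, and a saddle is recorded whenever two sublevel-set components merge.

```python
def merge_tree_1d(f):
    """Return (leaves, saddles) as lists of vertex indices, in sweep order.

    f: list of distinct scalar values on the vertices of a path graph.
    """
    n = len(f)
    parent = list(range(n))  # union-find over domain vertices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: f[i])  # sweep a from -inf to +inf
    active = [False] * n
    leaves, saddles = [], []
    for v in order:
        # Components of the sublevel set adjacent to v on the path graph.
        comps = {find(u) for u in (v - 1, v + 1) if 0 <= u < n and active[u]}
        if not comps:
            leaves.append(v)       # v starts a new component: local minimum
        elif len(comps) == 2:
            saddles.append(v)      # two components merge at v: saddle
        for r in comps:
            parent[r] = v          # v becomes the component representative
        active[v] = True
    return leaves, saddles
```

For example, `merge_tree_1d([3, 1, 4, 0, 5])` reports leaves at indices 3 and 1 (the two local minima) and a saddle at index 2.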
For a given merge tree, leaves, internal nodes, and the root node represent the minima, merging saddles, and global maximum of \(f\), respectively. Fig. 1 displays a height function \(f:\mathbb{M}\subset\mathbb{R}^{2}\to\mathbb{R}\) in (a), together with its corresponding merge tree embedded in the graph of the scalar field, i.e., \(\{(x,f(x)):x\in\mathbb{M}\}\), in (b). Abstractly, a merge tree \(T\) is a rooted tree equipped with \(f\) restricted to its node set, \(f:V\rightarrow\mathbb{R}\), as shown in (c).

Fig. 1: An example of a merge tree from a height field \(f:\mathbb{M}\to\mathbb{R}\) defined on a 2D domain. From left to right: (a) 2D scalar field visualization with local minima in blue, saddles in white, and local maxima in red; (b) a merge tree embedded in the graph of the scalar field; and (c) an abstract (straight-line) visualization of a merge tree as a rooted tree equipped with the height function.

### _Measure Networks and GW Distances_

The Gromov-Wasserstein (GW) distance, which we will define below, was introduced by Mémoli as a way to compare metric measure spaces [6, 7]. Sturm [15] and Peyré et al. [8] made the observation that the GW distance gives a meaningful comparison between general square kernel matrices (i.e., not just distance matrices). This perspective was formalized mathematically by Chowdhury and Mémoli [9], who showed that a natural setting for the generalized GW distance is as a metric on the space of _measure networks_. This level of generality will be most appropriate for our purposes, so we give an exposition of the GW distance in this context, with a focus on the concrete setting of finite measure networks.

**Measure networks.** In its most general form, a _measure network_ is a triple \((X,\mu,\omega)\): \(X\) is a separable, completely metrizable space (i.e., a Polish space), \(\mu\) is a fully supported Borel probability measure on \(X\), and \(\omega:X\times X\rightarrow\mathbb{R}\) is a bounded, measurable function [9].
We adapt and simplify this formulation for our framework, which focuses specifically on measure networks arising from finite graphs. We then adapt this framework to a merge tree, considered to be a special case of a finite graph. A finite graph \(G\) may be represented as a _measure network_ using a triple \((V,p,W)\): \(V\) is the set of \(n\) nodes in the graph, \(p\) is a probability measure supported on the nodes of \(G\), and \(W\in\mathbb{R}^{|V|\times|V|}\) is a matrix that encodes relational information between the nodes. For example, \(W\) may be a weighted adjacency matrix [11], a graph Laplacian [16], or a matrix of graph distances [41]. Without prior knowledge about \(G\), \(p\) is typically taken to be uniform; that is, \(p(x)=1/n\) for each \(x\in V\). We represent \(p\) as a vector of size \(n\), \(p=\frac{1}{n}\mathbf{1}_{n}\), where \(\mathbf{1}_{n}=(1,1,\ldots,1)^{T}\in\mathbb{R}^{n}\). In the following sections, we slightly abuse notation and identify a graph \(G\) with a particular choice of measure network representation \((V,p,W)\).

**GW distance.** The key idea behind the GW distance is to find a _probabilistic matching_ between a pair of measure networks by searching the convex set of couplings of the probability measures defined on the networks. Let \(G_{1}=(V_{1},p_{1},W_{1})\) and \(G_{2}=(V_{2},p_{2},W_{2})\) be a pair of measure networks with \(n_{1}\) and \(n_{2}\) nodes, respectively. Let \([n]\) denote the set \(\{1,2,\ldots,n\}\) and suppose that \(V_{1}=\{x_{i}\}_{i\in[n_{1}]}\) and \(V_{2}=\{y_{j}\}_{j\in[n_{2}]}\). A _coupling_ between probability measures \(p_{1}\) and \(p_{2}\) is a joint probability measure on \(V_{1}\times V_{2}\) whose marginals agree with \(p_{1}\) and \(p_{2}\). That is, a coupling is represented as an \(n_{1}\times n_{2}\) non-negative matrix \(C\) such that \(C\mathbf{1}_{n_{2}}=p_{1}\) and \(C^{T}\mathbf{1}_{n_{1}}=p_{2}\).
The set of all such couplings is denoted as \(\mathcal{C}\), that is,
\[\mathcal{C}=\mathcal{C}(p_{1},p_{2})=\{C\in\mathbb{R}_{+}^{n_{1}\times n_{2}}\mid C\mathbf{1}_{n_{2}}=p_{1},C^{T}\mathbf{1}_{n_{1}}=p_{2}\}. \tag{1}\]
Following [9], the \(q\)_-th GW distance_ between two measure networks is defined as
\[d_{q}^{GW}(G_{1},G_{2})=\frac{1}{2}\min_{C\in\mathcal{C}}\left(\sum_{i,j,k,l}|W_{1}(i,k)-W_{2}(j,l)|^{q}C_{i,j}C_{k,l}\right)^{1/q}. \tag{2}\]
The term \(|W_{1}(i,k)-W_{2}(j,l)|\) is considered as the _distortion_ of matching the pair of nodes \((x_{i},x_{k})\) in \(G_{1}\) with \((y_{j},y_{l})\) in \(G_{2}\). The minimizers in Eq. (2) are referred to as _optimal couplings_.

### _Fused Gromov-Wasserstein Distances_

The fused Gromov-Wasserstein (FGW) distance is a hybrid between the Wasserstein distance from classical optimal transport and the GW distance discussed in Sect. 3.2. A measure network \(G=(V,p,W)\) may be equipped with additional information on its nodes, namely, the _node attributes_. That is, we associate each node \(x\in V\) with an attribute \(a\) in some attribute space, a metric space denoted as \((A,d_{A})\). Possible node attributes include labels on the nodes or information derived from the data domain from which \(G\) arises.

**Wasserstein distance.** Classical optimal transport theory compares probability measures in terms of the Wasserstein distance \(d_{W}\). Given a pair of measure networks \(G_{1}=(V_{1},p_{1},W_{1})\) and \(G_{2}=(V_{2},p_{2},W_{2})\), where nodes \(x_{i}\in V_{1}\) and \(y_{j}\in V_{2}\) are equipped with attributes \(a_{i}\) and \(b_{j}\) within the same attribute space, we define their \(q\)_-th Wasserstein distance_ based on distances between node attributes to be
\[d_{q}^{W}(G_{1},G_{2})=\min_{C\in\mathcal{C}}\left(\sum_{i,j}d_{A}(a_{i},b_{j})^{q}C_{i,j}\right)^{1/q}. \tag{3}\]
We refer to \(d_{A}(a_{i},b_{j})\) as the _attribute distance_ between nodes \(x_{i}\in V_{1}\) and \(y_{j}\in V_{2}\).
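For any fixed coupling \(C\), the objective of Eq. (2) can be evaluated directly; finding the minimizer is the hard part, but the distortion functional itself is a short tensor contraction. A small numpy sketch (the function name is illustrative):

```python
import numpy as np

def gw_cost(W1, W2, C, q=2):
    """GW objective of Eq. (2) at a fixed coupling C (no minimization):
    0.5 * ( sum_{i,j,k,l} |W1[i,k] - W2[j,l]|^q * C[i,j] * C[k,l] )^(1/q).
    """
    # D[i, j, k, l] = |W1[i, k] - W2[j, l]|^q via broadcasting.
    D = np.abs(W1[:, None, :, None] - W2[None, :, None, :]) ** q
    return 0.5 * np.einsum('ijkl,ij,kl->', D, C, C) ** (1.0 / q)
```

For two identical networks with the "identity" coupling `np.eye(n) / n`, the cost is zero, as expected of a distance.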
**FGW distance.** Vayer et al. introduced the fused Gromov-Wasserstein (FGW) distance between attributed graphs and other structured objects [40, 42]. We describe their framework in the setting of measure networks. In a nutshell, the FGW distance is a trade-off between the Wasserstein distance in Eq. (3) and the GW distance in Eq. (2). For \(q\in[1,\infty)\) and a trade-off parameter \(\alpha\in[0,1]\), the _FGW distance_ \(d_{q}^{FGW}\) between attributed measure networks \(G_{1}\) and \(G_{2}\) is defined (following [40]) as
\[d_{q}^{FGW}(G_{1},G_{2})=\min_{C\in\mathcal{C}}\sum_{i,j,k,l}[(1-\alpha)d_{A}(a_{i},b_{j})^{q}+\alpha|W_{1}(i,k)-W_{2}(j,l)|^{q}]C_{i,j}C_{k,l}. \tag{5}\]
Here, \(C\) is considered as a soft assignment matrix, and \(\alpha\) gives a trade-off between labels and structures. As shown in Sect. 4, Eq. (5) plays an important role in encoding both intrinsic and extrinsic information for merge tree comparisons. The FGW distance enjoys a number of desirable properties (see [40] and its supplementary material, as well as [21]). Specifically, it interpolates between the Wasserstein distance on the labels and the GW distance on the structures:

**Theorem 1**.: _[40, Theorem 3.1] As \(\alpha\to 0\), the FGW distance recovers the Wasserstein distance,_
\[\lim_{\alpha\to 0}d_{q}^{FGW}=(d_{q}^{W})^{q}. \tag{6}\]
_As \(\alpha\to 1\), the FGW distance recovers the GW distance (ignoring the constant factor in Eq. (2)),_
\[\lim_{\alpha\to 1}d_{q}^{FGW}=(d_{q}^{GW})^{q}. \tag{7}\]

Furthermore, \(d_{q}^{FGW}\) defines a metric for \(q=1\) and a semimetric for \(q\geq 2\) (i.e., the triangle inequality is relaxed by a factor of \(2^{q-1}\)) [40, Theorem 3.2]. For the remainder of the paper, we work with \(d_{q}^{FGW}\) for \(q=2\). For easy reference, we have
\[d_{2}^{FGW}(G_{1},G_{2})=\min_{C\in\mathcal{C}}\sum_{i,j,k,l}[(1-\alpha)d_{A}(a_{i},b_{j})^{2}+\alpha|W_{1}(i,k)-W_{2}(j,l)|^{2}]C_{i,j}C_{k,l}.
\tag{8}\]
The choice of \(q=2\) is justified for computational reasons: given two measure networks with \(n_{1}\) and \(n_{2}\) nodes, respectively, we can simplify the computation of the tensor product involved in the evaluation of the GW loss from \(\mathcal{O}(n_{1}^{2}n_{2}^{2})\) to \(\mathcal{O}(n_{1}n_{2}^{2}+n_{1}^{2}n_{2})\) when considering \(q=2\) [8].

### _Partial Wasserstein and Partial GW Distances_

Our final ingredient comes from partial optimal transport (see, e.g., [43, 44, 45]). We review the framework of Chapel et al. [22] that studies the partial Wasserstein and partial GW distances. Notations are simplified in our setting of measure networks. A major drawback of classical optimal transport is that it requires that all mass be transported. This requirement may be too restrictive for many applications where "mass changes may occur due to a creation or an annihilation while computing an optimal transport plan" [22]. In the setting of feature tracking, we need to account for mass changes due to the appearances and disappearances of features.

**Partial Wasserstein distance.** Partial optimal transport focuses on transporting a fraction \(0\leq m\leq 1\) of the mass as cheaply as possible [22]. The set of admissible couplings is defined to be
\[\mathcal{C}_{m}=\mathcal{C}_{m}(p_{1},p_{2})=\{C\in\mathbb{R}_{+}^{n_{1}\times n_{2}}\mid C\mathbf{1}_{n_{2}}\leq p_{1},C^{T}\mathbf{1}_{n_{1}}\leq p_{2},\mathbf{1}_{n_{1}}^{T}C\mathbf{1}_{n_{2}}=m\}, \tag{9}\]
and the _partial \(q\)-Wasserstein distance_ is defined as
\[d_{q}^{pW}(G_{1},G_{2})=\min_{C\in\mathcal{C}_{m}}\left(\sum_{i,j}d_{A}(a_{i},b_{j})^{q}C_{i,j}\right)^{1/q}. \tag{10}\]
A main difference between the partial Wasserstein distance and the Wasserstein distance is that we replace the equalities in Eq. (1) with inequalities in Eq. (9) to account for "partial mass transport".
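Membership in the admissible set \(\mathcal{C}_{m}\) of Eq. (9) is straightforward to check numerically, which is useful for validating the output of a partial-transport solver. A small sketch (the helper name is illustrative):

```python
import numpy as np

def in_C_m(C, p1, p2, m, tol=1e-9):
    """Test membership in C_m of Eq. (9): C is nonnegative, its marginals
    are dominated by p1 and p2, and the total transported mass equals m."""
    return bool(C.min() >= -tol
                and np.all(C.sum(axis=1) <= p1 + tol)   # C 1 <= p1
                and np.all(C.sum(axis=0) <= p2 + tol)   # C^T 1 <= p2
                and abs(C.sum() - m) <= tol)            # 1^T C 1 = m
```

With uniform marginals of mass 0.5 each, `np.eye(2) * 0.25` is admissible for \(m=0.5\) but not for \(m=1\).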
**Partial GW distance.** In a similar fashion, given the set of admissible couplings \(\mathcal{C}_{m}\), the _partial \(q\)-GW distance_ is defined as
\[d_{q}^{pGW}(G_{1},G_{2})=\frac{1}{2}\min_{C\in\mathcal{C}_{m}}\left(\sum_{i,j,k,l}|W_{1}(i,k)-W_{2}(j,l)|^{q}C_{i,j}C_{k,l}\right)^{1/q}. \tag{11}\]

## 4 Method

We now describe our novel framework that performs feature tracking with partial optimal transport. We first introduce a new, partial fused Gromov-Wasserstein (pFGW) distance between a pair of measure networks (Sect. 4.1). We then model and compare merge trees as measure networks (Sect. 4.2). The pFGW distance gives rise to a partial matching between topological features (i.e., critical points) in merge trees, thus enabling flexible topology tracking for time-varying data (Sect. 4.3).

### _Partial Fused Gromov-Wasserstein Distance_

For topology-based feature tracking, features (i.e., critical points) will oftentimes appear and disappear in time-varying data. Features that appear at time \(t\) need not be matched with features at time \(t-1\); similarly, features that disappear at time \(t\) need not be matched with features at time \(t+1\). Therefore, we introduce a partial fused Gromov-Wasserstein (pFGW) distance for feature tracking to handle the appearances and disappearances of multiple features in time-varying data. The _pFGW distance_ is defined based on the set of admissible couplings \(\mathcal{C}_{m}\) in Eq. (9) and the FGW distance in Eq. (5). Given a pair of measure networks \(G_{1}\) and \(G_{2}\), formally, we have
\[d_{q}^{pFGW}(G_{1},G_{2})=\min_{C\in\mathcal{C}_{m}}\sum_{i,j,k,l}[(1-\alpha)d_{A}(a_{i},b_{j})^{q}+\alpha|W_{1}(i,k)-W_{2}(j,l)|^{q}]C_{i,j}C_{k,l}. \tag{12}\]
Notice that the newly defined pFGW distance is not too different from the FGW distance, except that it is more flexible by allowing only a fraction \(m\) of the total mass to be transported. In practice, we set \(q=2\) and work with \(d_{2}^{pFGW}\).
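As with the GW objective, the pFGW objective of Eq. (12) can be evaluated at any fixed admissible coupling. One useful observation is that the attribute term factorizes, since \(d_{A}(a_{i},b_{j})\) does not depend on \(k,l\): summing it against \(C_{i,j}C_{k,l}\) gives \(\langle d_{A}^{q}, C\rangle\) times the total mass \(m\). A numpy sketch (illustrative name, not the paper's solver):

```python
import numpy as np

def pfgw_cost(dA, W1, W2, C, alpha, q=2):
    """pFGW objective of Eq. (12) at a fixed coupling C in C_m.

    dA: matrix of attribute distances d_A(a_i, b_j).
    """
    # Attribute term: sum_{ijkl} dA[i,j]^q C[i,j] C[k,l]
    #               = (sum_{ij} dA[i,j]^q C[i,j]) * (total mass m).
    att = np.einsum('ij,ij->', dA ** q, C) * C.sum()
    # Structure term: the usual GW distortion tensor contraction.
    D = np.abs(W1[:, None, :, None] - W2[None, :, None, :]) ** q
    struct = np.einsum('ijkl,ij,kl->', D, C, C)
    return (1 - alpha) * att + alpha * struct
```

For identical structures and zero attribute distances the objective vanishes; for \(\alpha=0\) only the (partial) Wasserstein term survives.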
We remark that a related distance was recently introduced in [46] and applied to brain anatomy alignment. The difference between the two distances is that [46] employs a different notion of partial optimal transport (rather, _unbalanced_ optimal transport), where the coupling set is expanded to all joint probability measures and the disagreement of marginals is penalized by the Kullback-Leibler (KL) divergence. In [46], instead of choosing the amount of mass to be preserved, one must tune the relative weight of the KL regularization term.

**Computing the pFGW distance.** Computing the pFGW distance is a slight modification of the FGW computation in [21] with ingredients of the Frank-Wolfe optimization algorithm [47] for partial GW computation [22]. At a high level, computing the partial Wasserstein and the partial GW distances relies on adding dummy nodes in the transportation plan and allowing such dummy nodes to "absorb" a fraction of the mass during transportation. With these dummy nodes added to the marginals, the Frank-Wolfe algorithm then solves an iterative first-order optimization for constrained convex optimization. Our implementation is based on a minor modification of the code for the FGW framework in [40] ([https://github.com/tvayer/FGW](https://github.com/tvayer/FGW)) with components from the partial optimal transport solvers, part of the open-source Python library for optimal transport [48] ([https://pythonot.github.io/gen_modules/ot.partial.html](https://pythonot.github.io/gen_modules/ot.partial.html)).

### _Modeling Merge Trees as Measure Networks_

Unless otherwise specified, we represent a merge tree \(T\) as an attributed measure network \((V,p,W)\) for the remainder of this paper, where the attributes, weight matrix \(W\), and probability measure \(p\) are defined below.
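The dummy-node idea can be illustrated on the partial Wasserstein problem alone: extend the cost matrix with a zero-cost dummy row and column, give each dummy the leftover mass, and solve the resulting balanced problem. The sketch below is an assumption-laden toy version of that extension (illustrative names; it uses a generic linear-program solver rather than the Frank-Wolfe iterations of the actual pipeline):

```python
import numpy as np
from scipy.optimize import linprog

def partial_wasserstein(p1, p2, M, m):
    """Toy partial OT via the dummy-node extension sketched above.

    Returns the (n1 x n2) partial coupling transporting mass m.
    """
    n1, n2 = M.shape
    Mb = np.zeros((n1 + 1, n2 + 1))
    Mb[:n1, :n2] = M
    Mb[-1, -1] = 2 * M.max() + 1          # discourage dummy-to-dummy transport
    a = np.append(p1, p2.sum() - m)       # dummies absorb the leftover mass
    b = np.append(p2, p1.sum() - m)
    # Balanced OT as a linear program: row sums = a, column sums = b.
    N1, N2 = n1 + 1, n2 + 1
    A_eq = np.zeros((N1 + N2, N1 * N2))
    for i in range(N1):
        A_eq[i, i * N2:(i + 1) * N2] = 1   # row-sum constraints
    for j in range(N2):
        A_eq[N1 + j, j::N2] = 1            # column-sum constraints
    res = linprog(Mb.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    assert res.success
    return res.x.reshape(N1, N2)[:n1, :n2]  # drop the dummy row/column
```

With uniform marginals and a cost of 0 on the diagonal, transporting either all the mass (\(m=1\)) or half of it (\(m=0.5\)) incurs zero cost, and the returned real-to-real coupling carries exactly mass \(m\).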
Given a merge tree \(T=(V,p,W)\), information that is typically topological and intrinsic to a merge tree, such as tree distances, may be encoded via the weight matrix \(W\) and the probability measure \(p\) (Sect. 4.2.1). Information that is extrinsic to a merge tree may be encoded via the _node labels_ \((A,d_{A})\). Extrinsic information is typically geometric or statistical, and arises from the data domain, such as the coordinates of the critical points of \(f:\mathbb{M}\rightarrow\mathbb{R}\) (which give rise to the merge tree), the function values of \(f\) restricted to the set of nodes \(V\), and prior knowledge (such as labels) associated with nodes in a measure network. We discuss various strategies that encode extrinsic and intrinsic information for merge tree comparisons. The key takeaway is that the pFGW distance we build upon provides a flexible framework that encodes geometric and topological information for the comparative analysis of merge trees.

#### 4.2.1 Encoding Intrinsic Information

A merge tree \(T\) is represented using a triple \((V,p,W)\). Information intrinsic to \(T\) may be encoded via \(p\) and \(W\), as we now describe.

**Encoding edge information.** Recall that a merge tree \(T\) is a tree equipped with a function \(f:V\rightarrow\mathbb{R}\) defined on its nodes \(V\). To encode the information of \(f\), we explore a _shortest path (SP) strategy_. Recall that each node \(x\) in \(T\) is associated with a scalar value \(f(x)\). Using the SP strategy, for \(x,x^{\prime}\in V\), we define \(W(x,x^{\prime})\) as follows: we associate the weight \(W(x,x^{\prime})=|f(x)-f(x^{\prime})|\) with each pair of adjacent nodes; for nonadjacent nodes, \(W(x,x^{\prime})\) is the sum of the edge weights along the unique shortest path in \(T\) from \(x\) to \(x^{\prime}\). By construction, the shortest path between two nodes goes through their lowest common ancestor in \(T\).
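The SP strategy can be sketched on a toy merge tree (the four-node tree, its function values, and all names below are illustrative): adjacent nodes get weight \(|f(x)-f(x')|\), and the remaining entries follow from all-pairs shortest paths, here via a small Floyd-Warshall pass.

```python
import numpy as np

# Toy merge tree: leaves 0 and 2 merge at saddle 1 below root 3.
f = np.array([0.0, 1.0, 0.2, 2.0])
edges = [(0, 1), (2, 1), (1, 3)]

n = len(f)
W = np.full((n, n), np.inf)
np.fill_diagonal(W, 0.0)
for x, y in edges:
    W[x, y] = W[y, x] = abs(f[x] - f[y])  # edge weight |f(x) - f(x')|
for k in range(n):                        # Floyd-Warshall on the tree
    W = np.minimum(W, W[:, [k]] + W[[k], :])
```

For this tree, `W[0, 2]` is \(|0-1| + |1-0.2| = 1.8\), the path length through the saddle.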
Here, an _ancestor_ of a node \(x\) in \(T\) is any node \(v\) such that there exists a path from \(x\) to \(v\) along which the \(f\)-values are nondecreasing. The _lowest common ancestor_ of two nodes \(x,x^{\prime}\), denoted \(\mathrm{lca}(x,x^{\prime})\), is the common ancestor of \(x\) and \(x^{\prime}\) with the lowest \(f\)-value. We explore an additional strategy that encodes the function values of the lowest common ancestors among pairs of nodes, referred to as the _lowest common ancestor (LCA) strategy_. Using the LCA strategy, we define \(W(x,x^{\prime})=f(\mathrm{lca}(x,x^{\prime}))\) for \(x,x^{\prime}\in V\). For a given ordering of vertices, \(W\) is also known as the induced ultra matrix of a merge tree [19].

**Encoding node information.** Without prior knowledge, we may define \(p\) as a uniform measure, i.e., \(p=\frac{1}{|V|}\mathbf{1}_{|V|}\). This _uniform strategy_ means that all nodes in the merge trees are considered to be equally important during merge tree comparison and matching. On the other hand, \(p\) could be made more general by giving higher weights to nodes deemed more important by an application. For example, we may assign each node \(x\in V\) an importance value that is proportional to its functional difference to its _parent node_, \(\mathrm{parent}(x)\), which is the unique neighbor \(x^{\prime}\) of \(x\) in \(T\) with \(f(x^{\prime})\geq f(x)\). That is, we set \(p(x)\propto(f(\mathrm{parent}(x))-f(x))\). Such an assignment is referred to as the _parent strategy_.

#### 4.2.2 Encoding Extrinsic Information

Extrinsic information that typically arises from the geometry of the data domain may be encoded via the attribute space \((A,d_{A})\) and the attribute distance \(d_{A}\) in Eq. (5). For a node in the merge tree, the assigned attribute may be a high-dimensional vector or a categorical label.
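Both the LCA strategy and the parent strategy can be sketched on the same toy four-node tree used above (all values and names are illustrative; in particular, the root has no parent, so assigning it the function range as its gap is our assumption, not a choice specified by the strategies):

```python
import numpy as np

# Toy merge tree: leaves 0 and 2 merge at saddle 1 below root 3.
f = np.array([0.0, 1.0, 0.2, 2.0])
parent = {0: 1, 2: 1, 1: 3, 3: None}

def ancestors(x):
    """Path from x up to the root, inclusive."""
    out = []
    while x is not None:
        out.append(x)
        x = parent[x]
    return out

def lca(x, y):
    """First vertex on y's root path that also lies on x's root path."""
    ax = set(ancestors(x))
    for v in ancestors(y):
        if v in ax:
            return v

n = len(f)
# LCA strategy: W(x, x') = f(lca(x, x')).
W = np.array([[f[lca(i, j)] for j in range(n)] for i in range(n)])

# Parent strategy: p(x) proportional to f(parent(x)) - f(x); the root's
# gap is set to the function range (our assumption for this sketch).
gaps = np.array([f[parent[i]] - f[i] if parent[i] is not None
                 else f.max() - f.min() for i in range(n)])
p = gaps / gaps.sum()
```

Here the two leaves share saddle 1 as their LCA, so `W[0, 2]` equals \(f(1)=1.0\), and the root receives the largest mass under the parent strategy.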
Given a pair of merge trees \(T_{1}=(V_{1},p_{1},W_{1})\) and \(T_{2}=(V_{2},p_{2},W_{2})\), nodes \(x_{i}\in V_{1}\) and \(y_{j}\in V_{2}\) are equipped with attributes \(a_{i}\) and \(b_{j}\) from the same attribute space \((A,d_{A})\). These attributes may be coordinates associated with critical points in the data domain. Specifically, assume \(x_{i}\in V_{1}\) corresponds to a critical point of a function \(f_{1}:\mathbb{M}\rightarrow\mathbb{R}\) in the data domain \(\mathbb{M}\) with coordinates \((x_{i}^{1},x_{i}^{2})\) (assuming \(\mathbb{M}\subset\mathbb{R}^{2}\)), whereas \(y_{j}\in V_{2}\) corresponds to a critical point of \(f_{2}:\mathbb{M}\rightarrow\mathbb{R}\) with coordinates \((y_{j}^{1},y_{j}^{2})\). By setting \(a_{i}=(x_{i}^{1},x_{i}^{2})\) and \(b_{j}=(y_{j}^{1},y_{j}^{2})\), we define \(d_{A}\) to be the Euclidean distance between \(a_{i}\) and \(b_{j}\), \(d_{A}(a_{i},b_{j})=\sqrt{(x_{i}^{1}-y_{j}^{1})^{2}+(x_{i}^{2}-y_{j}^{2})^{2}}\). This definition is referred to as the _coordinates strategy_. This strategy is a natural choice because a core method for critical point tracking is often based on Euclidean proximity.

#### 4.2.3 Simple Examples

In Fig. 2, we show a simple example of using the pFGW distance for critical point matching between a pair of merge trees \(T_{1}\) and \(T_{2}\). These merge trees arise from slightly different mixtures of Gaussian functions \(f_{1}\) and \(f_{2}\) in 2D; see (a) and (c), respectively. As shown in (a), \(T_{1}\) and \(T_{2}\) are structurally similar: \(T_{1}\) contains 10 critical points, and \(T_{2}\) has 8 critical points, with a pair of critical points removed (see the region enclosed by the red box). Here, we apply a _uniform strategy_ to \(p\). We set \(m=0.8\), since we suspect that only 8 of the 10 nodes in \(T_{1}\) can find a proper match in \(T_{2}\). After computing the pFGW distance, the \(10\times 8\) coupling matrix \(C\) is shown in (d) and visualized in (b).
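The coordinates strategy amounts to a pairwise Euclidean distance matrix between critical-point coordinates; a sketch with toy 2D coordinates (illustrative values):

```python
import numpy as np

# Toy critical-point coordinates for two merge trees (illustrative).
a = np.array([[0.0, 0.0], [1.0, 0.0]])   # nodes of T1
b = np.array([[0.0, 1.0], [3.0, 0.0]])   # nodes of T2

# dA[i, j] = Euclidean distance between a_i and b_j, via broadcasting.
dA = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
```

The resulting `dA` can be plugged in as the attribute-distance matrix of Eqs. (5) and (12).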
An entry \(C(i,j)\) in the coupling matrix indicates the probability of a node \(i\in T_{1}\) being matched to a node \(j\in T_{2}\). In particular, rows \(C(2,\cdot)\) and \(C(3,\cdot)\) (in a red box) are both zero, indicating that no partners in \(T_{2}\) are matched with nodes 2 and 3 in \(T_{1}\). In other words, by design, nodes 2 and 3 in \(T_{1}\) are matched to a dummy node during the partial optimal transport. Furthermore, nodes in \(T_{2}\) are colored by their most probable partners in \(T_{1}\), which aligns well with our intuition. In this example, each node in \(T_{1}\) has a unique partner in \(T_{2}\); however, in practice, a node may be coupled with multiple nodes with nonzero probabilities, as shown in the next example. We provide another example in Fig. 3 to demonstrate probabilistic matching with our framework. As shown in (b), \(f_{1}\) is a mixture of four positive and one negative Gaussian functions. In \(f_{2}\), a positive Gaussian function on top is split into two Gaussian functions, resulting in two local maxima and one saddle point. Merge trees \(T_{1}\) and \(T_{2}\) in (a) describe the topology of scalar fields \(f_{1}\) and \(f_{2}\), respectively. Notice that the topological change in \(T_{2}\) (enclosed by a red box) highlights the feature splitting event in \(f_{2}\). Rather than enforcing a one-to-one correspondence between the critical points, our pFGW framework allows probabilistic matching among them. As shown in (c)-(d), the coupling matrix \(C\) contains multiple rows and columns with more than one nonzero entry. For example, the row \(C(3,\cdot)\) (red box) has two nonzero entries, namely 0.025 at \(C(3,1)\) and 0.01 at \(C(3,4)\), see (d), which indicates that node 3 in \(T_{1}\) can be matched to both nodes 1 and 4 in \(T_{2}\) with 
varying probabilities. Such a matching is plausible given the feature splitting event. As node 4 in \(T_{2}\) is closer to node 3 in \(T_{1}\) (than node 1 in \(T_{2}\) is), \(C(3,4)\) has a higher coupling probability than \(C(3,1)\). ### _Flexible Topology Tracking_ By modeling merge trees as measure networks (Sect. 4.2) and introducing a new pFGW distance based on partial optimal transport (Sect. 4.1), we are ready to describe our topology tracking framework in Sect. 4.3.1 and discuss its flexibility in Sect. 4.3.2. #### 4.3.1 Tracking Framework Our topology tracking framework consists of three steps. **1. Feature detection.** First, we compute a merge tree for each time step using the algorithm implemented in TTK [49]. Since each of our datasets is a 2D time-varying scalar field, each merge tree contains local minima, saddles, and a global maximum (assuming there is a unique global maximum). When the data is noisy, we apply persistence simplification [50] to remove pairs of critical points with low persistence. In other words, we retain significant features in the domain for tracking purposes. **2. Feature matching.** Second, we utilize our pFGW framework for feature matching across adjacent time steps. Let \(T_{1}\) and \(T_{2}\) be two merge trees computed at time steps \(t\) and \(t+1\), respectively. We then model them as measure networks \(T_{1}=(V_{1},p_{1},W_{1}),T_{2}=(V_{2},p_{2},W_{2})\) and apply the pFGW framework described in Sect. 4.1 to match critical points of \(T_{1}\) with those of \(T_{2}\). We utilize a conservative _bijective matching strategy_. Based on the optimal coupling \(C\), a node \(x\in V_{1}\) may be coupled (matched) with multiple nodes in \(V_{2}\); we choose the node \(x^{\prime}\in V_{2}\) that has the highest matching probability with \(x\) (referred to as the most probable partner). Similarly, for \(x^{\prime}\in V_{2}\), we choose its most probable partner \(x^{\prime\prime}\in V_{1}\). 
If \(x=x^{\prime\prime}\), then \(x\) and \(x^{\prime}\) are matched to form a trajectory. **3. Trajectory extraction.** Trajectories are constructed by connecting successively matched critical points. For any two adjacent time steps \(t\) and \(t+1\), if a node \(x\) at time \(t\) is matched with a node \(x^{\prime}\) at time \(t+1\), then a segment is constructed connecting \(x\) and \(x^{\prime}\) in the spacetime domain. If a node \(x\) at time \(t\) is ignored (i.e., matched to the dummy node) during the partial optimal transport, then the current trajectory terminates. If a node \(x^{\prime}\) at time \(t+1\) is ignored during the partial optimal transport, it is considered a new feature, and a new trajectory begins. #### 4.3.2 A Discussion on Flexibility Modeling a merge tree \(T\) as a measure network \(T=(V,p,W)\), together with the associated pFGW distance, offers great flexibility in the comparative analysis of merge trees. The flexibility is reflected via a number of parameters. First, parameters \(W\) and \(p\) allow various strategies for encoding intrinsic and extrinsic information of a merge tree, including the _shortest path_ (SP) and _lowest common ancestor_ (LCA) strategies for encoding edge information; the _uniform_ and _parent_ strategies for encoding node information; and the _coordinates strategy_ for encoding geometric information from the data domain. Second, the parameter \(\alpha\) from Eq. (12) strikes a balance between intrinsic information (via the GW distance) and extrinsic information (via the Wasserstein distance) in merge tree comparisons. Third, the parameter \(m\) from Eq. (12) allows partial mass transport to accommodate the appearances and disappearances of topological features. 
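The bijective matching step above can be sketched as follows. The coupling matrix and its representation as nested lists are hypothetical; rows (respectively, columns) that are entirely zero correspond to nodes matched to the dummy node, i.e., disappearing (appearing) features.

```python
def bijective_matches(C):
    """Conservative bijective matching: x in V1 and x' in V2 are matched only
    if each is the other's most probable partner under the coupling C (and the
    coupling mass is nonzero)."""
    matches = []
    for i, row in enumerate(C):
        if max(row) == 0.0:            # matched to the dummy node: track ends
            continue
        j = row.index(max(row))        # most probable partner of i in V2
        col = [C[k][j] for k in range(len(C))]
        if col.index(max(col)) == i:   # i is also j's most probable partner
            matches.append((i, j))
    return matches

# A small hypothetical coupling between 3 nodes at time t and 3 at time t+1.
C = [
    [0.30, 0.05, 0.00],
    [0.00, 0.00, 0.00],   # zero row: this node disappears at time t+1
    [0.02, 0.25, 0.00],
]
print(bijective_matches(C))   # [(0, 0), (2, 1)]
```

In this toy coupling, column 2 is entirely zero, so the corresponding node at time \(t+1\) would start a new trajectory.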
## 5 A New Stability Result We now state a new theoretical stability result involving the GW distance, which shows that a small change in the function data produces a small change in merge tree representations, as measured by the GW distance; see the supplementary material for a detailed proof and some experimental validation of Theorem 2. Let \(X\) be a finite, connected geometric simplicial complex with vertex set \(V\). Let \(f:X\to\mathbb{R}\) be a function obtained by starting with a function \(f:V\to\mathbb{R}\) on the vertex set and extending linearly over higher dimensional simplices. Let \(p\) be a probability distribution over the vertex set \(V\). We will assume that \(p\) is _balanced_, in the sense that for any \(u,v,w\in V\), we have \(p(u)\cdot p(v)\leq p(w)\); this property holds for the uniform distribution, for example. We then define the measure network representation of \(T_{f}\) to be \(G_{f}=(V,p,W_{f})\), with \(W_{f}\) defined based on the lowest common ancestor (LCA) strategy. We also define a family of weighted norms on the space of functions \(f:V\to\mathbb{R}\) by \[\|f\|_{L^{q}(p)}:=\left(\sum_{v\in V}|f(v)|^{q}p(v)\right)^{1/q}.\] We can now state our theorem. **Theorem 2**.: _Let \(f,g:X\to\mathbb{R}\) be functions defined as above and let \(p\) be a balanced probability distribution. Then_ \[d_{q}^{\text{GW}}(G_{f},G_{g})\leq\frac{1}{2}|V|^{2/q}\|f-g\|_{L^{q}(p)}.\] We also show in the supplementary material that the Lipschitz constant \(\frac{1}{2}|V|^{2/q}\) is asymptotically tight for general probability measures. When the measure is uniform, the constant can be improved to \(\frac{1}{2}|V|^{1/q}\). Finally, we have the following corollary, which treats the shortest path strategy for encoding a merge tree as a measure network. **Corollary 1**.: _Let \(f,g:X\to\mathbb{R}\) be functions defined as above and let \(p\) be a balanced probability distribution. 
Let \(G_{f}\) (respectively, \(G_{g}\)) denote the representation of the merge tree \(T_{f}\) (\(T_{g}\)) defined by the shortest path strategy. Then_ \[d_{q}^{\text{GW}}(G_{f},G_{g})\leq\left(|V|^{2/q}+2\right)\|f-g\|_{L^{q}(p)}.\] Fig. 3: Partial optimal matching using pFGW, \(m=0.85\). (a) Merge trees that arise from mixtures of Gaussian functions in (c). The coupling matrix (d) is visualized with a heat map in (b). ## 6 Experiments We demonstrate the utility of our framework with four 2D datasets and one 3D dataset. For each dataset, we also compare against two state-of-the-art approaches. ### _Datasets Overview_ The first dataset is a simulation of a 2D flow generated by a heated cylinder using the Boussinesq approximation [51, 52], referred to as the Heated Cylinder dataset. The simulation was done with a Gerris flow solver and was resampled onto a regular grid. It shows a time-varying turbulent plume containing numerous small vortices that, in part, rotate around each other. We generate a set of merge trees from the magnitude of the velocity fields based on 31 time steps (600-630 from the original 2000 time steps). These time steps describe the evolution of small vortices. The second dataset is a synthetic 2D unsteady cylinder flow, referred to as the Unsteady Cylinder Flow dataset. This synthetic vector field represents a simple model of von Karman vortex street generation and was constructed by Jung, Tel, and Ziemniak as the co-gradient of a stream function [53]. The obstacle is positioned at \((0,0)\) and has a radius of 1. In the LIC image on the side, the flow in the interior of the obstacle has not been set to zero. Note that only two vortices are present at the same time. We sampled four periods onto a regular grid. We use the first 499 time steps in the dataset, and compute merge trees from the velocity magnitude field that primarily capture the behavior of local maxima, saddles, and a global minimum. 
The first two datasets are available via the Computer Graphics Laboratory [54]. The third dataset is the classic 2D von Karman vortex street dataset, which comes from the simulation of a viscous 2D flow around a cylinder, referred to as the VortexStreet dataset. It contains vortices moving with almost constant speed to the right, except directly in the wake of the obstacle, where they accelerate. We model vorticity as scalar fields, and track the evolution of local maxima over time. The fourth dataset comes from the 2008 IEEE Visualization Design Contest [55], referred to as the Ionization Front dataset. This time-varying dataset simulates the propagation of an ionization front instability. The simulation is done with 3D radiation hydrodynamical calculations of ionization front instabilities in which multifrequency radiative transfer is coupled to the primordial chemistry of eight species [56]. For this experiment, we use the density to generate merge trees from the 2D slices near the center of the simulation volume for 123 time steps, which correspond to steps 11-133 from the original 200 time steps. These time steps show the density over time as the instability progresses toward the right. Finally, we use a collection of 3D volumes simulating the wind velocity magnitude of Hurricane Isabel, referred to as the Isabel dataset. We use this dataset to demonstrate the ability of our method to track features in 3D scientific datasets. We use 12 time steps that depict the key events of the hurricane (formation, drift, and landfall): time steps 2 to 5, 30 to 33, and 45 to 48. This 3D dataset is acquired from the Climate Data Gateway at NCAR [57]. ### _Heated Cylinder Dataset_ We first use the Heated Cylinder dataset to demonstrate in detail our parameter tuning process in Sect. 6.2.1. We then showcase the tracking results based on partial optimal transport in Sect. 6.2.2. Finally, we compare against previous approaches in Sect. 6.2.3. 
#### 6.2.1 Parameter Tuning There are two types of parameters in our framework: the preprocessing parameter \(\epsilon\), which is used to de-noise the input data; and the in-processing parameters \(W\), \(p\), \(\alpha\), and \(m\) for feature tracking. **Preprocessing parameter tuning.** Persistence simplification is considered a preprocessing step for data de-noising. Let \(\epsilon\in[0,1]\) denote the persistence simplification parameter, and let \(R\) denote the range of a given scalar field. Using persistence simplification, critical points with persistence less than \(\epsilon\cdot R\) are removed from the domain. \(\epsilon\) is typically chosen based on the shape of a _persistence graph_, where a plateau indicates a stable range of scales that separates features from noise. Such a strategy has been used previously in simplifying scientific data (e.g., [58, 59]). For Heated Cylinder, we use \(\epsilon=6\%\), which is slightly left of the first observable plateau in the persistence graph, as we try to maintain a slightly larger number of features; see Fig. 4. **In-processing parameter tuning.** To evaluate the quality of the extracted trajectories, we aim to reduce two types of artifacts during parameter tuning: _oversegmentations_, where a single trajectory is unnecessarily segmented into subtrajectories; and _mismatches_ between critical points, which appear as zigzag patterns connecting adjacent time steps. We introduce two metrics to evaluate these artifacts quantitatively: first, the _number of trajectories_, denoted as \(N\); and second, the maximum Euclidean distance between matched critical points across time (referred to as the _maximum matched distance_ for simplicity), denoted as \(L\). Specifically, we introduce a parameter \(L^{*}\) that represents an upper bound on \(L\). During parameter tuning, a guiding principle is to reduce oversegmentations and mismatches by minimizing \(N\) and \(L\). 
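The two tuning metrics \(N\) and \(L\) can be computed directly from the extracted trajectories. The sketch below assumes a hypothetical representation of a trajectory as a list of (time step, coordinates) samples; it is an illustration, not the authors' implementation.

```python
import math

def tracking_metrics(trajectories):
    """Compute the two tuning metrics: N, the number of trajectories, and L,
    the maximum Euclidean distance between matched critical points at
    adjacent time steps. Each trajectory is a list of (time step, (x, y))
    samples -- a hypothetical representation."""
    N = len(trajectories)
    L = 0.0
    for traj in trajectories:
        for (_, p0), (_, p1) in zip(traj, traj[1:]):
            L = max(L, math.dist(p0, p1))
    return N, L

# Two hypothetical trajectories over three time steps.
trajs = [
    [(0, (0.0, 0.0)), (1, (0.1, 0.0)), (2, (0.1, 0.2))],
    [(1, (1.0, 1.0)), (2, (1.0, 1.5))],
]
print(tracking_metrics(trajs))   # (2, 0.5)
```

In practice, \(L\) would be normalized by the extent of the domain before being compared against the bound \(L^{*}\).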
Fig. 4: Heated Cylinder: persistence simplification \(\epsilon=6\%\); the x-axis is \(\epsilon\). In this paper, we focus on tracking features surrounding local maxima; therefore, we compute \(N\) and \(L\) only for local maxima trajectories. First, we consider parameter tuning for \(W\) and \(p\). We inspect the behavior of \(W\) (or \(p\)) while keeping the other parameters fixed. Through extensive experiments across all datasets in this paper, we observe that the SP (i.e., shortest path) strategy for \(W\) generally performs as well as or better than the LCA strategy in minimizing \(N\) and \(L\). We also observe that the uniform strategy for \(p\) performs better than the parent strategy. Therefore, for the rest of the paper, \(W\) uses the SP strategy and \(p\) uses the uniform strategy. Second, we study the parameter tuning of \(m\) for a fixed \(\alpha\). \(m\) may be considered an in-processing means of data de-noising, since a certain number of features are matched to the dummy nodes during partial optimal transport. We use an example in Fig. 5 (left) to demonstrate the process. For a fixed \(\alpha=0.1\), we perform a grid search over \(m\in[0.5,1.0]\) with an increment of 0.01. For instance, at \(m=0.90\), we see a number of oversegmented trajectories in the blue boxes; such oversegmentations decrease as \(m\) increases from \(0.90\) to \(0.94\) (top left). On the other hand, obvious mismatches appear in the red boxes for \(m\geq 0.96\) (bottom left). As \(m\) increases from \(0.9\) to \(1.0\), we observe a decrease in the number of trajectories \(N\) and an increase in the maximum matched distance \(L\); this is additionally demonstrated in the plots of \(N\) and \(L\), see Fig. 5 (top right and bottom right). If our goal is to choose an appropriate _global_ value for \(m\), then we are interested in striking a balance between minimizing \(N\) and minimizing \(L\); therefore, we may choose \(m=0.94\) in this example. However, as shown in Fig. 
5, at \(m=0.94\), there are still oversegmentations within the blue box, indicating that a _locally adaptive_ value of \(m\) might be more appropriate in practice. Our final strategy automatically adjusts the value of \(m\) between adjacent time steps to reduce \(N\) without increasing \(L\) drastically. Specifically, we perform a 2D grid search over \(\alpha\) and \(m\): \(\alpha\in[0.0,1.0]\) with an increment of \(0.1\), and \(m\in[0.5,1.0]\) with an increment of \(0.01\). For each fixed \(\alpha\), we apply the following procedure. First, we plot the curve of \(L\) as we increase \(m\). Second, we apply the elbow method and pick the elbow of the \(L\) curve as an upper bound on \(L\), denoted as \(L^{*}\). Finally, for each pair of adjacent time steps \(t\) and \(t+1\), we automatically choose the largest value of \(m\) such that \(L\) does not exceed \(L^{*}\). In other words, \(m\) varies adaptively across time steps; see Fig. 6 (top) with marked elbow points. As \(\alpha\) varies, we plot the number of trajectories \(N\) and the maximum matched distance \(L\) (\(\leq L^{*}\)) at each \(\alpha\), as shown in Fig. 6 (bottom). We look for a value of \(\alpha\) that minimizes both \(N\) and \(L\). However, \(N\) and \(L\) may not be minimized at the same \(\alpha\). In this scenario, we look for an \(\alpha\) that minimizes \(N\) while keeping \(L\) small enough to minimize the number of mismatches. Using this strategy, we set \(\alpha=0.1\), with a corresponding \(L^{*}=0.00997\). #### 6.2.2 Tracking Result Fig. 7 shows our final tracking result on the left, with views of scalar fields on the right that highlight the appearances and disappearances of critical points. In Fig. 7 (left), the \(xy\)-plane visualizes the scalar field at \(t=0\), and the \(z\)-axis shows the trajectories of all the local maxima and the global minimum as time increases. 
Most trajectories are straight lines, as only minor topological changes occur in this dataset. Meanwhile, our framework successfully captures the appearances and disappearances of critical points. As shown in Fig. 7 (right), for time steps \(2\to 3,10\to 11,16\to 17\), and \(19\to 20\), critical points disappear in the blue boxes, resulting in the termination of trajectories; for time steps \(9\to 10\), a critical point appears in a red box, resulting in the start of a new, green trajectory. #### 6.2.3 Comparison with Previous Approaches We compare the tracking results of our pFGW framework with two other state-of-the-art feature tracking approaches, referred to as Global Feature Tracking (GFT) [29, 30] and the Lifted Wasserstein Matcher (LWM) [25]; see the supplementary material for parameter tuning of GFT and LWM. **Implementations.** Our pFGW framework utilizes the libraries implemented in TTK [49] for merge tree computation. GFT is implemented in C++ and is available at [60]. It computes the merge trees and region segmentations, and outputs the tracking results between critical points at adjacent time steps. GFT allows tracking between saddles and local extrema, whereas pFGW focuses only on tracking between local extrema. Therefore, we adjust the postprocessing of GFT to remove trajectories involving saddles. LWM is implemented as an embedded library in TTK. Fig. 5: Heated Cylinder. Left: for a fixed \(\alpha=0.1\), we perform a grid search over \(m\) and observe the number of oversegmentations (in blue boxes) and mismatches (in red boxes). Right: the trend in the number of trajectories (\(N\)) and the maximum matched distance (\(L\)) as \(m\) increases. Fig. 6: Heated Cylinder. (a) \(L\) as we change the global \(m\) for each \(\alpha\); elbows of curves are marked with dotted horizontal and vertical lines. (b) \(N\) and \(L\) with respect to \(\alpha\) (using adaptive \(m\)). 
Results of all three methods are visualized via ParaView [61] with VTK [62]. Since neither LWM nor GFT includes details on parameter tuning, we apply the same parameter tuning strategy as for pFGW to both LWM and GFT, that is, minimizing the number of trajectories and the maximum matched distance; see the supplementary material for details. Furthermore, all three methods apply the same persistence-based simplification during preprocessing. However, since GFT is defined on a regular grid of squares, whereas pFGW and LWM use identical simplicial meshes, we expect minor inconsistencies in the simplified datasets between GFT and the other two methods. **Tracking Results Comparison.** All three tracking results are shown in Fig. 8 (top). All three methods produce 24 trajectories, but there are noticeable differences in the GFT-produced trajectories (see the red and blue boxes). We evaluate these results quantitatively based on observable oversegmentations and mismatches. There are obvious oversegmentations from GFT compared to the other two methods: a trajectory in the red box is broken in GFT, but remains continuous in pFGW and LWM. As for mismatches, GFT produces a different tracking result from pFGW and LWM in the blue box, from time steps \(24\to 27\); the corresponding scalar fields are shown in Fig. 8 (bottom). We interpret the topological changes as follows: a critical point appears from \(24\to 25\), another critical point appears from \(25\to 26\), and a critical point disappears from \(26\to 27\). The trajectories in pFGW and LWM correctly reflect these topological changes, whereas those in GFT interpret these changes as movements of critical points. Therefore, pFGW and LWM perform similarly, but GFT performs slightly worse for the Heated Cylinder dataset. ### _Unsteady Cylinder Flow_ For the Unsteady Cylinder Flow dataset, we employ the same parameter tuning strategy detailed in Sect. 6.2.1. 
We use a persistence simplification level of \(\epsilon=1\%\). We set \(\alpha=0.1\) and \(L^{*}=0.03768\); see the supplementary material for details. #### 6.3.1 Tracking Results Our tracking result using pFGW is highly periodic, where the extracted trajectories exhibit repetitive patterns that include the appearances, disappearances, and movements of local maxima over time; see Fig. 9 (left). We show a few time steps at \(t=53,178,303\), and \(428\) to highlight a period of \(\approx 125\) time steps. Furthermore, as shown in Fig. 9 (right), six snapshots show the evolution of the scalar field within a single period between \(t=3\) and \(t=128\), where the scalar field at \(t=128\) is mostly identical to the one at \(t=3\). #### 6.3.2 Comparison with Previous Approaches We compare our pFGW framework against the LWM and GFT methods, which give rise to 44, 44, and 108 trajectories, respectively; see Fig. 9. Fig. 7: Heated Cylinder. Tracking result (left) with views of scalar fields (right) that capture topological changes in the time-varying scalar field at selected time steps. The appearances and disappearances of critical points are highlighted in red and blue boxes, respectively. When considering mismatches, the trajectories from all three methods are visually similar, with no obvious mismatches for any of these methods. In particular, the (normalized) maximum matched distances across the three methods are the same, \(L=0.03768\). When considering oversegmentations, GFT produces 108 trajectories, whereas pFGW and LWM each produce 44 trajectories. Correspondingly, GFT shows many more broken trajectories visually in comparison with pFGW and LWM. ### _2D von Karman Vortex Street Dataset_ We then study the VortexStreet dataset. We set \(\epsilon=1\%\), \(\alpha=0.1\), and \(L^{*}=0.02537\); see the supplementary material for details. #### 6.4.1 Tracking Results The tracking results for VortexStreet using pFGW, LWM, and GFT are shown in Fig. 
10 (left), in which there are 17, 17, and 27 trajectories, respectively. The results of pFGW and LWM are mostly identical, whereas the results of GFT show a number of oversegmentations and missing trajectories at later time steps (e.g., see the blue box). A few snapshots of the scalar field are shown in Fig. 10 (right top), where local maxima are well aligned horizontally and move rightward at an almost constant speed. This characteristic leads to a large number of straight-line trajectories, as shown in Fig. 10 (left). Meanwhile, a critical point remains stable in location to the left of the cylinder; its trajectory is shown as a single straight line on the leftmost part of Fig. 10 for both pFGW and LWM. #### 6.4.2 Comparison with Previous Approaches Our pFGW method and the LWM method perform similarly on VortexStreet in terms of reducing oversegmentations and mismatches. Meanwhile, similar to Heated Cylinder and Unsteady Cylinder Flow, GFT typically introduces more oversegmentations in comparison with pFGW and LWM; in addition, certain trajectories may be missing due to insufficient feature overlaps between adjacent time steps. In Fig. 10 (right bottom), we show the merge-tree-based segmentation of the scalar field at time steps 44 and 45. Here, the corresponding features at \(t=44\) and \(t=45\) move rapidly to the right (see the purple and green boxes, respectively). Although the features associated with these adjacent time steps are visually similar, their overlap is quite small based on their Jaccard index. Such insufficient feature overlaps appear to impact the tracking results significantly. This observation motivates us to further compare pFGW, LWM, and GFT on subsampled data. We are interested in exploring the strengths and weaknesses of these methods when there are insufficient feature overlaps due to subsampling. Fig. 8: Heated Cylinder. Top: from left to right, pFGW (ours), LWM, and GFT, respectively. 
Bottom: the appearances and disappearances of local maxima in the blue boxes on top. Fig. 10: VortexStreet. Left: comparing tracking results for pFGW, LWM, and GFT, respectively. Right top: snapshots of scalar fields at \(t=0,28\), and 56. Right bottom: merge-tree-based segmentation of the scalar fields at \(t=44\) and \(45\). Fig. 9: Unsteady Cylinder Flow. Left: comparing tracking results for pFGW, LWM, and GFT, respectively. Right: snapshots of scalar fields within a single period between \(t=3\) and \(t=128\). ### _Ionization Front Dataset_ We study the Ionization Front dataset by setting \(\epsilon=10\%\), \(\alpha=0.4\), and \(L^{*}=0.02693\). A few snapshots of the scalar field at time steps \(0, 30, 60\), and \(90\) are shown in Fig. 11, as the instability progresses toward the right. #### 6.5.1 Tracking Results We demonstrate our pFGW tracking results in Fig. 12 (left), where trajectories are shown with the scalar field at \(t=0\). We then visualize these trajectories with the landscape of the time-varying scalar field in Fig. 12 (right), which is constructed by stacking the original scalar field at time steps 0, 10, 20, ..., 110. Such a landscape clearly shows the rightward propagation of the ionization front. The results shown in Fig. 12 (top) thus contain a number of trajectories that capture such a trend. We further split these trajectories into two sets: trajectories that last longer than 29 time steps (_long-term trajectories_) in Fig. 12 (middle), and those that last between 5 and 29 steps (_short-term trajectories_) in Fig. 12 (bottom). We ignore trajectories shorter than 5 time steps, as they do not capture the global trend of the data. A number of the long-term trajectories appear to follow the direction of the radiation waves, whereas some short-term trajectories capture local interactions among them. 
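The split into long-term and short-term trajectories amounts to a simple partition by lifetime. A sketch with the cutoffs used above (29 and 5 time steps); the list-of-samples representation is hypothetical:

```python
def partition_trajectories(trajectories, long_cutoff=29, short_cutoff=5):
    """Split trajectories by lifetime (number of time steps): long-term
    trajectories last longer than `long_cutoff` steps, short-term ones last
    between `short_cutoff` and `long_cutoff` steps, and anything shorter
    than `short_cutoff` is discarded."""
    long_term = [t for t in trajectories if len(t) > long_cutoff]
    short_term = [t for t in trajectories
                  if short_cutoff <= len(t) <= long_cutoff]
    return long_term, short_term

# Hypothetical trajectories given as lists of per-time-step samples.
trajs = [list(range(40)), list(range(12)), list(range(3))]
long_term, short_term = partition_trajectories(trajs)
print(len(long_term), len(short_term))   # 1 1
```

Here the 40-step trajectory is long-term, the 12-step one is short-term, and the 3-step one is discarded.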
#### 6.5.2 Comparison with Previous Approaches In terms of oversegmentations, pFGW, LWM, and GFT give rise to 51, 52, and 92 trajectories, respectively. pFGW produces slightly fewer trajectories than LWM, whereas GFT oversegments and produces the largest number of trajectories, see Fig. 13. In particular, GFT produces noticeably broken long-term trajectories, implying that it fails to track some major features consistently. In terms of mismatches, trajectories from all three methods interpret the evolution of features in a similar way. However, pFGW produces the smallest (normalized) maximum distance of 0.02693, whereas LWM and GFT give rise to a (normalized) maximum distance of 0.03840. ### _3D Isabel Dataset_ Finally, for the 3D Isabel dataset, we apply both pFGW and LWM to track the trajectory of the global maximum, which highlights the movement of the main hurricane. This dataset contains a discrete set of time steps with large gaps; thus, it is not suitable for feature tracking based on region overlaps (such as GFT). As shown in Fig. 14, both pFGW and LWM successfully track the movement of the hurricane. These results highlight the robustness of topology-based feature tracking in 3D. ### _Subsampling and Robust Tracking_ For both the VortexStreet and Isabel datasets, we observe that topology-based feature tracking (such as pFGW and LWM) behaves better than geometry-based methods (such as GFT) when there are not sufficient region overlaps between adjacent time steps. In this section, we further examine the robustness of the three methods by subsampling time steps from previous datasets. We generate subsampled datasets by sampling a single instance for every 6, 15, and 10 time steps for Heated Cylinder, Unsteady Cylinder Flow, and Ionization Front datasets, respectively. Fig. 11: A few snapshots of Ionization Front dataset. Fig. 14: Isabel. Tracking results for pFGW (left) and LWM (right). Fig. 12: Ionization Front. 
Left: pFGW trajectories are shown with the scalar field at time step 0. Right: pFGW trajectories are visualized with the landscape of the time-varying scalar field. Top: all trajectories; middle: long-term trajectories; bottom: short-term trajectories. Fig. 13: Ionization Front. Comparing tracking results for pFGW, LWM, and GFT, respectively. #### 6.7.1 Qualitative Comparisons The tracking results for these subsampled datasets are shown in Fig. 15, using the original pFGW tracking results (first column) as a reference. For a robust tracking method, we expect the tracked trajectories to be similar with and without subsampling. For the subsampled Heated Cylinder dataset in Fig. 15 (top), all three methods preserve the overall shape of trajectories, with pFGW demonstrating a slight advantage. In particular, some trajectories obtained by pFGW are missed by LWM (cf. red boxes), whereas GFT produces oversegmentations (cf. blue boxes). For the subsampled Unsteady Cylinder Flow dataset in Fig. 15 (middle), LWM introduces obvious mismatches by matching geometrically distant critical points, whereas GFT creates a great number of broken trajectories on the left. In comparison, pFGW produces better tracking results without oversegmentations or mismatches. For the subsampled Ionization Front dataset in Fig. 15 (bottom), all three methods show limitations in tracking. LWM is able to preserve only a subset of long-term features and misses other features. It also produces a number of mismatches. For example, LWM incorrectly tracks the feature at the center of the domain to the outer boundary of the wave. GFT fails to preserve any major trajectories under subsampling. In comparison, pFGW is able to replicate major patterns of the trajectories, especially the long-term ones. However, we can also see some mismatches in its tracking results. 
Based on these visualizations, pFGW is the best at preserving trajectories under subsampling while minimizing oversegmentations and mismatches. We next evaluate these results quantitatively. #### 6.7.2 Quantitative Comparisons We utilize the notion of the Jaccard index to study the similarity between two sets of trajectories. Let \(a\) and \(b\) denote a pair of trajectories, each of which contains a finite number of critical points sampled at discrete time steps. We define the overlap between \(a\) and \(b\) as their Jaccard index, \[J(a,b)=\frac{|a\cap b|}{|a\cup b|}.\] Let \(A\) and \(B\) be two sets of trajectories produced by two tracking methods, respectively. For each trajectory \(a\in A\), define its matched trajectory \(\pi(a)\in B\) such that \(\pi(a)=\operatorname*{argmax}_{b\in B}J(a,b)\). For any \(a\), \(\pi(a)\) may not be unique. We then introduce two measures that quantify the similarity between \(A\) and \(B\): \[S(A,B)=\frac{\sum_{a\in A}J(a,\pi(a))}{|A|}\] \[S_{W}(A,B)=\frac{\sum_{a\in A}J(a,\pi(a))|a|}{\sum_{a\in A}|a|}\] \(S\) captures the average overlap of all trajectories in \(A\) against their matched ones in \(B\), whereas \(S_{W}\) is a weighted version of \(S\) considering the lengths of trajectories in the summations. \(S\) and \(S_{W}\) are not symmetric and have optimal values of \(1\) when \(A=B\). In a subsampled dataset, a number of critical points may be missing from the original dataset. Let \(A\) be the set of sub-trajectories from the original tracking results restricted to the subsampled time steps. Let \(B\) be the set of trajectories obtained from the subsampled dataset. \(S(A,B)\) and \(S_{W}(A,B)\) describe how well a tracking algorithm preserves the trajectories against subsampling, whereas \(S(B,A)\) and \(S_{W}(B,A)\) indicate how well a tracking algorithm avoids mismatches in the subsampled dataset. 
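A minimal sketch of these similarity measures, assuming trajectories are represented as sets of (time step, feature id) samples (a hypothetical representation, with the first-encountered maximizer used as \(\pi(a)\) when it is not unique):

```python
def jaccard(a, b):
    """J(a, b) = |a ∩ b| / |a ∪ b| for trajectories given as sets of
    (time step, critical point id) samples."""
    return len(a & b) / len(a | b)

def similarity(A, B, weighted=False):
    """S(A, B): average best-match Jaccard overlap of trajectories in A
    against B; with weighted=True, compute S_W(A, B), which weights each
    trajectory by its length |a|."""
    num = den = 0.0
    for a in A:
        best = max(jaccard(a, b) for b in B)   # J(a, pi(a))
        w = len(a) if weighted else 1.0
        num += best * w
        den += w
    return num / den

# Two hypothetical trajectory sets over time steps 0..3.
A = [{(0, "u"), (1, "u"), (2, "u")}, {(2, "v"), (3, "v")}]
B = [{(0, "u"), (1, "u"), (2, "u"), (3, "u")}, {(3, "v")}]
print(similarity(A, B))                 # S(A, B)
print(similarity(A, B, weighted=True))  # S_W(A, B)
```

For this toy input, the best overlaps are 0.75 and 0.5, giving \(S(A,B)=0.625\) and \(S_{W}(A,B)=0.65\); note that \(S(A,B)\neq S(B,A)\) in general.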
In our experiment, we ignore (sub)trajectories of length \(1\) as they are isolated critical points. The quantitative evaluation results are provided in Table I. pFGW is shown to have better performance than GFT and LWM in terms of capturing original trajectories under subsampling, for almost all cases. In particular, for the \(\mathsf{Unsteady}\)\(\mathsf{Cylinder}\) Flow and Ionization Front datasets, pFGW obtains significantly higher similarity measures than GFT and LWM. These results align well with our observations in Fig. 15 that pFGW is better at preserving trajectories and avoiding mismatches for subsampled datasets.

Fig. 15: Tracking results of Heated Cylinder (top), Unsteady Cylinder Flow (middle), and Ionization Front (bottom) datasets under subsampling. From left to right: the original pFGW, pFGW, LWM, and GFT with subsampling, respectively.

Drawbacks of GFT and LWM are also evident in Table I. For GFT, \(S(A,B)\) and \(S_{W}(A,B)\) over the Unsteady Cylinder Flow dataset are relatively low because GFT fails to maintain the continuity of trajectories on the left. For the Ionization Front dataset, GFT does not maintain long-term trajectories, leading to low similarity measures. For LWM, similarity measures are lowest for the Unsteady Cylinder Flow dataset due to significant mismatches in the tracking results. LWM maintains only a few long-term features for the Ionization Front dataset, leading to measures lower than those from pFGW. To summarize, based on both qualitative and quantitative evaluations, GFT appears to lose its ability to track features when the time resolution is insufficient for geometry-based tracking, for instance, on the subsampled Ionization Front dataset. While LWM captures major features during tracking, it is not as robust as pFGW in tracking features for datasets with low time resolution. For example, for the subsampled Heated Cylinder and Ionization Front datasets, LWM misses a large portion of the original trajectories. 
For the Unsteady Cylinder Flow dataset, LWM generates many obvious mismatches. Such drawbacks are also clearly reflected in the similarity measures. In comparison, our pFGW method performs quite well in robustly tracking features on datasets with low time resolution. #### 6.7.3 Runtime Analysis We perform runtime analysis for all three approaches (GFT, LWM, and pFGW) under the fine-tuned parameter configurations, as shown in Table II. For the Unsteady Cylinder Flow, Vortex Street, Ionization Front, and Isabel datasets, pFGW achieves runtimes similar to LWM. GFT is generally slower than the other two methods; it runs extremely slowly on Isabel, for which the runtime is not reported. pFGW is the slowest of the three for the Heated Cylinder dataset. Overall, all three methods take \(\leq 0.01\) seconds to compute the feature matching between a pair of adjacent time steps. We do not include the runtime for merge tree generation as it is part of the data preprocessing. However, GFT is expected to spend more time on merge tree generation than LWM and pFGW, since it requires some extra information on merge tree segmentation. ## 7 Probabilistic Tracking Graphs A direct consequence of our pFGW method is that it enables richer representations of tracking graphs, referred to as _probabilistic tracking graphs_. Partial optimal transport provides a probabilistic coupling between features at adjacent time steps, which is then visualized via weighted tracks in these tracking graphs. We provide a visual demo of probabilistic feature tracking for several 2D time-varying datasets. We illustrate its visual interface using a synthetic dataset. As shown in Fig. 16, the synthetic dataset is constructed as a mixture of nine Gaussian functions: one negative Gaussian function stays fixed at the center, and eight Gaussian functions are positioned on a circle, four of which remain stationary, whereas the other four move clockwise around the center. 
We focus on tracking the local maxima across time. As shown in Fig. 16, the _graph view_ **(a)** visualizes a tracking graph whose feature tracks are equipped with probabilistic tracking information. The _track view_ **(b)** displays tracks across five consecutive time steps in 3D spacetime centered around the selected time step. The _data view_ **(c)** presents the scalar fields at the same five time steps. With multiple views, users can explore the probabilistic feature tracking results from global and local perspectives. The graph view **(a)** visualizes a tracking graph that captures the evolution of features across all time steps. Vertical _time bars_ are positioned along the x-axis to represent time steps in increasing order, whereas _tracks_ associated with individual features are laid out horizontally in a way that minimizes edge crossings. Nodes at the intersection of time bars and tracks represent features that appear, disappear, merge, or split. If feature \(i\) from time \(t\) is coupled with feature \(j\) from time \(t+1\) with a nonzero measure in the coupling matrix \(C\), an edge is drawn between these two features in the tracking graph, whose color and opacity encode the value of \(C(i,j)\) as indicated in the color bar. \(C(i,j)\) is a probability measure, where a higher value implies a higher probability of matching feature \(i\) with feature \(j\). Users can filter the tracks in the tracking graphs by scrolling the color bar, in order to explore the tracking graphs at different probability thresholds. In the tracking graph shown in Fig. 16 (a), when the four moving Gaussian functions coincide with the four stationary ones, their corresponding features merge together at time step 8. Subsequently, these merged features split at time step 10. The tracking graph depicts such events as probabilistic merges and splits. From time step 7 to 8, two features \(o_{1}\) and \(o_{2}\) merge into feature \(o\) with equal probability. 
Similarly, from time step 10 to 11, a single feature \(o^{\prime}\) (that corresponds to \(o\)) splits into two (\(o_{3}\) and \(o_{4}\)) with equal probability. These merging and splitting events are also encoded in the data view and the track view; see Fig. 16 (b) and (c). In this case, \(o_{2}\) at time step 7 corresponds to \(o_{4}\) at time step 11, and \(o\) at time step 8 corresponds to \(o^{\prime}\) at time step 10; however, \(o_{1}\) at time step 7 does not match \(o_{3}\) at time step 11, since there are ambiguities in matching due to symmetry.

\begin{table} \begin{tabular}{c|c|c c|c c} \hline Dataset & Method & \(S(A,B)\) & \(S(B,A)\) & \(S_{W}(A,B)\) & \(S_{W}(B,A)\) \\ \hline \multirow{3}{*}{Heated Cylinder} & GFT & 0.763 & **0.806** & 0.804 & **0.828** \\ & LWM & 0.642 & 0.714 & 0.663 & 0.672 \\ & pFGW & **0.821** & 0.782 & **0.827** & 0.806 \\ \hline \multirow{3}{*}{Unsteady Cylinder Flow} & GFT & 0.760 & 0.858 & 0.716 & 0.839 \\ & LWM & 0.229 & 0.403 & 0.307 & 0.446 \\ & pFGW & **0.907** & **0.990** & **0.957** & **0.991** \\ \hline \multirow{3}{*}{Ionization Front} & GFT & 0.314 & 0.452 & 0.228 & 0.419 \\ & LWM & 0.274 & 0.590 & 0.345 & 0.566 \\ & pFGW & **0.552** & **0.624** & **0.588** & **0.598** \\ \hline \end{tabular} \end{table} TABLE I: Similarity measures between a pair of tracking results with and without subsampling. For a fixed tracking method (pFGW, LWM, and GFT), \(A\) denotes the subtrajectories from the original tracking results restricted to subsampled time steps. \(B\) denotes the trajectories from the subsampled dataset.

We now visually demonstrate the probabilistic tracking graph for the \(\mathsf{Ionization}\)\(\mathsf{Front}\) dataset. In the example shown in Fig. 17, we set the probability threshold at 0.023 (main view) and 0.032 (inset view), respectively. To investigate the data of interest, users can select a specific time step \(t\) by clicking its corresponding time bar, which updates views **(a)**, **(b)**, and **(c)**. 
For the graph view **(a)**, the selected time bar is highlighted in red, with the previous two (at \(t-2\), \(t-1\)) and subsequent two (at \(t+1\), \(t+2\)) time bars colored in orange and blue, respectively. Using animation, we smoothly adjust the intervals between time steps with a fisheye technique, so that the focus area surrounding the selected time bar is magnified and areas away from the focus are compressed. For the track view **(b)**, we render five scalar fields centered at the selected time step and highlight the tracks among them in a 3D spacetime, while supporting zooming and rotation. For the data view **(c)**, we visualize the five 2D scalar fields side by side, where tracked features are highlighted in red. Furthermore, our visual demo allows users to explore tracks associated with specific features. As illustrated in Fig. 17, users can select a feature of interest (denoted by \(o\)), which sits at the intersection of four tracks \(l_{1}\), \(l_{2}\), \(l_{3}\), and \(l_{4}\); all of which are highlighted in red while maintaining their opacity. The track view **(b)** then displays these four tracks, whereas the data view **(c)** highlights the corresponding features (in magenta) along these tracks. In particular, two features, \(o_{1}\) and \(o_{2}\), at time step 69 are coupled with feature \(o\) at time step 70 with relatively high probabilities. By increasing the tracking probability threshold, shown in the box inset, feature \(o_{1}\) will stop its track at time step 69, whereas features \(o_{2}\) and \(o\) remain matched with each other. Our visual demo showcases such uncertainty in tracking. The visual demo is implemented using _JavaScript_ for the front-end, where the tracking graphs are visualized with _D3.js_ and the scalar fields are visualized using _WebGL_. The computational back-end is built with _Python_ and _Flask_. 
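The core edge rule of these probabilistic tracking graphs (an edge wherever the coupling \(C(i,j)\) is nonzero, filtered by a user-chosen probability threshold) can be sketched as follows; the coupling values here are hypothetical, not taken from any dataset above:

```python
def tracking_edges(C, threshold=0.0):
    """Weighted edges (i, j, C[i][j]) between features i at time t and
    features j at time t + 1 whose coupling mass exceeds the threshold."""
    return [(i, j, cij)
            for i, row in enumerate(C)
            for j, cij in enumerate(row)
            if cij > threshold]

# Hypothetical coupling between 2 features at time t and 3 at time t + 1;
# row 0 splits its mass evenly, modeling an ambiguous split event.
C = [[0.25, 0.25, 0.0],
     [0.0, 0.02, 0.48]]
print(tracking_edges(C, threshold=0.023))
# -> [(0, 0, 0.25), (0, 1, 0.25), (1, 2, 0.48)]; the weak 0.02 coupling
# is pruned, mirroring the demo's probability-threshold filter.
```

Raising the threshold prunes low-probability edges first, which is exactly how the demo trims uncertain tracks while keeping confident ones intact.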
## 8 Conclusion In this paper, we provide a flexible framework for tracking topological features in time-varying scalar fields. Our framework builds upon tools from topological data analysis (i.e., merge trees) and partial optimal transport. In particular, we model a merge tree as a measure network, and define a new partial fused Gromov-Wasserstein distance between a pair of merge trees. Such a distance gives rise to a partial matching between topological features in time-varying data, thus enabling flexible topology tracking for scientific simulations, as demonstrated by our extensive experiments. On the other hand, our framework is not without limitations. First, we focus on feature tracking using merge trees, that is, we aim to preserve _sublevel set_ relations between features (i.e., critical points) that are captured by merge trees. Other topological descriptors such as Reeb graphs and Morse complexes may capture different topological relations such as _level set_ or _gradient_ relations. We would like to explore topology tracking with partial optimal transport using other types of topological descriptors, which are left for future work. Second, we provide experimental justifications for parameter tuning; understanding parameter tuning from a theoretical standpoint seems elusive. For future work, given the efficiency of our implementation, we would like to perform experiments involving datasets from large-scale simulations. ## Acknowledgments This project was partially supported by DOE DE-SC0021015, NSF IIS 2145499, IIS 1910733, and DMS 2107808.
2305.09967
Variable Length Embeddings
In this work, we introduce a novel deep learning architecture, Variable Length Embeddings (VLEs), an autoregressive model that can produce a latent representation composed of an arbitrary number of tokens. As a proof of concept, we demonstrate the capabilities of VLEs on tasks that involve reconstruction and image decomposition. We evaluate our experiments on a mix of the iNaturalist and ImageNet datasets and find that VLEs achieve comparable reconstruction results to a state of the art VAE, using less than a tenth of the parameters.
Johnathan Chiu, Andi Gu, Matt Zhou
2023-05-17T05:59:53Z
http://arxiv.org/abs/2305.09967v1
# Variable Length Embeddings ###### Abstract In this work, we introduce a novel deep learning architecture, Variable Length Embeddings (VLEs), an autoregressive model that can produce a latent representation composed of an arbitrary number of tokens. As a proof of concept, we demonstrate the capabilities of VLEs on tasks that involve reconstruction and image decomposition. We evaluate our experiments on a mix of the iNaturalist [1] and ImageNet [2] datasets and find that VLEs achieve comparable reconstruction results to a state of the art VAE, using less than a tenth of the parameters. ## 1 Introduction We introduce a novel deep learning architecture, called Variable Length Embeddings (VLE). A VLE is an autoencoder that differs from traditional ones in one key aspect: whereas conventional autoencoders have a fixed embedding dimension, VLEs (as their name suggests) use a variable-length embedding dimension. Allowing the embedding dimension to vary is a natural idea: not all images are created equal. Images that contain more complex semantics should naturally require more resources to represent efficiently. Viewed through the lens of information theory, this is a well-known idea: we ought to use fewer resources to represent 'easy' samples. This idea is formalized by Shannon coding, an efficient compression scheme that maps samples \(x\) to code words with length \(l=\lceil-\log p(x)\rceil\). Here, the difficulty of a sample is measured by \(-\log p(x)\), which means that we take frequently occurring samples to be easy. VLEs borrow from this idea, but diverge from the information-theoretic approach in terms of what is used to measure complexity. Rather than taking a bottom-up approach (which would involve modeling the density \(p(x)\)), we take a top-down approach. VLEs attempt to decompose the image into a sequence of semantically distinct objects, ordered by contextual significance. 
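For concreteness, the Shannon code-length rule mentioned above can be evaluated numerically (taking the logarithm base 2, so lengths are in bits):

```python
import math

def shannon_code_length(p):
    """Code-word length l = ceil(-log2 p(x)) for a sample of probability p."""
    return math.ceil(-math.log2(p))

# Frequent ("easy") samples get short codes; rare samples get long ones.
for p in (0.5, 0.25, 0.01):
    print(p, shannon_code_length(p))  # 0.5 -> 1, 0.25 -> 2, 0.01 -> 7
```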
For images with sparse or simple content, the image should be accurately modeled with just a few tokens, whereas images with complex scenes may take many more. This iterative approach to reconstruction is again a well-known idea in different fields. In physics and applied mathematics, one often tries to represent an unknown function \(f\) as a power series expansion (known as a perturbative expansion): \[f=f_{0}+\epsilon f_{1}+\epsilon^{2}f_{2}+\ldots, \tag{1}\] where \(\epsilon\ll 1\) is some small parameter. Put simply, \(f_{0}\) represents the coarsest approximation to \(f\). The addition of \(\epsilon f_{1}\) corrects the coarse approximation \(f_{0}\) by taking into account some additional details in \(f\), then \(\epsilon^{2}f_{2}\) further improves this approximation, and so on. Similarly, in signal processing, one is often interested in finding a representation of a signal in terms of a weighted sum \[f=\alpha_{0}f_{0}+\alpha_{1}f_{1}+\alpha_{2}f_{2}+\ldots. \tag{2}\] The matching pursuit algorithm [3] is an effective, albeit suboptimal, method that finds this representation by iteratively 'matching' \(f_{0},f_{1},\ldots\) to the signal, at each iteration taking whichever function \(f_{i}\) has the largest inner product with the remaining unmodeled components of the input signal. Related works. As hinted at above, a variable length encoding is a natural idea for a number of tasks, one of which is compression. Indeed, Toderici et al. [4] developed this idea using a long short-term memory (LSTM) model to generate variable length codes to represent a given image. However, their focus is on achieving a maximal possible compression rate (i.e., faithfully representing an image using as few tokens as possible). In this work, we focus on using the variable-length approach as a means to find useful (i.e., interesting or interpretable) decompositions of the image. 
Other works such as DRAW [5] or diffusion models [6] approach generative modeling by iteratively adding detail at different scales. We find that because these models focus purely on generative modeling, although they often produce interpretable results, they are unable to generalize to other downstream tasks, such as classification or captioning. In contrast, our aim when designing VLE was that tokens could be used for any number of downstream tasks, including generative modeling, classification, or image captioning in future work. ## 2 Methodology ### Autoregressive Encoding Inspired by the decompositions of Eqs. (1) and (2), we aim to represent images as a series of tokens, with each token being some vector in \(\mathbb{R}^{d}\). The core of our idea is to use an autoregressive approach to generating these tokens; we propose a simple formulation for this. We have a single encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\), and at each iteration the encoder input is equal to the remaining portion of the image that is unaccounted for by previous tokens. That is, the input at every iteration is the current residual (see Algorithm 1). For training purposes, we set a maximum number of tokens \(n_{max}\). ``` \(n\gets 1\) \(\hat{X}_{0}\gets 0\)\(\triangleright\) Initialize the reconstruction while \(n\leq n_{max}\) do \(z_{n}\leftarrow\mathcal{E}(X-\hat{X}_{n-1})\) \(\hat{X}_{n}\leftarrow\hat{X}_{n-1}+\mathcal{D}(z_{n})\) \(n\gets n+1\) endwhile ``` **Algorithm 1** VLE Autoregressive Loop One might naively define a loss to be the mean squared error between the final reconstruction \(\hat{X}_{n_{max}}\) and the input \(X\). However, limiting the number of tokens to \(n_{max}\) is somewhat artificial (it is necessary as a practical matter for training purposes)1, so it is rather unnatural to define a loss that depends on \(\hat{X}_{n_{max}}\) alone. 
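The loop of Algorithm 1 is easy to emulate. The following toy sketch uses simple stand-ins for the encoder and decoder (a real VLE uses neural networks); only the residual control flow mirrors the algorithm:

```python
def vle_pass(X, encode, decode, n_max):
    """Algorithm 1: each iteration encodes the current residual and adds
    its decoding to the running reconstruction. Returns all intermediate
    reconstructions and the per-iteration mean squared errors."""
    X_hat = [0.0] * len(X)  # initial reconstruction X_hat_0 = 0
    recons, errors = [], []
    for _ in range(n_max):
        residual = [x - xh for x, xh in zip(X, X_hat)]
        z = encode(residual)                      # token z_n
        X_hat = [xh + d for xh, d in zip(X_hat, decode(z))]
        recons.append(list(X_hat))
        errors.append(sum((x - xh) ** 2 for x, xh in zip(X, X_hat)) / len(X))
    return recons, errors

# Toy "image" (a flat vector) and a toy autoencoder that recovers half of
# the residual per step, so every token improves the reconstruction.
encode = lambda r: [0.5 * v for v in r]
decode = lambda z: z
recons, errors = vle_pass([4.0, -2.0], encode, decode, n_max=3)
print(recons[-1])                 # -> [3.5, -1.75]
print(sum(errors) / len(errors))  # -> 1.09375
```

Averaging these per-iteration errors gives exactly the loss of Eq. (3).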
Our true objective is not merely to find a good final reconstruction; it is that each subsequent token improves the reconstruction as much as possible. Therefore, we might consider a loss of the form \[\mathcal{L}=\frac{1}{n_{max}}\sum_{n=1}^{n_{max}}MSE(X,\hat{X}_{n}). \tag{3}\] Footnote 1: Future tasks could include determining the number of tokens needed per image through a metric or a learnable method. This penalizes poor intermediate reconstructions, which encourages each token to play a significant role in modeling the image. We will term models trained with Algorithm 1 and the loss in Eq. (3) 'vanilla VLEs', in contrast to a modified variant that we discuss in Section 2.2. Given the structure of Algorithm 1, it may seem reasonable to use pretrained models for the encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\). However, we find that such autoencoders do not generalize well without further training on \(n_{max}>1\) tokens (see Fig. 1). This is likely due to the fact that for later tokens \(n\geq 2\), the encoder-decoder pair is being run on out-of-distribution inputs. ### Semantics vs. Pixels Losses of the form of Eq. (3), by design, encourage the model to simply match the _pixels_ of the input image as closely as possible. Indeed, models trained with this loss perform very well on reconstruction (for details see Section 3.3 and Appendix B). In fact, one can test a VLE model's dependence on pixel values by finding the number of tokens required to model an image to a given error, as a function of the entropy in the image's pixel distribution. The positive correlation between the two for vanilla VLEs (see Fig. 2) indicates that these models have a strong dependence on the complexity of the pixel distribution, rather than the semantic content of the image. Although this dependence on the pixel distribution may be desirable in certain cases, we prefer a reconstruction that matches the _content_ of the image as closely as possible. 
In fact, this goal has led to the development of perceptual losses such as LPIPS [7]. Unsurprisingly, we find that if we simply minimize Eq. (3), although we get faithful reconstructions, intermediate tokens often do not represent semantically distinct objects, and the model instead learns fairly elementary decompositions of the input (see Appendix B). For instance, a fairly typical mode was for early tokens to represent low frequency data in the image, and later tokens to contain higher frequency data. Although this may be interesting in its own right, we prefer a model that is able to model semantically different objects with each token, since representations of an image that use different tokens for different objects are more: * useful for downstream tasks, such as classification or caption generation, * interpretable, * and amenable to generative modeling.

Figure 1: Applying Algorithm 1 with a trained VLE and a pretrained VAE, with an identical token dimension. The violin plot shows the distribution of mean squared error as a function of the number of tokens used to model the image. This evaluation is done on the iNaturalist and ImageNet validation sets. Note the good generalization of the vanilla VLE: we only train the VLE with 4 tokens, yet the mean modeling error is still strictly decreasing past 4 tokens.

We attempt to remedy this by imposing a loss which encourages each token to be distinct from all other tokens. This stems from the observation that semantically distinct objects are typically spatially localized, whereas simpler decompositions (such as a frequency-space decomposition) often reconstruct the image globally. We represent this distinctness loss as \[\mathcal{L}=\exp\Bigl{(}-MSE(\mathcal{D}(z_{n}),\hat{X}_{n-1})\Bigr{)}. 
\tag{4}\] This loss is minimized when the intermediate reconstruction on token \(n\) is maximally different from the sum of the previous \(n-1\) reconstructions.2 However, imposing this loss alone pushes the model to decompose the image into a color scheme representation (where different color channels are modeled for each token). In light of this, we move one step further and introduce a 'mask' component which we output from the initial layers of the encoder. In doing so, we provide a guidance mechanism to the encoder detailing an area/region of the image to encode. Rather than imposing the loss in Eq. (4) on the reconstructions, we impose it on the _mask_ instead: \[\mathcal{L}_{mask,n}=\exp\Bigl{(}-MSE(\tilde{M}_{n},\hat{M}_{n-1})\Bigr{)}. \tag{5}\] Footnote 2: In our initial approach we applied Binary Cross Entropy (BCE) and found this loss to be numerically unstable, as it can diverge. The loss in Eq. (4) has the inherent benefit of being bounded above by 1.0, which is attained only if each reconstruction is equivalent to the sum of the previous reconstructions. We additionally modify the loss represented in Eq. (3) to incorporate the information of the mask. We define this as \[\mathcal{L}_{rec,n}=\frac{1}{D}||\tilde{M}_{n}\odot(X-\hat{X}_{n})||_{2}^{2}, \tag{6}\] where \(\tilde{M}_{n}\) represents the decoded mask associated with the \(n\)th token, \(\odot\) is the Hadamard product, and \(D\) is the dimension of the matrix. Combining Eq. (5) and Eq. (6) results in our final loss, defined as \[\mathcal{L}=\frac{1}{n_{max}}\sum_{n=1}^{n_{max}}{(\mathcal{L}_{rec,n}+ \mathcal{L}_{mask,n})}. \tag{7}\]

Figure 2: We show the number of tokens required to reach a certain MSE threshold against the internal Shannon entropy of an image (computed using Scikit-Image’s Shannon Entropy function [8]). The plot shows an overall upward trend, meaning that as the image entropy increases, the number of tokens required to reconstruct the image increases as well. 
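A minimal numerical sketch of the losses in Eqs. (5) and (6), using flat lists in place of image and mask tensors; all values below are illustrative:

```python
import math

def mse(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def mask_loss(M_new, M_prev):
    """Eq. (5): exp(-MSE) between the new mask and the accumulated mask;
    small when the masks differ, pushing each token toward a new region."""
    return math.exp(-mse(M_new, M_prev))

def masked_rec_loss(M_new, X, X_hat):
    """Eq. (6): squared error between X and X_hat, restricted to the
    masked region via an elementwise (Hadamard) product."""
    return sum((m * (x - xh)) ** 2 for m, x, xh in zip(M_new, X, X_hat)) / len(X)

# Illustrative values for a single iteration n.
X, X_hat = [1.0, 0.0, 2.0], [0.5, 0.0, 2.0]
M_new, M_prev = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]  # nearly disjoint masks
loss_n = masked_rec_loss(M_new, X, X_hat) + mask_loss(M_new, M_prev)
print(loss_n)
```

Averaging `loss_n` over the \(n_{max}\) iterations gives the combined objective of Eq. (7).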
In words, for each decoded mask and reconstruction, we only impose the MSE loss on regions of the image that are included in the mask \(\tilde{M}_{n}\). Although imposing this loss slightly impairs the reconstruction performance, it helps us achieve a better balance between our twin goals of good image reconstruction and finding interpretable tokens (see Fig. 3). The modified algorithm for our model is defined in Algorithm 2, trained with the loss in Eq. (7) (\(\mathcal{S}\) refers to a model that produces a mask from the residual \(X-\hat{X}_{n-1}\)). We term this variant of VLE the 'masked VLE', in contrast to the 'vanilla VLE'. We remark that both models are trained in an unsupervised fashion (more specifically, self-supervised in the case of the masked VLE model). ``` \(n\gets 1\) \(\hat{X}_{0}\gets 0\)\(\triangleright\) Set the reconstruction to \(0\) \(\hat{M}_{0}\gets 0\)\(\triangleright\) Set the mask to \(0\) while \(n\leq n_{max}\) do \(\tilde{X}_{n},\tilde{M}_{n}\leftarrow\mathcal{S}(X-\hat{X}_{n-1})\)\(\triangleright\) Extract the mask to save for loss computation \(z_{n}\leftarrow\mathcal{E}(\tilde{X}_{n}\mid\tilde{M}_{n})\)\(\triangleright\) Encode the transformed image, conditioned on the mask \(X_{n}\leftarrow\mathcal{D}(z_{n})\) \(\hat{X}_{n}\leftarrow\hat{X}_{n-1}+X_{n}\) \(\hat{M}_{n}\leftarrow\hat{M}_{n-1}+\tilde{M}_{n}\) \(n\gets n+1\) endwhile ``` **Algorithm 2** VLE Autoregressive Loop with Masks Figure 3: Intermediate reconstructions (top row) and corresponding masks (bottom row). The final column is the source (top image) and reconstruction (bottom image). We emphasize here that these masks are produced using self-supervision. See more results in Appendix A. ## 3 Experiments ### Model Architecture We experiment with a very simple autoencoder structure, which we implement by decomposing a U-Net model and removing the skip connections. Each layer consists of \(n\) residual blocks followed by a downsampling convolution layer. 
The dimension of the latent space is 64x smaller than the original image (same as the VAE we benchmark against). To iterate quickly on the experiments, we use a model with a small number of trainable parameters: \(\sim\)7M parameters for both the vanilla VLE and the mask VLE. For mask VLEs we include a "precursor" model, defined in Algorithm 2 as \(\mathcal{S}\). The purpose of this model is to identify a focus object for the encoder to compress, and it is just a residual block. Additionally, we add a small convolutional LSTM after the precursor model of the encoder to provide a memory state for the encoder. This LSTM layer uses the hidden state to guide what objects to look at in relation to the previously seen objects in the image. For a schematic representation of our architecture, see Fig. 4. ### Training We experimented with a number of different VLE variants, which comprised small modifications to loss functions or model architecture (keeping intact the core idea of a variable length embedding). Notably, we find that different variants corresponded to different decompositions of the image. While some variants performed something as simple as a naive color decomposition, remarkably, others were able to identify meaningfully distinct objects in an image in an unsupervised manner. During training, the minimum number of tokens is one in the vanilla case (which simply corresponds to a generic, fixed dimension autoencoder). In the mask case, we start with two tokens. The number of tokens used in a given iteration is chosen somewhat randomly. We sample the number of tokens from a folded-normal distribution whose mean value increases linearly with the number of iterations. This means that at early iterations the model sees a smaller number of tokens, and at later iterations, the opposite. The maximum of five tokens was determined by the capacity of the GPUs' memory. 
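The token-count schedule might be sketched as follows; only the folded-normal sampling with a linearly increasing mean comes from the text, and the constants (range \([2,5]\), \(\sigma=1\)) are illustrative:

```python
import random

def sample_num_tokens(step, total_steps, n_min=2, n_max=5, sigma=1.0):
    """Draw the token count for one training iteration from a folded
    normal whose mean grows linearly from n_min to n_max over training,
    clamped to the allowed range."""
    mean = n_min + (n_max - n_min) * step / total_steps
    n = abs(random.gauss(mean, sigma))  # folding: take the absolute value
    return max(n_min, min(n_max, round(n)))

random.seed(0)
counts_early = [sample_num_tokens(0, 100) for _ in range(1000)]
counts_late = [sample_num_tokens(100, 100) for _ in range(1000)]
# Early iterations tend to use fewer tokens than late iterations.
print(sum(counts_early) / 1000, sum(counts_late) / 1000)
```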
We do observe that the model generalizes well when trained jointly and that additional tokens used during inference can only boost reconstruction performance (as observed in Fig. 1).

Figure 4: One iteration of Algorithm 2.

We train on the LAION dataset [9]. The aspect ratios of the images vary, but we reshape them to have a fixed \(512\times 512\) resolution. We parallelize each training run across 4 Nvidia A100 80GB SXM4 GPUs with a batch size of 16 (4 images/GPU). We train every model used in this paper to 120000 gradient updates (30000 steps/GPU). ### Evaluation We evaluate the models on their reconstruction performance for 10000 images from a combination of the iNaturalist [1] and ImageNet [2] datasets and benchmark against the VAE used in Latent Diffusion Models [10]. We also note here again, as shown in Table 1, that the mask VLE does perform worse in reconstruction as a consequence of outputting more interpretable tokens compared to vanilla VLEs. ## 4 Conclusion Motivated by an information-theoretic approach to compression and representation, we introduce a way to allow the latent dimension of an autoencoder to vary. We do this by representing images as (variable length) sequences of tokens, terming this technique Variable Length Embeddings. However, rather than purely aiming to compress the image as much as possible, we include an inductive bias that encourages the model to learn interpretable tokens via the masking mechanism discussed in Section 2.2. We find that the model is then able to perform well on both tasks: it achieves a competitive reconstruction error compared to other autoencoders and finds decompositions of the image into human interpretable masks. 
Although we already found our models to output good masks, we believe their quality could be improved further by * adding image segmentation or saliency priors/semi-supervised losses so that the model can better understand the objects in the image, or * adding other modalities (such as image captioning) during training so the model better understands the contextual objects in an image. \begin{table} \begin{tabular}{|c||c c c|} \hline Token number & Vanilla VLE & Mask VLE & VAE \\ \hline 1 & 0.0119/0.645 & 0.2162/0.368 & 0.0113/0.654 \\ 2 & 0.0073/0.742 & 0.1355/0.272 & - \\ 3 & 0.0053/0.804 & 0.1156/0.285 & - \\ 4 & 0.0045/0.829 & 0.0098/0.656 & - \\ 5 & 0.0041/0.841 & 0.0115/0.640 & - \\ 6 & 0.0038/0.849 & 0.0099/0.655 & - \\ 7 & 0.0036/0.854 & 0.0103/0.654 & - \\ \hline \end{tabular} \end{table} Table 1: Mean squared error and structural similarity index measure (SSIM) [11] for two variants of VLE (vanilla and mask), compared to VAE. The token dimension is downsampled by a factor of 64 from input dimension for both VLE and VAE. The VAE architecture has \(\sim\)80M parameters, while the VLE architecture is described in Section 3.1 and has \(\sim\)7M parameters. We emphasize that we are able to achieve similar results to a pretrained VAE with less than a tenth of the parameters as seen in the first row between the vanilla VLE and the VAE. Extension to Generative Modeling. We believe VLEs have the potential to address a number of shortcomings of diffusion-based approaches, particularly the failure of diffusion models to accurately place objects in a user-specified quadrant. Since each token in a VLE fully characterizes particular objects in an image, tokens must contain information about their spatial location within the frame; this spatial information is much more natural to manipulate compared to diffusion-based approaches, for which it is not so clear where spatial information of individual objects is encoded. 
A first experiment in this direction would be to use a VLE model to encode all objects in a provided image and randomize the locations of the objects without modifying their characteristics. Following this, one might extend VLEs into an end-to-end text-to-image generative model. Text prompts to the model would be embedded as a sequence of tokens, with each token having a direct connection to some subject in the prompt. For instance, the prompt "a penguin on the right side standing on a bed of ice with the Sun on the upper left side" would contain at least 3 key tokens: the penguin on the right, the bed of ice below, and the Sun on the upper-left side. Acknowledgements. We thank key members of the Runway team that supported and provided guidance in our research: Rohan Agarwal, Jonathan Granskog, Deepti Ghadiyaram, Patrick Esser, Anastasis Germanidis. We additionally thank our friend Joe Zou for his valuable feedback on early drafts of this manuscript.
2303.04115
Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection
Out-of-distribution (OOD) inputs can compromise the performance and safety of real world machine learning systems. While many methods exist for OOD detection and work well on small scale datasets with lower resolution and few classes, few methods have been developed for large-scale OOD detection. Existing large-scale methods generally depend on maximum classification probability, such as the state-of-the-art grouped softmax method. In this work, we develop a novel approach that calculates the probability of the predicted class label based on label distributions learned during the training process. Our method performs better than current state-of-the-art methods with only a negligible increase in compute cost. We evaluate our method against contemporary methods across $14$ datasets and achieve a statistically significant improvement with respect to AUROC (84.2 vs 82.4) and AUPR (96.2 vs 93.7).
Hong Yang, William Gebhardt, Alexander G. Ororbia, Travis Desell
2023-03-07T18:28:39Z
http://arxiv.org/abs/2303.04115v2
# Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection ###### Abstract Out-of-distribution (OOD) inputs can compromise the performance and safety of real world machine learning systems. While many methods exist for OOD detection and work well on small scale datasets with lower resolution and few classes, few methods have been developed for large-scale OOD detection. Existing large-scale methods generally depend on maximum classification probability, such as the state-of-the-art grouped softmax method. In this work, we develop a novel approach that calculates the probability of the predicted class label based on label distributions learned during the training process. Our method performs better than current state-of-the-art methods with only a negligible increase in compute cost. We evaluate our method against contemporary methods across \(14\) datasets and achieve a statistically significant improvement with respect to AUROC (\(84.2\) vs \(82.4\)) and AUPR (\(96.2\) vs \(93.7\)). ## 1 Introduction Out of distribution (OOD) detection is a critical tool for the development of safe and reliable autonomous and semi-autonomous machine learning (ML) systems. Identification of anomalous inputs allows intelligent systems to initiate a conservative fallback policy or to defer to human judgment, reducing the risks to all stakeholders. Hendrycks [14] noted that OOD detection is important for safety critical systems, such as self-driving cars and detecting novel microorganisms. As a result, a plethora of literature has emerged over the years for addressing the problem of OOD detection [2, 15, 16, 22, 25]. However, these efforts focus on small and lower resolution datasets. Although recent work has attempted to address the issue of large-scale OOD detection [13, 18] on large datasets at high resolution [8, 33], emerging safety critical systems operate on a much larger dataset at much higher resolution: the real world.
Unlike small datasets, real world data often contains hundreds or thousands of classes at a very high resolution. Use cases for OOD detection range from autonomous driving [3] to medical imaging [29], applications that contain a significant number of potential classes. However, the reliability of the typical baseline OOD detection method [15] decreases rapidly as the number of classes increases, resulting in a change from \(17.3\)% false positive rate at \(95\)% true positive rate (FPR95) with \(50\) classes to a \(76.9\)% FPR95 when using \(1000\) classes [18]. Our research is motivated by the need to improve OOD detection methods for use in safety critical applications. We focus on the theoretical concept of Bayesian conditional probability for improving the decision boundary between in-distribution and OOD data. If we consider softmax classification as estimating the class \(y\) given data \(x\) as \(P(y|x)\), our method considers modeling \(P(x|\hat{y})\), the probability of the data given the predicted class itself. Doing so will allow us to further consider the marginal probability \(P(\hat{y})\) of the predicted class existing within our in-distribution data, leading us to develop an efficient and effective method for scoring OOD data. We estimate \(P(\hat{y})\) by leveraging the behavioural characteristics of the exponential linear unit (ELU) [6] activation function and the process of batch normalization [19]. Notably, we find that combining the ELU function with batch normalization results in sparse, large values for an embedding \(z\) prior to the final classification layer for in-distribution data. A large expected embedding value \(\mathbb{E}(z|\hat{y})\) functions as a proxy for \(P(\hat{y})\), such that larger expected embedding values indicate a \(\hat{y}\) that has been observed repeatedly during training, see Figure 3.
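As a toy illustration of this sparsity effect (a NumPy sketch with made-up layer sizes, not the paper's actual embedder), an ELU layer followed by batch normalization yields embeddings whose median lies below the zero mean, so most entries are negative while a minority are large positives:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, saturating exponential for negatives.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def batch_norm(x, eps=1e-5):
    # Normalize each embedding dimension to zero mean / unit variance.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
features = rng.normal(size=(512, 64))            # toy backbone outputs for a batch
weights = rng.normal(scale=0.1, size=(64, 32))   # toy linear layer
z = batch_norm(elu(features @ weights))          # embedding z

# ELU's right-skewed output pushes the batch-norm mean above the median,
# so most entries of z sit below zero with a minority of large positives.
frac_below_zero = float((z < 0).mean())
```

The exact fraction depends on the pre-activation scale, but the asymmetry itself is a direct consequence of ELU's bounded-below, unbounded-above shape.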
We extensively evaluate our approach on models trained with the Imagenet \(1\)k dataset [8], leveraging the state-of-the-art pre-trained BiT-S models [21] as our pre-trained model backbone. We significantly reduce required computation and memory usage by pre-computing our backbone outputs and demonstrate that our efficient approach successfully reproduces the results from [18], the state of the art for the large-scale OOD detection task on Imagenet \(1\)K. Compared to the previous best method [18], we demonstrate that our method scores higher in terms of area under the receiver operating characteristic (AUROC) curve (\(84.2\) versus \(82.4\)) and area under the precision-recall (AUPR) curve (\(96.2\) versus \(93.7\)) on a larger, more diverse set of benchmarks. The results of this paper mark an important step towards leveraging the conditional probability of intermediate outputs for OOD detection. Below, we summarize this study's **key results and contributions:**

* We propose a novel conditional probability-based scoring method, the Predicted Embedding Power Regression (PEPR) method and the Combined PEPR (C-PEPR) method, that performs better than current state-of-the-art by a statistically significant margin.
* We introduce a high variance OOD detection method that benefits from ensembling. Contemporary methods tend to exhibit extreme stability across runs for all measured performance metrics. In contrast, our approach's stochasticity results in ensembling benefits.
* We reproduce contemporary methods across a larger group of benchmarks and demonstrate that our computational method is more energy and compute efficient, reducing the training step time by nearly \(80\)%. Unlike previous papers, our experiments measure the standard deviation across multiple runs and establish a statistically significant improvement over the state of the art.
## 2 Related Work **Multi-class OOD Detection with Pre-trained Models:** A common baseline for OOD detection was historically established by Hendrycks and Gimpel [15], which specifically used the maximum softmax probability. Further efforts have attempted to improve OOD estimation by using the ODIN score [25], deep ensembles [22], a Mahalanobis distance-based confidence score [24], the Energy score [26], as well as the Minimum Other Score [18]. Note that these methods do not use any OOD data for fine-tuning or training. **Multi-class OOD Detection with Model Fine-tuning:** An alternative research direction is to leverage additional data from outside of the distribution in order to regularize the model [2, 11, 28]. In this setup, additional/auxiliary data may be realistic images [16] or synthetic images generated by generative adversarial networks (GANs) [23]. These extra images are used in various ways, such as to regularize the probabilities back to a uniform distribution [23] or as "background" class samples [30]. Note that constructing additional OOD data assumes a distribution for such data points. This means that the additional OOD data may not be similar to other OOD data samples that the model would encounter in the wild. **Large Scale Multi-class OOD Detection:** This line of work focuses on datasets with a large number of classes, most commonly found in Imagenet \(1\)k. [34] utilized half of Imagenet \(1\)k as in-distribution data and the other half as out-of-distribution data. They also used the Places-\(434\) dataset and evaluated a plethora of different approaches, including KL matching and MSP. [18] introduced a grouped cross-entropy to generate an implicit inter-group background class, which was demonstrated to improve system performance. [13] further increased the number of classes by using data from Imagenet \(21\)k.
**Hierarchical Classification:** Hierarchical structure can provide additional label information, which can facilitate efficient inference [9], improved generalization accuracy [7], and better object detection [32]. Some efforts have made use of a label tree structure when a taxonomy is unavailable [7, 9]. Many studies explore the benefits and importance of basic hierarchical structures for various classification tasks [12, 17, 36]. **Bayesian Randomised MAP Sampling:** This line of related work exploits the fact that adding a regularisation term to a loss function returns a maximum a posteriori (MAP) parameter estimate, i.e., a point estimate of the Bayesian posterior. Repeating this calculation produces a distribution of MAP solutions that mimics that of the true posterior. This allows for efficient sampling of high-dimensional posteriors [4]. Some methods allow for sampling of the posterior but fail to recover the true posterior itself [27]. In contrast, other methods require significant computational resources for recovering the true posterior [1]. Work done in [31] provided a suitable compromise between the accuracy of the posterior and the increase in computational cost. ## 3 Methodology ### Preliminaries We consider a training dataset drawn i.i.d. from the in-distribution \(P_{X}\), with label space \(Y=1,2,\ldots,C\). For the OOD detection problem, we train a classifier \(F(x)\) on the in-distribution \(P_{X}\), and evaluate it on samples that are drawn from a different (outside) distribution \(Q_{X}\). An OOD detector \(G(x)\) is a binary classifier defined as: \[G(\mathbf{x})=\begin{cases}1&\text{if }S(\mathbf{x})\geq\gamma\quad\text{// ``in''}\\ 0&\text{if }S(\mathbf{x})<\gamma\quad\text{// ``out''}\end{cases} \tag{1}\] where \(S(x)\) is a scoring function and \(\gamma\) is a threshold, determined by the target practical application of the OOD detector (different applications would have different precision and recall targets).
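Equation 1 amounts to thresholding a scoring function; a minimal sketch (the scores and the threshold-selection rule below are toy illustrations, not values from the paper):

```python
import numpy as np

def ood_detector(scores, gamma):
    """Binary OOD detector G(x) of Equation 1: 1 = "in", 0 = "out"."""
    return (np.asarray(scores, dtype=float) >= gamma).astype(int)

# One common way to pick gamma: accept 95% of in-distribution scores
# (the operating point behind the FPR95 metric).
in_scores = np.array([0.9, 0.8, 0.7, 0.95, 0.85])
gamma = np.quantile(in_scores, 0.05)           # 0.72 for these toy scores
decisions = ood_detector([0.92, 0.10], gamma)  # high score "in", low score "out"
```

Any scoring function \(S(x)\), including the PEPR score developed later, can be plugged into this same decision rule.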
### KL Matching and Class Similarity KL matching [13] provides a useful intuition for solving large scale OOD detection. As the number of classes/categories increases, the similarity between classes may also increase, leading to reduced confidence in the classifier's predictions. This means that the performance of the Maximum Softmax Probability (MSP) [15] method for OOD detection degrades with a greater number of classes. KL matching attempts to solve the class size problem by generating expected probability distributions around each category, a.k.a. posterior distribution templates, expecting in-distribution image patterns to more closely match the posterior distribution templates. In effect, it measures \(P(\hat{y})\), the probability of the class existing in the in-distribution dataset. However, KL matching has a major disadvantage compared to other methods; it needs to store the posterior distribution templates and use them for comparison. When performing OOD detection, each image's predicted class distribution must be compared against each distribution template in order to find the minimum KL divergence value. This means that the computational cost scales with the square of the number of classes. Although this is not problematic with \(1000\) classes, it quickly becomes prohibitive when there are tens of thousands of classes or more. ### Predicted Embedding Power Regression In this work, we use the intuition behind KL matching to formulate a new conditional probability test that does not rely on an array of posterior distribution templates. Concretely, we consider the positive expectation of a model's selected intermediate layer outputs conditioned on the predicted softmax probability distribution. Note that this test is crucially built on two key theoretical notions.
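Before detailing those notions, the KL matching score described above can be made concrete (a toy NumPy illustration with made-up templates, not the implementation of [13]); scoring one image touches all \(C\) templates of dimension \(C\), hence the quadratic cost:

```python
import numpy as np

def kl_matching_score(p, templates, eps=1e-12):
    """Score = negative minimum KL(p || template_c) over all class templates.

    Higher means more in-distribution. For C classes this compares against
    C templates of dimension C per image, hence the O(C^2) cost."""
    p = np.asarray(p, dtype=float) + eps
    t = np.asarray(templates, dtype=float) + eps
    kl = np.sum(p * (np.log(p) - np.log(t)), axis=1)  # KL(p || each template)
    return float(-kl.min())

templates = np.array([[0.8, 0.1, 0.1],   # toy expected distribution per class
                      [0.1, 0.8, 0.1],
                      [0.1, 0.1, 0.8]])
confident = kl_matching_score([0.75, 0.15, 0.10], templates)  # matches a template
diffuse = kl_matching_score([0.34, 0.33, 0.33], templates)    # matches none
```

A confident, template-like prediction scores higher (smaller minimum KL) than a diffuse one, which is the behaviour KL matching relies on.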
First, if a selected intermediate layer uses a linear rectifier based (ReLU-like) activation function followed by a batch normalization (batch norm) operation, that layer's output can be viewed as an embedding \(z\). A negative value in this layer's activity represents the absence of a feature and a positive value represents its presence. We observe empirically that such a layer, which precedes the final softmax classification output layer, exhibits a tail-heavy behaviour, with a large proportion of values below zero (the batch norm mean) and a minority of values significantly above zero. Figure 2 displays the distribution of embedding values of such a layer after model training. This means that, for any given image, we would expect to observe a minority consisting of very large positive values and a majority consisting of negative values in \(z\). Second, an additional model, which we will refer to as the regressor \(\hat{z}=R(\hat{y})\), will learn to predict the expected value of the embedding layer, i.e., \(\mathbb{E}(z|\hat{y})\), where the predicted class distribution is \(F(x)=\hat{y}\). When \(R(\hat{y})\) is trained using a mean squared error loss, the regressor should learn to predict mostly negative values and a few large positive values.

Figure 1: Overview of the PEPR model. The process consists of the following steps: **(1)** Learn to classify training images via the learnable embedding, **(2)** Estimate the embedding values conditioned on the predicted class probabilities, and **(3)** Define a threshold and use PEPR to calculate the score. No gradient flows between parts **(1)** and **(2)** and no OOD images are used during training. Note that the classifier can be any fully connected layer, depending on the number of classes. Definitions for the embedder and regressor can be seen in Listings 1 and 2.
For class distributions that are not present in the training data, we would expect \(R(\hat{y})\) to predict values close to zero, which is the true batch normalized mean for the embedding unconditioned on the class, i.e., \(\mathbb{E}(z)=0\). Combining the above two notions together yields our proposed OOD detection method, i.e., the predicted embedding power regression (PEPR) model. By using an intermediate layer between the model backbone and the softmax classification layer, PEPR can then focus on modeling a batch normalized embedding. We then train a nonlinear regression model to estimate the embedding values based on the softmax classification distribution. Note that we expect the average of the squared positive expected embedding values to be higher for in-distribution data than for OOD data. In effect, we recover an estimate of \(P(\hat{y})\) via the magnitude of \(\hat{z}\), which is learned via regressor \(R(\hat{y})\) from training data patterns. Note that our method only adds a small amount of computational overhead. The actual score computed using our regressor is formally: \[S_{\text{PEPR}}(x)=\frac{1}{n}\sum_{i=1}^{n}\left(\text{ReLU}\big{(}R(\hat{y})_{i }\big{)}\right)^{2} \tag{2}\] where \(n\) is the dimensionality of the embedding. An empirical sample of the PEPR score distribution is presented in Figure 3, which corroborates our hypothesis.

Figure 2: Distribution of embedding \(z\) for the OOD dataset arachnids compared to the in-distribution dataset of Imagenet 1K. There is a noticeable tail in the distribution of the in-distribution dataset. These distributions overlap significantly. Frequency is such that the area sums to one.

Figure 3: The distribution of \(S_{\text{PEPR}}(x)\) for the OOD data _arachnids_ compared to the Imagenet 1K in-distribution data. The regressor \(\hat{z}=R(\hat{y})\) predicts significantly higher values for \(\hat{z}\) for in-distribution patterns when compared to OOD ones.
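Equation 2 reduces to a few array operations; a minimal sketch (the \(\hat{z}\) vectors below are toy values, not model outputs):

```python
import numpy as np

def pepr_score(z_hat):
    """PEPR score (Eq. 2): mean of squared positive predicted-embedding values."""
    relu = np.maximum(np.asarray(z_hat, dtype=float), 0.0)
    return float(np.mean(relu ** 2))

# In-distribution-like prediction: a few large positives, mostly negatives.
s_in = pepr_score([3.0, -0.5, -0.8, 2.5, -0.3])
# OOD-like prediction: values near the unconditional embedding mean of zero.
s_ood = pepr_score([0.1, -0.1, 0.05, 0.0, -0.05])
```

The squared ReLU means only the large positive predictions contribute, so near-zero regressor outputs (the expected behaviour for unseen class distributions) yield a near-zero score.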
Finally, we further improve the PEPR model by also utilizing the actual embedding values. Specifically, we compute the actual embedding in-distribution score by calculating the mean of the squared embedding values, which we call the embedding power (EPOW). We then add this score, weighted by the coefficient parameter \(\psi\), to the PEPR score and refer to the final score as C-PEPR. Formally, this score is calculated as follows: \[S_{\text{C-PEPR}}(x)=S_{\text{PEPR}}(x)+\psi\frac{1}{n}\sum_{i=1}^{n}(z_{i})^{2} \tag{3}\] Note that, in this work, we set the factor \(\psi=0.01\). #### 3.3.1 Stability Our empirical results indicated that the PEPR and C-PEPR methods have a higher standard deviation than contemporary methods. Our methods tend to have a standard deviation of \(\approx\) 0.9 AUROC, while MSP [15] and Minimum Other Score (MOS) [18] have a standard deviation of less than \(0.1\) AUROC. We decided to evaluate a 10-regressor ensemble of C-PEPR (C-PEPR-10) and a 10-classifier ensemble of MOS (MOS-10) to investigate the stability and AUROC improvements of ensembling. #### 3.3.2 Grouped Labels In this paper, we train C-PEPR and PEPR using the grouped softmax approach presented in [18]. Experimental results indicated that PEPR did not work quite as well with the standard softmax setup (average AUROC 78.4 vs 84.2). Detailed results for PEPR using standard softmax will be provided in the supplementary material. We believe that this issue may be caused by the high level of sparsity in the softmax probability values. Another possible explanation for this issue is that the learned embeddings are better when using the grouped softmax approach. Either way, grouped softmax PEPR achieves state-of-the-art performance. #### 3.3.3 Bayesian Ensembling Notably, we integrated the anchored ensembling approach presented in [31].
This scheme functions almost identically to L2 regularization of model weights, except that the regularization target is the random initial weight values rather than zero (as done in traditional parameter regularization, which assumes a zero-mean Gaussian prior over parameters). By taking an ensemble of networks regularized in this manner, Pearce [31] demonstrated that one can emulate the desired behaviour of a Bayesian neural network (without the prohibitive cost). While initially we intended to utilize Bayesian behaviour for OOD, PEPR and C-PEPR often worked quite well with an ensemble size of one. However, whether or not the regressor \(R(\hat{y})\) is ensembled, our scoring models do require anchored regularization in order to achieve the best results presented in Section 4. Note that the anchored regularization scheme is only applied to our regressor \(R(\hat{y})\) and not to the model classifier, embedding component, or model backbone. Equation 4 describes the anchored mean squared error loss for the \(j\)th (regressor) model for batches of size \(N\) with weights \(\theta_{j}\) and initial weight values \(\theta_{\text{anc},j}\). \[\mathrm{Loss}_{j}=\frac{1}{N}\left\|\mathbf{z}-\hat{\mathbf{z}}_{j}\right\|_{ 2}^{2}+\frac{1}{N}\left\|\gamma\cdot\left(\boldsymbol{\theta}_{j}-\boldsymbol {\theta}_{\text{anc},j}\right)\right\|_{2}^{2} \tag{4}\] We note that \(\hat{z}_{j}\) is the predicted value of \(z\) from the \(j\)th regressor model \(R_{j}(\hat{y})\). We treat \(\gamma\) as a hyper-parameter and set it to \(0.03\) for all experiments unless otherwise specified. ## 4 Experiments We provide a Github repository to fully replicate the experiments. We also provide Colab notebooks to allow users without access to local compute resources to run the experiments for themselves. ### Datasets #### 4.1.1 In-Distribution Dataset We use Imagenet \(1\)k as our in-distribution dataset [8].
This dataset has been used for large-scale OOD experiments such as those conducted by [18] and [13], which allows us to properly compare methods. #### 4.1.2 Out-of-Distribution Datasets We used two sets of out-of-distribution (OOD) datasets. The first is the curated version of the Textures, SUN, Places, and iNaturalist benchmarks presented in [18]. The second is the Anomalous Species Dataset presented in [13]. **iNaturalist:** iNaturalist [35] contains \(859,000\) images of \(5,000\) species of plants and animals. [18] manually selected \(110\) plant classes not present in ImageNet-1k and randomly sampled \(10,000\) images from these \(110\) classes. All images were resized to have a maximum dimension of \(800\) pixels. **SUN:** SUN [37] is a scene database of \(397\) categories across \(130,519\) images with sizes larger than \(200\times 200\). We used the curated version presented by [18], which randomly selected \(10,000\) images from \(50\) concepts not in ImageNet 1k. **Textures:** Textures [5] consists of \(5,640\) images of textural patterns, with sizes ranging between \(300\times 300\) and \(640\times 640\). Huang [18] uses the full dataset. **Places:** Places365 [38] is a scene dataset that is similar to SUN. [18] resized all of the images to have a minimum dimension of \(512\). They randomly selected \(10,000\) images from \(50\) concepts not in ImageNet 1k. It is unclear why [18] curated Places365 while [13] used Places365 without curation as the OOD set for Imagenet 1k. **Anomalous Species Dataset:** The Species dataset [13] contains \(700,000\) images of species from [35] that do not overlap with Imagenet \(21\)k [33]. Due to the quantity of images, we limit each species group to \(12,800\) images, with the exception of the micro-organisms group, where we use only \(1,408\) images due to the lack of images.
### PEPR Embedder and Regressor Unlike other OOD detection methods, PEPR requires a learnable intermediate layer to generate embeddings \(z\) and a regression model to estimate the embedding conditioned on the predicted softmax probability, i.e., \(\hat{z}=R(\hat{y})\). Unless specified otherwise, the embedding model and regressor models are feedforward neural networks (FNNs): the embedding FNN is specified in Listing 1, while the regressor FNN is specified in Listing 2. Note that only the embedding model's outputs are used as input to the (softmax) classifier, while other methods use the backbone's outputs directly as inputs to the classifier. Furthermore, the regressor \(R(\hat{y})\) gradients must not flow to the embedding or classifier FNN modules. The interaction between these components is visualized in Figure 1.

Figure 4: Samples of in-distribution/OOD data (as in [18]).

Figure 5: Samples from OOD datasets, as in [13]. Images extracted from all of the species datasets.

### Experiment Setup **Pre-trained Backbone:** Similar to [18], we use the Google BiT-S-R101x1 model [21] with a depth of \(101\) and width factor of one. Pre-trained models facilitate the extraction of high-quality features with minimal time and energy consumption. We choose to fix the backbone of our system and only train the final layers for each method investigated. **Pre-Computed Backbone Outputs:** Due to the significant number of trials required for our experiments, we decided to significantly reduce energy consumption (and thus our computational carbon footprint) by pre-computing the backbone's outputs and then re-using these for downstream simulation. Specifically, we pre-computed over \(12\) million backbone output vectors for Imagenet \(1\)k, ensuring that each image contained multiple augmented backbone outputs.
On a TPUv2-8 at batch size \(512\), pre-computed backbone outputs require \(\approx 115\) milliseconds (ms) for one training step while computing the backbone outputs during training requires \(\approx 952\) ms for one training step (due to memory limits, we must use gradient accumulation when computing backbone values). This is an \(80\)% reduction in training time, which results in a significant reduction in electricity usage and thus greenhouse gas emissions. To ensure validity, we compute the backbone outputs directly at validation and test time, i.e., we run the model with input images during validation/testing. **Training Details:** All models are trained using the Adam optimizer [20] with a step size of \(0.0003\) until the 9th epoch, where the learning rate is decreased by \(40\)% (and again in the 10th epoch). We train for \(10\) epochs with batches of \(512\) samples and \(1000\) steps per epoch, i.e., a total of \(10\)k steps. We decided that the BIT Hyperrule [21] does not apply if we choose to freeze the backbone or use pre-computed backbone outputs. When pre-computing backbone outputs, all images are resized to \(512\times 512\) and randomly cropped to \(480\times 480\) (using a random horizontal flip). At test time, all images are resized to \(480\times 480\). At both pre-computing and test time, images are normalized as in [21]. We perform all experiments using TPUv2-\(8\)s on Google Colab. **Evaluation Metrics:** We measure the following metrics commonly used in OOD detection: (1) the false positive rate of OOD examples when the true positive rate of in-distribution examples is at \(95\)% (FPR95); (2) the area under the receiver operating characteristic curve (AUROC); and (3) the area under the precision-recall curve (AUPR). Note that we run each experiment \(10\) times and report the mean and standard deviation of the measurements.
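Of these metrics, FPR95 is the least standardized; a minimal sketch of how it can be computed from raw scores (toy Gaussian scores; AUROC and AUPR would typically come from a library such as scikit-learn):

```python
import numpy as np

def fpr_at_95_tpr(in_scores, ood_scores):
    """False positive rate on OOD data at the threshold where 95% of
    in-distribution samples are (correctly) accepted as "in"."""
    gamma = np.quantile(np.asarray(in_scores), 0.05)  # 95% of in-dist >= gamma
    return float(np.mean(np.asarray(ood_scores) >= gamma))

# Toy scores: in-distribution scores shifted above OOD scores.
rng = np.random.default_rng(0)
in_scores = rng.normal(2.0, 1.0, size=1000)
ood_scores = rng.normal(0.0, 1.0, size=1000)
fpr95 = fpr_at_95_tpr(in_scores, ood_scores)
```

With two unit-variance Gaussians separated by two standard deviations, as here, the FPR95 lands roughly in the 0.3 to 0.4 range, illustrating why a fixed 95% operating point can look pessimistic even when the score distributions are well separated.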
We note that the KL matching method described by [13] calculated the posterior distribution templates using the in-distribution dataset labels. This can be problematic if we consider an in-distribution dataset with one image per class. In such a case, the KL divergence of each image's predicted distribution versus the distribution templates would be minimal (as they would be the same values). To address this issue, we calculate the posterior distribution templates using the training data itself. We believe this to be an accurate representation of the KL matching method as [13] noted that the in-distribution dataset labels are not necessary for the KL matching method. For the above reasons, we recommend future researchers calculate KL matching without using the in-distribution test dataset labels.

* CPEPR: see Equation 3
* CPEPR-10: average of Equation 3 via \(10\) regressors
* EPOW: \(S(x)=\psi\frac{1}{n}\sum_{i}^{n}(z_{i})^{2}\)
* KLM: KL Matching [13]
* MLGT: Max Logit [13]
* MOS: Minimum Other Score [18]
* MOS-10: Average of MOS across \(10\) classifiers
* MSP: Maximum Softmax Probability [15]
* PEPR: see Equation 2
* PEPR-10: average of Equation 2 via \(10\) regressors

## 5 Results ### CPEPR versus Existing Methods: We summarize mean results with variance across runs in Table 1. We compare with competitive methods in the literature that also do not rely on auxiliary outlier data. We include methods tested on large datasets, including MSP [15], MOS [18], KL matching [13], and Max Logit [13]. These methods, except for MOS, are trained using a flat, non-grouped, softmax. MOS is trained using grouped softmax, which uses the 8-class groups of [18]. Desirably, C-PEPR outperforms all other methods in terms of AUROC and AUPR by at least \(1\) standard deviation. It does perform worse than MOS in terms of FPR95, but it should be noted that FPR95 measures the false positive rate at an arbitrary recall level.
The AUROC curve provides a better representation of the false positive rate at all recall levels. C-PEPR also maintains the same execution speed characteristics as MOS as it only needs to compute a few extra fully-connected layers. Using the embedding FNN resulted in a slightly lower validation classification accuracy compared with MOS (\(\approx 75.1\) versus \(\approx 77.4\)). However, this does not seem to negatively affect the OOD detection performance of the model. ### On the Bias-Variance Tradeoff: Compared with MOS, PEPR and CPEPR appear to benefit more from a \(10\)-fold ensemble. PEPR-10 is 1 standard deviation better than MOS-10, but PEPR is not statistically different from MOS, in terms of AUROC. We believe that this is due to the greater standard deviation across all metrics for PEPR and CPEPR. Compared with other OOD methods, our method appears to reduce bias at the expense of increased variance. This is much more noticeable when analyzing results for specific datasets. However, when we consider variance across datasets, PEPR and CPEPR have lower AUROC (11.0 vs 13.2) and AUPR (2.7 vs 5.8) standard deviation than MOS. This suggests that our method performs more consistently across datasets. Overall, this is an advantage that allows users to choose between efficiency and accuracy. ### On Issues with Benchmark Selection: We can clearly observe that OOD methods vary greatly in performance across different datasets. For example, in Table 2, KL Matching outperforms MOS by a significant margin for amphibians, fish, and mammals. If [18] had decided to include these three species benchmarks in their paper, MOS would perform worse than KL Matching in terms of AUROC (\(79.6\) versus \(80.0\)).
\begin{table} \begin{tabular}{|l|c c c|} \hline & AUROC & AUPR & FPR95 \\ Method & Mean \(\pm\sigma\) & Mean \(\pm\sigma\) & Mean \(\pm\sigma\) \\ \hline C-PEPR (ours) & **84.2**\(\pm\)0.9 & **96.2**\(\pm\)0.3 & 56.9 \(\pm\)1.3 \\ \hline C-PEPR-10 (ours) & **84.6**\(\pm\)0.8 & **96.3**\(\pm\)0.3 & 57.0 \(\pm\)1.1 \\ \hline EPOW & 74.9 \(\pm\)0.8 & 93.4 \(\pm\)0.2 & 75.7 \(\pm\)1.8 \\ \hline KLM & 78.9 \(\pm\)0.1 & 93.4 \(\pm\)0.0 & 70.0 \(\pm\)0.2 \\ \hline MLGT & 73.0 \(\pm\)0.1 & 92.7 \(\pm\)0.0 & 90.0 \(\pm\)0.1 \\ \hline MOS & 82.4 \(\pm\)0.1 & 93.7 \(\pm\)0.0 & **52.5**\(\pm\)0.2 \\ \hline MOS-10 & 82.6 \(\pm\)0.0 & 93.7 \(\pm\)0.0 & **52.1**\(\pm\)0.1 \\ \hline MSP & 78.3 \(\pm\)0.1 & 94.0 \(\pm\)0.0 & 77.7 \(\pm\)0.2 \\ \hline PEPR (ours) & 82.5 \(\pm\)0.9 & 95.8 \(\pm\)0.3 & 57.4 \(\pm\)1.0 \\ \hline PEPR-10 (ours) & 83.5 \(\pm\)0.8 & 96.1 \(\pm\)0.2 & 57.3 \(\pm\)0.9 \\ \hline \end{tabular} \end{table} Table 1: Summary of key statistics across runs. Mean and standard deviation (\(\sigma\)) reported across experimental runs. Values are calculated by first taking the mean metric across all datasets for a single experimental run, then calculating the mean and \(\sigma\) of the per-run metric across the \(10\) trials. Below is the legend for the method names.

If we evaluated our methods (PEPR & C-PEPR) using only the four benchmarks provided by [18], our AUROC performance would not exceed the state-of-the-art (\(81.2\) versus \(89.6\)). As such, evaluation of OOD detection methods may suffer from a definition problem. While researchers agree on the definition of a single out-of-distribution image, they may not agree on the relative weighting of out-of-distribution benchmarks. Should a researcher consider anomalous species [13] as one dataset, equal in weight to the Places benchmark?
We elected to treat anomalous species as multiple out-of-distribution datasets in order to follow the precedent set by [13], but we note that this more heavily weights the images in the anomalous species dataset. The choice of out-of-distribution datasets greatly affects the performance of OOD detection methods. This stands in contrast to other machine learning research areas, such as image classification, where improvements in one dataset often correlate with improvements in another. The issue of benchmark selection is not only limited to selecting the out-of-distribution datasets. Recent work by [10] showed that they have achieved effectively ideal AUROC on CIFAR10 versus CIFAR100. It is unclear, however, whether or not their approach would outperform contemporary large-scale OOD methods. In light of the above issues, we encourage further research into OOD benchmark design so that the community may progress towards a consensus on which benchmarks should be used (and how they are used) for evaluation. ### Poor Performance on the Textures Dataset: Detailed results for each dataset are presented in Table 2. CPEPR and PEPR perform very poorly (\(<\)60 AUROC) on the textures dataset, which may be explained by the poor performance of the EPOW method. This would suggest that PEPR is limited/hindered by EPOW performance; the usefulness of the estimate of the embeddings is affected by the very embeddings themselves. However, this hypothesis does not apply for the micro-organisms dataset, in which PEPR achieves more than \(80\) AUROC and EPOW achieves less than \(55\) AUROC. This phenomenon warrants further investigation by future researchers. ### On Efficient Evaluation and Reproduction: We compare our results for contemporary methods with those from [18] using their four datasets.
Our pre-computed backbone outputs setup (see Section 4.3) differs from previous setups but reduces training time by \(80\)%, which results in a significant reduction in electricity usage and thus greenhouse gas emissions. We see that our reproduction of the MOS method achieves an average AUROC across the four datasets (Textures, SUN, Places, iNaturalist) of \(89.6\) versus the \(90.1\) reported in the original paper [18]. We also observe a better average FPR95 of \(38.8\) versus \(40.0\) on the same four datasets for the MOS method. We believe that our experimental setup faithfully represents MOS and other contemporary methods while significantly reducing the carbon footprint via pre-computed backbone outputs and fewer required training steps. ## 6 Conclusion In this work, we proposed a novel OOD detection method, predicted embedding power regression (PEPR), motivated by the properties of intermediate neural layer embeddings. Our experimental results indicate that PEPR performs well for the case of large-scale OOD detection. We train PEPR and other contemporary methods using pre-computed outputs, significantly reducing compute costs. Our experiments across a wide array of datasets show that PEPR performs better than the state of the art in a statistically significant way. We hope that our study encourages further research in large-scale OOD detection and provides machine learning practitioners with new tools to improve artificial intelligence safety.
## Acknowledgements This material is based upon work supported by the United States National Science Foundation under grant #2225354.
\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline
Method & C-PEPR & C-PEPR-10 & EPOW & KLM & MLGT & MOS & MOS-10 & MSP & PEPR & PEPR-10 \\
\hline Dataset AUROC & & & & & & & & & & \\
\hline Places & \(85.9\pm 1.9\) & \(86.1\pm 1.9\) & \(83.7\pm 0.8\) & \(78.4\pm 0.1\) & \(78.2\pm 0.1\) & \(\mathbf{89.7\pm 0.1}\) & \(\mathbf{89.8\pm 0.1}\) & \(78.2\pm 0.1\) & \(83.9\pm 1.7\) & \(84.2\pm 1.7\) \\
\hline SUN & \(87.9\pm 2.0\) & \(88.1\pm 2.0\) & \(84.3\pm 0.8\) & \(81.5\pm 0.1\) & \(78.3\pm 0.1\) & \(\mathbf{92.5\pm 0.1}\) & \(\mathbf{92.6\pm 0.0}\) & \(80.2\pm 0.1\) & \(86.2\pm 1.9\) & \(86.5\pm 1.8\) \\
\hline Textures & \(56.0\pm 2.8\) & \(55.7\pm 2.4\) & \(50.0\pm 1.6\) & \(\mathbf{83.2\pm 0.1}\) & \(68.0\pm 0.1\) & \(78.7\pm 0.4\) & \(\mathbf{78.9\pm 0.1}\) & \(76.9\pm 0.1\) & \(57.8\pm 3.2\) & \(58.1\pm 2.4\) \\
\hline amphibians & \(\mathbf{72.2\pm 4.7}\) & \(72.1\pm 4.4\) & \(61.3\pm 2.7\) & \(72.1\pm 0.2\) & \(68.6\pm 0.1\) & \(59.1\pm 0.3\) & \(59.2\pm 0.1\) & \(\mathbf{75.0\pm 0.1}\) & \(69.9\pm 4.8\) & \(68.5\pm 4.9\) \\
\hline arachnids & \(\mathbf{82.0\pm 3.7}\) & \(\mathbf{83.2\pm 2.5}\) & \(76.2\pm 1.5\) & \(69.5\pm 0.1\) & \(59.0\pm 0.2\) & \(67.5\pm 0.3\) & \(67.7\pm 0.2\) & \(76.7\pm 0.1\) & \(78.7\pm 5.5\) & \(\mathbf{82.1\pm 3.4}\) \\
\hline fish & \(\mathbf{84.1\pm 2.4}\) & \(\mathbf{85.3\pm 2.2}\) & \(78.8\pm 1.7\) & \(79.5\pm 0.2\) & \(73.2\pm 0.1\) & \(72.9\pm 0.4\) & \(72.9\pm 0.2\) & \(78.4\pm 0.1\) & \(80.1\pm 2.4\) & \(83.2\pm 2.5\) \\
\hline fungi & \(\mathbf{96.7\pm 0.4}\) & \(\mathbf{96.7\pm 0.3}\) & \(86.6\pm 1.5\) & \(73.4\pm 0.2\) & \(67.0\pm 0.2\) & \(93.1\pm 0.1\) & \(93.4\pm 0.1\) & \(73.4\pm 0.2\) & \(96.1\pm 0.5\) & \(96.0\pm 0.4\) \\
\hline iNaturalist & \(94.8\pm 0.4\) & \(94.8\pm 0.3\) & \(82.5\pm 1.8\) & \(89.9\pm 0.1\) & \(81.5\pm 0.1\) & \(\mathbf{97.5\pm 0.1}\) & \(\mathbf{97.5\pm 0.0}\) & \(86.5\pm 0.1\) & \(94.0\pm 0.5\) & \(94.2\pm 0.3\) \\
\hline insects & \(\mathbf{82.4\pm 3.3}\) & \(\mathbf{83.3\pm 2.8}\) & \(78.7\pm 1.4\) & \(67.7\pm 0.1\) & \(63.5\pm 0.1\) & \(73.3\pm 0.3\) & \(73.3\pm 0.1\) & \(72.5\pm 0.1\) & \(77.2\pm 4.4\) & \(78.5\pm 3.6\) \\
\hline mammals & \(76.1\pm 2.5\) & \(76.2\pm 2.8\) & \(65.6\pm 1.2\) & \(75.4\pm 0.1\) & \(71.5\pm 0.1\) & \(66.9\pm 0.2\) & \(67.2\pm 0.1\) & \(77.5\pm 0.1\) & \(74.0\pm 2.7\) & \(73.7\pm 2.6\) \\
\hline microorganisms & \(86.4\pm 4.1\) & \(87.0\pm 1.3\) & \(54.1\pm 4.1\) & \(89.0\pm 0.6\) & \(80.9\pm 0.6\) & \(\mathbf{93.5\pm 0.2}\) & \(93.7\pm 0.1\) & \(81.7\pm 0.7\) & \(87.4\pm 1.5\) & \(88.8\pm 1.3\) \\
\hline mollusks & \(82.5\pm 2.8\) & \(\mathbf{84.3\pm 1.9}\) & \(74.0\pm 1.6\) & \(69.8\pm 0.2\) & \(69.2\pm 0.2\) & \(75.7\pm 0.3\) & \(75.8\pm 0.1\) & \(69.5\pm 0.1\) & \(79.9\pm 3.5\) & \(\mathbf{84.4\pm 1.9}\) \\
\hline plants & \(95.7\pm 0.4\) & \(95.7\pm 0.5\) & \(88.6\pm 1.2\) & \(91.5\pm 0.1\) & \(83.0\pm 0.1\) & \(\mathbf{98.0\pm 0.0}\) & \(\mathbf{98.0\pm 0.0}\) & \(88.5\pm 0.1\) & \(94.6\pm 0.6\) & \(94.7\pm 0.7\) \\
\hline protozoa & \(\mathbf{96.1\pm 0.3}\) & \(\mathbf{96.2\pm 0.2}\) & \(84.7\pm 1.8\) & \(83.7\pm 0.1\) & \(80.8\pm 0.1\) & \(95.7\pm 0.1\) & \(95.9\pm 0.0\) & \(81.5\pm 0.1\) & \(95.5\pm 0.4\) & \(95.7\pm 0.3\) \\
\hline Dataset AUPR & & & & & & & & & & \\
\hline Places & \(96.2\pm 0.6\) & \(96.3\pm 0.5\) & \(96.0\pm 0.2\) & \(94.3\pm 0.0\) & \(94.9\pm 0.0\) & \(\mathbf{97.0\pm 0.0}\) & \(\mathbf{97.1\pm 0.0}\) & \(94.5\pm 0.0\) & \(95.4\pm 0.5\) & \(95.6\pm 0.5\) \\
\hline SUN & \(96.7\pm 0.6\) & \(96.8\pm 0.5\) & \(95.8\pm 0.3\) & \(95.0\pm 0.0\) & \(95.0\pm 0.0\) & \(\mathbf{98.0\pm 0.0}\) & \(98.0\pm 0.0\) & \(95.0\pm 0.0\) & \(96.1\pm 0.6\) & \(96.3\pm 0.5\) \\
\hline Textures & \(90.9\pm 0.7\) & \(90.7\pm 0.5\) & \(89.2\pm 0.6\) & \(97.2\pm 0.0\) & \(95.1\pm 0.0\) & \(95.9\pm 0.0\) & \(96.0\pm 0.0\) & \(96.3\pm 0.0\) & \(92.0\pm 0.8\) & \(92.2\pm 0.5\) \\
\hline amphibians & \(92.4\pm 1.6\) & \(92.4\pm 1.6\) & \(86.4\pm 1.4\) & \(90.2\pm 0.1\) & \(89.9\pm 0.1\) & \(83.0\pm 0.1\) & \(85.1\pm 0.1\) & \(92.6\pm 0.0\) & \(91.9\pm 1.6\) & \(91.5\pm 1.6\) \\
\hline arachnids & \(95.6\pm 1.2\) & \(\mathbf{96.0\pm 0.9}\) & \(92.7\pm 0.6\) & \(90.3\pm 0.0\) & \(87.7\pm 0.1\) & \(88.0\pm 0.1\) & \(88.8\pm 0.1\) & \(
2307.00684
A Proximal Algorithm for Network Slimming
As a popular channel pruning method for convolutional neural networks (CNNs), network slimming (NS) has a three-stage process: (1) it trains a CNN with $\ell_1$ regularization applied to the scaling factors of the batch normalization layers; (2) it removes channels whose scaling factors are below a chosen threshold; and (3) it retrains the pruned model to recover the original accuracy. This time-consuming, three-step process is a result of using subgradient descent to train CNNs. Because subgradient descent does not exactly train CNNs towards sparse, accurate structures, the latter two steps are necessary. Moreover, subgradient descent does not have any convergence guarantee. Therefore, we develop an alternative algorithm called proximal NS. Our proposed algorithm trains CNNs towards sparse, accurate structures, so identifying a scaling factor threshold is unnecessary and fine tuning the pruned CNNs is optional. Using Kurdyka-{\L}ojasiewicz assumptions, we establish global convergence of proximal NS. Lastly, we validate the efficacy of the proposed algorithm on VGGNet, DenseNet and ResNet on CIFAR 10/100. Our experiments demonstrate that after one round of training, proximal NS yields a CNN with competitive accuracy and compression.
Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin
2023-07-02T23:34:12Z
http://arxiv.org/abs/2307.00684v2
# A Proximal Algorithm for Network Slimming+ ###### Abstract As a popular channel pruning method for convolutional neural networks (CNNs), network slimming (NS) has a three-stage process: (1) it trains a CNN with \(\ell_{1}\) regularization applied to the scaling factors of the batch normalization layers; (2) it removes channels whose scaling factors are below a chosen threshold; and (3) it retrains the pruned model to recover the original accuracy. This time-consuming, three-step process is a result of using subgradient descent to train CNNs. Because subgradient descent does not exactly train CNNs towards sparse, accurate structures, the latter two steps are necessary. Moreover, subgradient descent does not have any convergence guarantee. Therefore, we develop an alternative algorithm called proximal NS. Our proposed algorithm trains CNNs towards sparse, accurate structures, so identifying a scaling factor threshold is unnecessary and fine tuning the pruned CNNs is optional. Using Kurdyka-Lojasiewicz assumptions, we establish global convergence of proximal NS. Lastly, we validate the efficacy of the proposed algorithm on VGGNet, DenseNet and ResNet on CIFAR 10/100. Our experiments demonstrate that after one round of training, proximal NS yields a CNN with competitive accuracy and compression. Keywords:channel pruning nonconvex optimization convolutional neural networks neural network compression. ## 1 Introduction In the past decade, convolutional neural networks (CNNs) have revolutionized computer vision in various applications, such as image classification [12, 32, 37] and object detection [10, 16, 26]. CNNs are able to internally generate diverse, various features through its multiple hidden layers, totaling millions of weight parameters to train and billions of floating point operations (FLOPs) to execute. Consequently, highly accurate CNNs are impractical to store and implement on resource-constrained devices, such as mobile smartphones. 
To compress CNNs into lightweight models, several directions, including weight pruning [1, 11], have been investigated. Channel pruning [23, 33] is currently a popular direction because it can significantly reduce the number of weights needed in a CNN by removing any redundant channels. One straightforward approach to channel pruning is network slimming (NS) [23], which appends an \(\ell_{1}\) norm on the scaling factors of the batch normalization layers to the loss function being optimized. Being a sparse regularizer, the \(\ell_{1}\) norm pushes the scaling factors corresponding to the channels towards zeroes. The original optimization algorithm used for NS is subgradient descent [31], but it has theoretical and practical issues. Subgradient descent does not necessarily decrease the loss function value after each iteration, even when performed exactly with full batch of data [4]. Moreover, unless with some additional modifications, such as back-tracking line search, subgradient descent may not converge to a critical point [25]. When implemented in practice, barely any of the scaling factors have values exactly at zeroes by the end of training, resulting in two issues. First, a threshold value needs to be determined in order to remove channels whose scaling factors are below it. Second, pruning channels with nonzero scaling factors can deteriorate the CNNs' accuracy since these channels are still relevant to the CNN computation. As a result, the pruned CNN needs to be retrained to recover its original accuracy. Therefore, as a suboptimal algorithm, subgradient descent leads to a time-consuming, three-step process. In this paper, we design an alternative optimization algorithm based on proximal alternating linearized minimization (PALM) [5] for NS. The algorithm has more theoretical and practical advantages than subgradient descent. Under certain conditions, the proposed algorithm does converge to a critical point. 
When used in practice, the proposed algorithm enforces the scaling factors of insignificant channels to be exactly zero by the end of training. Hence, there is no need to set a scaling factor threshold to identify which channels to remove. Because the proposed algorithm trains a model towards a truly sparse structure, the model accuracy is preserved after the insignificant channels are pruned, so fine tuning is unnecessary. The only trade-off of the proposed algorithm is a slight decrease in accuracy compared to the original baseline model. Overall, the new algorithm reduces the original three-step process of NS to only one round of training with fine tuning as an optional step, thereby saving the time and hassle of obtaining a compressed, accurate CNN. ## 2 Related Works Early pruning methods focus on removing redundant weight parameters in CNNs. Han _et al_.[11] proposed to remove weights if their magnitudes are below a certain threshold. Aghasi _et al_.[2] formulated a convex optimization problem to determine which weight parameters to retain while preserving model accuracy. Creating irregular sparsity patterns, weight pruning is not implementation friendly since it requires special software and hardware to accelerate inference [20, 40]. An alternative to weight pruning is pruning group-wise structures in CNNs. Many works [3, 8, 19, 24, 29, 33] have imposed group regularization onto various CNN structures, such as filters and channels. Li _et al_.[20] incorporated a sparsity-inducing matrix corresponding to each feature map and imposed row-wise and column-wise group regularization onto this matrix to determine which filters to remove. Lin _et al_.[21] pruned filters that generate low-rank feature maps. Hu _et al_.[13] devised network trimming that iteratively removes zero-activation neurons from the CNN and retrains the compressed CNN. 
Rather than regularizing the weight parameters, Liu _et al_.[23] developed NS, where they applied \(\ell_{1}\) regularization on the scaling factors in the batch normalization layers in a CNN to determine which of their corresponding channels are redundant to remove and then they retrained the pruned CNN to restore its accuracy. Bui _et al_.[6, 7] investigated nonconvex regularizers as alternatives to the \(\ell_{1}\) regularizer for NS. On the other hand, Zhao _et al_.[40] applied probabilistic learning onto the scaling factors to identify which redundant channels to prune with minimal accuracy loss, making retraining unnecessary. Lin _et al_.[22] introduced an external soft mask as a set of parameters corresponding to the CNN structures (e.g., filters and channels) and regularized the mask by adversarial learning. ## 3 Proposed Algorithm In this section, we develop a novel PALM algorithm [5] for NS that consists of two straightforward, general steps per epoch: stochastic gradient descent on the weight parameters, including the scaling factors of the batch normalization layers, and soft thresholding on the scaling factors. ### Batch Normalization Layer Most modern CNNs have batch normalization (BN) layers [17] because these layers speed up their convergence and improve their generalization [28]. These benefits are due to normalizing the output feature maps of the preceding convolutional layers using mini-batch statistics. Let \(z\in\mathbb{R}^{B\times C\times H\times W}\) denote an output feature map, where \(B\) is the mini-batch size, \(C\) is the number of channels, and \(H\) and \(W\) are the height and width of the feature map, respectively. 
For each channel \(i=1,\ldots,C\), the output of a BN layer on each channel \(z_{i}\) is given by \[z_{i}^{\prime}=\gamma_{i}\frac{z_{i}-\mu_{B}}{\sqrt{\sigma_{B}^{2}+\epsilon}}+ \beta_{i}, \tag{1}\] where \(\mu_{B}\) and \(\sigma_{B}\) are the mean and standard deviation of the inputs across the mini-batch \(B\), \(\epsilon\) is a small constant for numerical stability, and \(\gamma_{i}\) and \(\beta_{i}\) are trainable weight parameters that help restore the representative power of the input \(z_{i}\). The weight parameter \(\gamma_{i}\) is defined to be the scaling factor of channel \(i\). The scaling factor \(\gamma_{i}\) determines how important channel \(i\) is to the CNN computation as it is multiplied to all pixels of the same channel \(i\) within the feature map \(z\). ### Numerical Optimization Let \(\{(x_{i},y_{i})\}_{i=1}^{N}\) be a given dataset, where each \(x_{i}\) is a training input and \(y_{i}\) is its corresponding label or value. Using the dataset \(\{(x_{i},y_{i})\}_{i=1}^{N}\), we train a CNN with \(c\) total channels, where each of their convolutional layers is followed by a BN layer. Let \(\gamma\in\mathbb{R}^{c}\) be the vector of trainable scaling factors of the CNN, where for \(i=1,\ldots,c\), each entry \(\gamma_{i}\) is a scaling factor of channel \(i\). Moreover, let \(W\in\mathbb{R}^{n}\) be a vector of all \(n\) trainable weight parameters, excluding the scaling factors, in the CNN. NS [23] minimizes the following objective function: \[\min_{W,\gamma}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(h(x_{i},W,\gamma),y_{i})+ \lambda\|\gamma\|_{1}, \tag{2}\] where \(h(x_{i},W,\gamma)\) is the output of the CNN predicted on the data point \(x_{i}\); \(\mathcal{L}(h(x_{i},W,\gamma),y_{i})\) is the loss function between the prediction \(h(x_{i},W,\gamma)\) and ground truth \(y_{i}\), such as the cross-entropy loss function; and \(\lambda>0\) is the regularization parameter for the \(\ell_{1}\) penalty on the scaling factor vector \(\gamma\). 
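As a minimal illustration of Eq. (1), the following NumPy sketch normalizes a single channel with mini-batch statistics. When \(\gamma_{i}=0\), every output pixel equals the constant \(\beta_{i}\), so the channel carries no input-dependent information, which is why zero scaling factors mark prunable channels. The shapes and values are toy choices for illustration.

```python
import numpy as np

def bn_channel(z_i, gamma_i, beta_i, eps=1e-5):
    # Eq. (1): normalize channel i with mini-batch statistics, then
    # rescale by the trainable scaling factor gamma_i and shift by beta_i.
    mu = z_i.mean()
    var = z_i.var()
    return gamma_i * (z_i - mu) / np.sqrt(var + eps) + beta_i

# One channel with mini-batch size 4 and an 8x8 spatial extent (toy numbers).
z_i = np.random.default_rng(1).normal(size=(4, 8, 8))

# With gamma_i = 0, the channel's output collapses to the constant beta_i.
out = bn_channel(z_i, gamma_i=0.0, beta_i=0.3)
print(np.allclose(out, 0.3))  # -> True
```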
In [23], (2) is solved by a gradient descent scheme with step size \(\delta^{t}\) for each epoch \(t\): \[W^{t+1} =W^{t}-\delta^{t}\nabla_{W}\tilde{\mathcal{L}}(W^{t},\gamma^{t}), \tag{3a}\] \[\gamma^{t+1} =\gamma^{t}-\delta^{t}\left(\nabla_{\gamma}\tilde{\mathcal{L}}(W^{t},\gamma^{t})+\lambda\partial\|\gamma^{t}\|_{1}\right), \tag{3b}\] where \(\tilde{\mathcal{L}}(W,\gamma)\coloneqq\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(h(x_{i},W,\gamma),y_{i})\) and \(\partial\|\cdot\|_{1}\) is the subgradient of the \(\ell_{1}\) norm. By (3), we observe that \(\gamma\) is optimized by subgradient descent, which can lead to practical issues. When \(\gamma_{i}=0\) for some channel \(i\), the subgradient needs to be chosen precisely. Not every subgradient vector at a non-differentiable point decreases the value of (2) in each epoch [4], so we need to find one that does among the infinitely many choices. In the numerical implementation of NS 1, the subgradient \(\zeta^{t}\) is selected such that \(\zeta^{t}_{i}=0\) by default when \(\gamma^{t}_{i}=0\), but this selection is not verified to decrease the value of (2) in each epoch \(t\). Lastly, subgradient descent only pushes the scaling factors of irrelevant channels to be near zero in value but not exactly zero. For this reason, when pruning a CNN, the user needs to determine an appropriate scaling factor threshold for removing channels, chosen so that no layer has all of its channels removed, and then fine tune the pruned CNN to restore its original accuracy. However, if so many channels are pruned that the fine-tuned accuracy is significantly less than the original, the user may waste time and resources by iterating the process of decreasing the threshold and fine tuning until the CNN attains acceptable accuracy and compression.
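To see this contrast concretely, the following toy experiment (our own illustration, not from the paper) minimizes \(\frac{1}{2}(\gamma-a)^{2}+\lambda|\gamma|\) with \(|a|\leq\lambda\), whose exact minimizer is zero: fixed-step subgradient descent hovers near zero without reaching it, while a single soft-thresholding (proximal) step \(\mathcal{S}(a,\lambda)=\operatorname{sign}(a)\max\{0,|a|-\lambda\}\) returns exactly zero.

```python
import numpy as np

def soft_threshold(x, lam):
    # S(x, lam) = sign(x) * max(0, |x| - lam)
    return np.sign(x) * np.maximum(0.0, np.abs(x) - lam)

# Toy 1D problem: min_g 0.5*(g - a)^2 + lam*|g|; since |a| <= lam, the
# exact minimizer is S(a, lam) = 0.
a, lam, step = 0.3, 0.5, 0.1

g_sub = 1.0  # fixed-step subgradient descent iterate
for _ in range(200):
    grad = (g_sub - a) + lam * np.sign(g_sub)  # a subgradient of the objective
    g_sub -= step * grad

g_prox = soft_threshold(a, lam)  # one proximal step solves the toy problem

print(g_sub, g_prox)  # g_sub oscillates near 0; g_prox is exactly 0.0
```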
Footnote 1: [https://github.com/Eric-mingjie/network-slimming](https://github.com/Eric-mingjie/network-slimming) To develop an alternative algorithm that does not possess the practical issues of subgradient descent, we reformulate (2) as a constrained optimization problem by introducing an auxiliary variable \(\xi\), giving us \[\min_{W,\gamma,\xi}\quad\tilde{\mathcal{L}}(W,\gamma)+\lambda\|\xi\|_{1}\quad \text{s.t.}\quad\xi=\gamma. \tag{4}\] We then relax the constraint by a quadratic penalty with parameter \(\beta>0\), leading to a new unconstrained optimization problem: \[\min_{W,\gamma,\xi}\quad\tilde{\mathcal{L}}(W,\gamma)+\lambda\|\xi\|_{1}+\frac {\beta}{2}\|\gamma-\xi\|_{2}^{2}. \tag{5}\] In (2), the scaling factor vector \(\gamma\) is optimized for both model accuracy and sparsity, which can be difficult to balance when training a CNN. In (5), by contrast, \(\gamma\) is optimized only for model accuracy because it is a variable of the overall loss function \(\tilde{\mathcal{L}}(W,\gamma)\), while \(\xi\) is optimized only for sparsity because it is penalized by the \(\ell_{1}\) norm. The quadratic penalty enforces \(\gamma\) and \(\xi\) to be similar in value, thereby ensuring that \(\gamma\) is sparse. Let \((W,\gamma)\) be a concatenated vector of \(W\) and \(\gamma\). We minimize (5) via alternating minimization, so for each epoch \(t\), we solve the following subproblems: \[(W^{t+1},\gamma^{t+1}) \in\operatorname*{arg\,min}_{W,\gamma}\tilde{\mathcal{L}}(W, \gamma)+\frac{\beta}{2}\|\gamma-\xi^{t}\|_{2}^{2} \tag{6a}\] \[\xi^{t+1} \in\operatorname*{arg\,min}_{\xi}\lambda\|\xi\|_{1}+\frac{\beta} {2}\|\gamma^{t+1}-\xi\|_{2}^{2}. \tag{6b}\] Below, we describe how to solve each subproblem in detail. #### 2.0.1 (\(W,\gamma\))-subproblem The \((W,\gamma)\)-subproblem given in (6a) cannot be solved in closed form because the loss function \(\tilde{\mathcal{L}}(W,\gamma)\) is a composition of several nonlinear functions.
Typically, when training a CNN, this subproblem would be solved by (stochastic) gradient descent. To formulate (6a) as a gradient descent step, we follow a prox-linear strategy as follows: \[(W^{t+1},\gamma^{t+1})\in\operatorname*{arg\,min}_{W,\gamma}\tilde {\mathcal{L}}(W^{t},\gamma^{t})+\langle\nabla_{W}\tilde{\mathcal{L}}(W^{t}, \gamma^{t}),W-W^{t}\rangle \tag{7}\] \[+\langle\nabla_{\gamma}\tilde{\mathcal{L}}(W^{t},\gamma^{t}), \gamma-\gamma^{t}\rangle+\frac{\alpha}{2}\|W-W^{t}\|_{2}^{2}+\frac{\alpha}{2} \|\gamma-\gamma^{t}\|_{2}^{2}+\frac{\beta}{2}\|\gamma-\xi^{t}\|_{2}^{2},\] where \(\alpha>0\). By differentiating with respect to each variable, setting the partial derivative equal to zero, and solving for the variable, we have \[W^{t+1} =W^{t}-\frac{1}{\alpha}\nabla_{W}\tilde{\mathcal{L}}(W^{t}, \gamma^{t}) \tag{8a}\] \[\gamma^{t+1} =\frac{\alpha\gamma^{t}+\beta\xi^{t}}{\alpha+\beta}-\frac{1}{ \alpha+\beta}\nabla_{\gamma}\tilde{\mathcal{L}}(W^{t},\gamma^{t}). \tag{8b}\] We see that (8a) is gradient descent on \(W^{t}\) with step size \(\frac{1}{\alpha}\) while (8b) is gradient descent on a weighted average of \(\gamma^{t}\) and \(\xi^{t}\) with step size \(\frac{1}{\alpha+\beta}\). These steps are straightforward to implement in practice when training a CNN because the gradient \((\nabla_{W}\tilde{\mathcal{L}}(W^{t},\gamma^{t}),\nabla_{\gamma}\tilde{ \mathcal{L}}(W^{t},\gamma^{t}))\) can be approximated by backpropagation. #### 2.0.2 \(\xi\)-subproblem To solve (6b), we perform a proximal update by minimizing the following subproblem: \[\xi^{t+1}\in\operatorname*{arg\,min}_{\xi}\lambda\|\xi\|_{1}+\frac{\alpha}{2} \|\xi-\xi^{t}\|_{2}^{2}+\frac{\beta}{2}\|\gamma^{t+1}-\xi\|_{2}^{2}. 
\tag{9}\] Expanding it gives \[\xi^{t+1}=\operatorname*{arg\,min}_{\xi}\|\xi\|_{1}+\frac{1}{2\left(\frac{\lambda }{\beta+\alpha}\right)}\left\|\xi-\frac{\alpha\xi^{t}+\beta\gamma^{t+1}}{\alpha +\beta}\right\|_{2}^{2}=\mathcal{S}\left(\frac{\alpha\xi^{t}+\beta\gamma^{t+1} }{\alpha+\beta},\frac{\lambda}{\beta+\alpha}\right),\] where \(\mathcal{S}(x,\lambda)\) is the soft-thresholding operator defined by \((\mathcal{S}(x,\lambda))_{i}=\operatorname*{sign}(x_{i})\max\{0,|x_{i}|- \lambda\}\) for each entry \(i\). Therefore, \(\xi\) is updated by performing soft thresholding on the weighted average between \(\xi^{t}\) and \(\gamma^{t+1}\). We summarize the new algorithm for NS in Algorithm 1 as proximal NS. ``` 0: Regularization parameter \(\lambda\), proximal parameter \(\alpha\), penalty parameter \(\beta\) 0: Initialize \(W^{1},\xi^{1}\) with random values. 0: Initialize \(\gamma^{1}\) such that \(\gamma_{i}=0.5\) for each channel \(i\). 1:for each epoch \(t=1,\ldots,T\)do 2:\(W^{t+1}=W^{t}-\frac{1}{\alpha}\nabla_{W}\tilde{\mathcal{L}}(W^{t},\gamma^{t})\) by stochastic gradient descent or variant. 3:\(\gamma^{t+1}=\frac{\alpha\gamma^{t}+\beta\xi^{t}}{\alpha+\beta}-\frac{1}{ \alpha+\beta}\nabla_{\gamma}\tilde{\mathcal{L}}(W^{t},\gamma^{t})\) by stochastic gradient descent or variant. 4:\(\xi^{t+1}=\mathcal{S}\left(\frac{\alpha\xi^{t}+\beta\gamma^{t+1}}{\alpha+ \beta},\frac{\lambda}{\beta+\alpha}\right).\) 5:endfor ``` **Algorithm 1** Proximal NS: proximal algorithm for minimizing (5) ## 4 Convergence Analysis To establish global convergence of proximal NS, we present relevant definitions and assumptions. 
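Before turning to the analysis, Algorithm 1's per-epoch updates can be sketched in NumPy on a toy problem; the quadratic stand-in loss, targets, and hyperparameters below are our own illustrative choices, not a CNN or the paper's settings.

```python
import numpy as np

def soft_threshold(x, lam):
    # entrywise S(x, lam) = sign(x) * max(0, |x| - lam)
    return np.sign(x) * np.maximum(0.0, np.abs(x) - lam)

def proximal_ns_epoch(W, gamma, xi, grad_W, grad_gamma, alpha, beta, lam):
    # One (full-batch) epoch of Algorithm 1: gradient steps (8a)-(8b),
    # then soft thresholding of the weighted average of xi and gamma.
    W_new = W - grad_W / alpha
    gamma_new = (alpha * gamma + beta * xi) / (alpha + beta) \
        - grad_gamma / (alpha + beta)
    xi_new = soft_threshold((alpha * xi + beta * gamma_new) / (alpha + beta),
                            lam / (alpha + beta))
    return W_new, gamma_new, xi_new

# Toy stand-in loss L(W, gamma) = 0.5*||W||^2 + 0.5*||gamma - g_star||^2,
# so grad_W = W and grad_gamma = gamma - g_star. Two "channels" have
# near-zero targets and should be pruned.
g_star = np.array([1.0, 0.01, -0.02, 0.8])
W, gamma, xi = np.ones(3), np.full(4, 0.5), np.full(4, 0.5)
alpha, beta, lam = 10.0, 100.0, 0.4

for _ in range(500):
    W, gamma, xi = proximal_ns_epoch(W, gamma, xi, W, gamma - g_star,
                                     alpha, beta, lam)

print(xi)  # the two entries with near-zero targets are exactly 0.0
```

On this toy problem, the \(\xi\) entries corresponding to near-zero targets converge to exactly zero, so the associated "channels" can be identified without choosing a threshold, mirroring the behavior described above.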
Definition 1 ([5]): A proper, lower-semicontinuous function \(f:\mathbb{R}^{m}\rightarrow(-\infty,\infty]\) satisfies the Kurdyka-Łojasiewicz (KL) property at a point \(\bar{x}\in\text{dom}(\partial f)\coloneqq\{x\in\mathbb{R}^{m}:\partial f(x) \neq\varnothing\}\) if there exist \(\eta\in(0,+\infty]\), a neighborhood \(U\) of \(\bar{x}\), and a continuous concave function \(\phi:[0,\eta)\rightarrow[0,\infty)\) with the following properties: (i) \(\phi(0)=0\); (ii) \(\phi\) is continuously differentiable on \((0,\eta)\); (iii) \(\phi^{\prime}(x)>0\) for all \(x\in(0,\eta)\); and (iv) for any \(x\in U\) with \(f(\bar{x})<f(x)<f(\bar{x})+\eta\), it holds that \(\phi^{\prime}(f(x)-f(\bar{x}))\text{dist}(0,\partial f(x))\geq 1\). If \(f\) satisfies the KL property at every point \(x\in\text{dom}(\partial f)\), then \(f\) is called a KL function. **Assumption 1**: _Suppose that_ (a) \(\tilde{\mathcal{L}}(W,\gamma)\) _is a proper, differentiable, and nonnegative function;_ (b) \(\nabla\tilde{\mathcal{L}}(W,\gamma)\) _is Lipschitz continuous with constant_ \(L\)_; and_ (c) \(\tilde{\mathcal{L}}(W,\gamma)\) _is a KL function._ Remark 1: Assumptions 1(a)-(b) are common in nonconvex analysis (e.g., [5]). For Assumption 1(c), most commonly used loss functions for CNNs are verified to be KL functions [38]. Some CNN architectures do not satisfy Assumption 1(a) when they contain nonsmooth functions and operations, such as the ReLU activation function and max pooling. However, these functions and operations can be replaced with smooth approximations. For example, a smooth approximation of ReLU is the softplus function \(\frac{1}{c}\log(1+\exp(cx))\) for some parameter \(c>0\), while a smooth approximation of the max function used in max pooling is the softmax-weighted average \(\sum_{i=1}^{n}\frac{x_{i}e^{cx_{i}}}{\sum_{j=1}^{n}e^{cx_{j}}}\) for some parameter \(c>0\).
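As a quick numerical check of the remark above, the softplus function approaches ReLU as \(c\) grows; its worst-case gap is \(\log(2)/c\), attained at \(x=0\). The sketch below uses a numerically stable form of softplus (`logaddexp`) to avoid overflow for large \(c\).

```python
import numpy as np

def softplus(x, c):
    # stable (1/c) * log(1 + exp(c*x)); approaches max(0, x) as c grows
    return np.logaddexp(0.0, c * x) / c

x = np.linspace(-3.0, 3.0, 101)
relu = np.maximum(0.0, x)

gaps = {c: float(np.max(np.abs(softplus(x, c) - relu)))
        for c in (1.0, 10.0, 100.0)}
print(gaps)  # worst-case gap shrinks like log(2)/c
```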
Additionally, Fu _et al_.[9] made a similar assumption to establish convergence for their algorithm designed for weight and filter pruning. Regardless, our numerical experiments demonstrate that our proposed algorithm still converges for CNNs containing ReLU activation functions and max pooling. For brevity, we denote \[F(W,\gamma,\xi)\coloneqq\tilde{\mathcal{L}}(W,\gamma)+\lambda\|\xi\|_{1}+\frac{ \beta}{2}\|\gamma-\xi\|_{2}^{2}.\] Now, we are ready to present the main theorem: Theorem 4.1: _Under Assumption 1, if \(\{(W^{t},\gamma^{t},\xi^{t})\}_{t=1}^{\infty}\) generated by Algorithm 1 is bounded and we have \(\alpha>L\), then \(\{(W^{t},\gamma^{t},\xi^{t})\}_{t=1}^{\infty}\) converges to a critical point \((W^{*},\gamma^{*},\xi^{*})\) of \(F\)._ The proof is deferred to the appendix. It requires establishing the sufficient decrease property of \(F\) and the relative error property of \(\partial F\)[5]. ## 5 Numerical Experiments We evaluate proximal NS on VGG-19 [32], DenseNet-40 [15, 14], and ResNet-110/164 [12] trained on CIFAR 10/100 [18]. The CIFAR 10/100 dataset [18] consists of 60,000 natural images of resolution \(32\times 32\) with 10/100 categories. The dataset is split into two sets: 50,000 training images and 10,000 test images. As done in recent works [12, 23], standard augmentation techniques (e.g., shifting, mirroring, and normalization) are applied to the images before training and testing. The code for proximal NS is available at [https://github.com/kbui1993/Official-Proximal-Network-S](https://github.com/kbui1993/Official-Proximal-Network-S) ### Implementation Details For CIFAR 10/100, the implementation is mostly the same as in [23]. Specifically, we train the networks from scratch for 160 epochs using stochastic gradient descent with an initial learning rate of 0.1 that is reduced by a factor of 10 at the 80th and 120th epochs. Moreover, the models are trained with weight decay \(10^{-4}\) and Nesterov momentum of 0.9 without damping.
The training batch size is 64. However, the parameter \(\lambda\) is set differently. In our numerical experiments, using Algorithm 1, we set \(\xi\sim\text{Unif}[0.47,0.50]\) for all networks, while \(\lambda=0.0045\) and \(\beta=100\) for VGG-19, \(\lambda=0.004\) and \(\beta=100\) for DenseNet-40, and \(\lambda=0.002\) and \(\beta=1.0,0.25\) for ResNet-110 and ResNet-164, respectively. Initially, \(\alpha=10\), the reciprocal of the learning rate, and it changes according to the learning rate schedule. A model is trained five times on an NVIDIA GeForce RTX 2080 for each network and dataset to obtain the average statistics. ### Results We apply proximal NS to train VGG-19, DenseNet-40, and ResNet-164 on CIFAR 10/100. According to Table 1, proximal NS drives a significant number of scaling factors to exactly zero for each trained CNN. In particular, for VGG-19 and DenseNet-40, at least 55% of the scaling factors are zero, while for ResNet-164, at least 58% are zero. We can safely remove the channels with zero scaling factors because they are unnecessary for inference. Unlike the original NS [23], proximal NS does not require us to select a scaling factor threshold based on how many channels to remove and how much accuracy to sacrifice. We compare proximal NS with the original NS [23] and variational CNN pruning (VCP) [40], a Bayesian version of NS. To evaluate the effect of regularization and pruning on accuracy, we include the baseline accuracy, where the architecture is trained without any regularization on the scaling factors. For completeness, the models trained with original NS and proximal NS are fine tuned with the same setting as the first round of training but without \(\ell_{1}\) regularization on the scaling factors. The results are reported in Tables 1(a)-1(b). After the first round of training, proximal NS outperforms both the original NS and VCP in test accuracy while reducing a significant number of parameters and FLOPs.
Because proximal NS trains a model towards a sparse structure, the model accuracy is less than the baseline accuracy by at most 1.56%, and it remains the same before and after pruning, a property that the original NS does not have. Although VCP is designed to preserve test accuracy after pruning, it does not compress as well as proximal NS for all architectures. With about the same proportion of channels pruned as the original NS, proximal NS saves more FLOPs for both VGG-19 and ResNet-164 and generally more parameters for all networks. To potentially improve test accuracy, the pruned models from the original and proximal NS are fine tuned. For proximal NS, test accuracy of the pruned models improves slightly, by at most 0.42%, for DenseNet-40 and ResNet-164, while it worsens for VGG-19. Moreover, proximal NS is outperformed by the original NS in fine-tuned test accuracy for all models trained on CIFAR 100. A more accurate model from the original NS might be preferable. However, the additional fine-tuning step requires a few more training hours to obtain an accuracy that is up to 1.5% higher than the accuracy of a pruned model trained once by proximal NS. For example, for ResNet-164 trained on CIFAR 100, proximal NS takes about 7 hours to attain an average accuracy of 75.26%, while the original NS requires about 12 hours to achieve 1.42% higher accuracy. Therefore, the amount of time and resources spent training for an incremental improvement may not be worthwhile. \begin{table} \begin{tabular}{|l||c||c||c|} \hline & & CIFAR 10 & CIFAR 100 \\ Architecture & Total Channels/\(\gamma_{i}\) & Avg. Number of \(\gamma_{i}=0\) & Avg. Number of \(\gamma_{i}=0\) \\ \hline VGG-19 & 5504 & 4105.2 & 3057.0 \\ \hline DenseNet-40 & 9360 & 6936.4 & 6071.6 \\ \hline ResNet-164 & 12112 & 8765.4 & 7115.8 \\ \hline \end{tabular} \end{table} Table 1: The average number of scaling factors equal to zero at the end of training. Each architecture is trained five times per dataset.
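Because proximal NS leaves the scaling factors of insignificant channels at exactly zero, identifying prunable channels reduces to an equality test rather than a thresholding decision. A minimal sketch with made-up per-layer \(\gamma\) values (not the trained networks reported in Table 1):

```python
import numpy as np

# Hypothetical per-layer scaling factors after training (illustrative only).
layer_gammas = [np.array([0.8, 0.0, 0.3, 0.0]),
                np.array([0.0, 0.0, 0.5]),
                np.array([0.2, 0.9])]

# No threshold to tune: keep exactly the channels with nonzero gamma.
keep_masks = [g != 0.0 for g in layer_gammas]
kept = [int(m.sum()) for m in keep_masks]
zeros = sum(g.size - k for g, k in zip(layer_gammas, kept))
total = sum(g.size for g in layer_gammas)

print(f"{zeros}/{total} channels prunable, kept per layer: {kept}")
```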
Finally, we compare proximal NS with other pruning methods applied to DenseNet-40 and ResNet-110 trained on CIFAR 10. The other pruning methods, which may require fine tuning, are L1 [19], GAL [22], and Hrank [21]. For DenseNet-40, proximal NS prunes the most parameters and the second most FLOPs while having accuracy comparable to the fine-tuned Hrank and the post-pruned GAL-0.05. For ResNet-110, proximal NS has better compression than L1, GAL-0.5, and Hrank, with its post-pruned accuracy better than GAL-0.5's fine-tuned accuracy and similar to L1's fine-tuned accuracy. Although GAL or Hrank might be advantageous to use to obtain a sparse, accurate CNN, they have additional requirements besides fine tuning. GAL [22] requires an accurate baseline model available for knowledge distillation. For Hrank [21], the compression ratio needs to be specified for each convolutional layer, thereby making hyperparameter tuning more complicated. \begin{table} \end{table} Table 2: Results of the different NS methods on CIFAR 10/100. Average statistics are obtained by training the baseline architectures and original NS five times, while the results for VCP are originally reported in [40]. Overall, proximal NS is a straightforward algorithm that yields a generally more compressed and accurate model than the other methods in one training round. Although its test accuracy after one round is slightly lower than the baseline accuracy, this is expected because of the sparsity-accuracy trade-off and because it is a prune-while-training algorithm (which automatically identifies the insignificant channels during training), as discussed in [30]. Lastly, the experiments show that fine tuning the compressed models trained by proximal NS marginally improves the test accuracy, which makes fine tuning wasteful. ## 6 Conclusion We develop a channel pruning algorithm called proximal NS with a global convergence guarantee.
It trains a CNN towards a sparse, accurate structure, making fine tuning optional. In our experiments, proximal NS can effectively compress CNNs with accuracy slightly below the baseline. Because fine tuning CNNs trained by proximal NS only marginally improves test accuracy, we will investigate modifying the algorithm to attain significantly better fine-tuned accuracy. As future directions, we shall study proximal cooperative neural architecture search [34, 35] and include nonconvex, sparse regularizers, such as \(\ell_{1}-\ell_{2}\)[36] and transformed \(\ell_{1}\)[39]. ## Appendix First, we introduce important definitions and lemmas from variational analysis. Definition 2 ([27]): Let \(f:\mathbb{R}^{n}\rightarrow(-\infty,+\infty]\) be a proper and lower semicontinuous function. 1. The Fréchet subdifferential of \(f\) at the point \(x\in\text{dom}\ f\coloneqq\{x\in\mathbb{R}^{n}:f(x)<\infty\}\) is the set \[\hat{\partial}f(x)=\left\{v\in\mathbb{R}^{n}:\liminf_{y\neq x,y\to x}\frac{f(y)-f(x)-\langle v,y-x\rangle}{\|y-x\|}\geq 0\right\}.\] 2. The limiting subdifferential of \(f\) at the point \(x\in\text{dom}\ f\) is the set \[\partial f(x)=\left\{y\in\mathbb{R}^{n}:\exists\{(x^{t},y^{t})\}_{t=1}^{\infty}\text{ s.t. 
}x^{t}\to x,f(x^{t})\to f(x),\hat{\partial}f(x^{t}) \ni y^{t}\to y\right\}.\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline Architecture & Method & \% Param./FLOPs & Test Accuracy (\%) \\ & & Pruned & Post Pruned/Fine Tuned \\ \hline \multirow{3}{*}{DenseNet-40} & Hrank [21] & 53.80/61.00 & —/93.68 \\ & GAL-0.05 [22] & 56.70/54.70 & 93.53/94.50 \\ & Proximal NS (Ours) & 67.70/57.54 & 93.58/93.64 \\ \hline \multirow{3}{*}{ResNet-110} & L1 [19] & 32.60/38.70 & —/93.30 \\ & GAL-0.5[22] & 44.80/48.50 & 92.55/92.74 \\ \cline{1-1} & Hrank [21] & 39.40/41.20 & —/94.23 \\ \cline{1-1} & Proximal NS (Ours) & 50.70/48.54 & 93.25/93.27 \\ \hline \end{tabular} \end{table} Table 3: Comparison of Proximal NS with other pruning methods on CIFAR 10. Lemma 1 (Strong Convexity Lemma [4]): _A function \(f(x)\) is called strongly convex with parameter \(\mu\) if and only if one of the following conditions holds:_ 1. \(g(x)=f(x)-\frac{\mu}{2}\|x\|_{2}^{2}\) _is convex._ 2. \(f(y)\geq f(x)+\langle\nabla f(x),y-x\rangle+\frac{\mu}{2}\|y-x\|_{2}^{2},\; \forall x,y\)_._ Lemma 2 (Descent Lemma [4]): _If \(\nabla f(x)\) is Lipschitz continuous with parameter \(L>0\), then_ \[f(y)\leq f(x)+\langle\nabla f(x),y-x\rangle+\frac{L}{2}\|x-y\|_{2}^{2},\; \forall x,y.\] For brevity, denote \(\tilde{W}\coloneqq(W,\gamma)\), the overall set of weights in a CNN, and \(Z\coloneqq(\tilde{W},\xi)=(W,\gamma,\xi)\). Before proving Theorem 1.1, we prove some necessary lemmas. Lemma 3 (Sufficient Decrease): _Let \(\{Z^{t}\}_{t=1}^{\infty}\) be a sequence generated by Algorithm 1. Under Assumption 1, we have_ \[F(Z^{t+1})-F(Z^{t})\leq\frac{L-\alpha}{2}\|Z^{t+1}-Z^{t}\|_{2}^{2}. \tag{10}\] _for all \(t\in\mathbb{N}\). In addition, when \(\alpha>L\), we have_ \[\sum_{t=1}^{\infty}\|Z^{t+1}-Z^{t}\|_{2}^{2}<\infty. 
\tag{11}\] Proof: First we define the function \[L_{t}(\tilde{W})=\tilde{\mathcal{L}}(\tilde{W}^{t})+\langle\nabla\tilde{\mathcal{L}}(\tilde{W}^{t}),\tilde{W}-\tilde{W}^{t}\rangle+\frac{\alpha}{2}\|\tilde{W}-\tilde{W}^{t}\|_{2}^{2}+\frac{\beta}{2}\|\gamma-\xi^{t}\|_{2}^{2}. \tag{12}\] We observe that \(L_{t}\) is strongly convex with respect to \(\tilde{W}\) with parameter \(\alpha\). Because \(\nabla L_{t}(\tilde{W}^{t+1})=0\) by (7), we use Lemma 1 to obtain \[\begin{split} L_{t}(\tilde{W}^{t})&\geq L_{t}(\tilde{W}^{t+1})+\langle\nabla L_{t}(\tilde{W}^{t+1}),\tilde{W}^{t}-\tilde{W}^{t+1}\rangle+\frac{\alpha}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2}\\ &\geq L_{t}(\tilde{W}^{t+1})+\frac{\alpha}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2},\end{split} \tag{13}\] which simplifies to \[\begin{split}\tilde{\mathcal{L}}(\tilde{W}^{t})+\frac{\beta}{2}\|\gamma^{t}-\xi^{t}\|_{2}^{2}-\alpha\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2}\geq&\tilde{\mathcal{L}}(\tilde{W}^{t})+\langle\nabla\tilde{\mathcal{L}}(\tilde{W}^{t}),\tilde{W}^{t+1}-\tilde{W}^{t}\rangle\\ &+\frac{\beta}{2}\|\gamma^{t+1}-\xi^{t}\|_{2}^{2}.\end{split} \tag{14}\] Since \(\nabla\tilde{\mathcal{L}}(\tilde{W})\) is Lipschitz continuous with constant \(L\), we have \[\tilde{\mathcal{L}}(\tilde{W}^{t+1})\leq\tilde{\mathcal{L}}(\tilde{W}^{t})+\langle\nabla\tilde{\mathcal{L}}(\tilde{W}^{t}),\tilde{W}^{t+1}-\tilde{W}^{t}\rangle+\frac{L}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2} \tag{15}\] by Lemma 2. 
Combining the previous two inequalities gives us \[\tilde{\mathcal{L}}(\tilde{W}^{t})+\frac{\beta}{2}\|\gamma^{t}-\xi^{t}\|_{2}^{2}+\frac{L-2\alpha}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2}\geq\tilde{\mathcal{L}}(\tilde{W}^{t+1})+\frac{\beta}{2}\|\gamma^{t+1}-\xi^{t}\|_{2}^{2}.\] Adding the term \(\lambda\|\xi^{t}\|_{1}\) on both sides and rearranging the inequality gives \[F(\tilde{W}^{t+1},\xi^{t})-F(Z^{t})\leq\frac{L-2\alpha}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2}. \tag{16}\] By (9), we have \[\lambda\|\xi^{t+1}\|_{1}+\frac{\beta}{2}\|\gamma^{t+1}-\xi^{t+1}\|_{2}^{2}+\frac{\alpha}{2}\|\xi^{t+1}-\xi^{t}\|_{2}^{2}\leq\lambda\|\xi^{t}\|_{1}+\frac{\beta}{2}\|\gamma^{t+1}-\xi^{t}\|_{2}^{2}.\] Adding \(\tilde{\mathcal{L}}(\tilde{W}^{t+1})\) on both sides and rearranging the inequality gives \[F(Z^{t+1})-F(\tilde{W}^{t+1},\xi^{t})\leq-\frac{\alpha}{2}\|\xi^{t+1}-\xi^{t}\|_{2}^{2}. \tag{17}\] Summing up (16) and (17) and rearranging, we have \[F(Z^{t+1})-F(Z^{t})\leq\frac{L-2\alpha}{2}\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}^{2}-\frac{\alpha}{2}\|\xi^{t+1}-\xi^{t}\|_{2}^{2}\leq\frac{L-\alpha}{2}\|Z^{t+1}-Z^{t}\|_{2}^{2}. \tag{18}\] Summing up the inequality for \(t=1,\ldots,N-1\), we have \[\sum_{t=1}^{N-1}\frac{\alpha-L}{2}\|Z^{t+1}-Z^{t}\|_{2}^{2}\leq F(Z^{1})-F(Z^{N})\leq F(Z^{1}).\] Because \(\alpha>L\), the left-hand side is nonnegative, so as \(N\to\infty\), we have (11). Lemma 4 (Relative error property): _Let \(\{Z^{t}\}_{t=1}^{\infty}\) be a sequence generated by Algorithm 1. Under Assumption 1, for any \(t\in\mathbb{N}\), there exists some \(w^{t+1}\in\partial F(Z^{t+1})\) such that_ \[\|w^{t+1}\|_{2}\leq(3\alpha+2L+\beta)\left\|Z^{t+1}-Z^{t}\right\|_{2}. 
\tag{19}\] Proof: We note that \[\nabla_{W}\tilde{\mathcal{L}}(\tilde{W}^{t+1}) \in\partial_{W}F(Z^{t+1}), \tag{20a}\] \[\nabla_{\gamma}\tilde{\mathcal{L}}(\tilde{W}^{t+1})+\beta(\gamma^ {t+1}-\xi^{t+1}) \in\partial_{\gamma}F(Z^{t+1}),\] (20b) \[\lambda\partial_{\xi}\|\xi^{t+1}\|_{1}-\beta(\gamma^{t+1}-\xi^{t+ 1}) \in\partial_{\xi}F(Z^{t+1}). \tag{20c}\] By the first-order optimality conditions of (7) and (9), we obtain \[\nabla_{W}\tilde{\mathcal{L}}(\tilde{W}^{t})+\alpha(W^{t+1}-W^{t }) =0, \tag{21a}\] \[\nabla_{\gamma}\tilde{\mathcal{L}}(\tilde{W}^{t})+\alpha(\gamma^ {t+1}-\gamma^{t})+\beta(\gamma^{t+1}-\xi^{t}) =0,\] (21b) \[\lambda\partial_{\xi}\|\xi^{t+1}\|_{1}+\alpha(\xi^{t+1}-\xi^{t})- \beta(\gamma^{t+1}-\xi^{t+1}) \ni 0. \tag{21c}\] Combining (20a) and (21a), (20b) and (21b), and (20c) and (21c), we obtain \[\nabla_{W}\tilde{\mathcal{L}}(\tilde{W}^{t+1})-\nabla_{W}\tilde{ \mathcal{L}}(\tilde{W}^{t})-\alpha(W^{t+1}-W^{t})=w_{1}^{t+1}\in\partial_{W}F(Z ^{t+1}), \tag{22a}\] \[\nabla_{\gamma}\tilde{\mathcal{L}}(\tilde{W}^{t+1})-\nabla_{ \gamma}\tilde{\mathcal{L}}(\tilde{W}^{t})-\alpha(\gamma^{t+1}-\gamma^{t})- \beta(\xi^{t+1}-\xi^{t})=w_{2}^{t+1}\in\partial_{\gamma}F(Z^{t+1}),\] (22b) \[-\alpha(\xi^{t+1}-\xi^{t})=w_{3}^{t+1}\in\partial_{\xi}F(Z^{t+1}), \tag{22c}\] where \(w^{t+1}=(w_{1}^{t+1},w_{2}^{t+1},w_{3}^{t+1})\in\partial F(Z^{t+1})\). 
As a result, by the triangle inequality and Lipschitz continuity of \(\nabla\tilde{\mathcal{L}}\), we have \[\|w_{1}^{t+1}\|_{2}\leq\alpha\|W^{t+1}-W^{t}\|_{2}+\|\nabla_{W}\tilde{\mathcal{L}}(\tilde{W}^{t+1})-\nabla_{W}\tilde{\mathcal{L}}(\tilde{W}^{t})\|_{2}\] \[\leq\alpha\|W^{t+1}-W^{t}\|_{2}+L\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}\leq(\alpha+L)\|Z^{t+1}-Z^{t}\|_{2},\] \[\|w_{2}^{t+1}\|_{2} \leq\alpha\|\gamma^{t+1}-\gamma^{t}\|_{2}+\beta\|\xi^{t+1}-\xi^{t}\|_{2}+\|\nabla_{\gamma}\tilde{\mathcal{L}}(\tilde{W}^{t+1})-\nabla_{\gamma}\tilde{\mathcal{L}}(\tilde{W}^{t})\|_{2}\] \[\leq(\alpha+L)\|\tilde{W}^{t+1}-\tilde{W}^{t}\|_{2}+\beta\|\xi^{t+1}-\xi^{t}\|_{2}\leq(\alpha+\beta+L)\|Z^{t+1}-Z^{t}\|_{2},\] and \[\|w_{3}^{t+1}\|_{2}\leq\alpha\|\xi^{t+1}-\xi^{t}\|_{2}\leq\alpha\|Z^{t+1}-Z^{t}\|_{2}.\] Therefore, for all \(t\in\mathbb{N}\), we have \[\|w^{t+1}\|_{2}\leq\|w_{1}^{t+1}\|_{2}+\|w_{2}^{t+1}\|_{2}+\|w_{3}^{t+1}\|_{2}\leq(3\alpha+2L+\beta)\left\|Z^{t+1}-Z^{t}\right\|_{2}.\] Proof (Proof of Theorem 1): The result follows from Lemmas 3-4 combined with [5, Theorem 1].
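For concreteness, the \(\xi\)-update behind (9) is a strongly convex problem whose first-order condition is (21c), and it therefore admits a closed-form soft-thresholding solution, \(\xi^{t+1}=S_{\lambda/(\alpha+\beta)}\big((\alpha\xi^{t}+\beta\gamma^{t+1})/(\alpha+\beta)\big)\). The following sketch (with illustrative values, not taken from the paper's experiments) numerically verifies that this closed form satisfies condition (21c):

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the prox operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def xi_update(xi_t, gamma_next, lam, alpha, beta):
    # Minimizer of lam*||xi||_1 + beta/2*||gamma_next - xi||^2 + alpha/2*||xi - xi_t||^2,
    # the subproblem implied by (9); its optimality condition is (21c).
    v = (alpha * xi_t + beta * gamma_next) / (alpha + beta)
    return soft_threshold(v, lam / (alpha + beta))

# Illustrative values (not from the paper)
rng = np.random.default_rng(0)
xi_t = rng.normal(size=100)
gamma_next = rng.normal(size=100)
lam, alpha, beta = 0.5, 2.0, 1.0

xi_next = xi_update(xi_t, gamma_next, lam, alpha, beta)

# Check (21c): r = alpha*(xi_next - xi_t) - beta*(gamma_next - xi_next)
# must satisfy -r in lam * (subdifferential of ||.||_1 at xi_next).
r = alpha * (xi_next - xi_t) - beta * (gamma_next - xi_next)
on_support = xi_next != 0
assert np.allclose(r[on_support], -lam * np.sign(xi_next[on_support]))
assert np.all(np.abs(r[~on_support]) <= lam + 1e-12)
```

The update thus zeroes out any scaling factor whose averaged target falls below the threshold \(\lambda/(\alpha+\beta)\), which is the mechanism that drives the sparsity reported in Table 1.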
2303.12777
Viscous heat backflow and temperature resonances in extreme thermal conductors
We demonstrate that non-diffusive, fluid-like heat transport, such as heat backflowing from cooler to warmer regions, can be induced, controlled, and amplified in extreme thermal conductors such as graphite and hexagonal boron nitride. We employ the viscous heat equations, i.e. the thermal counterpart of the Navier-Stokes equations in the laminar regime, to show with first-principles quantitative accuracy that a finite thermal viscosity yields steady-state heat vortices, and governs the magnitude of transient temperature waves. Finally, we devise strategies that exploit devices' boundaries and resonance to amplify and control heat hydrodynamics, paving the way for novel experiments and applications in next-generation electronic and phononic technologies.
Jan Dragašević, Michele Simoncelli
2023-03-22T17:44:57Z
http://arxiv.org/abs/2303.12777v5
# Viscous heat backflow and temperature resonances in extreme thermal conductors ###### Abstract We demonstrate that non-diffusive, fluid-like heat transport, such as heat backflowing from cooler to warmer regions, can be induced, controlled, and amplified in extreme thermal conductors such as graphite and hexagonal boron nitride. We employ the viscous heat equations, _i.e._ the thermal counterpart of the Navier-Stokes equations in the laminar regime, to show with first-principles quantitative accuracy that a finite thermal viscosity yields steady-state heat vortices, and governs the magnitude of transient temperature waves. Finally, we devise strategies that exploit devices' boundaries and resonance to amplify and control heat hydrodynamics, paving the way for novel experiments and applications in next-generation electronic and phononic technologies. _Introduction.--_Crystals with ultrahigh thermal conductivity, such as graphite [1; 2; 3; 4] and monoisotopic layered hexagonal boron nitride (h\({}^{11}\)BN) [25; 27], are critical for numerous thermal-management applications in e.g. electronics and phononics [7; 8]. These materials are also of fundamental scientific interest, since they can host heat-transport phenomena that violate Fourier's diffusive law [9; 10; 14; 16]. For example, striking hydrodynamic-like phenomena such as temperature waves--where heat transiently backflows from cooler to warmer regions--have recently been observed in graphite up to \(\sim\)200 K [14; 15]. While these phenomena hold great potential for heat-management technologies [7; 9], they are weak and challenging to observe; thus, exploiting them requires unraveling the fundamental physics determining their emergence, and understanding how to amplify them. Hitherto, the theoretical investigation of heat hydrodynamics has been done relying on the linearized Peierls-Boltzmann equation (LBTE) [15] and on first-principles simulations [7; 9; 16; 18; 19; 20; 22]. 
These works have provided microscopic insights on hydrodynamic-like heat transport in layered [3; 15; 21; 22; 23; 24; 25] and two-dimensional [3; 19; 20; 25; 26; 27; 28; 29; 30] materials, quantitatively discussing how the predominance of momentum-conserving (normal) phonon collisions over momentum-relaxing (Umklapp) phonon collisions can give rise to a non-diffusive, fluid-like behavior for heat, with hallmarks such as second sound (temperature oscillations) [15; 16; 25] and Poiseuille-like heat flow [19; 23; 31; 32; 10]. However, the complexity of the microscopic LBTE makes it impractical to explore how to induce and control macroscopic hallmarks of heat hydrodynamics. Recent research has been focused on developing and testing mesoscopic models (partial-differential equations having reduced complexity compared to the integro-differential microscopic LBTE) for heat hydrodynamics [9; 34; 35; 36; 38] that can be parametrized from first principles. Here, we employ the mesoscopic viscous heat equations (VHE)--the thermal counterpart of the Navier-Stokes equations in the laminar regime [9]--to shed light on the necessary and sufficient conditions to induce viscous-heat-hydrodynamic phenomena, and thus to devise strategies to amplify and control them. Specifically, we discuss temperature inversion in steady-state heat vortices and viscous temperature waves, showing that these can be amplified by engineering a device's boundary conditions or exploiting resonance. We demonstrate analytically that viscosity has been neglected in all the past mesoscopic studies on temperature waves based on the dual-phase-lag equation (DPLE) [1; 2] (which also encompasses Cattaneo's second-sound [41] equation as a special case). Thus, we show--with first-principles quantitative accuracy--that it is necessary to account for such viscosity to rationalize the relaxation timescales [14] and lengthscales [15; 16] observed in recent, pioneering experiments in graphite. 
Finally, we discuss how these results inspire novel experimental setups and applications in thermal-management technologies for electronics, also predicting from first principles the appearance of viscous heat hydrodynamics in h\({}^{11}\)BN. _Viscous heat equations.--_We start by summarizing the salient features of the VHE, a set of partial differential equations for the temperature, \(T(\mathbf{r},t)\), and phonon drift-velocity, \(\mathbf{u}(\mathbf{r},t)\)[9]: \[C\frac{\partial T(\mathbf{r},t)}{\partial t}+\sum_{i,j=1}^{3}\alpha^ {ij}\frac{\partial u^{j}(\mathbf{r},t)}{\partial r^{i}}-\sum_{i,j=1}^{3}\kappa^{ ij}\frac{\partial^{2}T(\mathbf{r},t)}{\partial r^{i}\partial r^{j}}=\dot{q}(\mathbf{r},t), \tag{1}\] \[A^{i}\frac{\partial u^{i}(\mathbf{r},t)}{\partial t}+\sum_{j=1}^{3} \beta^{ij}\frac{\partial T(\mathbf{r},t)}{\partial r^{j}}-\sum_{j,k,l=1}^{3}\mu^ {ijkl}\frac{\partial^{2}u^{k}(\mathbf{r},t)}{\partial r^{j}\partial r^{l}}=- \sum_{j=1}^{3}\gamma^{ij}u^{j}(\mathbf{r},t). \tag{2}\] In these equations, \(T(\mathbf{r},t)\) and \(\mathbf{u}(\mathbf{r},t)\) emerge from the conservation of energy and quasi-conservation of crystal momentum in microscopic phonon collisions in the hydrodynamic regime, respectively [9]. The terminology "quasi-conservation" is used because momentum-dissipating Umklapp collisions are always present in real materials at finite temperature--in practice the magnitude of hydrodynamic effects depends on the relative strength between normal and Umklapp collisions--and the presence of Umklapp collisions is taken into account by the dissipative term \(-\gamma^{ij}u^{j}(\mathbf{r},t)\). The term \(\dot{q}(\mathbf{r},t)\) accounts for the space- and time-dependent energy exchange with an external heat source. The thermal conductivity \(\kappa^{ij}\) and viscosity \(\mu^{ijkl}\) quantify the response of the crystal to a perturbation of temperature and drift-velocity, respectively [9]. 
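To make the structure of Eqs. (1,2) concrete, a one-dimensional toy discretization (explicit finite differences on a periodic domain) can be written in a few lines. All coefficients below are illustrative order-unity placeholders, not the first-principles tensors computed from the LBTE:

```python
import numpy as np

# 1D sketch of the VHE (Eqs. 1-2); illustrative coefficients, NOT material values.
C, A, alpha, beta = 1.0, 1.0, 1.0, 1.0   # heat capacity, momentum coefficient, couplings
kappa, mu, gamma = 0.05, 0.02, 0.1       # conductivity, viscosity, Umklapp dissipation

n, length = 256, 10.0
dx, dt = length / n, 2e-4
x = np.arange(n) * dx
T = np.exp(-((x - length / 2) ** 2))     # temperature deviation from equilibrium
u = np.zeros(n)                          # phonon drift velocity

def ddx(f):  # central first derivative on a periodic grid
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):  # central second derivative on a periodic grid
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

T0_mean = T.mean()
for _ in range(5000):
    # Eq. (1): C dT/dt + alpha du/dx - kappa d2T/dx2 = 0  (no source term)
    # Eq. (2): A du/dt + beta dT/dx - mu d2u/dx2 = -gamma u
    dT = (-alpha * ddx(u) + kappa * lap(T)) / C
    du = (-beta * ddx(T) + mu * lap(u) - gamma * u) / A
    T, u = T + dt * dT, u + dt * du
```

With large \(\gamma\) and \(\mu=0\) this scheme relaxes diffusively, as in Fourier's limit, while small \(\gamma\) lets the coupled evolution of T and u support wave-like transients; on the periodic domain, Eq. (1) conserves the mean temperature, which is a convenient sanity check for the integrator.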
The coupling coefficients \(\alpha^{ij}\) and \(\beta^{ij}\)[42] originate from the relation between energy and crystal momentum for phonons. All these parameters are determined exactly from the LBTE (_i.e._ accounting for the actual phonon band structure and full collision matrix) and with first-principles accuracy; details are reported in the Supplementary Material (SM). Finally, we recall that in the VHE framework the total heat flux (\(\mathbf{Q}^{TOT}\)) is determined by the temperature gradient and drift velocity [9], _i.e._, \(\mathbf{Q}^{TOT}\)=\(\mathbf{Q}^{\delta}\)+\(\mathbf{Q}^{D}\) where \(Q^{\delta,i}\)=\(-\sum_{j}\kappa^{ij}\nabla^{j}T\) and \(Q^{D,i}\)=\(\sum_{j}\alpha^{ij}u^{j}\). The VHE encompass Fourier's law and temperature waves [1; 2] as special limiting cases. Specifically, it can be shown that in the limit of strong crystal-momentum dissipation and negligible viscous effects (\(\mu_{\rm max}\)\(\rightarrow\)0 and \([\gamma_{\rm max}]^{-1}\)\(\rightarrow\)0, where \(\mu_{\rm max}\) and \(\gamma_{\rm max}\) are the maximum components of the viscosity and Umklapp dissipation tensors) the VHE yield Fourier's diffusive behavior [9], which is trivially irrotational. In contrast, we show in Sec. I of the SM that in the time-dependent regime and inviscid limit (\(\mu\)=0) the VHE reduce to the DPLE [1; 2] for temperature waves. Both the known Fourier and DPLE special limiting cases are obtained when viscous effects are negligible, the former in the steady-state and the latter in the transient domain. In the following we use first-principles calculations to parametrize the VHE for natural and isotopically purified graphite, as well as in h\({}^{11}\)BN [43]; thus we explore how viscosity affects the emergence of non-diffusive, hydrodynamic behavior for heat both in the steady-state and transient regimes. _Steady-state viscous heat backflow._--In Fig. 
1 we investigate how viscosity affects steady-state thermal transport by comparing the numerical solution of Fourier's (inviscid) equation (panel **a**) with that of the viscous VHE (panel **b**). We consider a graphitic device having a tunnel-chamber geometry, _i.e._ a form that promotes vortical hydrodynamic behavior [44]. We highlight that the VHE temperature profile in the chamber is reversed compared to the temperature profile in the tunnel, a behavior completely opposite to that predicted by Fourier's law. Panel **c** shows that this temperature inversion--which in principle can be detected in thermal-imaging experiments [45; 46; 47; 11]--occurs in the presence of viscous vortical flow, as a consequence of heat backflowing against the temperature gradient. Importantly, in SM II we show that heat vortices are not limited to graphite, predicting their appearance also in h\({}^{11}\)BN around 60 K.

Figure 1: **Signature of viscous heat backflow in graphite.** In-plane (\(x\)\(-\)\(y\)) heat flux (streamlines) and temperature profile (colormap) for a tunnel-chamber device made of graphite. Panel **a** (**b**) shows the solution of Fourier's equation (VHE) in the presence of a temperature gradient applied at the tunnel's boundaries (70\(\pm\)25 K at \(y\)=\(\pm\)2.5\(\mu m\)), and considering the other boundaries as adiabatic (_i.e._\(\nabla T\)\(\cdot\)\(\hat{\mathbf{n}}\)=0, where \(\hat{\mathbf{n}}\) is the unit vector orthogonal to the boundary) and, in the VHE, "slipping" (\(\mathbf{u}\)\(\cdot\)\(\hat{\mathbf{n}}\)=0). In Fourier's case (**a**), the direction of the temperature gradient in the chamber mirrors that in the tunnel (\(T_{A}\)\(<\)\(T_{B}\)). In contrast, the VHE (**b**) account for an additional viscous component of the heat flux--not directly related to the temperature gradient, see text--allowing the emergence of viscous backflow and a temperature gradient in the chamber reversed compared to the tunnel (\(T_{A}\)\(>\)\(T_{B}\)). Panel **c**: vorticity of the VHE heat flux, \(\nabla\)\(\times\)\(\mathbf{Q}^{\rm TOT}\); the vorticity for Fourier's flux (not reported) is trivially zero.

To see how the vortex and consequent heat backflow in Fig. 1 require the presence of a finite thermal viscosity to emerge, we start by noting that in general the behavior of the device is described by Eqs. (1,2) in the steady-state. Then, if we consider the inviscid limit (\(\mu^{ijkl}\)=0 \(\forall\)\(i,j,k,l\)), Eq. (2) for an isotropic material such as graphite and h\({}^{11}\)BN in the in-plane direction (hereafter tensor indexes will be omitted for tensors that are proportional to the identity in the in-plane directions, see SM VIII) reduces to \(\beta\nabla T(\mathbf{r},t)\)=\(-\gamma\mathbf{u}(\mathbf{r},t)\). This equation can be inserted into Eq. (1) to readily show that this inviscid limit is governed by a Fourier-like irrotational equation, where the total heat flux is solely determined by the gradient of the temperature field and thus backflow and vorticity cannot emerge [50]. In contrast, when a non-zero thermal viscosity tensor is considered in Eq. (2), the drift velocity is no longer proportional to the temperature gradient, and thus the total heat flux \(\mathbf{Q}^{TOT}\)=\(\mathbf{Q}^{\delta}+\mathbf{Q}^{D}\)=\(-\kappa\nabla T\)+\(\alpha\mathbf{u}\) cannot be simplified to an irrotational expression depending only on the gradient of a scalar temperature field. This demonstrates that the presence of a non-zero viscosity is a necessary condition to have non-zero vorticity and observe steady-state viscous heat backflow. However, having non-zero thermal viscosity is necessary but not sufficient to observe heat backflow; in fact, one also needs a device's geometry and boundary conditions that ensure the presence of a non-zero second derivative of the drift velocity, _i.e._, of a total heat flux with non-zero vorticity. 
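The irrotational character of this inviscid limit can be made explicit: substituting \(\mathbf{u}=-(\beta/\gamma)\nabla T\) into the total heat flux (scalar in-plane coefficients, as above) gives

\[
\mathbf{Q}^{TOT} \;=\; -\kappa\nabla T + \alpha\mathbf{u}
\;=\; -\left(\kappa + \frac{\alpha\beta}{\gamma}\right)\nabla T ,
\qquad\Rightarrow\qquad
\nabla\times\mathbf{Q}^{TOT} \;=\; 0 ,
\]

_i.e._ an effective Fourier law with a renormalized conductivity \(\kappa+\alpha\beta/\gamma\), whose curl vanishes identically.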
In this regard, the tunnel-chamber geometry promotes non-homogeneities in the drift-velocity field, and we show in SM III that having at least partially 'slipping' boundaries (_i.e._, corresponding to reflective phonon-boundary scattering [18; 51; 52; 53]) is also necessary to observe temperature inversion due to viscous heat backflow. We note that the simulations in Fig. 1 have been performed at conditions where the Fourier Deviation Number (FDN) [9] predicted steady-state hydrodynamic deviations from Fourier's law to be largest in graphite (at natural isotopic concentration), _i.e._, in a device having size 10 \(\mu\)m and around 70 K. SM IV discusses how viscous backflow depends on the device's size, average temperature, and isotopic disorder in graphite. _Transient viscous heat backflow.--_Recent experiments in graphite have observed heat backflowing against the temperature gradient only in the time-dependent domain, in the form of second sound [15; 16] or lattice cooling [14]. The pioneering theoretical analyses have been performed relying on the microscopic LBTE [14; 15; 54; 16; 55] without resolving effects induced by a macroscopic (experimentally observable) thermal viscosity, or relying on the inviscid mesoscopic DPLE [1; 2; 3; 4; 5; 6; 7; 8]. It is therefore natural to wonder how the viscous heat backflow emerging from the VHE behaves in the time domain, and more precisely whether there is a relationship between temperature waves and transient viscous heat backflow. Therefore, we perform the time-dependent simulation shown in Fig. 2. We consider a rectangular device that is at equilibrium at \(t\)=0 ns (\(T\)=80 K [62] and \(\mathbf{u}\)=0 everywhere); we perturb it with a heater localized at \((x_{c},0)\) for \(0\)\(<\)\(t\)\(<\)\(t_{\text{heat}}\)=0.4 ns (\(\dot{q}(\mathbf{r},t)\)=\(\mathcal{H}\theta(t_{\text{heat}}-t)\exp\left[-\frac{(x+x_{c})^{2}}{2\sigma_{x}^{2}}-\frac{y^{2}}{2\sigma_{y}^{2}}\right]\), see Eq. 
(1) and note [63]); at \(t\)=\(t_{\text{heat}}\) we switch off the heater and monitor the relaxation to equilibrium. The device is always thermalised at the boundaries (\(T\)=\(80K\) and \(\mathbf{u}\)=0), see Ref. [11] for an experimental example of this boundary condition, and SM V for details on how boundary conditions (average temperature and thermalisation lengthscale), size, and isotopic-mass disorder affect the relaxation.

Figure 2: **Transient hydrodynamic heat backflow and lattice cooling.** We show the VHE predictions for the relaxation in time of a temperature perturbation (obtained by applying a localized heater to the device for \(0.0<t<0.4ns\)) in a graphitic device thermalised at 80 K at the boundaries (thermalisation occurs in shaded regions, see SM V). Rows show different instants in time for temperature (left column), temperature-gradient heat-flux component (\(\mathbf{Q}^{\delta}\), central column), and drifting heat-flux component (\(\mathbf{Q}^{D}\), right column). The emergence of transient heat backflow (temperature waves) originates from the lagged coupled evolution of \(\mathbf{Q}^{\delta}\) and \(\mathbf{Q}^{D}\), and is quantitatively affected by thermal viscosity, as discussed in the text. In the 2D plots, the heat-flux streamlines are shown in white, while the colormap shows the magnitude of the heat flux.

The evolution of the temperature field (first column in Fig. 2) shows an oscillatory behavior, also termed "lattice cooling" [14] because of the transient and local appearance of temperature values lower than the initial equilibrium temperature. Such an oscillatory behavior is in sharp contrast with that predicted by Fourier's diffusive equation (see SM VI), whose smoothing property [17] implies that the evolution of a smooth, positive temperature perturbation relaxes to equilibrium remaining non-negative with respect to the initial equilibrium values. The appearance of lattice cooling from the VHE can be understood by inspecting the time evolution of the two aforementioned components of the VHE's heat flux (\(\mathbf{Q}^{\delta}\) and \(\mathbf{Q}^{D}\)). The heat-flow streamlines in the second and third column of Fig. 2 show that the heat fluxes \(\mathbf{Q}^{\delta}\) and \(\mathbf{Q}^{D}\) can assume opposite directions during the relaxation, with the drifting flux \(\mathbf{Q}^{D}\propto\)\(\mathbf{u}\) backflowing against the temperature-gradient flux \(\mathbf{Q}^{\delta}\propto\)\(-\nabla T\). This is a consequence of the lagged (delayed) coupling between \(\mathbf{Q}^{\delta}\) and \(\mathbf{Q}^{D}\) in the VHE (1,2), which in the inviscid limit (\(\mu\)=0) reduces exactly to the lagged relationship between heat flux and temperature gradient discussed in the context of the dual-phase-lag model [2]. Specifically, we show in SM I that in the inviscid limit \(\mathbf{Q}^{TOT}(\mathbf{r},t+\tau_{Q})\)=\(-\kappa\nabla T(\mathbf{r},t+\tau_{T})\), where \(\tau_{Q}\)=\(A/\gamma\) is the delay between the application of a temperature gradient and the appearance of a heat flux, and \(\tau_{T}\)=\(\kappa A/(\alpha\beta+\kappa\gamma)\) is the time needed to create a temperature gradient from an established heat flux (here tensor/vector indexes are omitted because in-plane transport in graphite is isotropic [3; 9]). 
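As a consistency remark on these expressions, the two lags always satisfy \(\tau_{T}/\tau_{Q}=\kappa\gamma/(\alpha\beta+\kappa\gamma)<1\) for positive coefficients; a quick numerical scan (arbitrary illustrative values, not material parameters) confirms this:

```python
from itertools import product

def lags(kappa, A, alpha, beta, gamma):
    # tau_Q = A/gamma and tau_T = kappa*A/(alpha*beta + kappa*gamma), as in the text
    return A / gamma, kappa * A / (alpha * beta + kappa * gamma)

# Scan arbitrary positive coefficient combinations (illustrative only)
for kappa, A, alpha, beta, gamma in product([0.1, 1.0, 10.0], repeat=5):
    tau_Q, tau_T = lags(kappa, A, alpha, beta, gamma)
    assert tau_T < tau_Q
```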
This also shows that while steady-state hydrodynamic heat backflow can emerge exclusively as a consequence of a finite thermal viscosity, time-dependent hydrodynamic backflow (_i.e._, temperature oscillations) does not necessarily require viscosity to appear (see SM VI for details). Nevertheless, we show in the following and in SM VII that accounting for a finite thermal viscosity is necessary to obtain quantitative agreement between the hydrodynamic relaxation lengthscales and timescales predicted from theory and observed in experiments. _Resonant amplification of temperature waves.--_The temperature waves observed in Fig. 2 have a small amplitude and consequently are difficult to detect in experiments. Nevertheless, they are expected to exhibit resonant amplification when driven with a perturbation periodic in time, and this could be exploited to facilitate their experimental detection. Therefore, we investigate quantitatively the behavior of the device in Fig. 2 when driven with a periodic perturbation. Considering the analogies between temperature and mechanical waves, we applied to the device in Fig. 2 a perturbation mathematically similar to the \((1,2)\) mode of a loaded rectangular membrane, _i.e._\(\dot{q}(\mathbf{r},t)\)=\(\mathcal{H}[\sin(\omega t)+1]\exp\left[-\frac{(x+x_{c})^{2}}{2\sigma_{x}^{2}}-\frac{y^{2}}{2\sigma_{y}^{2}}\right]+\mathcal{H}[\sin(\omega t+\pi)+1]\exp\left[-\frac{(x-x_{c})^{2}}{2\sigma_{x}^{2}}-\frac{y^{2}}{2\sigma_{y}^{2}}\right]\). This perturbation is always non-negative, representing the laser heating employed in experiments [14; 15; 16]. Then, we monitored how the amplitude of the temperature oscillation, \(a\), varies as a function of frequency, \(f\). Fig. 
3 shows that the solution of the periodically driven VHE in natural graphite displays resonant behavior, _i.e._ plotting the oscillation amplitude as a function of frequency, \(a(f)\), we see a peak reminiscent of that observed in the frequency response of a driven underdamped oscillator. We highlight how the resonant behavior obtained from the viscous VHE is weaker than that obtained from the inviscid DPLE, while Fourier's law completely lacks resonant response (analogously to an overdamped mechanical oscillator). We also note that the analysis in Fig. 3 improves upon previous studies based on the inviscid DPLE [3; 7; 10] by providing insights into how the resonant behavior of temperature waves is affected by thermal viscosity. Finally, the inset shows that reducing isotopic-mass disorder in graphite yields a stronger VHE resonant response, and that analogous signatures are expected to emerge in h\({}^{11}\)BN around \(T\)=\(60K\) and in slightly smaller (\(15\mu m\)-long) devices.

Figure 3: **Resonant amplification of temperature waves.** Top, two-spots periodic perturbation (\(\dot{q}(\mathbf{r},t)\) in Eq. 1) applied to a rectangular device made of graphite and having dimension and boundary conditions as in Fig. 2. Bottom, resonant amplification of temperature oscillations around \(T\)=\(80K\) predicted by the VHE (green) and DPLE (blue). Red is Fourier's law, lacking amplification. The main plot refers to graphite at natural-abundance isotopic-mass disorder (\(98.9\%\)\({}^{12}C\), \(1.1\%\)\({}^{13}C\)). Inset, the VHE predict that isotopically pure samples of graphite (\(99.9\%\)\({}^{12}C\), \(0.1\%\)\({}^{13}C\), dashed dark green) feature a stronger resonant amplification compared to natural graphite (green, same as in the main plot); in addition, analogous signatures are predicted to appear in h\({}^{11}\)BN, around \(T\)=\(60K\) and in slightly smaller (\(15\mu m\)-long) devices.

Next, we systematically investigate how the maximum resonant amplification varies as a function of device size, average temperature, and type of material. We performed simulations analogous to Fig. 3 varying the device's size [68] and equilibrium temperature, computing for every simulation the maximum resonant amplification as \(\max_{f}[a(f)]/a_{0}\) (where \(a_{0}\)=\(\lim_{f\to 0}a(f)\), see Fig. 3). In Fig. 4**a,b** we compare the inviscid DPLE (**a**) with the VHE (**b**) in natural graphite. We see that the temperatures and lengthscales at which the VHE predict the emergence of hydrodynamic resonant behavior in natural samples are in broad agreement with the temperatures and lengthscales for the emergence of heat hydrodynamics discussed by Huberman _et al._[16]; in contrast, the DPLE fails to capture the reduction of hydrodynamic behavior as temperature is decreased below 100 K. Turning our attention to isotopically pure graphite (Fig. 4**c**), we see that here the VHE resonant response is stronger than in natural graphite, and it also persists up to larger lengthscales and higher temperatures, in broad agreement with the experiments by Ding _et al._[15][69]. Finally, Fig. 4**d** predicts that resonant behavior for viscous temperature waves occurs also in h\({}^{11}\)BN, with a magnitude slightly weaker than in natural graphite. _Conclusion.--_We have shed light on the fundamental physics determining the emergence of viscous heat hydrodynamics, discussing with quantitative first-principles accuracy how to induce temperature inversion in steady-state heat vortices, and viscous temperature waves in extreme thermal conductors such as graphite and layered h\({}^{11}\)BN. We have demonstrated that these phenomena can be amplified by engineering the device's boundary conditions or exploiting resonance, paving the way for applications in next-generation electronic and phononic technologies. 
We have provided novel, fundamental insights on temperature waves, showing that the viscous temperature waves emerging from the VHE differ fundamentally from the inviscid DPLE heat waves [1, 2, 3, 7, 10]. Most importantly, we have quantitatively demonstrated that viscous effects determine the hydrodynamic relaxation timescales [14] and lengthscales [15, 16] measured in pioneering experiments. These results share fundamental common underpinnings with other quasiparticles' fluid-like transport phenomena in solids--involving, e.g., electron-phonon bifluids [70, 71, 72, 73, 74, 75, 76], magnons [77], skyrmions [78]--and will thus potentially inspire analogous developments and applications. Finally, our findings may also directly translate to fluids flowing in porous media, and are thus relevant for soil science, groundwater hydrology, and petroleum engineering [79][80]. We thank Dr Miguel Beneitez and Dr Gareth Conduit for useful discussions. M. S. acknowledges support from Gonville and Caius College, and from the SNSF project P500PT_203178. J. D. thanks Prof Hrvoje Jasak for his hospitality in Cambridge. The first-principles calculations of conductivity and viscosity were performed on the Sulis Tier 2 HPC platform, funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium.
2302.05122
Anomalous Hall effect from a non-Hermitian viewpoint
Non-Hermitian descriptions often model open or driven systems away from equilibrium. Nonetheless, in equilibrium electronic systems, the non-Hermitian nature of an effective Hamiltonian manifests itself as unconventional observables such as a bulk Fermi arc and skin effects. We theoretically reveal that spin-dependent quasiparticle lifetimes, which signify the non-Hermiticity of an effective model in equilibrium, induce the anomalous Hall effect, namely the Hall effect without an external magnetic field. We first examine the effect of nonmagnetic and magnetic impurities and obtain a non-Hermitian effective model. Then, we calculate the Kubo formula from the microscopic model to ascertain a non-Hermitian interpretation of the longitudinal and Hall conductivities. Our results elucidate the vital role of the non-Hermitian equilibrium nature in quantum transport phenomena.
Hiroki Isobe, Naoto Nagaosa
2023-02-10T08:59:01Z
http://arxiv.org/abs/2302.05122v2
# Anomalous Hall Effect from a Non-Hermitian Viewpoint ###### Abstract Non-Hermitian descriptions often model open or driven systems away from the equilibrium. Nonetheless, in equilibrium electronic systems, a non-Hermitian nature of an effective Hamiltonian manifests itself as unconventional observables such as a bulk Fermi arc and skin effects. We theoretically reveal that spin-dependent quasiparticle lifetimes, which signify the non-Hermiticity of a model, induce the anomalous Hall effect, namely the Hall effect without an external magnetic field. We first examine the effect of nonmagnetic and magnetic impurities and obtain a non-Hermitian effective model. Then, we calculate the Kubo formula from the microscopic model to ascertain a non-Hermitian interpretation of the longitudinal and Hall conductivities. Our results elucidate the vital role of the non-Hermitian equilibrium nature in the quantum transport phenomena. A description of a material relies on a Hamiltonian. For an electronic system, it describes the quantum-mechanical motion of electrons under a crystalline potential. The wavefunction in a clean system thus has a Bloch form, consisting of a plane wave and a short-range modulation by an underlying crystal. As a wave without decay, a Bloch function represents a current with the probability conserved, which is a consequence of the Hermiticity of the Hamiltonian. In reality, however, a Bloch wave is not an exact solution in the presence of impurities or disorder. It decays during propagation, which we can effectively describe by a _non-Hermitian_ Hamiltonian [1; 2; 3; 4]. Examples of non-Hermitian effective models for quantum electronic systems include the electron-phonon coupling [5], disorder [6; 7], or strong correlation [8; 9; 10]. In those systems, the non-Hermiticity causes a Fermi arc terminating with exceptional points or a drumhead-like flat band encircled by an exceptional ring [11; 12]. 
At an exceptional point, the non-Hermitian Hamiltonian is nondiagonalizable, a situation that never arises for a Hermitian Hamiltonian [13; 14]. Despite such observable spectral features, little is known about the role of non-Hermiticity in quantum transport phenomena in solids [15; 16; 17]. Non-Hermitian models appear in a variety of fields other than quantum systems [18; 19; 20; 21], such as photonics [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], electrical circuits [40; 41; 42; 43], and mechanical systems [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. In classical open or driven systems, non-Hermiticity arises from gain and loss, which accompanies the energy flow in and out of the system in focus. It causes unusual features in the spectrum, resonance, and propagation that never appear in a Hermitian model, e.g., sharp resonance and unidirectional transparency. Those resonance and wave-propagation properties bring about advantages for measurements and detection through the response of the system. They are therefore easily observable, as opposed to the spectral features of quantum materials. We investigate the linear response of a two-dimensional Dirac material with impurities. We consider magnetic impurities in general, which induce _spin-dependent_ scattering, and derive an effective Hamiltonian from impurity averaging. It reveals spin-dependent lifetimes leading to non-Hermiticity. Independently, we evaluate the Kubo formula using the Dirac model with impurities by means of the conventional Feynman diagram technique. Our detailed calculations give the analytical expressions of the longitudinal and Hall conductivities, the latter of which emerges either from a uniform magnetization or from randomly distributed spin-dependent impurities, along with the spin-orbit coupling embedded in the model.
We reveal that the linear response properties manifest the non-Hermitian nature of the model; the spin-dependent lifetimes appearing in the effective Hamiltonian well approximate the longitudinal and Hall conductivities obtained from the Kubo formula. We also discuss the effect of skew scattering and the anomalous Hall effect induced by magnetic impurities without a uniform magnetization. _Model:_ We consider the Dirac Hamiltonian in two dimensions \[H_{0}(\mathbf{k})=v\mathbf{k}\cdot\mathbf{\sigma}+m\sigma_{z}, \tag{1}\] where the Pauli matrices \(\sigma_{x}\), \(\sigma_{y}\), \(\sigma_{z}\) represent the electron's spin, and \(v\) and \(m\) are the Dirac velocity and mass, respectively. We set \(\hbar=1\) unless otherwise noted. This Hamiltonian describes, e.g., a surface state of a topological insulator with the mass \(m\) corresponding to a uniform magnetization perpendicular to the plane induced by doping or deposition. For \(m=0\), \(H_{0}\) is invariant under time reversal \(\mathcal{T}=i\sigma_{y}\mathcal{K}\) with the complex conjugation \(\mathcal{K}\), since the mass \(m\) represents a uniform background magnetization coupled to Dirac electrons via the exchange coupling. We add impurities to the clean Dirac Hamiltonian. While it is common to consider nonmagnetic potential impurities [57; 58; 59; 60; 61], impurities generally have potential and magnetic couplings concurrently [62; 63]. We assume here that each impurity has a magnetic moment perpendicular to the plane, which results in the impurity potential \[H_{\rm imp}(\mathbf{r})=V(\mathbf{r})\eta,\quad\eta=\eta_{0}\sigma_{0}+\eta_{z}\sigma_{z}=\begin{pmatrix}\eta_{11}&0\\ 0&\eta_{22}\end{pmatrix}. \tag{2}\] Finite \(\eta_{z}\) describes the magnetic component of impurities. It breaks time-reversal symmetry microscopically while the rotational symmetry in the \(xy\) plane remains preserved.
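As a quick sanity check (not part of the paper), the spectrum of Eq. (1) can be verified numerically: diagonalizing \(H_{0}(\mathbf{k})\) should give the Dirac dispersion \(\pm\sqrt{v^{2}k^{2}+m^{2}}\). A minimal sketch, with arbitrary illustrative parameter values:

```python
import numpy as np

# Pauli matrices for the electron's spin
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H0(kx, ky, v=1.0, m=0.5):
    """Two-dimensional Dirac Hamiltonian, Eq. (1): v k.sigma + m sigma_z."""
    return v * (kx * sx + ky * sy) + m * sz

kx, ky, v, m = 0.3, -0.4, 1.0, 0.5
E = np.linalg.eigvalsh(H0(kx, ky, v, m))        # ascending eigenvalues
expected = np.sqrt(v**2 * (kx**2 + ky**2) + m**2)
print(E)                                         # approx [-0.7071, 0.7071]
assert np.allclose(E, [-expected, expected])
```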
For the impurity potential \(V(\mathbf{r})\), we consider the moments of the spatial distribution \[\begin{split}\langle V(\mathbf{r})V(\mathbf{r}^{\prime})\rangle& =\frac{n_{i}V_{2}}{(2\pi)^{2}}\delta(\mathbf{r}-\mathbf{r}^{\prime}),\\ \langle V(\mathbf{r})V(\mathbf{r}^{\prime})V(\mathbf{r}^{\prime\prime})\rangle &=\frac{n_{i}V_{3}}{(2\pi)^{2}}\delta(\mathbf{r}-\mathbf{r}^{ \prime})\delta(\mathbf{r}^{\prime}-\mathbf{r}^{\prime\prime}),\end{split} \tag{3}\] where \(\langle\ \rangle\) denotes impurity averaging and \(n_{i}\) is the impurity concentration. \(V_{p}\) (\(p=2,3\)) represents a \(p\)-th order moment per single atomic potential. We set \(\langle V(\mathbf{r})\rangle=0\) as the uniform component merely renormalizes the chemical potential and the mass. _Self-energy with impurity averaging_: We regard the impurity potential \(H_{\text{imp}}(\mathbf{r})\) as a perturbation to the clean system \(H_{0}(\mathbf{k})\). We calculate the impurity average to obtain the self-energy \(\Sigma^{s}(\epsilon)\), where \(s=\text{R}(\text{A})\) labels the retarded (advanced) function. The self-energy follows the self-consistent equation [Fig. 1(a)], where the solid line represents the full Green's function \(G^{s}(\mathbf{k},\epsilon)=[\epsilon-H_{0}(\mathbf{k})-\Sigma^{s}(\epsilon)]^{-1}\) and a cross denotes an impurity. In the following, we focus on retarded functions as Hermitian conjugation gives the corresponding advanced functions. Without spontaneous symmetry breaking of the rotational symmetry, the retarded self-energy should have the form \[\Sigma^{\text{R}}(\epsilon)=[\Sigma(\epsilon)-i\Gamma(\epsilon)]\sigma_{0}+[ \delta m(\epsilon)-i\gamma(\epsilon)]\sigma_{z}, \tag{4}\] where \(\Sigma(\epsilon)\), \(\delta m(\epsilon)\), \(\Gamma(\epsilon)\), and \(\gamma(\epsilon)\) are real functions. We henceforth refer to \(\Sigma(\epsilon)\) and \(\delta m(\epsilon)\) as real parts, and \(\Gamma(\epsilon)\) and \(\gamma(\epsilon)\) as imaginary parts. 
The real parts renormalize the energy and the mass as \(\bar{\epsilon}(\epsilon)=\epsilon-\Sigma(\epsilon)\) and \(\bar{m}(\epsilon)=m+\delta m(\epsilon)\), respectively. We obtain the explicit form of the self-energy later. _Non-Hermitian effective Hamiltonian_: We define the retarded effective Hamiltonian after impurity averaging as \[H_{\text{eff}}^{\text{R}}(\mathbf{k},\epsilon)=H_{0}(\mathbf{k})+\Sigma^{\text{R}}( \epsilon). \tag{5}\] The effective Hamiltonian recovers translational symmetry, which the microscopic model \(H_{0}(\mathbf{k})+H_{\text{imp}}(\mathbf{r})\) breaks. In compensation, the imaginary parts \(\Gamma(\epsilon)\) and \(\gamma(\epsilon)\) violate Hermiticity to describe the decay of Bloch waves. We note that Hermitian conjugation relates the retarded effective Hamiltonian not to itself but to the advanced one: \(H_{\text{eff}}^{\text{A}}(\mathbf{k},\epsilon)=[H_{\text{eff}}^{\text{R}}(\mathbf{k},\epsilon)]^{\dagger}\neq H_{\text{eff}}^{\text{R}}(\mathbf{k},\epsilon)\). The Dirac Hamiltonian is known to host the anomalous Hall effect with a finite mass, which requires time-reversal symmetry breaking. It is beneficial to examine how the time-reversal operation \(\mathcal{T}=i\sigma_{y}\mathcal{K}\) acts on the effective Hamiltonian. Considering that retarded and advanced functions describe forward and backward time evolutions, a necessary condition for the microscopic model to have time-reversal symmetry is \(\mathcal{T}H_{\text{eff}}^{\text{R}}(\mathbf{k},\epsilon)\mathcal{T}^{-1}=H_{ \text{eff}}^{\text{A}}(-\mathbf{k},\epsilon)\), which we dub _statistical_ time-reversal symmetry for brevity; see Supplemental Material (SM) for details [64]. In the effective Hamiltonian (5), \(m(\epsilon)\) and \(\gamma(\epsilon)\) break statistical time-reversal symmetry, where the former arises from a uniform magnetization and the latter from spin-dependent impurity scattering. We will see that both of them contribute to the anomalous Hall effect. 
We note that \(\Gamma(\epsilon)\), which describes the spin-independent part of the quasiparticle lifetimes, does not break statistical time-reversal symmetry. Despite the non-Hermitian effective Hamiltonian, we can write the Green's function in the eigenstate basis. As the effective Hamiltonian Eq. (5) is non-Hermitian, we have distinct left and right eigenvectors \(\mathbf{L}_{s}(\mathbf{k},\epsilon)\) and \(\mathbf{R}_{s}(\mathbf{k},\epsilon)\) with \(s=\pm\) corresponding to the two complex eigenvalues \(E_{s}^{\text{R}}(\mathbf{k},\epsilon)=\Sigma(\epsilon)-i\Gamma(\epsilon)+s\sqrt{v^{2 }k^{2}+[m+\delta m(\epsilon)-i\gamma(\epsilon)]^{2}}\)[64]. The projection operator on an eigenstate is \(P_{s}(\mathbf{k},\epsilon)=\mathbf{R}_{s}^{T}(\mathbf{k},\epsilon)\mathbf{L}_{s}(\mathbf{k},\epsilon)\). It is non-Hermitian and satisfies the completeness \(\sum_{s=\pm}P_{s}(\mathbf{k},\epsilon)=\sigma_{0}\). Then, we obtain the Green's function \[G^{\text{R}}(\mathbf{k},\epsilon)=\frac{P_{+}(\mathbf{k},\epsilon)}{\epsilon-E_{+}^{ \text{R}}(\mathbf{k},\epsilon)}+\frac{P_{-}(\mathbf{k},\epsilon)}{\epsilon-E_{-}^{ \text{R}}(\mathbf{k},\epsilon)}. \tag{6}\] _Spin-dependent lifetimes_: Though a self-consistent solution requires a numerical calculation, we can analytically find a perturbative solution of the self-energy for weak impurities. As we will see later, the imaginary parts play an important role in the transport properties. 
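Equation (6) is the spectral decomposition of a diagonalizable non-Hermitian matrix: the right eigenvectors and the rows of the inverse eigenvector matrix play the roles of \(\mathbf{R}_{s}\) and \(\mathbf{L}_{s}\). A minimal numerical sketch (the self-energy values below are hypothetical, chosen only to satisfy \(\Gamma>|\gamma|\)) confirms that Eq. (6) reproduces the direct inverse \([\epsilon-H_{\text{eff}}^{\text{R}}]^{-1}\) and that the non-Hermitian projectors satisfy completeness:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

# Retarded effective Hamiltonian, Eq. (5), with hypothetical Eq. (4) parameters
v, m = 1.0, 0.4
Sigma, Gamma, dm, gamma = 0.02, 0.05, 0.01, 0.03
kx, ky = 0.3, 0.2
H_eff = v*(kx*sx + ky*sy) + m*sz + (Sigma - 1j*Gamma)*s0 + (dm - 1j*gamma)*sz

E, R = np.linalg.eig(H_eff)          # complex eigenvalues; right eigenvectors as columns
L = np.linalg.inv(R)                 # rows are the corresponding left eigenvectors

eps = 0.7
# Spectral form of the Green's function, Eq. (6): sum_s P_s / (eps - E_s)
G_spec = sum(np.outer(R[:, s], L[s, :]) / (eps - E[s]) for s in range(2))
G_dir = np.linalg.inv(eps*s0 - H_eff)            # direct definition of G^R
assert np.allclose(G_spec, G_dir)

# Completeness of the non-Hermitian projectors: sum_s P_s = identity
assert np.allclose(sum(np.outer(R[:, s], L[s, :]) for s in range(2)), s0)
```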
The perturbative solutions of the imaginary parts are \[\Gamma(\epsilon) \approx\frac{\alpha_{2}\pi}{2}[(\eta_{11}^{2}+\eta_{22}^{2})|\epsilon|+(\eta_{11}^{2}-\eta_{22}^{2})m\operatorname{sgn}(\epsilon)] \tag{7a}\] \[\quad-\frac{\pi\alpha_{3}\Delta\Lambda}{\epsilon_{0}}[(\eta_{11}^{3}+\eta_{22}^{3})|\epsilon|+(\eta_{11}^{3}-\eta_{22}^{3})m\operatorname{sgn}(\epsilon)],\] \[\gamma(\epsilon) \approx\frac{\alpha_{2}\pi}{2}[(\eta_{11}^{2}-\eta_{22}^{2})|\epsilon|+(\eta_{11}^{2}+\eta_{22}^{2})m\operatorname{sgn}(\epsilon)]\] \[\quad-\frac{\pi\alpha_{3}\Delta\Lambda}{\epsilon_{0}}[(\eta_{11}^{3}-\eta_{22}^{3})|\epsilon|+(\eta_{11}^{3}+\eta_{22}^{3})m\operatorname{sgn}(\epsilon)], \tag{7b}\] for \(\epsilon^{2}>m^{2}\). Here, we introduce the dimensionless constant for the impurity strength \(\alpha_{p}=n_{i}^{p/2}V_{p}/(4\pi v)^{p/2}\) and the energy unit \(\epsilon_{0}=\sqrt{4\pi v^{2}n_{i}}\). Since we have obtained the Green's function in the eigenstate basis Eq. (6), we impose energy cutoffs separately for the conduction and valence bands, \(\Lambda_{+}\) and \(\Lambda_{-}\), respectively. The difference of the energy cutoffs \(\Delta\Lambda=\Lambda_{+}-\Lambda_{-}\) appears at order \(\alpha_{3}\). We retain the terms at order \(\alpha_{3}\) because \(\alpha_{3}\Delta\Lambda/\epsilon_{0}\) can be comparable to \(\alpha_{2}\) even for \(|\alpha_{3}|\ll\alpha_{2}\)[64].

Figure 1: Diagrammatic representations for the conductivity. (a) The self-energy captures the effect of time-reversal breaking in the effective non-Hermitian Hamiltonian. A solid line represents the Green's function \(G(\mathbf{k},\epsilon)\), and a cross and \(p(=2,3)\) dashed lines correspond to the potential with the \(p\)-th moment. (b) The Kubo formula calculates the longitudinal and anomalous Hall conductivities. (c) The vertex correction describes the corrections to the scattering time by impurities.
Importantly, the present impurity model generates spin-dependent lifetimes \[\tau_{\uparrow}=\frac{1}{2(\Gamma+\gamma)},\quad\tau_{\downarrow}=\frac{1}{2(\Gamma-\gamma)}. \tag{8}\] \(\gamma(\epsilon)\) represents their difference; importantly, it arises not from a uniform magnetization but from impurities with \(\eta_{11}\neq\eta_{22}\), i.e., when the impurity scattering depends on spin. We note that the relation \(\Gamma(\epsilon)\geq|\gamma(\epsilon)|\) must hold as the system is not driven by an external force. In other words, the two lifetimes are positive and hence quasiparticles always decay. _Conductivity calculations_: Now we calculate the conductivity \(\sigma_{ab}\) (\(a,b=x,y\)) from the microscopic model \(H_{0}+H_{\text{imp}}\) using the Kubo formula. It is convenient to decompose the formula in the same manner as the Kubo-Streda formula [65; 66], which leads to analytic solutions. At zero temperature [67], we write the electric conductivity [Fig. 1(b)] as \(\sigma_{ab}(\epsilon)=\sigma_{ab}^{\text{(Ia)}}(\epsilon)+\sigma_{ab}^{\text{(Ib)}}(\epsilon)+\sigma_{ab}^{\text{(II)}}(\epsilon)\), where the three terms are \(\sigma_{ab}^{\text{(Ia)}}(\epsilon)=\int_{\mathbf{k}}\text{tr}[j_{a}G^{\text{R}}(\epsilon)j_{b}G^{\text{A}}(\epsilon)]/(2\pi)\), \(\sigma_{ab}^{\text{(Ib)}}(\epsilon)=-\int_{\mathbf{k}}\text{tr}[j_{a}G^{\text{R}}(\epsilon)j_{b}G^{\text{R}}(\epsilon)+j_{a}G^{\text{A}}(\epsilon)j_{b}G^{\text{A}}(\epsilon)]/(4\pi)\), \(\sigma_{ab}^{\text{(II)}}(\epsilon)=\int_{\mathbf{k}}\int_{-\infty}^{\epsilon}d\epsilon^{\prime}\,\text{tr}[j_{a}G^{\text{R}}(\epsilon^{\prime})j_{b}\partial_{\epsilon^{\prime}}G^{\text{R}}(\epsilon^{\prime})-j_{a}\partial_{\epsilon^{\prime}}G^{\text{R}}(\epsilon^{\prime})j_{b}G^{\text{R}}(\epsilon^{\prime})+j_{a}\partial_{\epsilon^{\prime}}G^{\text{A}}(\epsilon^{\prime})j_{b}G^{\text{A}}(\epsilon^{\prime})-j_{a}G^{\text{A}}(\epsilon^{\prime})j_{b}\partial_{\epsilon^{\prime}}G^{\text{A}}(\epsilon^{\prime})]/(4\pi)\) with
\(\int_{\mathbf{k}}=\int d^{2}k/(2\pi)^{2}\). We omit \(\mathbf{k}\) in the Green's function, and the trace acts on the Pauli matrices for spin. \(j_{a}\) is the current operator and its bare form without impurity scattering is \(j_{a}=-ev\sigma_{a}\). Since we include the effect of scattering in the Green's function as a self-energy, we need to incorporate the vertex correction for a self-consistent calculation [68]. In the following, we discuss the calculation of the conductivity at a low impurity concentration, i.e., we consider an expansion with respect to \(n_{i}\). Then, we should retain the vertex correction in \(\sigma_{ab}^{\text{(Ia)}}\) while those in \(\sigma_{ab}^{\text{(Ib)}}\) and \(\sigma_{ab}^{\text{(II)}}\) give higher-order corrections [58]. We should thus replace one current operator \(j_{a}\) in \(\sigma_{ab}^{\text{(Ia)}}\) with \(j_{a}=-ev\Gamma_{a}(\epsilon)\), which we determine according to the self-consistent equation [Fig. 1(c)][64]. By evaluating the Kubo formula, we obtain the analytic expression of the conductivity [64] \[\sigma_{ab}^{\text{(Ia)}}(\epsilon)=\frac{e^{2}}{4\pi^{2}}\left[\mathbf{\Gamma}(\epsilon)\mathbf{\Lambda}(\epsilon)\right]_{ab},\quad\sigma_{ab}^{\text{(Ib)}}(\epsilon)=\frac{e^{2}}{4\pi^{2}}\delta_{ab}, \tag{9a}\] \[\sigma_{ab}^{\text{(II)}}(\epsilon)=-\frac{e^{2}}{4\pi^{2}}\varepsilon_{abz}\operatorname{Im}\log\frac{\bar{\epsilon}-\bar{m}+i\Gamma+i\gamma}{\bar{\epsilon}+\bar{m}+i\Gamma-i\gamma}.
\tag{9b}\] The matrices \(\mathbf{\Gamma}(\epsilon)\) and \(\mathbf{\Lambda}(\epsilon)\) are related to the vertex and ladder functions: \[\mathbf{\Gamma} =\{\mathbf{1}-\alpha_{2}\eta_{11}\eta_{22}\mathbf{\Lambda}\] \[\quad-\alpha_{3}\eta_{11}\eta_{22}[(\eta_{11}-\eta_{22}) \operatorname{Re}I_{0}^{\text{R}}+(\eta_{11}-\eta_{22})\operatorname{Re}I_{z}^ {\text{R}}]\mathbf{\Lambda}\] \[\quad-\alpha_{3}\eta_{11}\eta_{22}[(\eta_{11}-\eta_{22}) \operatorname{Im}I_{0}^{\text{R}}+(\eta_{11}+\eta_{22})\operatorname{Im}I_{z}^ {\text{R}}]\mathbf{\Lambda}\mathbf{\varepsilon}^{-1}, \tag{10}\] \[\mathbf{\Lambda} =\frac{\operatorname{Im}\log\zeta}{\operatorname{Im}\zeta}[(\bar{ \epsilon}^{2}+\Gamma^{2}-\bar{m}^{2}-\gamma^{2})\mathbf{1}-2(\bar{m}\Gamma+\bar{ \epsilon}\gamma)\mathbf{\varepsilon}], \tag{11}\] where we use \((\mathbf{1})_{ab}=\delta_{ab}\) and \((\mathbf{\varepsilon})_{ab}=\varepsilon_{zab}\) with the Levi-Civita symbol \(\varepsilon_{abc}\). We also define the functions \(\zeta(\epsilon)=(\bar{m}-i\gamma)^{2}-(\bar{\epsilon}+i\Gamma)^{2}\) and \[I_{0}^{\text{R}}(\epsilon) =-\frac{\Delta\Lambda}{\epsilon_{0}}-\frac{\bar{\epsilon}+i\Gamma }{\epsilon_{0}}\log\frac{\Lambda_{+}\Lambda_{-}}{\zeta}, \tag{12a}\] \[I_{z}^{\text{R}}(\epsilon) =-\frac{\bar{m}-i\gamma}{\epsilon_{0}}\log\frac{\Lambda_{+} \Lambda_{-}}{\zeta}. \tag{12b}\] We note that \(\mathbf{\Gamma}(\epsilon)\), \(\mathbf{\Lambda}(\epsilon)\), and \(I_{0,z}^{\text{R}}(\epsilon)\) are dimensionless functions. \(\sigma_{ab}^{\text{(Ib)}}\) contributes only to the longitudinal conductivity and \(\sigma_{ab}^{\text{(II)}}\) to the Hall conductivity. Roughly speaking, the Hall conductivity inside the band gap (\(\bar{\epsilon}^{2}<\bar{m}^{2}\)) comes from \(\sigma_{ab}^{\text{(II)}}\) to give \(\sigma_{xy}\approx-e^{2}/(4\pi)=-e^{2}/(2h)\) with the Planck constant \(h\) recovered. For large doping \(|\epsilon|\gg|m|,\epsilon_{0}\), \(\sigma_{ab}^{\text{(Ia)}}\) predominantly contributes to the conductivity. 
For \(|\alpha_{3}|\ll\alpha_{2}\) when skew scattering is not dominant, we find the approximate forms \[\sigma_{xx}(\epsilon)\approx\frac{e^{2}}{8\pi}\frac{|\epsilon|}{ \Gamma(\epsilon)}\phi, \tag{13a}\] \[\sigma_{xy}(\epsilon)\approx-\frac{e^{2}}{4\pi}\frac{\gamma( \epsilon)}{\Gamma(\epsilon)}\phi^{2}\operatorname{sgn}(\epsilon). \tag{13b}\] The constant \(\phi=[1-\eta_{11}\eta_{22}/(\eta_{11}^{2}+\eta_{22}^{2})]^{-1}\) originates from the vertex correction, characterizing transport quantities. From \(\sigma_{xx}(\epsilon)\), we can identify \(\tau_{\text{tr}}(\epsilon)=\phi/[2\Gamma(\epsilon)]\) as the transport scattering time [69; 70]. On the other hand, it is worth emphasizing that the approximate form of the anomalous Hall conductivity \(\sigma_{xy}\) relies on \(\gamma(\epsilon)\). Therefore, spin-dependent lifetimes (\(\tau_{\uparrow}\neq\tau_{\downarrow}\)) manifest time-reversal symmetry breaking, leading to the anomalous Hall conductivity \(\sigma_{xy}\propto(\tau_{\uparrow}-\tau_{\downarrow})/(\tau_{\uparrow}+\tau_{ \downarrow})\) along with the spin-orbit coupling embedded in the Dirac model. The approximate forms Eq. (13) provide a non-Hermitian interpretation of the electric conductivity for both longitudinal and transverse components. _Numerical results_: We show the longitudinal conductivity \(\sigma_{xx}\) and the Hall conductivity \(\sigma_{xy}\) in Fig. 2. We evaluate the analytic expressions of the conductivity Eq. (9) with the self-energy numerically obtained from the self-consistent equation [Fig. 1(a)]. We use the conductivity unit \(e^{2}/(2\pi\hbar)=e^{2}/h\) with the Planck constant \(h\) recovered. We present the results for various impurity types, masses, and energy cutoffs. We also depict the approximate results Eq. (13) in the same figure using the dashed lines, revealing a good agreement at relatively large doping from the Dirac point. It corroborates the non-Hermitian interpretation of the longitudinal and Hall conductivities. 
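The algebra behind \(\sigma_{xy}\propto(\tau_{\uparrow}-\tau_{\downarrow})/(\tau_{\uparrow}+\tau_{\downarrow})\) follows directly from Eq. (8): \((\tau_{\downarrow}-\tau_{\uparrow})/(\tau_{\uparrow}+\tau_{\downarrow})=\gamma/\Gamma\), the ratio entering Eq. (13b). A short sketch of Eqs. (8) and (13), using hypothetical values of \(\Gamma\) and \(\gamma\) and the impurity couplings of Fig. 2(c), in units \(e=\hbar=1\):

```python
import math

def lifetimes(Gamma, gamma):
    """Spin-dependent quasiparticle lifetimes, Eq. (8)."""
    assert Gamma >= abs(gamma)          # quasiparticles must decay
    return 1/(2*(Gamma + gamma)), 1/(2*(Gamma - gamma))

# Hypothetical self-energy values; eta couplings as in Fig. 2(c)
Gamma, gamma = 0.05, 0.02
eta11, eta22 = math.sqrt(1.5), math.sqrt(0.5)
phi = 1.0 / (1.0 - eta11*eta22/(eta11**2 + eta22**2))  # vertex-correction factor

eps = 1.0                               # Fermi level, sgn(eps) = +1
e2_4pi = 1.0 / (4*math.pi)
sigma_xx = 0.5*e2_4pi * abs(eps)/Gamma * phi           # Eq. (13a)
sigma_xy = -e2_4pi * (gamma/Gamma) * phi**2            # Eq. (13b)

tu, td = lifetimes(Gamma, gamma)
# sigma_xy is proportional to (tau_up - tau_down)/(tau_up + tau_down)
assert math.isclose(gamma/Gamma, (td - tu)/(tu + td))
```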
The dimensionless parameter \(\alpha_{3}\) characterizes the skewness of the impurity potential distribution whereas \(\alpha_{2}\) does the impurity potential strength. We note that \(\alpha_{3}\) is a major source of skew scattering [71; 72]. With the symmetric energy cutoffs for the conduction and valence bands (\(\Delta\Lambda=0\)), the effect of skewness is tiny with the ratio \(\alpha_{3}/\alpha_{2}^{2}=0.04\); compare the green and gray lines in Fig. 2. Its dependence is as weak as logarithmic, which we can infer from Eq. (12). However, asymmetric cutoffs (\(\Delta\Lambda\neq 0\)) enhance the effect of skewness (red and blue lines in Fig. 2) as it appears with a potentially large factor \(\Delta\Lambda/\epsilon_{0}\) in the self-energy Eq. (7) and the vertex correction Eq. (10). They modify the conductivity through the quasiparticle lifetime and the scattering time, respectively. Now we discuss the effect of the magnetic properties of impurities. For nonmagnetic impurities [Fig. 2(a)], our result coincides with the previous result with symmetric energy cutoffs [60; 61; 58; 64]. In this case, a uniform magnetization that yields a finite mass breaks time-reversal symmetry to induce finite anomalous Hall effect. The skewness \(\alpha_{3}\) makes the conductivity asymmetric about the charge neutrality \(\epsilon=0\) as it breaks electron-hole symmetry. We observe a larger conductivity in the conduction band where \(\alpha_{3}\eta_{0}\epsilon>0\), because the scattering amplitude by an impurity is smaller when the impurity potential is repulsive [61; 73]. Also, we tend to observe a larger conductivity for \(\Lambda_{+}>\Lambda_{-}\), when the band in which the impurity potential is repulsive has a wider energy range. The peak structure of the conductivity implies broad resonance of scattering [61], which is contained in the vertex correction Eq. (10) in the present analysis. For magnetic impurities [Fig. 
2(b)], the conductivity is reduced compared to the nonmagnetic impurity case with the same potential strength. Here we observe a certain electron-hole symmetry, which we will discuss later. In reality, a magnetic impurity induces both potential and magnetic scatterings at a single site (\(\eta_{0},\eta_{z}\neq 0\)); in other words, impurity scattering becomes spin-dependent [Fig. 2(c)]. Then, the anomalous Hall effect appears even without a uniform magnetization \(m=0\) [Fig. 2(d)]. The magnetism of impurities imparts time-reversal symmetry breaking, giving rise to \(\delta m\) and \(\gamma\); see Eq. (7) and SM [64]. The effect is prominent and realistic for \(\eta_{0},\eta_{z}\neq 0\) while purely magnetic impurities (\(\eta_{0}=0\), \(\eta_{z}\neq 0\)) without magnetization (\(m=0\)) can generate finite \(\sigma_{xy}\) with \(\alpha_{3}\neq 0\)[64]. The longitudinal conductivity shows a weak dependence on the Fermi level with a sharp dip at \(\epsilon=0\) to \(\sigma_{xx}\simeq e^{2}/(2\pi^{2})\)[64].

Figure 2: Fermi level dependence of the conductivity. The upper panels show the longitudinal conductivity \(\sigma_{xx}\) and the lower panels the Hall conductivity \(\sigma_{xy}\). (a)-(d) correspond to different impurity spin components and masses: (a) \(\eta=\sigma_{0}\) (nonmagnetic), \(m=\epsilon_{0}\) (massive); (b) \(\eta=\sigma_{z}\) (magnetic), \(m=\epsilon_{0}\) (massive); (c) \(\eta_{11}=\sqrt{3/2}\), \(\eta_{22}=\sqrt{1/2}\) (spin-dependent), \(m=\epsilon_{0}\) (massive); and (d) \(\eta_{11}=\sqrt{3/2}\), \(\eta_{22}=\sqrt{1/2}\) (spin-dependent), \(m=0\) (massless). We keep \(\eta_{11}^{2}+\eta_{22}^{2}\) constant for all cases. We choose the dimensionless constants for the impurity strength as \(\alpha_{2}=0.01\), \(\alpha_{3}=3\times 10^{-6}\) for the colored lines, while the gray lines correspond to the cases without skew scattering \(\alpha_{2}=0.01\), \(\alpha_{3}=0\). Different colors represent different energy cutoffs; see the legend. The cutoff dependence is as weak as logarithmic for \(\alpha_{3}=0\). The solid lines represent the exact solutions Eq. (9) and the dashed lines the approximate results with a non-Hermitian interpretation Eq. (13).

_Symmetries_: Some numerical results are symmetric or antisymmetric about the charge neutrality (\(\epsilon=0\)), which we can understand from the symmetries of the model. We consider the following three symmetry operations: (i) time reversal \(\mathcal{T}=i\sigma_{y}\mathcal{K}\), (ii) charge conjugation \(\mathcal{C}=\sigma_{x}\mathcal{K}\), and (iii) their product \(\mathcal{S}=\mathcal{T}\mathcal{C}=\sigma_{z}\). For convenience, we refer to \(\mathcal{S}\) as "sublattice" symmetry [74]. \(\mathcal{S}\) is a local operation acting on a spin, which we may view as a reflection (\(z\mapsto-z\)) about the two-dimensional system embedded in a three-dimensional space for the present model. The clean Hamiltonian \(H_{0}(\mathbf{k})\) has electron-hole symmetry \[\mathcal{C}H_{0}(\mathbf{k})\mathcal{C}^{-1}=-H_{0}(-\mathbf{k}), \tag{14a}\] while the impurity potential transforms as \[\mathcal{C}V(\mathbf{r})(\eta_{0}\sigma_{0}+\eta_{z}\sigma_{z})\mathcal{C}^{-1}=-V(\mathbf{r})(-\eta_{0}\sigma_{0}+\eta_{z}\sigma_{z}). \tag{14b}\] If \(\eta_{0}=0\), the entire system preserves electron-hole symmetry \(\mathcal{C}(H_{0}+H_{\text{imp}})\mathcal{C}^{-1}=-(H_{0}+H_{\text{imp}})\). Therefore, the conductivity with magnetic impurities is symmetric about the charge neutrality, as we have seen in Fig. 2(b). The operator \(\mathcal{C}\) swaps the energy cutoffs for the conduction and valence bands as well. For \(\eta_{0}\neq 0\) and \(\eta_{z}=0\), the conductivity remains electron-hole symmetric if \(\alpha_{3}=0\), since the distribution of the impurity potential retains electron-hole symmetry; see gray lines in Fig. 2(a).
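The charge-conjugation property of Eq. (14a) is easy to verify numerically: for the antiunitary \(\mathcal{C}=\sigma_{x}\mathcal{K}\), conjugation acts on a Hamiltonian as \(\mathcal{C}H\mathcal{C}^{-1}=\sigma_{x}H^{*}\sigma_{x}\). A short check (parameter values are arbitrary, chosen only for illustration), including the "sublattice" operation \(\mathcal{S}=\sigma_{z}\) on the massless Hamiltonian:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H0(kx, ky, v=1.0, m=0.5):
    """Clean Dirac Hamiltonian, Eq. (1)."""
    return v*(kx*sx + ky*sy) + m*sz

kx, ky = 0.37, -0.21

# Charge conjugation C = sigma_x K (antiunitary): C H C^{-1} = sx H* sx
lhs = sx @ H0(kx, ky).conj() @ sx
assert np.allclose(lhs, -H0(-kx, -ky))          # Eq. (14a)

# "Sublattice" S = sigma_z (unitary) anticommutes with the massless H0
lhs_S = sz @ H0(kx, ky, m=0.0) @ sz
assert np.allclose(lhs_S, -H0(kx, ky, m=0.0))
```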
In other words, converting \(V(\mathbf{r})\) to \(-V(\mathbf{r})\) does not change \(\alpha_{2}\); i.e., electron-hole symmetry is statistically preserved. In the gapless case, on the other hand, \(\mathcal{S}\) transforms the Hamiltonian as \[\mathcal{S}H_{0}(\mathbf{k})\mathcal{S}^{-1}=-H_{0}(\mathbf{k})\quad(m=0), \tag{15a}\] \[\mathcal{S}H_{\text{imp}}(\mathbf{r})\mathcal{S}^{-1}=H_{\text{imp}}(\mathbf{r}). \tag{15b}\] If \(\alpha_{3}=0\), the model statistically preserves the "electron-hole" symmetry imposed by \(\mathcal{S}\). As \(\mathcal{S}\) is virtually a reflection about the plane, the Hall conductivity changes sign under \(\mathcal{S}\) and hence becomes antisymmetric about the charge neutrality, whereas the longitudinal conductivity remains symmetric [Fig. 2(d)]. See SM for more details [64]. _Scaling_: Figure 3 shows the scaling plots by varying the Fermi level \(\epsilon\) and the impurity concentration \(n_{i}\). For the other parameters, we use the same values as those for Fig. 2(c). When we increase the Fermi level from the band edge [Fig. 3(a)], the longitudinal conductivity \(\sigma_{xx}\) gradually increases while the Hall conductivity \(\sigma_{xy}\) remains around \(e^{2}/(2h)\). For \(e^{2}/h\lesssim\sigma_{xx}\lesssim 10e^{2}/h\), there seems to be a scaling region with \(|\sigma_{xy}|\propto\sigma_{xx}^{0.2}\)[64]. As \(\sigma_{xx}\) grows, it begins to saturate because of the artifact of short-range impurities [70], but \(\sigma_{xy}\) keeps growing linearly with the energy owing to skew scattering [64], resulting in a rapid upturn in the scaling plot. On the other hand, the scaling plot by varying the impurity concentration reveals the known behavior.
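Scaling exponents such as the \(|\sigma_{xy}|\propto\sigma_{xx}^{0.2}\) region quoted above are read off as slopes on a log-log plot. A generic extraction sketch on synthetic data (illustrative only; these are not the paper's numerical results):

```python
import numpy as np

def scaling_exponent(sxx, sxy):
    """Least-squares slope of log|sigma_xy| versus log(sigma_xx)."""
    slope, _ = np.polyfit(np.log(sxx), np.log(np.abs(sxy)), 1)
    return slope

# Synthetic data obeying |sigma_xy| = 0.5 * sigma_xx**0.2 (hypothetical)
sxx = np.linspace(1.0, 10.0, 50)
sxy = -0.5 * sxx**0.2
assert np.isclose(scaling_exponent(sxx, sxy), 0.2)
```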
As \(\sigma_{xx}\) increases with smaller \(n_{i}\), we observe the side-jump, intrinsic, and skew-scattering regions, where we find the approximate scaling relations \(|\sigma_{xy}|\propto\sigma_{xx}^{1.6}\), \(|\sigma_{xy}|\sim\text{const.}\), and \(|\sigma_{xy}|\propto\sigma_{xx}^{1}\), respectively [57]. _Discussions_: In Fig. 2(d), we observed the anomalous Hall effect in the absence of a uniform magnetization (\(m=0\)). It relies on the spin-dependent scattering (\(\eta_{0},\eta_{z}\neq 0\)), leading to spin-dependent lifetimes \(\tau_{\uparrow}\neq\tau_{\downarrow}\) and thus to the non-Hermiticity of the effective model. In reality, random magnetic impurities may have a finite uniform magnetization, considering that the magnetic component \(\eta_{z}\) arises from the exchange coupling. We note that the gapped and gapless cases [Figs. 2(c), (d)] are continuously connected. In addition, one might be concerned about a violation of the Onsager reciprocal relation when the Hall conductivity is finite without a uniform magnetization. However, a finite anomalous Hall effect requires magnetic impurities, which microscopically break time-reversal symmetry, so that our results comply with the Onsager reciprocal relation. Lastly, it is worth pointing out that the gapless Dirac model does not have a finite Berry curvature, so it is natural to attribute the finite \(\sigma_{xy}\) to scattering-related phenomena rather than to an intrinsic origin. _Acknowledgment_: This work was supported by JST CREST Grant No. JPMJCR1874, Japan, and JSPS KAKENHI Grant No. 18H03676.
2308.10212
Modified Friedmann equations from fractional entropy
Based on the fractional black hole entropy (Jalalzadeh S. et al., Eur. Phys. J. C, 81 (2021) 632), we derive the modified Friedmann equations from two different frameworks. First, we consider the modifications of Friedmann equations from the first law of thermodynamics at the apparent horizon. We show that the generalized second law (GSL) of thermodynamics always holds in a region bounded by the apparent horizon. Then, we obtain Friedmann equations from Verlinde's entropic gravity framework. We also compute the fractional corrections to the deceleration parameter $q$ in the flat case $k=0$ for both frameworks. Furthermore, we consider the time to reach the initial singularity for the two frameworks. The results indicate that the initial singularity is accessible for both frameworks. However, fractional effects may provide a constraint on the equation of state parameter in the entropic gravity scenario since the time is imaginary for $-2/3\alpha<\omega<-1/3$.
Zeynep Çoker, Özgür Ökcü, Ekrem Aydiner
2023-08-20T09:45:00Z
http://arxiv.org/abs/2308.10212v2
# Modified Friedmann equations from fractional entropy ###### Abstract Based on the fractional black hole entropy (Jalalzadeh S. et al., Eur. Phys. J. C, 81 (2021) 632), we derive the modified Friedmann equations from two different frameworks. First, we consider the modifications of Friedmann equations from the first law of thermodynamics at the apparent horizon. We show that the generalized second law (GSL) of thermodynamics always holds in a region bounded by the apparent horizon. Then, we obtain Friedmann equations from Verlinde's entropic gravity framework. We also compute the fractional corrections to the deceleration parameter \(q\) in the flat case \(k=0\) for both frameworks. Furthermore, we consider the time to reach the initial singularity for the two frameworks. The results indicate that the initial singularity is accessible for both frameworks. However, fractional effects may provide a constraint on the equation of state parameter in the entropic gravity scenario since the time is imaginary for \(-2/3\alpha<\omega<-1/3\). ## 1 Introduction Black hole thermodynamics is one of the most promising research fields in theoretical physics since it reveals the multifaceted character of gravity, namely, the deep connection between gravitation, quantum mechanics, and thermodynamics [1, 2, 3, 4, 5, 6]. The black hole surface area \(A\) and surface gravity \(\kappa\) correspond to the thermodynamic quantities entropy \(S\) and temperature \(T\), respectively. Inspired by black hole thermodynamics, Jacobson derived the Einstein field equations from the first law of thermodynamics [7]. He obtained the field equations by considering the entropy-area expression with the Clausius relation, \(\delta Q=TdS\), under the assumption that the relation is valid for local Rindler causal horizons through each spacetime point. Here, \(\delta Q\) and \(T\) correspond to the energy flux and the Unruh temperature, respectively. 
Following Jacobson's seminal work, many papers have aimed to reveal the deep connection between gravitational dynamics and horizon thermodynamics [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]. For example, studies of the relation between the Einstein field equations and the first law of thermodynamics can be found in refs. [8, 9, 10, 11, 12]. Motivated by Jacobson's study, Cai and Kim [13] derived higher-dimensional Friedmann equations from the first law in the form of \(-dE=T_{h}dS_{h}\) at the apparent horizon. Here, \(-dE\) corresponds to the energy flux passing through the apparent horizon during an infinitesimal time interval at a fixed horizon radius. The temperature and entropy at the apparent horizon are given by [13] \[T_{h}=\frac{1}{2\pi\tilde{r_{A}}},\qquad\qquad S_{h}=\frac{A}{4}, \tag{1}\] where \(A\) and \(\tilde{r_{A}}\) are the area and the apparent horizon radius, respectively 1. Moreover, they also obtained Friedmann equations from the entropy formulae of the Gauss-Bonnet and Lovelock gravity theories, where the entropy is not proportional to the horizon area. Based on ref. [13], Friedmann equations in the scalar\(-\)tensor and \(f(R)\) gravity theories were derived in ref. [14]. Although one can obtain the Friedmann equations from the above equations, this framework has some shortcomings, such as restricting the equation of state to vacuum\(-\)energy\(-\)dominated or de Sitter spacetime. Besides, the horizon temperature is not proportional to the surface gravity \(\kappa\). Assuming the proportionality of temperature and surface gravity in addition to the entropy-area relation, one should update the first law of thermodynamics at the apparent horizon as follows [15]: Footnote 1: We use the units \(\hbar=c=G_{N}=L_{Pl}^{2}=1\) throughout the paper. 
\[dE=T_{h}dS_{h}+WdV, \tag{2}\] where \(W\) is the work density and \(E=\rho V\) is the total energy in the volume \(V\) enclosed by the apparent horizon. Then, following ref. [15], the thermodynamics of the apparent horizon and the unified first law were also studied in refs. [16, 17]. Recently, Friedmann equations and apparent horizon thermodynamics have been widely studied in the literature [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. Another interesting aspect of thermodynamical gravitation is Verlinde's entropic gravity [47]. In 2010, Verlinde proposed that gravity is not a fundamental force, since combining gravity with quantum mechanics is harder than for the other forces. He claimed that gravity can be interpreted as an entropic force that emerges due to the entropy changes of bits on a holographic screen. Assuming a holographic screen with an Unruh temperature, he derived Newton's second law. Furthermore, using the holographic principle and the equipartition law of energy, he also derived Newton's gravitational law and the Einstein field equations. Subsequently, many studies on the derivations of Newton's gravitational law, the Einstein field equations and the Friedmann equations in entropic gravity have been published [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]. It is widely known that entropy can be modified in the context of various theories. The literature is rich with applications of modified Friedmann equations, both for the first law at the apparent horizon and for entropic gravity. Motivated by various approaches, corrected Friedmann equations were derived for loop quantum gravity [18, 19], the generalised uncertainty principle [20, 21, 22], and rainbow gravity [58, 59]. Moreover, Friedmann equations modified by the Tsallis [27, 28, 29], Barrow [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42], and Kaniadakis [43, 44, 60] entropies can be found in the literature. Recently, Jalalzadeh et al. 
investigated the effects of fractional quantum mechanics (FQM) on Schwarzschild black hole thermodynamics [63]. By using a space-fractional derivative of second order (the Riesz derivative), they obtained the fractional black hole entropy from a modified Wheeler-DeWitt equation. In order to obtain the modified Wheeler-DeWitt equation, they first considered the canonical quantization procedure of the Schwarzschild black hole and obtained the corresponding Hamiltonian. Then, including the quantum Riesz derivative in the momentum operator of the Hamiltonian leads to the fractional Wheeler-DeWitt equation. The fractional entropy is given by [63] \[S_{h}=\left(\pi\tilde{r_{A}}^{2}\right)^{\frac{2+\alpha}{2\alpha}},\qquad 1<\alpha\leq 2, \tag{3}\] where \(\alpha\) is the fractional parameter and the standard case is recovered for \(\alpha=2\). This equation implies that the entropy is a power\(-\)law function of the area. We note that this entropy resembles the Barrow [64] and Tsallis [65] entropies, although they have different motivations and physical principles. In this work, we would like to investigate the modifications of the Friedmann equations for fractional entropy. In order to obtain the Friedmann equations, we consider two frameworks, namely, the first law of thermodynamics at the apparent horizon and entropic gravity. The fractional derivative extends the order of the derivative to real or complex numbers. There are various kinds of fractional derivatives: the Liouville, Riemann, Caputo, and Riesz fractional derivatives, etc. [66]. None of these derivatives successfully explains all the experimental data; instead, their agreement with experiment depends on the specific problem. Fractional calculus finds many applications, especially in quantum mechanics. The fractional generalization of quantum mechanics is known as FQM [67, 68, 69, 70, 71, 72, 73, 74]. The interested reader may refer to Laskin's monograph on FQM in ref. [73] and the review in ref. [74]. 
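As a quick sanity check of eq. (3) (our own sketch, not part of the paper), the fractional entropy reduces to the standard area law \(S_{h}=A/4\), with \(A=4\pi\tilde{r_{A}}^{2}\), in the limit \(\alpha\to 2\):

```python
import math

def fractional_entropy(r_A, alpha):
    # Fractional horizon entropy of eq. (3): S_h = (pi * r_A**2)**((2+alpha)/(2*alpha))
    return (math.pi * r_A ** 2) ** ((2 + alpha) / (2 * alpha))

r_A = 3.7  # arbitrary apparent-horizon radius, Planck units
area = 4 * math.pi * r_A ** 2
# alpha = 2 recovers the Bekenstein-Hawking area law S = A/4
assert math.isclose(fractional_entropy(r_A, 2.0), area / 4)
```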
Moreover, many studies devoted to relativistic gravitation and cosmology have recently been carried out within FQM [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103]. The paper is organized as follows: In the next section, we obtain the fractional Friedmann equations from the first law of thermodynamics at the apparent horizon. We investigate the deceleration parameter and the time to reach the initial singularity. Then, we check the validity of the GSL. In the third section, we derive fractional Friedmann equations from the entropic gravity framework. Similarly, we study the deceleration parameter and calculate the time to reach the singularity. Finally, the conclusions are presented in the last section. ## 2 Friedmann equations from the first law of thermodynamics Let us begin with a quick glimpse at the basic elements of the Friedmann-Robertson-Walker (FRW) universe. The line element of the FRW universe in compact form is defined by [13] \[ds^{2}=h_{ab}dx^{a}dx^{b}+\tilde{r}^{2}d\Omega^{2}, \tag{4}\] where \(\tilde{r}=a(t)r\), \(a(t)\) is the scale factor, \(x^{a}=(t,r)\), and \(h_{ab}=\mathrm{diag}\,(-1,a^{2}/(1-kr^{2}))\) is the two\(-\)dimensional metric. \(k=-1\), \(0\), and \(1\) correspond to the open, flat, and closed universe, respectively. The apparent horizon is defined by [13] \[\tilde{r_{A}}=ar=\frac{1}{\sqrt{H^{2}+k/a^{2}}}, \tag{5}\] where \(H=\dot{a}/a\) is the Hubble parameter, and the dot denotes the derivative with respect to time. The surface gravity of the horizon is given by [13, 104] \[\kappa=-\frac{1}{\tilde{r_{A}}}\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right), \tag{6}\] and the corresponding temperature of the apparent horizon is given by [15] \[T_{h}=\frac{\kappa}{2\pi}=-\frac{1}{2\pi\tilde{r_{A}}}\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right). 
\tag{7}\] We assume the matter and energy content of the universe to be an ideal fluid; thus the corresponding energy-momentum tensor is given by \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{8}\] where \(\rho\), \(p\) and \(u^{\mu}\) are the energy density, pressure, and four\(-\)velocity of the fluid, respectively. From the conservation of the energy-momentum tensor, i.e., \(T^{\alpha\beta}{}_{;\beta}=0\), one obtains the continuity equation as \[\dot{\rho}+3H(\rho+p)=0. \tag{9}\] According to the arguments of ref. [104], the work density is defined by \[W=-\frac{1}{2}T^{ab}h_{ab}=\frac{1}{2}(\rho-p). \tag{10}\] At this point, we briefly comment on eq. (10): here, \(W\) is the work done by the volume change of the universe. Now, we calculate eq. (2) step by step. Employing the volume [15] \[V=\frac{4}{3}\pi\tilde{r_{A}}^{3}, \tag{11}\] the total energy of the universe \(E=\rho V\) and eq. (9), one finds the differential of \(E\) as follows \[dE=\rho dV+Vd\rho=4\pi\rho\tilde{r_{A}}^{2}d\tilde{r_{A}}-4\pi(\rho+p)\tilde{r_{A}}^{3}Hdt, \tag{12}\] and from eqs. (10) and (11), \(WdV\) is given by \[WdV=2\pi(\rho-p)\tilde{r_{A}}^{2}d\tilde{r_{A}}. \tag{13}\] Differentiating the entropy in eq. (3), we obtain \(T_{h}dS_{h}\) as \[T_{h}dS_{h}=-\left(\frac{2+\alpha}{2\alpha}\right)\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-\alpha}{\alpha}}d\tilde{r_{A}}. \tag{14}\] Combining eqs. (12), (13) and (14) in eq. (2) and employing the relation \[d\tilde{r_{A}}=-H\tilde{r_{A}}^{3}\left(\dot{H}-\frac{k}{a^{2}}\right)dt, \tag{15}\] we obtain \[4\pi(\rho+p)Hdt=\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-4\alpha}{\alpha}}d\tilde{r_{A}}. \tag{16}\] Combining the continuity equation with the above equation and integrating gives the first Friedmann equation. Then, using eq. (15) in the above equation yields the second Friedmann equation. 
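For the reader's convenience, we spell out the omitted integration step (our rewriting, using only eqs. (9) and (16)): since eq. (9) gives \((\rho+p)Hdt=-\frac{1}{3}d\rho\), eq. (16) becomes \[-\frac{4\pi}{3}d\rho=\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-4\alpha}{\alpha}}d\tilde{r_{A}},\] and integrating both sides (with vanishing integration constant) gives \[\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-3\alpha}{\alpha}}=\frac{4\pi}{3}\left(\frac{3\alpha-2}{\alpha}\right)\rho,\] i.e., the first of the modified Friedmann equations.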
The modified Friedmann equations are given by \[\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-3\alpha}{\alpha}}=\frac{4\pi}{3}\left(\frac{3\alpha-2}{\alpha}\right)\rho,\] \[\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\tilde{r_{A}}^{\frac{2-\alpha}{\alpha}}\left(\dot{H}-\frac{k}{a^{2}}\right)=-4\pi(\rho+p), \tag{17}\] where we set the integration constant to zero in the first equation 2. These modified equations reduce to the usual form in the limit \(\alpha\to 2\). Using eq. (5), the Friedmann equations are expressed in terms of the Hubble parameter, i.e., Footnote 2: Alternatively, differentiating the first Friedmann equation and employing the continuity equation gives the second Friedmann equation. \[\left(H^{2}+\frac{k}{a^{2}}\right)^{\frac{3\alpha-2}{2\alpha}}=\frac{8\pi}{3}\left(\frac{3\alpha-2}{(2+\alpha)\pi^{\frac{2-\alpha}{2\alpha}}}\right)\rho,\] \[\left(\frac{2+\alpha}{2\alpha}\right)\pi^{\frac{2-\alpha}{2\alpha}}\left(H^{2}+\frac{k}{a^{2}}\right)^{\frac{\alpha-2}{2\alpha}}\left(\dot{H}-\frac{k}{a^{2}}\right)=-4\pi(\rho+p). \tag{18}\] Now, we would like to investigate the fractional effects on the deceleration parameter \(q\) for \(k=0\). It is given by \[q=-\frac{a\ddot{a}}{\dot{a}^{2}}, \tag{19}\] where positivity and negativity of \(q\) mean decelerated and accelerated phases, respectively. From the continuity equation (9), we obtain \[\rho=\rho_{0}a^{-3(1+\omega)}, \tag{20}\] where we use \(p=\omega\rho\) as the equation of state and \(\rho_{0}\) is a constant. Substituting eq. (20) in the first equation in (18) yields the following solution \[a(t)\propto t^{\frac{3\alpha-2}{3\alpha(\omega+1)}}. \tag{21}\] From the above solution, the deceleration parameter is given by \[q=\frac{2+3\alpha\omega}{3\alpha-2}. 
\tag{22}\] For radiation (\(\omega=1/3\)) and matter (\(\omega=0\)) dominated eras, the deceleration parameters are given by \[q_{rad}=\frac{2+\alpha}{3\alpha-2},\qquad q_{m}=\frac{2}{3\alpha-2}, \tag{23}\] respectively. We recover the standard forms of eqs. (23) in the limit \(\alpha\to 2\). Since \(1<\alpha\leq 2\), both deceleration parameters are always positive, i.e., matter\(-\) and radiation-dominated eras correspond to decelerated phases. The results imply that fractional effects do not provide an alternative to dark energy. This is in contrast to Tsallis cosmology [27], while it is similar to Barrow cosmology [38]. At late times, \(q\) is negative for \(\omega<-\frac{2}{3\alpha}\), which clearly shows a shift from the standard case, i.e., \(\omega<-1/3\). Now, we investigate whether the initial singularity of the universe is accessible. For this purpose, we use the analysis in refs. [105, 106]. Combining the continuity equation with the first Friedmann equation for \(k=0\), we get \[\dot{\rho}=\pm 3(\omega+1)\sqrt{\pi}\left(\frac{8(3\alpha-2)}{3(\alpha+2)}\right)^{\frac{\alpha}{3\alpha-2}}\rho^{\frac{4\alpha-2}{3\alpha-2}}. \tag{24}\] Integrating this equation from a finite density \(\rho_{*}\) to an infinite one, we find \[t=\pm\frac{1}{3\sqrt{\pi}(\omega+1)}\left(\frac{8(3\alpha-2)}{3(\alpha+2)}\right)^{\frac{\alpha}{2-3\alpha}}\int_{\rho_{*}}^{\infty}\rho^{\frac{4\alpha-2}{2-3\alpha}}d\rho, \tag{25}\] and the solution is given by \[t=\pm\frac{1}{3\sqrt{\pi}(\omega+1)}\left(\frac{8(3\alpha-2)}{3(\alpha+2)}\right)^{\frac{\alpha}{2-3\alpha}}\left(\frac{2-3\alpha}{\alpha}\right)\rho_{*}^{\frac{\alpha}{2-3\alpha}}. \tag{26}\] The result implies that the Big Bang singularity is accessible since the time to reach the singularity is finite. Moreover, we can make the analysis more specific by substituting eq. (20) into the integral in eq. (25). 
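As an aside (our own numerical sketch, not part of the paper), eqs. (22)-(23) and the convergence of the integral in eq. (25) can be checked directly:

```python
def q_first_law(alpha, omega):
    # Deceleration parameter of eq. (22), first-law framework
    return (2 + 3 * alpha * omega) / (3 * alpha - 2)

for alpha in [1.01, 1.3, 1.7, 2.0]:
    # both eras stay decelerated for 1 < alpha <= 2 (q > 0) ...
    assert q_first_law(alpha, 1 / 3) > 0 and q_first_law(alpha, 0.0) > 0
    # ... and the density exponent in eq. (25) is < -1, so the integral
    # up to rho -> infinity converges: the time to the singularity is finite
    assert (4 * alpha - 2) / (2 - 3 * alpha) < -1

# alpha -> 2 recovers the standard values q_rad = 1 and q_m = 1/2
assert abs(q_first_law(2.0, 1 / 3) - 1.0) < 1e-12
assert abs(q_first_law(2.0, 0.0) - 0.5) < 1e-12
```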
We obtain \[t=\frac{\pm 1}{3(1+\omega)\sqrt{\pi}}\left(\frac{8\rho_{0}(3\alpha-2)}{3(\alpha+2)}\right)^{\frac{\alpha}{2-3\alpha}}\left(\frac{3\alpha-2}{\alpha}\right)a^{\frac{3\alpha(1+\omega)}{3\alpha-2}}\Bigg{|}_{a_{*}}^{0}, \tag{27}\] where \(a_{*}\) is a finite and nonzero scale factor. It is straightforward to find \(t=\infty\) for the condition \(\omega<-1\). However, this condition is independent of the fractional modification, since the standard case (\(\alpha=2\)), \[t=\frac{\pm 2}{3(1+\omega)}\left[\frac{8\pi\rho_{0}}{3}\right]^{-\frac{1}{2}}a^{\frac{3(1+\omega)}{2}}\Bigg{|}_{a_{*}}^{0}, \tag{28}\] similarly gives \(t=\infty\) for \(\omega<-1\). For \(\omega>-1\) and \(1<\alpha\leq 2\), eq. (27) yields \[t=\frac{\pm 1}{3(1+\omega)\sqrt{\pi}}\left(\frac{8\rho_{0}(3\alpha-2)}{3(\alpha+2)}\right)^{\frac{\alpha}{2-3\alpha}}\left(\frac{3\alpha-2}{\alpha}\right)a_{*}^{\frac{3\alpha(1+\omega)}{3\alpha-2}}. \tag{29}\] This result similarly implies that the time to reach the singularity is finite. In fig. (1), we present the time to reach the singularity with respect to the scale factor 3. For larger values of \(a\), the time to reach the singularity increases as \(\alpha\) decreases; for smaller values of \(a\), it increases as \(\alpha\) increases. Footnote 3: We only present the \(\omega=0\) case since similar effects can be seen for the radiation\(-\)dominated and accelerated cases. ### Generalised second law The GSL states that the total entropy of the fluid and the horizon does not decrease with time. In order to check the validity of the GSL, we rearrange eq. (16) as follows: \[\dot{\tilde{r_{A}}}=4\pi^{\frac{3\alpha-2}{2\alpha}}\left(\frac{2\alpha}{2+\alpha}\right)(\rho+p)H\tilde{r_{A}}^{\frac{4\alpha-2}{\alpha}}. \tag{30}\] Figure 1: Time to reach singularity vs scale factor. Dashed\(-\)dotted blue line \(\alpha=1.1\), dotted green line \(\alpha=1.4\), dashed red line \(\alpha=1.7\) and black solid line \(\alpha=2\). We take \(\rho_{0}=1\) and \(\omega=0\). 
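The curves in the figure follow from eq. (29); as a consistency check (ours, not in the paper), eq. (29) evaluated at \(\alpha=2\) coincides with the standard result (28):

```python
import math

def t_singularity(a_star, alpha, omega, rho0=1.0):
    # Time to reach the initial singularity, eq. (29); valid for omega > -1, 1 < alpha <= 2
    pref = (8 * rho0 * (3 * alpha - 2) / (3 * (alpha + 2))) ** (alpha / (2 - 3 * alpha))
    return pref / (3 * (1 + omega) * math.sqrt(math.pi)) * ((3 * alpha - 2) / alpha) \
        * a_star ** (3 * alpha * (1 + omega) / (3 * alpha - 2))

def t_standard(a_star, omega, rho0=1.0):
    # Standard alpha = 2 result, eq. (28) evaluated at a_*
    return 2 / (3 * (1 + omega)) * (8 * math.pi * rho0 / 3) ** (-0.5) \
        * a_star ** (3 * (1 + omega) / 2)

# eq. (29) reduces to eq. (28) in the limit alpha -> 2 ...
assert math.isclose(t_singularity(2.5, 2.0, 0.0), t_standard(2.5, 0.0))
# ... and for 1 < alpha < 2 the time stays finite and positive (singularity accessible)
assert 0.0 < t_singularity(2.5, 1.4, 0.0) < math.inf
```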
Combining eqs. (14) and (30) leads to the following expression \[T_{h}\dot{S_{h}}=4\pi(\rho+p)H\tilde{r_{A}}^{3}\left[1-2\pi^{\frac{3\alpha-2}{2\alpha}}(\rho+p)\left(\frac{2\alpha}{2+\alpha}\right)\tilde{r_{A}}^{\frac{3\alpha-2}{\alpha}}\right], \tag{31}\] which does not ensure the validity of the second law, i.e., \(\dot{S_{h}}\geq 0\), since \((\rho+p)\) is negative for the accelerated phase. Therefore, we must take into account the entropy of the matter fields. To do so, we consider the Gibbs equation, which is defined by [107] \[T_{f}dS_{f}=d(\rho V)+pdV=Vd\rho+(\rho+p)dV, \tag{32}\] where \(T_{f}\) and \(S_{f}\) correspond to the temperature and entropy of the matter fields inside the apparent horizon. Moreover, we assume thermal equilibrium between the horizon and the fluid, i.e., \(T_{h}=T_{f}\); otherwise, the spontaneous energy flow between them would deform the FRW geometry [107], and non-equilibrium thermodynamics would be required. Hence, one can obtain the following expression from eqs. (9), (11), (30) and the Gibbs equation \[T_{h}\dot{S_{f}}=-4\pi\tilde{r_{A}}^{3}(\rho+p)H\left[1-4\pi^{\frac{3\alpha-2}{2\alpha}}(\rho+p)\left(\frac{2\alpha}{2+\alpha}\right)\tilde{r_{A}}^{\frac{3\alpha-2}{\alpha}}\right]. \tag{33}\] Finally, combining eqs. (31) and (33), we obtain \[T_{h}(\dot{S_{f}}+\dot{S_{h}})=8\pi^{\frac{5\alpha-2}{2\alpha}}\left(\frac{2\alpha}{2+\alpha}\right)(\rho+p)^{2}H\tilde{r_{A}}^{\frac{6\alpha-2}{\alpha}}. \tag{34}\] It is clear that the total entropy does not decrease with time. Therefore, the GSL always holds for all eras of the universe. ## 3 Friedmann equations from entropic gravity In this section, we derive the Friedmann equations from Verlinde's entropic gravity. Following the arguments of refs. [47, 48], we consider a compact spatial region \(\mathcal{V}\) with a holographic screen on the boundary \(\partial\mathcal{V}\). 
We assume that the number of bits on the screen is [47] \[N=A, \tag{35}\] where \(A\) is the area of the holographic screen. From the entropy-area relation \(S=A/4\), \(N\) is given by \[N=4S_{h}. \tag{36}\] Moreover, we assume that the total energy of the screen obeys the equipartition law, so we have [47] \[E=\frac{1}{2}NT, \tag{37}\] where the screen temperature \(T\) corresponds to the Unruh temperature. It is defined by [47, 108] \[T=\frac{a_{r}}{2\pi}=-\frac{\ddot{a}r}{2\pi}, \tag{38}\] where \(a_{r}\) is the acceleration. It is given by [48] \[a_{r}=-\frac{d^{2}\tilde{r_{A}}}{dt^{2}}=-\ddot{a}r. \tag{39}\] In order to obtain the Friedmann equations, we should consider the active gravitational mass \(\mathcal{M}\) instead of the total mass \(M\) inside the region \(\mathcal{V}\). It is given by [48] \[\mathcal{M}=2\int_{\mathcal{V}}dV\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right)u^{\mu}u^{\nu}=\frac{4}{3}\pi(\rho+3p)\tilde{r_{A}}^{3}, \tag{40}\] where \(T_{\mu\nu}\) is defined as in eq. (8). Furthermore, we assume [47] \[E=\mathcal{M}. \tag{41}\] From the fractional entropy (3), the number of bits \(N\) (36) is modified as \[N=4\left(\pi\tilde{r_{A}}^{2}\right)^{\frac{\alpha+2}{2\alpha}}=4\left(\pi a^{2}r^{2}\right)^{\frac{\alpha+2}{2\alpha}}. \tag{42}\] From eqs. (37)-(42), one can easily derive the second Friedmann equation, i.e., the acceleration equation, \[\pi^{\frac{2-\alpha}{2\alpha}}\left(\frac{\ddot{a}}{a}\right)\tilde{r_{A}}^{\frac{2-\alpha}{\alpha}}=-\frac{4\pi}{3}(\rho+3p). \tag{43}\] Multiplying both sides of this equation by \(2\dot{a}a\) and using the continuity equation (9) yields \[\int\frac{d\dot{a}^{2}}{dt}dt=\frac{8\pi}{3}\int\frac{1}{(\pi r^{2})^{\frac{2-\alpha}{2\alpha}}a^{\frac{2-\alpha}{\alpha}}}\frac{d(\rho a^{2})}{dt}dt. \tag{44}\] Choosing again the equation of state as \(p=\omega\rho\), we obtain from eq. (20), \[d\left(\rho a^{2}\right)=-\rho_{0}(1+3\omega)a^{-2-3\omega}da. 
\tag{45}\] Using the above expression in the integral (44), one can obtain \[\left(\frac{\dot{a}}{a}\right)^{2}+\frac{k}{a^{2}}=\frac{8\pi\rho_{0}}{3}\frac{\alpha(3\omega+1)}{\pi^{\frac{2-\alpha}{2\alpha}}(3\alpha\omega+2)}\frac{a^{-3(\omega+1)}}{\tilde{r_{A}}^{\frac{2-\alpha}{\alpha}}}, \tag{46}\] which is a modification of the first Friedmann equation. Using \(H=\dot{a}/a\) and eq. (5), the Friedmann equations are expressed in terms of the Hubble parameter as follows: \[\left(H^{2}+\frac{k}{a^{2}}\right)^{\frac{3\alpha-2}{2\alpha}}=\frac{8\pi\rho}{3}\frac{\alpha(3\omega+1)}{\pi^{\frac{2-\alpha}{2\alpha}}(3\alpha\omega+2)},\] \[\pi^{\frac{2-\alpha}{2\alpha}}\left(\dot{H}+H^{2}\right)=-\frac{4\pi}{3}(\rho+3p)\left(H^{2}+\frac{k}{a^{2}}\right)^{\frac{2-\alpha}{2\alpha}}, \tag{47}\] where we used eq. (20) in the first Friedmann equation 4. Again, in the limit \(\alpha\to 2\), the usual Friedmann equations are recovered. Footnote 4: At this point, we give some comments on the first Friedmann equation. In order to evaluate the integral in eq. (44), we employ \(p=\omega\rho\), so we obtain the first Friedmann equation for a specific equation of state. Let us again consider the deceleration parameter \(q\) for the flat case. From eqs. (19), (43) and (46), \(q\) is given by \[q=\frac{2+3\alpha\omega}{2\alpha}. \tag{48}\] Choosing \(\omega=1/3\) and \(\omega=0\) gives \[q_{rad}=\frac{2+\alpha}{2\alpha},\qquad q_{m}=\frac{1}{\alpha} \tag{49}\] for the radiation\(-\) and matter-dominated eras, respectively. Needless to say, the deceleration parameter reduces to the standard form in the limit \(\alpha\to 2\). Similarly, the radiation\(-\) and matter-dominated eras correspond to decelerated phases since \(q>0\). In order to explain the late time acceleration (\(q<0\)), we again find that \(\omega\) must satisfy the condition \(\omega<-\frac{2}{3\alpha}\). Lastly, we probe the singular and nonsingular beginning of the universe. 
To do so, we rearrange the first Friedmann equation in eq. (47) as \[t=\pm\left[\frac{8\pi^{\frac{3\alpha-2}{2\alpha}}(1+3\omega)\alpha\rho_{0}}{3(2+3\alpha\omega)}\right]^{\frac{\alpha}{2-3\alpha}}\int_{a_{*}}^{0}a^{\frac{3(1+\omega)\alpha}{3\alpha-2}-1}da \tag{50}\] for the flat case, \(k=0\). Evaluating this integral gives \[t=\pm\left[\frac{8\pi^{\frac{3\alpha-2}{2\alpha}}(1+3\omega)\alpha\rho_{0}}{3(2+3\alpha\omega)}\right]^{\frac{\alpha}{2-3\alpha}}\frac{3\alpha-2}{3\alpha(1+\omega)}{a_{*}}^{\frac{3(1+\omega)\alpha}{3\alpha-2}}. \tag{51}\] One can see that the integral in eq. (50) is finite for \(\omega>-1\); therefore, the initial singularity is accessible. However, the prefactor of eq. (51) is imaginary in the interval \(-\frac{2}{3\alpha}<\omega<-\frac{1}{3}\), so the time is also imaginary in this interval. In fig. (2), we present the time to reach the singularity with respect to the scale factor and the equation\(-\)of\(-\)state parameter 5. As can be seen in the figure, the time is imaginary for \(-4/9<\omega<-1/3\) and therefore unphysical. In order to avoid imaginary time, we may interpret this result as a constraint on \(\omega\): we may exclude the values of \(\omega\) between \(-2/3\alpha\) and \(-1/3\) since the time is not defined there. We know that standard cosmology does not impose such a constraint on \(\omega\); this unexpected constraint arises with the fractional modification. This interval may imply that the universe will not arise, i.e., a cosmic beginning will not occur. Footnote 5: The effects of the fractional parameter resemble those in fig. (1). Finally, we give some comments on the consistency of the two methods, since one may expect them to give the same results. In contrast to the previous section, in this section we derived the Friedmann equations for a specific equation of state. Hence, eqs. (18) and (47) are not exactly the same. 
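(As an aside, and as our own sketch rather than part of the paper, the statements above are easy to verify numerically: eq. (48) reproduces the standard deceleration parameters at \(\alpha=2\), vanishes exactly at the acceleration threshold \(\omega=-2/(3\alpha)\), and the bracketed prefactor of eq. (51) ceases to be real for \(-2/(3\alpha)<\omega<-1/3\), since a negative base is raised to a non-integer power.)

```python
import math

def q_entropic(alpha, omega):
    # Deceleration parameter of eq. (48), entropic-gravity framework
    return (2 + 3 * alpha * omega) / (2 * alpha)

assert abs(q_entropic(2.0, 1 / 3) - 1.0) < 1e-12     # q_rad -> 1 at alpha = 2
assert abs(q_entropic(2.0, 0.0) - 0.5) < 1e-12       # q_m -> 1/2 at alpha = 2
assert abs(q_entropic(1.5, -2 / (3 * 1.5))) < 1e-12  # acceleration threshold

# eq. (51): for -2/(3*alpha) < omega < -1/3 the base of the prefactor is
# negative, so raising it to the non-integer power alpha/(2-3*alpha) yields
# a complex (non-real) time
alpha, omega, rho0 = 1.5, -0.4, 1.0  # -4/9 < -0.4 < -1/3
base = 8 * math.pi ** ((3 * alpha - 2) / (2 * alpha)) * (1 + 3 * omega) * alpha * rho0 \
    / (3 * (2 + 3 * alpha * omega))
assert base < 0 and isinstance(base ** (alpha / (2 - 3 * alpha)), complex)
```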
Nevertheless, if we absorb the extra terms in \(\rho\), the first Friedmann equations can be regarded as the same. ## 4 Conclusions In this paper, using the fractional entropy [63], we obtained fractionally modified Friedmann equations from two different approaches. First, we derived the Friedmann equations from the first law of thermodynamics at the apparent horizon [15]. We checked the validity of the GSL and showed that it holds for all eras. Then, we investigated the effects of the fractional parameter on the deceleration parameter and the time to reach the initial singularity. We found that \(q\) is still positive for the radiation- and matter-dominated eras; therefore, these eras correspond to decelerated phases. The results indicate that fractional effects cannot provide an alternative to dark energy, so we still need dark energy to explain the late time acceleration. Moreover, \(\omega\) must obey the condition \(\omega<-\frac{2}{3\alpha}\) for \(q<0\) at late times. Based on the method in refs. [105, 106], we also calculated the time to reach the initial singularity from a finite initial density \(\rho_{*}\) to the Big Bang density \(\rho\rightarrow\infty\). Moreover, we repeated the investigation for the scale factor, namely, we calculated the time to reach the initial singularity from a nonzero scale factor \(a_{*}\) to \(a\to 0\). Our analysis reveals that the initial singularity is accessible since the time is finite. Next, we derived modified Friedmann equations from Verlinde's entropic gravity approach [47, 48]. Again, we examined the deceleration parameter and the time to reach the initial singularity, and found results similar to those of the first framework for the deceleration parameters. Finally, we investigated the time to reach the initial singularity in the entropic gravity case. The result indicates that the singularity is accessible. 
We found that time is imaginary, namely, unphysical for \(-\frac{2}{3\alpha}<\omega<-\frac{1}{3}\). Therefore, we may interpret the result as a constraint on \(\omega\). For the sake of completeness, we also mention the papers in refs. [109, 110] where the authors explored cosmological consequences and modified gravity for Barrow entropy. Figure 2: Time to reach singularity vs scale factor and equation of state parameter. We take \(\rho_{0}=1\) and \(\alpha=3/2\). ## Acknowledgments The authors thank the anonymous referees for their helpful and constructive comments. This work was supported by Istanbul University Post-Doctoral Research Project: MAB-2021-38032. Data availability statement: No new data were created or analysed in this study.
2307.10497
Integrable discretizations for a generalized sine-Gordon equation and the reductions to the sine-Gordon equation and the short pulse equation
In this paper, we propose fully discrete analogues of a generalized sine-Gordon (gsG) equation $u_{t x}=\left(1+\nu \partial_x^2\right) \sin u$. The bilinear equations of the discrete KP hierarchy and the proper definition of discrete hodograph transformations are the keys to the construction. Then we derive semi-discrete analogues of the gsG equation from the fully discrete gsG equation by taking the temporal parameter $b\rightarrow0$. Especially, one full-discrete gsG equation is reduced to a semi-discrete gsG equation in the case of $\nu=-1$ (Feng {\it et al. Numer. Algorithms} 2023). Furthermore, $N$-soliton solutions to the semi- and fully discrete analogues of the gsG equation in the determinant form are constructed. Dynamics of one- and two-soliton solutions for the discrete gsG equations are discussed with plots. We also investigate the reductions to the sine-Gordon (sG) equation and the short pulse (SP) equation. By introducing an important parameter $c$, we demonstrate that the gsG equation reduces to the sG equation and the SP equation, and the discrete gsG equation reduces to the discrete sG equation and the discrete SP equation, respectively, in the appropriate scaling limit. The limiting forms of the $N$-soliton solutions to the gsG equation also correspond to those of the sG equation and the SP equation.
Han-Han Sheng, Bao-Feng Feng, Guo-Fu Yu
2023-07-19T23:35:01Z
http://arxiv.org/abs/2307.10497v1
# Integrable discretizations for a generalized sine-Gordon equation and the reductions to the sine-Gordon equation and the short pulse equation ###### Abstract In this paper, we propose fully discrete analogues of a generalized sine-Gordon (gsG) equation \(u_{tx}=\left(1+\nu\partial_{x}^{2}\right)\sin u\). The bilinear equations of the discrete KP hierarchy and the proper definition of discrete hodograph transformations are the keys to the construction. Then we derive semi-discrete analogues of the gsG equation from the fully discrete gsG equation by taking the temporal parameter \(b\to 0\). Especially, one full-discrete gsG equation is reduced to a semi-discrete gsG equation in the case of \(\nu=-1\) (Feng _et al._, _Numer. Algorithms_, 2023). Furthermore, \(N\)-soliton solutions to the semi- and fully discrete analogues of the gsG equation in the determinant form are constructed. Dynamics of one- and two-soliton solutions for the discrete gsG equations are discussed with plots. We also investigate the reductions to the sine-Gordon (sG) equation and the short pulse (SP) equation. By introducing an important parameter \(c\), we demonstrate that the gsG equation reduces to the sG equation and the SP equation, and the discrete gsG equation reduces to the discrete sG equation and the discrete SP equation, respectively, in the appropriate scaling limit. The limiting forms of the \(N\)-soliton solutions to the gsG equation also correspond to those of the sG equation and the SP equation. 
**Keywords**: generalized sine-Gordon equation, short pulse equation, integrable discretization, Hirota's bilinear method ## 1 Introduction In this paper, we are concerned with the integrable discretizations of a generalized sine-Gordon (gsG) equation \[u_{tx}=\left(1+\nu\partial_{x}^{2}\right)\sin u, \tag{1}\] where \(u=u(x,t)\) is a scalar-valued function, \(\nu\) is a real parameter which can be normalized to \(\nu=\pm 1\), and \(\partial_{x}^{2}\) and the subscripts \(t\) and \(x\) appended to \(u\) denote partial differentiation. The gsG equation (1) was first derived by Fokas in 1995 using bi-Hamiltonian methods [1]. For \(\nu=-1\), the integrability was established by the Lax pair, and the initial value problem for decaying initial data was solved by the inverse scattering method [2]. Eigenfunctions of the Lax pair and traveling-wave solutions were obtained through the Riemann-Hilbert formalism [2]. A variety of solutions, including kinks, loop solitons, and breathers, were recognized from the general soliton solution in parametric form constructed by Hirota's bilinear method [3]. The gsG equation (1) with \(\nu=1\) was solved by Hirota's bilinear method [4]. It should be commented here that the structure of the solutions to equation (1) with \(\nu=1\) is significantly different from that with \(\nu=-1\), as it does not admit multi-valued solutions like loop solitons [3, 4]. Quite recently, we constructed several semi-discrete analogues of the gsG equation with \(\nu=-1\) and presented the determinant formulae of the \(N\)-soliton solutions both for the gsG equation and the semi-discrete gsG equation in [5]. The study of discrete integrable systems, which is connected to many other disciplines such as quantum field theory, numerical algorithms, random matrices, and orthogonal and biorthogonal polynomials, has recently received a lot of interest [6]. 
Compared to continuous integrable systems, there are far fewer examples of discrete integrable systems and analytical tools available for studying them. On the other hand, it is widely believed that discrete integrable systems are more fundamental and universal than their continuous counterparts. The authors have conducted a substantial amount of research in finding integrable discretizations of soliton equations, including the short pulse (SP) equation [7, 8], the (2+1)-dimensional Zakharov equation [9], the Camassa-Holm equation [10, 11], the Degasperis-Procesi equation [12], and the modified Camassa-Holm equation [13] via Hirota's bilinear method. Building upon the compatibility between an integrable system and its Backlund transformation, a systematic procedure was proposed for obtaining discrete versions of integrable PDEs using Hirota's bilinear method [14]. It was demonstrated that the gsG equation with \(\nu=\pm 1\) reduces to the SP equation in the short wave limit and to the sine-Gordon (sG) equation in the long wave limit [3, 4]. Thus, the gsG equation is an interesting soliton equation that lies between the SP equation and the sG equation. Since the semi-discrete and fully discrete sG equations are well known [15, 16, 17, 18], and the integrable discretization of the SP equation was recently proposed by one of the authors in [7], it would be quite an interesting problem to construct the integrable discretizations of the gsG equation (1), which is indeed the main motivation of our present paper. Another challenging problem is looking for the reductions from the gsG equation to the sG equation and the SP equation, both in the continuous case and in the discrete case. Although the reductions from the gsG equation to the sG equation and the SP equation were proposed in [3, 4], the reductions in the discrete case differ substantially from those in the continuous case. 
The problem lies in the fact that we not only apply scaling transformations to the original variables in the equation, i.e., \(u\), \(x\), and \(t\), but also transform the new variables after the hodograph transformation and the parameters in the \(\tau\)-function, such as \(y\), \(\tau\), and \(p\) in [3, 4]. It is difficult to find the correspondence in the discrete case. To obtain the reductions, we introduce an important parameter \(c\), which can be viewed as the coefficient of the Backlund transformation between two sets of bilinear equations of the two-dimensional Toda-lattice (2DTL) equation. In this paper, we construct integrable fully discrete analogues of the gsG equation (1) with \(\nu=\pm 1\) from two sets of discrete bilinear 2DTL equations, and the corresponding determinant solutions are obtained. In addition, the semi-discrete gsG equation with \(\nu=\pm 1\) is constructed from the fully discrete gsG equation, among which the semi-discrete gsG equation with \(\nu=-1\) agrees with our previous result in [5]. Moreover, the connections of the discrete gsG equation to the discrete sG equation and the discrete SP equation are clarified by appropriate but different scaling limits. The remainder of the paper is organized as follows. In Section 2, we review the bilinear equations and determinant solutions of the gsG equation with \(\nu=1\), which can be reduced from the bilinear equations of the 2DTL equation and their Backlund transformation. We demonstrate that equation (1) reduces to the sG equation and the SP equation with the scaling transformation on the original variables and the corresponding limits of \(c\). The limiting forms of the \(N\)-soliton solution also correspond to the known solutions of the sG and SP equations. In Section 3, starting with two sets of bilinear discrete 2DTL equations, we derive an integrable fully discrete analogue of the gsG equation (1) with \(\nu=\pm 1\) and present its \(N\)-soliton solutions. 
Two conserved quantities of the fully discrete gsG equation are obtained. In Section 4, we propose the semi-discrete gsG equation from two approaches: the fully discrete gsG equation and the semi-discrete 2DTL equation, respectively. Reductions to the corresponding discrete analogues of the sG equation and the SP equation are also investigated. In Section 5, we present soliton solutions to the semi- and fully discrete gsG equation and investigate their properties, focusing mainly on one- and two-soliton solutions. Section 6 is devoted to a brief summary and discussion. Some detailed proofs are given in the appendices. ## 2 From the 2DTL equation and its Backlund transformation to the gsG equation with \(\nu=1\) In this section, we show that the bilinear equations and the multisoliton solution to the gsG equation (1) with \(\nu=1\) given by Matsuno [4] can be generated from two sets of bilinear equations of the 2DTL equation, a Backlund transformation between them, and their determinant solutions, through a series of reductions and transformations, including the hodograph transformation and dependent variable transformations. ### A brief review of the gsG equation with \(\nu=1\) Firstly, we give a brief review of the results in [4] about the bilinear form of the gsG equation (1) with \(\nu=1\) \[u_{tx}=\left(1+\partial_{x}^{2}\right)\sin u. \tag{2}\] Introducing the new dependent variable \(r\) in accordance with the relation \[r^{2}=1-u_{x}^{2},\qquad(0<r<1), \tag{3}\] the gsG equation can be rewritten as \[r_{t}-(r\cos u)_{x}=0, \tag{4}\] which is exactly a conservation law of (2). Then we define the hodograph transformation \((x,t)\rightarrow(y,\tau)\) by \[\mathrm{d}y=\lambda r\mathrm{d}x+\lambda r\cos u\mathrm{d}t,\qquad\mathrm{d}\tau=\lambda^{-1}\mathrm{d}t, \tag{5}\] where \(\lambda\in\mathbb{R}\) is a constant. 
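The statement that (4) is a conservation law of (2) can be verified mechanically. The following sketch (our addition, not part of the original text; it assumes SymPy is available) substitutes the gsG equation into the residual of (4) and checks that it vanishes identically:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

# r is defined through r^2 = 1 - u_x^2, Eq. (3)
r = sp.sqrt(1 - sp.diff(u, x)**2)

# residual of the candidate conservation law r_t - (r cos u)_x = 0, Eq. (4)
expr = (sp.diff(r, t) - sp.diff(r*sp.cos(u), x)).doit()

# impose the gsG equation u_{tx} = (1 + d_x^2) sin u, Eq. (2)
rhs = sp.sin(u) + sp.diff(sp.sin(u), x, 2)
expr = expr.subs({sp.Derivative(u, t, x): rhs, sp.Derivative(u, x, t): rhs})

residual = sp.simplify(expr)
assert residual == 0   # (4) holds on solutions of (2)
```

Because the substitution removes the only mixed derivative, the remaining expression collapses to zero after radical simplification.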
The derivatives for \(x\) and \(t\) are then rewritten in terms of \(y\) and \(\tau\) as \[\frac{\partial}{\partial x}=\lambda r\frac{\partial}{\partial y},\quad\frac{ \partial}{\partial t}=\lambda^{-1}\frac{\partial}{\partial\tau}+\lambda r\cos u \frac{\partial}{\partial y}. \tag{6}\] With the new variables \(y\) and \(\tau\), (3) and (4) are recast into the form \[r^{2}=1-\lambda^{2}r^{2}u_{y}^{2}, \tag{7}\] \[\lambda^{-1}\left(\frac{1}{r}\right)_{\tau}+\lambda(\cos u)_{y}=0, \tag{8}\] respectively. Further reduction is possible if one defines the variable \(\varphi\) by \[\lambda u_{y}=\sinh\varphi,\quad\varphi=\varphi(y,\tau). \tag{9}\] It follows from (7) and (9) that \[\frac{1}{r}=\cosh\varphi. \tag{10}\] Substituting (9) and (10) into equation (8), we find \[\varphi_{\tau}=\lambda\sin u. \tag{11}\] Let \(\sigma\) and \(\sigma^{\prime}\) be solutions of sG equation \[\sigma_{\tau y}=\sin\sigma,\quad\sigma=\sigma(\tau,y), \tag{12}\] \[\sigma^{\prime}_{\tau y}=\sin\sigma^{\prime},\quad\sigma^{\prime} =\sigma^{\prime}(\tau,y). \tag{13}\] Then we put \[u=\frac{1}{2}(\sigma+\sigma^{\prime}), \tag{14}\] \[\varphi=\frac{1}{2\mathrm{i}}(\sigma-\sigma^{\prime}). \tag{15}\] In terms of \(\sigma\) and \(\sigma^{\prime}\), equations (9) and (11) can be written as \[\frac{1}{2}(\sigma+\sigma^{\prime})_{y}=\lambda^{-1}\sinh\left( \frac{1}{2\mathrm{i}}(\sigma-\sigma^{\prime})\right)=-\mathrm{i}\lambda^{-1} \sin\frac{1}{2}(\sigma-\sigma^{\prime}), \tag{16}\] \[\frac{1}{2}(\sigma-\sigma^{\prime})_{\tau}=\mathrm{i}\lambda\sin \frac{1}{2}(\sigma+\sigma^{\prime}). \tag{17}\] Introducing the dependent variable transformation \[\sigma=2{\rm i}\ln\frac{\bar{g}}{\bar{f}}, \tag{18}\] \[\sigma^{\prime}=2{\rm i}\ln\frac{\bar{f}}{g}, \tag{19}\] where \(\bar{f}\) and \(\bar{g}\) denote the complex conjugate of \(f\) and \(g\), respectively. 
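The algebra connecting (7), (9), (10) and (11) can also be double-checked symbolically. This is a small sanity sketch of ours (assuming SymPy), not part of the original derivation:

```python
import sympy as sp

lam = sp.symbols('lambda', positive=True)
y, tau = sp.symbols('y tau', real=True)
phi = sp.symbols('varphi', real=True)

# Eqs. (9) and (10): lambda*u_y = sinh(varphi), 1/r = cosh(varphi)
r, u_y = 1/sp.cosh(phi), sp.sinh(phi)/lam

# they satisfy the transformed constraint (7): r^2 = 1 - lambda^2 r^2 u_y^2
check7 = sp.simplify(r**2 - (1 - lam**2*r**2*u_y**2))
assert check7 == 0

# substituting (9)-(10) into (8) factors as (sinh(varphi)/lambda)*(varphi_tau - lambda*sin u),
# whose vanishing is exactly Eq. (11)
Phi = sp.Function('varphi')(y, tau)
U = sp.Function('u')(y, tau)
eq8 = sp.diff(sp.cosh(Phi), tau)/lam + lam*sp.diff(sp.cos(U), y)
eq8 = eq8.subs(sp.Derivative(U, y), sp.sinh(Phi)/lam)
check11 = sp.simplify(eq8 - sp.sinh(Phi)/lam*(sp.diff(Phi, tau) - lam*sp.sin(U)))
assert check11 == 0
```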
From (12) and (13), one can obtain \[D_{\tau}D_{y}f\cdot f=\frac{1}{2}(f^{2}-\bar{g}^{2}), \tag{20}\] \[D_{\tau}D_{y}g\cdot g=\frac{1}{2}(g^{2}-\bar{f}^{2}), \tag{21}\] and their complex conjugates. From (16)-(17), we have \[{\rm i}\left(\ln\frac{\bar{f}\bar{g}}{fg}\right)_{y}=\frac{1}{2\lambda}\left(\frac{g\bar{g}}{f\bar{f}}-\frac{f\bar{f}}{g\bar{g}}\right), \tag{22}\] \[\left(\ln\frac{g\bar{g}}{f\bar{f}}\right)_{\tau}=\frac{{\rm i}\lambda}{2}\left(\frac{\bar{f}\bar{g}}{fg}-\frac{fg}{\bar{f}\bar{g}}\right). \tag{23}\] By using \[\frac{D_{\tau}f\cdot g}{fg}=\left(\ln\frac{f}{g}\right)_{\tau}, \tag{24}\] Eqs. (22) and (23) can be represented as \[{\rm i}\lambda D_{y}f\cdot\bar{f}-\frac{1}{2}(f\bar{f}-g\bar{g})=0, \tag{25}\] \[{\rm i}\lambda D_{y}g\cdot\bar{g}+\frac{1}{2}(g\bar{g}-f\bar{f})=0, \tag{26}\] \[{\rm i}\lambda^{-1}D_{\tau}f\cdot g+\frac{1}{2}(fg-\bar{f}\bar{g})=0, \tag{27}\] \[{\rm i}\lambda^{-1}D_{\tau}\bar{g}\cdot\bar{f}+\frac{1}{2}(\bar{f}\bar{g}-fg)=0, \tag{28}\] where Eq. (28) is the complex conjugate of Eq. (27). In addition, the definition of \(u\) and \(\varphi\) can be expressed as \[u={\rm i}\ln\frac{\bar{f}\bar{g}}{fg}, \tag{29}\] \[\varphi=\ln\frac{g\bar{g}}{f\bar{f}}. \tag{30}\] ### From the 2DTL equation and its Backlund transformation to the bilinear form of the gsG equation (2) We give two sets of bilinear equations of the 2DTL equation, with \(\tau\)-functions \(\tau_{n}\) and \(\tau_{n}^{\prime}\), and a Backlund transformation (BT) between them: \[\left(\frac{1}{2}D_{x_{-1}}D_{x_{1}}-1\right)\tau_{n}\cdot\tau_{n}=-\tau_{n+1}\tau_{n-1}, \tag{31}\] \[\left(\frac{1}{2}D_{x_{-1}}D_{x_{1}}-1\right)\tau_{n}^{\prime}\cdot\tau_{n}^{\prime}=-\tau_{n+1}^{\prime}\tau_{n-1}^{\prime}, \tag{32}\] \[\left(c^{-1}D_{x_{-1}}-1\right)\tau_{n}\cdot\tau_{n}^{\prime}+\tau_{n+1}\tau_{n-1}^{\prime}=0, \tag{33}\] \[\left(cD_{x_{1}}-1\right)\tau_{n+1}\cdot\tau_{n}^{\prime}+\tau_{n}\tau_{n+1}^{\prime}=0. \tag{34}\] Here \(c\) is a constant. 
As mentioned in [22], we can apply the BT recursively and denote the \(\tau\)-functions of the \(m\)-th 2DTL equation as \(\tau_{n,m}\). Then we rewrite Eqs. (31)-(34) as \[\left(\frac{1}{2}D_{x_{-1}}D_{x_{1}}-1\right)\tau_{n,m}\cdot\tau_ {n,m}=-\tau_{n+1,m}\tau_{n-1,m}, \tag{35}\] \[\left(c^{-1}D_{x_{-1}}-1\right)\tau_{n,m}\cdot\tau_{n,m+1}+\tau_ {n+1,m}\tau_{n-1,m+1}=0,\] (36) \[\left(cD_{x_{1}}-1\right)\tau_{n+1,m}\cdot\tau_{n,m+1}+\tau_{n,m} \tau_{n+1,m+1}=0, \tag{37}\] where we have the following correspondence: \(\tau_{n}\equiv\tau_{n,m}\), \(\tau_{n}^{\prime}\equiv\tau_{n,m+1}\). The above equations (35)-(37) have exact solutions in Casorati determinant form with arbitrary parameters as follows. **Lemma 2.1**.: The bilinear equations (35)-(37) have the following Casorati-type determinant solution \[\tau_{n,m}(x_{-1},x_{1})=\left|\begin{array}{cccc}\phi_{n}^{(1)}(m)&\phi_{n +1}^{(1)}(m)&\cdots&\phi_{n+N-1}^{(1)}(m)\\ \phi_{n}^{(2)}(m)&\phi_{n+1}^{(2)}(m)&\cdots&\phi_{n+N-1}^{(2)}(m)\\ \cdots&\cdots&\cdots&\cdots\\ \phi_{n}^{(N)}(m)&\phi_{n+1}^{(N)}(m)&\cdots&\phi_{n+N-1}^{(N)}(m)\\ \end{array}\right|, \tag{38}\] where \[\phi_{n}^{(i)}(m)=c_{i}p_{i}^{n}\left(1-cp_{i}\right)^{m}e^{\xi_ {i}}+d_{i}q_{i}^{n}\left(1-cq_{i}\right)^{m}e^{\eta_{i}}, \tag{39}\] \[\xi_{i}=p_{i}x_{1}+p_{i}^{-1}x_{-1}+\xi_{i0},\quad\eta_{i}=q_{i} x_{1}+q_{i}^{-1}x_{-1}+\eta_{i0}. \tag{40}\] Here, \(c_{i},d_{i},p_{i},q_{i},\xi_{i0}\) and \(\eta_{i0}\) are the arbitrary parameters that can take either real or complex values. In addition to the Casorati determinant solution, the \(\tau\)-functions can also be expressed by the Gram-type determinant, which is given by the following lemma. 
**Lemma 2.2**.: The following Gram-type determinants satisfy bilinear equations (35)-(37): \[\tau_{n,m}(x_{-1},x_{1})=\left|m_{ij}^{n,m}\right|_{N\times N}=\left|c_{ij}+\frac{1}{p_{i}+q_{j}}\left(-\frac{p_{i}}{q_{j}}\right)^{n}\left(\frac{1-cp_{i}}{1+cq_{j}}\right)^{m}e^{\xi_{i}+\eta_{j}}\right|_{N\times N}, \tag{41}\] where \[\xi_{i}=p_{i}x_{1}+p_{i}^{-1}x_{-1}+\xi_{i0},\quad\eta_{j}=q_{j}x_{1}+q_{j}^{-1}x_{-1}+\eta_{j0}. \tag{42}\] Here, \(p_{i},q_{j},\xi_{i0}\) and \(\eta_{j0}\) are arbitrary parameters that can take either real or complex values, and \(N\) is a positive integer. Now we are ready to obtain the \(N\)-soliton solution of the gsG equation (2). Firstly, we set \(c=\lambda\mathrm{i}\) (\(\lambda\in\mathbb{R}\)). ### Case 1: for real \(p_{i},q_{i}\) We impose restrictions on the parameters \[p_{i}=-q_{i},\,d_{i}=\mathrm{i}\left(\frac{1-cp_{i}}{1-cq_{i}}\right)^{\frac{1}{2}}c_{i}, \tag{43}\] for the Casorati-type solution or \[p_{i}=q_{i},\,c_{ij}=\mathrm{i}\left(\frac{1-cp_{i}}{1+cq_{j}}\right)^{\frac{1}{2}}\delta_{ij}, \tag{44}\] for the Gram-type solution. In addition, we take the variable transformations \(y=2x_{1},\,\tau=2x_{-1}\). 
As a result, for the Casorati-type solution we have \[\phi_{n+1}^{(i)}(0) =c_{i}p_{i}^{n+1}e^{\xi_{i}}+d_{i}q_{i}^{n+1}e^{\eta_{i}}\] \[=c_{i}p_{i}^{n+1}e^{\xi_{i}}\left[1+\mathrm{i}(-1)^{n+1}\left(\frac{1-cp_{i}}{1-cq_{i}}\right)^{\frac{1}{2}}e^{\eta_{i}-\xi_{i}}\right]\] \[=c_{i}p_{i}^{n+1}e^{\xi_{i}}\left[1-\mathrm{i}(-1)^{n}\left(\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}\right)^{\frac{1}{2}}e^{-p_{i}y-\tau/p_{i}+\eta_{i0}-\xi_{i0}}\right],\] \[\phi_{n}^{(i)}(1) =c_{i}p_{i}^{n}(1-cp_{i})e^{\xi_{i}}+d_{i}q_{i}^{n}(1-cq_{i})e^{\eta_{i}}\] \[=c_{i}p_{i}^{n}(1-cp_{i})e^{\xi_{i}}\left[1+\mathrm{i}(-1)^{n}\left(\frac{1+\mathrm{i}\lambda p_{i}}{1-\mathrm{i}\lambda p_{i}}\right)^{\frac{1}{2}}e^{-p_{i}y-\tau/p_{i}+\eta_{i0}-\xi_{i0}}\right].\] Thus we know \[\phi_{n+1}^{(i)}(0)\asymp\bar{\phi}_{n}^{(i)}(1),\,\phi_{n+1}^{(i)}(1)\asymp\bar{\phi}_{n}^{(i)}(0), \tag{45}\] which imply the relations \[\tau_{n+1,0}\asymp\bar{\tau}_{n,1},\qquad\tau_{n+1,1}\asymp\bar{\tau}_{n,0}. \tag{46}\] Relations (46) mean that both \(\tau_{n}(0)\) and \(\tau_{n}(1)\) are sequences of period 2 in \(n\). Here \(\asymp\) means two \(\tau\)-functions are equivalent up to a constant multiple and \(\bar{\tau}\) denotes the complex conjugate of \(\tau\). The same relations hold for the \(\tau\)-functions in Gram-type determinant form, which we omit here. In this case, the kink and anti-kink solutions are obtained. ### Case 2: for complex \(p_{i},q_{i}\) We impose restrictions on the parameters of the \(\tau\)-functions \[p_{i}=-q_{i},\,d_{i}=\mathrm{i}\left(\frac{1-cp_{i}}{1-cq_{i}}\right)^{\frac{1}{2}}c_{i},\,p_{2i-1}=\bar{p}_{2i},\,N=2M, \tag{47}\] for the Casorati-type solution or \[p_{i}=q_{i},\,c_{ij}=\mathrm{i}\left(\frac{1-cp_{i}}{1+cq_{j}}\right)^{\frac{1}{2}}\delta_{ij},\,\,p_{2i-1}=\bar{p}_{2i},\,N=2M, \tag{48}\] for the Gram-type solution. 
By taking \(y=2x_{1},\,\tau=2x_{-1}\), from Case 1, we know that \[\phi_{n+1}^{(i)}(0) =p_{i}^{n+1}e^{\xi_{i}}\left[1-\mathrm{i}(-1)^{n}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}e^{-p_{i}y-\tau/p_{i}+\eta_{i0}-\xi_{i0}}\right],\] \[\phi_{n}^{(j)}(1) =p_{j}^{n}(1-cp_{j})e^{\xi_{j}}\left[1+\mathrm{i}(-1)^{n}\sqrt{\frac{1+\mathrm{i}\lambda p_{j}}{1-\mathrm{i}\lambda p_{j}}}e^{-p_{j}y-\tau/p_{j}+\eta_{j0}-\xi_{j0}}\right].\] Thus we can obtain \[\phi_{n+1}^{(2i-1)}(0)\asymp\bar{\phi}_{n}^{(2i)}(1),\,\phi_{n+1}^{(2i)}(0)\asymp\bar{\phi}_{n}^{(2i-1)}(1), \tag{49}\] and \[\phi_{n+1}^{(2i-1)}(1)\asymp\bar{\phi}_{n}^{(2i)}(0),\,\phi_{n+1}^{(2i)}(1)\asymp\bar{\phi}_{n}^{(2i-1)}(0), \tag{50}\] which also correspond to the relation (46). In this case, the breather solutions are obtained. **Remark 2.1**.: Reductions from (35)-(37) to the bilinear form of the gsG equation with \(\nu=1\) differ from the case with \(\nu=-1\) we proposed in [5], in which \(\tau_{n+1,0}\asymp\bar{\tau}_{n,0},\,\,\tau_{n+1,1}\asymp\bar{\tau}_{n,1}\). In fact, reductions of the type introduced here rarely appear for other equations. Naturally, we can get kink-breather solutions by mixing Case 1 and Case 2. Moreover, by the substitution \[\tau_{00}=f,\,\tau_{01}=g, \tag{51}\] equations (35)-(37) can be recast into \[D_{\tau}D_{y}f\cdot f=\frac{1}{2}(f^{2}-\bar{g}^{2}), \tag{52}\] \[D_{\tau}D_{y}g\cdot g=\frac{1}{2}(g^{2}-\bar{f}^{2}), \tag{53}\] \[\mathrm{i}\lambda D_{y}f\cdot\bar{f}-\frac{1}{2}(f\bar{f}-g\bar{g})=0, \tag{54}\] \[\mathrm{i}\lambda D_{y}g\cdot\bar{g}+\frac{1}{2}(g\bar{g}-f\bar{f})=0, \tag{55}\] \[\mathrm{i}\lambda^{-1}D_{\tau}f\cdot g+\frac{1}{2}(fg-\bar{f}\bar{g})=0, \tag{56}\] \[\mathrm{i}\lambda^{-1}D_{\tau}\bar{g}\cdot\bar{f}+\frac{1}{2}(\bar{f}\bar{g}-fg)=0, \tag{57}\] which are nothing but the bilinear equations of the gsG equation (2). 
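Lemma 2.1 can be spot-checked for \(N=1\), where the Casorati determinant (38) reduces to the single entry (39). The following sketch (our addition, assuming SymPy; the Hirota operator `D` is implemented directly from its definition) verifies the bilinear equations (35)-(37):

```python
import sympy as sp

x1, xm1 = sp.symbols('x1 xm1')                # x_1 and x_{-1}
p, q, cc, c1, d1 = sp.symbols('p q c c1 d1')  # cc plays the role of the constant c

def tau(n, m):
    # N = 1: the Casorati determinant (38) is the single entry (39),
    # with xi_{10} = eta_{10} = 0 for brevity
    return (c1*p**n*(1 - cc*p)**m*sp.exp(p*x1 + xm1/p)
            + d1*q**n*(1 - cc*q)**m*sp.exp(q*x1 + xm1/q))

def D(F, G, *vs):
    # Hirota bilinear operator D_{v1}...D_{vk} F.G
    prim = {v: sp.Dummy(v.name) for v in vs}
    expr = F*G.subs(prim, simultaneous=True)
    for v in vs:
        expr = sp.diff(expr, v) - sp.diff(expr, prim[v])
    return expr.subs({w: v for v, w in prim.items()}, simultaneous=True)

n, m = 0, 0
res35 = sp.Rational(1, 2)*D(tau(n, m), tau(n, m), xm1, x1) \
        - tau(n, m)**2 + tau(n+1, m)*tau(n-1, m)
res36 = D(tau(n, m), tau(n, m+1), xm1)/cc \
        - tau(n, m)*tau(n, m+1) + tau(n+1, m)*tau(n-1, m+1)
res37 = cc*D(tau(n+1, m), tau(n, m+1), x1) \
        - tau(n+1, m)*tau(n, m+1) + tau(n, m)*tau(n+1, m+1)
residuals = [sp.simplify(r) for r in (res35, res36, res37)]
assert residuals == [0, 0, 0]
```

Since the equations are bilinear and the parameters remain fully symbolic, a check at one pair \((n,m)\) already exercises every coefficient; larger \(N\) could be handled the same way with `sp.Matrix(...).det()`.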
From equations (54)-(57), we have \[\mathrm{i}\lambda\left(\ln\frac{f}{\bar{f}}\right)_{y}=\frac{1}{2}-\frac{1}{2}\frac{g\bar{g}}{f\bar{f}}, \tag{58}\] \[\mathrm{i}\lambda\left(\ln\frac{g}{\bar{g}}\right)_{y}=-\frac{1}{2}+\frac{1}{2}\frac{f\bar{f}}{g\bar{g}}, \tag{59}\] \[\mathrm{i}\lambda^{-1}\left(\ln\frac{f}{g}\right)_{\tau}=-\frac{1}{2}+\frac{1}{2}\frac{\bar{f}\bar{g}}{fg}, \tag{60}\] \[\mathrm{i}\lambda^{-1}\left(\ln\frac{\bar{f}}{\bar{g}}\right)_{\tau}=\frac{1}{2}-\frac{1}{2}\frac{fg}{\bar{f}\bar{g}}, \tag{61}\] which lead to \[x_{y}=\frac{1}{\lambda r}=\lambda^{-1}\cosh\varphi=\frac{1}{2\lambda}\left(\frac{g\bar{g}}{f\bar{f}}+\frac{f\bar{f}}{g\bar{g}}\right)=\lambda^{-1}+\mathrm{i}\left(\ln\frac{\bar{f}g}{f\bar{g}}\right)_{y}, \tag{62}\] \[x_{\tau}=-\lambda^{2}rx_{y}\cos u=-\lambda\cos u=-\frac{\lambda}{2}\left(\frac{\bar{f}\bar{g}}{fg}+\frac{fg}{\bar{f}\bar{g}}\right)=-\lambda+\mathrm{i}\left(\ln\frac{\bar{f}g}{f\bar{g}}\right)_{\tau}. \tag{63}\] Thus we obtain the expression for \(x\) in terms of the \(\tau\)-functions \[x=\lambda^{-1}y-\lambda\tau+\mathrm{i}\ln\frac{\bar{f}g}{f\bar{g}}. \tag{64}\] Summarizing the above results, the determinant (\(N\)-soliton) solution of the gsG equation is given by the following theorem. 
**Theorem 2.1**.: The parametric form for the soliton and breather solution of the gsG equation (2) is \[u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}, \tag{65}\] \[x=\lambda^{-1}y-\lambda\tau+\mathrm{i}\ln\frac{\bar{f}g}{f\bar{g}},\;t=\lambda\tau,\;\lambda\in\mathbb{R}, \tag{66}\] where \(f\) and \(g\) are the determinants given by (51) and \(\tau_{n,m}\) can be written either as a Casorati-type determinant \[\tau_{n,m}=\left|\begin{array}{cccc}\phi_{n}^{(1)}(m)&\phi_{n+1}^{(1)}(m)&\cdots&\phi_{n+N-1}^{(1)}(m)\\ \phi_{n}^{(2)}(m)&\phi_{n+1}^{(2)}(m)&\cdots&\phi_{n+N-1}^{(2)}(m)\\ \cdots&\cdots&\cdots&\cdots\\ \phi_{n}^{(N)}(m)&\phi_{n+1}^{(N)}(m)&\cdots&\phi_{n+N-1}^{(N)}(m)\end{array}\right|, \tag{67}\] where \[\phi_{n}^{(i)}(m)=p_{i}^{n}\left(1-\mathrm{i}\lambda p_{i}\right)^{m}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}(-p_{i})^{n}\left(1+\mathrm{i}\lambda p_{i}\right)^{m}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\quad i=1,2,\cdots,N, \tag{68}\] \[p_{2i-1}=\bar{p}_{2i},\quad i=1,2,\cdots,M, \tag{69}\] \[p_{i}\in\mathbb{R},\,i=2M+1,2M+2,\cdots,N, \tag{70}\] or a Gram-type determinant \[\tau_{n,m}=\left|m_{ij}^{n,m}\right|_{N\times N}=\left|\mathrm{i}\left(\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{j}}\right)^{\frac{1}{2}}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p_{j}}\right)^{n}\left(\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{j}}\right)^{m}e^{\xi_{i}+\eta_{j}}\right|_{N\times N}, \tag{71}\] where \[\xi_{i}=\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0},\quad\eta_{j}=\frac{p_{j}}{2}y+\frac{1}{2p_{j}}\tau+\eta_{j0}, \tag{72}\] \[p_{2i-1}=\bar{p}_{2i},\quad i=1,2,\cdots,M, \tag{73}\] \[p_{i}\in\mathbb{R},\,i=2M+1,2M+2,\cdots,N. \tag{74}\] **Remark 2.2**.: Similar to the deduction of the gsG equation with \(\nu=1\), one can generalize Theorem 1 in [5]. 
The parametric form for the \(N\)-soliton solution of the gsG equation (1) with \(\nu=-1\) is \[u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}, \tag{75}\] \[x=c^{-1}y+c\tau+\ln\frac{\bar{g}g}{\bar{f}f},\;t=c\tau,\;c\in\mathbb{R}, \tag{76}\] \[f=\tau_{00},\;g=\tau_{01}, \tag{77}\] and \(\tau_{n,m}\) can be written as a Casorati-type determinant \[\tau_{n,m}=\left|\begin{array}{cccc}\phi_{n}^{(1)}(m)&\phi_{n+1}^{(1)}(m)&\cdots&\phi_{n+N-1}^{(1)}(m)\\ \phi_{n}^{(2)}(m)&\phi_{n+1}^{(2)}(m)&\cdots&\phi_{n+N-1}^{(2)}(m)\\ \cdots&\cdots&\cdots&\cdots\\ \phi_{n}^{(N)}(m)&\phi_{n+1}^{(N)}(m)&\cdots&\phi_{n+N-1}^{(N)}(m)\end{array}\right|, \tag{78}\] with \[\phi_{n}^{(i)}(m)=p_{i}^{n}\left(1-cp_{i}\right)^{m}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}\left(1+cp_{i}\right)^{m}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\quad i=1,2,\cdots,N. \tag{79}\] Obviously, the result in this remark is equivalent to that in Theorem 1 of [5] when \(c=1\). **Remark 2.3**.: The determinant solutions we obtained above are consistent with the solutions given in [4]. ### Reduction to the sG and SP equations In [3] and [4], Matsuno demonstrated that the gsG equation is reduced to the sG equation in the long wave limit and to the SP equation in the short wave limit. Here we introduce the scaling parameter \(c\) (or \(\lambda\)) in the hodograph transformation and the \(\tau\)-function, and give another kind of reduction. #### 2.3.1 Reduction to the sine-Gordon equation The sG equation \[u_{xt}=\sin u,\ \ \ \ u\equiv u(x,t)\in\mathbb{R},\ \ \ \ (x,t)\in\mathbb{R}^{2}, \tag{80}\] is a fundamental model in integrable systems, which appears in a number of disciplines of physics including magnetic flux propagation [23, 24], one-dimensional classical field theory [25, 26], and nonlinear optics [27]. 
In this part, we take into account the reductions from the gsG equation with \(\nu=\pm 1\) to the sG equation in the continuous case through some scaling transformations. **(I) From the gsG equation with \(\nu=-1\) to the sG equation**. In this part of reduction, we take \(c\in\mathbb{R}\) as a small parameter. The matrix elements of the \(\tau\)-function for the gsG equation we proposed in [5] can be written as \[\phi_{n}^{(i)}(1) =p_{i}^{n}(1-cp_{i})e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+ \xi_{i0}}+\mathrm{i}(-p_{i})^{n}\left(1+cp_{i}\right)e^{-\frac{p_{i}}{2}y- \frac{1}{2p_{i}}\tau+\eta_{i0}}\] \[=\phi_{n}^{(i)}(0)-c\phi_{n+1}^{(i)}(0), \tag{81}\] which means \[g=\tau_{0}(1)=f+O(c),\ \bar{g}=\tau_{1}(1)=\bar{f}+O(c). \tag{82}\] Then we can rewrite the dependent variable transformations and introduce the scaling transformation \[u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}=2\mathrm{i}\ln\frac{ \bar{f}}{f}+O(c), \tag{83}\] \[\phi=\mathrm{i}\ln\frac{\bar{f}g}{\bar{f}\bar{g}}=O(c),\] (84) \[\hat{x}=cx=y+c^{2}\tau+c\ln\frac{g\bar{g}}{f\bar{f}}=y+O(c^{2}),\] (85) \[\hat{t}=c^{-1}t=\tau. \tag{86}\] In addition, the gsG equation (1) with \(\nu=-1\) can be recast into \[u_{\hat{x}\hat{t}}=(1-c^{2}\partial_{\hat{x}}^{2})\sin u. \tag{87}\] With the scaling limit \(c\to 0\), equation (87) becomes \[u_{\hat{x}\hat{t}}=\sin u, \tag{88}\] which is the well-known sG equation. And the dependent variable transformation, as well as the \(\tau\)-function \(f\), also reduces to the usual form of the \(N\)-soliton solutions of the sG equation [28, 29, 30]. It should be point out that the transformation (83)-(86) are agree with the scaled variables introducing by Matsuno in [3]. The difference is that we introduce the parameter \(c\) in the hodograph transformation, thus the \(\tau\)-function can be transformed naturally without the scaling of variables \(y,\ \tau\) and the parameter \(p\). **(II) From the gsG equation with \(\nu=1\) to the sG equation**. 
In this part of the reduction, we take \(\lambda\in\mathbb{R}\) as a small parameter. Similar to the case \(\nu=-1\), recall that \[\phi_{n}^{(i)}(1) =p_{i}^{n}\left(1-\mathrm{i}\lambda p_{i}\right)e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}(-p_{i})^{n}\left(1+\mathrm{i}\lambda p_{i}\right)e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\] \[=p_{i}^{n}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}}+O(\lambda),\] \[=\hat{\phi}_{n}^{(i)}(0)+O(\lambda), \tag{89}\] \[\phi_{n}^{(i)}(0) =p_{i}^{n}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}(-p_{i})^{n}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\] \[=\hat{\phi}_{n}^{(i)}(0)+O(\lambda), \tag{90}\] which lead to \[f=\hat{f}+O(\lambda),\ g=\hat{f}+O(\lambda),\ \bar{f}=\bar{\hat{f}}+O(\lambda),\ \bar{g}=\bar{\hat{f}}+O(\lambda), \tag{91}\] where \(\hat{f}\) is the \(\tau\)-function of the sG equation. Thus we have \[u=2\mathrm{i}\ln\frac{\bar{\hat{f}}}{\hat{f}}+O(\lambda),\ \varphi=O(\lambda),\ \hat{x}=\lambda x=y+O(\lambda^{2}),\ \hat{t}=\lambda^{-1}t=\tau, \tag{92}\] and the gsG equation (1) with \(\nu=1\) becomes \[u_{\hat{x}\hat{t}}=(1+\lambda^{2}\partial_{\hat{x}}^{2})\sin u. \tag{93}\] In the scaling limit \(\lambda\to 0\), the gsG equation, together with its solutions, is converted into the sG equation. These transformations agree with those introduced in [4]. #### 2.3.2 Reduction to the short pulse equation The short pulse (SP) equation \[u_{xt}=u-\frac{\sigma}{6}\left(u^{3}\right)_{xx}, \tag{94}\] was derived by Schäfer and Wayne for \(\sigma=-1\) to describe the propagation of ultra-short optical pulses in nonlinear media [34]. 
Here, the real-valued function \(u=u(x,t)\) represents the magnitude of the electric field, and the subscripts \(t\) and \(x\) signify partial differentiation. The SP equation has also been developed as an integrable differential equation linked to pseudospherical surfaces outside of the context of nonlinear optics [35]. When \(\sigma=1\), equation (94), \[u_{xt}=u-\frac{1}{6}\left(u^{3}\right)_{xx}, \tag{95}\] was shown to model the evolution of ultra-short pulses in the band gap of nonlinear metamaterials [36]. Here, we show that, with the appropriate scaling limits and variable transformations, the gsG equation with \(\nu=\pm 1\) reduces to the SP equation (94) with \(\sigma=\pm 1\) in the continuous case. **(I) From the gsG equation with \(\nu=-1\) to the SP equation with \(\sigma=-1\).** In this part of the reduction, we take \(c\) as a large parameter (i.e., \(\epsilon=\frac{1}{c}\) small). The matrix elements of the \(\tau\)-function for the gsG equation with \(\nu=-1\) in [5] are \[\phi_{n}^{(i)}(1) =p_{i}^{n}(1-cp_{i})e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}\left(1+cp_{i}\right)e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}}\] \[\propto\phi_{n+1}^{(i)}(0)-2\epsilon\frac{\partial}{\partial\tau}\phi_{n+1}^{(i)}(0), \tag{96}\] from which we can obtain \[g=\tau_{0}(1)\propto\bar{f}-2\epsilon\frac{\partial}{\partial\tau}\bar{f}+O(\epsilon^{2}), \tag{97}\] \[\bar{g}\propto f-2\epsilon\frac{\partial}{\partial\tau}f+O(\epsilon^{2}). 
\tag{98}\] Then we rewrite the dependent variable transformations and introduce new variables as \[\epsilon\hat{u}=u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}=\mathrm{i}\ln\frac{\bar{f}(f-2\epsilon\frac{\partial}{\partial\tau}f)}{f(\bar{f}-2\epsilon\frac{\partial}{\partial\tau}\bar{f})}+O(\epsilon^{2}), \tag{99}\] \[\phi=\mathrm{i}\ln\frac{\bar{f}g}{f\bar{g}}=\mathrm{i}\ln\frac{\bar{f}(\bar{f}-2\epsilon\frac{\partial}{\partial\tau}\bar{f})}{f(f-2\epsilon\frac{\partial}{\partial\tau}f)}+O(\epsilon^{2}), \tag{100}\] \[\epsilon\hat{x}=x-t=\epsilon y+\ln\frac{g\bar{g}}{f\bar{f}}=\epsilon y+\ln\frac{(\bar{f}-2\epsilon\frac{\partial}{\partial\tau}\bar{f})(f-2\epsilon\frac{\partial}{\partial\tau}f)}{f\bar{f}}+O(\epsilon^{2}), \tag{101}\] \[\hat{t}=\epsilon t=\tau. \tag{102}\] The gsG equation (1) with \(\nu=-1\) can be recast into \[\epsilon\hat{u}_{\hat{t}\hat{x}}=\epsilon\left(\hat{u}+\frac{1}{6}\left(\hat{u}^{3}\right)_{\hat{x}\hat{x}}\right)+O\left(\epsilon^{3}\right). \tag{103}\] Dividing both sides of (103) by \(\epsilon\) and taking \(\epsilon\to 0\) in (99)-(103), we arrive at \[\hat{u}=2\mathrm{i}\left(\ln\frac{\bar{f}}{f}\right)_{\tau},\ \phi=2\mathrm{i}\ln\frac{\bar{f}}{f},\ \hat{x}=y-2(\ln(f\bar{f}))_{\tau},\ \hat{t}=\tau, \tag{104}\] \[\hat{u}_{\hat{t}\hat{x}}=\hat{u}+\frac{1}{6}\left(\hat{u}^{3}\right)_{\hat{x}\hat{x}}. \tag{105}\] Here the SP equation (94) is derived, and its parametric representation in terms of \(\tau\)-functions is equivalent to that in [7]. **(II) From the gsG equation with \(\nu=1\) to the SP equation with \(\sigma=1\).** Similar to the case with \(\nu=-1\), we take \(\epsilon=\frac{1}{\lambda}\) as a small parameter. 
The \(\tau\)-function in Theorem 2.1 can be written as \[\phi_{n}^{(i)}(0) =p_{i}^{n}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}(-p_{i})^{n}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\] \[\propto\hat{\phi}_{n+1}^{(i)}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{\phi}_{n+1}^{(i)}+O(\epsilon^{2}), \tag{106}\] \[\phi_{n}^{(i)}(1) =p_{i}^{n}\left(1-\mathrm{i}\lambda p_{i}\right)e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i}\lambda p_{i}}}(-p_{i})^{n}\left(1+\mathrm{i}\lambda p_{i}\right)e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}},\] \[\propto\hat{\phi}_{n}^{(i)}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{\phi}_{n}^{(i)}+O(\epsilon^{2}), \tag{107}\] if we define \[\hat{\phi}_{n}^{(i)}=p_{i}^{n}e^{\frac{p_{i}}{2}y+\frac{1}{2p_{i}}\tau+\xi_{i0}}+(-p_{i})^{n}e^{-\frac{p_{i}}{2}y-\frac{1}{2p_{i}}\tau+\eta_{i0}}, \tag{108}\] and \[\hat{f}=\left|\begin{array}{cccc}\hat{\phi}_{n+1}^{(1)}&\hat{\phi}_{n+2}^{(1)}&\cdots&\hat{\phi}_{n+N}^{(1)}\\ \hat{\phi}_{n+1}^{(2)}&\hat{\phi}_{n+2}^{(2)}&\cdots&\hat{\phi}_{n+N}^{(2)}\\ \cdots&\cdots&\cdots&\cdots\\ \hat{\phi}_{n+1}^{(N)}&\hat{\phi}_{n+2}^{(N)}&\cdots&\hat{\phi}_{n+N}^{(N)}\end{array}\right|,\ \hat{g}=\left|\begin{array}{cccc}\hat{\phi}_{n}^{(1)}&\hat{\phi}_{n+1}^{(1)}&\cdots&\hat{\phi}_{n+N-1}^{(1)}\\ \hat{\phi}_{n}^{(2)}&\hat{\phi}_{n+1}^{(2)}&\cdots&\hat{\phi}_{n+N-1}^{(2)}\\ \cdots&\cdots&\cdots&\cdots\\ \hat{\phi}_{n}^{(N)}&\hat{\phi}_{n+1}^{(N)}&\cdots&\hat{\phi}_{n+N-1}^{(N)}\end{array}\right|. 
\tag{109}\] Furthermore, one obtains \[f=\hat{f}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{f}+O(\epsilon^{2}),\ \bar{f}=\hat{f}+\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{f}+O(\epsilon^{2}), \tag{110}\] \[g=\hat{g}+\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{g}+O(\epsilon^{2}),\ \bar{g}=\hat{g}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{g}+O(\epsilon^{2}). \tag{111}\] Here we introduce new variables as \[\epsilon\hat{u}=u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}=\mathrm{i}\ln\frac{(\hat{f}+\mathrm{i}\epsilon\hat{f}_{\tau})(\hat{g}-\mathrm{i}\epsilon\hat{g}_{\tau})}{(\hat{f}-\mathrm{i}\epsilon\hat{f}_{\tau})(\hat{g}+\mathrm{i}\epsilon\hat{g}_{\tau})}+O(\epsilon^{2}), \tag{112}\] \[\varphi=\ln\frac{\bar{g}g}{f\bar{f}}=\ln\frac{(\hat{g}-\mathrm{i}\epsilon\hat{g}_{\tau})(\hat{g}+\mathrm{i}\epsilon\hat{g}_{\tau})}{(\hat{f}-\mathrm{i}\epsilon\hat{f}_{\tau})(\hat{f}+\mathrm{i}\epsilon\hat{f}_{\tau})}+O(\epsilon^{2}), \tag{113}\] \[\epsilon\hat{x}=x+t=\epsilon y+\mathrm{i}\ln\frac{(\hat{f}+\mathrm{i}\epsilon\hat{f}_{\tau})(\hat{g}+\mathrm{i}\epsilon\hat{g}_{\tau})}{(\hat{f}-\mathrm{i}\epsilon\hat{f}_{\tau})(\hat{g}-\mathrm{i}\epsilon\hat{g}_{\tau})}+O(\epsilon^{2}), \tag{114}\] \[\hat{t}=\tau, \tag{115}\] then the scaling limit \(\epsilon\to 0\) leads to \[\hat{u}=2\left(\ln\frac{\hat{g}}{\hat{f}}\right)_{\tau},\ \varphi=2\ln\frac{\hat{g}}{\hat{f}},\ \hat{x}=y-2(\ln\hat{f}\hat{g})_{\tau},\ \hat{t}=\tau, \tag{116}\] and \[\hat{u}_{\hat{t}\hat{x}}=\hat{u}-\frac{1}{6}(\hat{u}^{3})_{\hat{x}\hat{x}}. \tag{117}\] Note that the \(N\)-soliton solutions of the SP equation with \(\sigma=1\) exhibit a singular nature, since \(u\) diverges as \(|x|\rightarrow\infty\), as shown in [4]. ## 3 Integrable full discretization of the gsG equation To construct a fully discrete analogue of the gsG equation, we introduce two discrete variables, \(k\) and \(l\), which correspond to the discrete space and time variables, respectively. 
We start with the following fully discrete bilinear equations. \[\tau_{n}(k,l+1,m)\tau_{n}(k,l,m-1)-bc\tau_{n+1}(k,l,m-1)\tau_{n-1}(k,l+1,m)\] \[=(1-bc)\tau_{n}(k,l,m)\tau_{n}(k,l+1,m-1), \tag{118}\] \[\tau_{n+1}(k,l,m-1)\tau_{n}(k+1,l,m)-ac^{-1}\tau_{n+1}(k+1,l,m)\tau_{n}(k,l,m-1)\] \[=(1-ac^{-1})\tau_{n}(k,l,m)\tau_{n+1}(k+1,l,m-1). \tag{119}\] Here \(n\), \(k\), \(l\), \(m\) are integers and \(a\), \(b\), \(c\) are parameters. **Proposition 1**.: _The bilinear equations (118)-(119) admit either the following Gram-type determinant solutions_ \[\tau_{n}(k,l,m)=\left|m_{ij}^{n}(k,l,m)\right|_{1\leqslant i,j\leqslant N},\] _with_ \[m_{ij}^{(n)}(k,l,m)=c_{ij}+\frac{d_{ij}}{p_{i}+q_{j}}\left(-\frac{p_{i}}{q_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+aq_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{-1}}{1+bq_{j}^{-1}}\right)^{-l}\left(\frac{1-cp_{i}}{1+cq_{j}}\right)^{m}, \tag{120}\] _or the Casorati-type determinant solutions_ \[\tau_{n}(k,l,m)=\left|\phi_{(n+j-1)}^{(i)}(k,l,m)\right|_{1\leqslant i,j\leqslant N}, \tag{121}\] _with_ \[\phi_{(n)}^{(i)}(k,l,m)=c_{i}p_{i}^{n}\left(1-ap_{i}\right)^{-k}\left(1-bp_{i}^{-1}\right)^{-l}\left(1-cp_{i}\right)^{m}+d_{i}q_{i}^{n}\left(1-aq_{i}\right)^{-k}\left(1+bq_{i}^{-1}\right)^{-l}\left(1-cq_{i}\right)^{m}.\] Proof.: Here we give a proof for the Gram-type determinant solution. The proof for the Casorati-type determinant solution is similar. The discrete Kadomtsev-Petviashvili (dKP) equation was proposed independently by Hirota [19] and Miwa [20] in the early 1980s, so it is also called the Hirota-Miwa (HM) equation. 
The discrete KP hierarchy consists of an infinite number of bilinear equations with \((k_{i},k_{j},k_{m})\) taken from \((k_{1},k_{2},k_{3},\cdots)\), among which we choose two triples, \((k_{1},k_{3},k_{4})\) and \((k_{1},k_{2},k_{4})\), so that the following two bilinear equations follow
\[(a_{3}^{-1}-a_{4}^{-1})\tau(k_{1}+1,k_{2},k_{3},k_{4})\tau(k_{1},k_{2},k_{3}+1,k_{4}+1)+(a_{4}^{-1}-a_{1}^{-1})\tau(k_{1},k_{2},k_{3}+1,k_{4})\tau(k_{1}+1,k_{2},k_{3},k_{4}+1)+(a_{1}^{-1}-a_{3}^{-1})\tau(k_{1},k_{2},k_{3},k_{4}+1)\tau(k_{1}+1,k_{2},k_{3}+1,k_{4})=0\,, \tag{122}\]
\[(a_{2}^{-1}-a_{4}^{-1})\tau(k_{1}+1,k_{2},k_{3},k_{4})\tau(k_{1},k_{2}+1,k_{3},k_{4}+1)+(a_{4}^{-1}-a_{1}^{-1})\tau(k_{1},k_{2}+1,k_{3},k_{4})\tau(k_{1}+1,k_{2},k_{3},k_{4}+1)+(a_{1}^{-1}-a_{2}^{-1})\tau(k_{1},k_{2},k_{3},k_{4}+1)\tau(k_{1}+1,k_{2}+1,k_{3},k_{4})=0\,. \tag{123}\]
As shown by Ohta _et al._ [21], the above two bilinear equations admit the Gram-type solution
\[\tau(k_{1},\cdots,k_{4})=\left|c_{ij}+\frac{1}{p_{i}+q_{j}}\prod_{l=1}^{4}\left(\frac{1-a_{l}p_{i}}{1+a_{l}q_{j}}\right)^{-k_{l}}\right|_{1\leqslant i,j\leqslant N}. \tag{124}\]
By taking \(a_{1}\to\infty\) and redefining \(a_{2}=a\), \(a_{3}=b^{-1}\), \(a_{4}=c\), \(k_{2}=k\), \(k_{3}=l\), \(k_{4}=-m\), \(k_{1}+k_{3}=-n-1\) and \(\tau(k_{1},\cdots,k_{4})=\tau_{n}(k,l,m)\), we obtain the bilinear equation (118) together with its Gram-type determinant solution from (122). On the other hand, by taking the same limit \(a_{1}\to\infty\) and redefining \(a_{2}=a\), \(a_{3}=b^{-1}\), \(a_{4}=c\), \(k_{2}=k\), \(k_{3}=l\), \(k_{4}=-m\), \(k_{1}=-n-1\) and \(\tau(k_{1},\cdots,k_{4})=\tau_{n}(k,l,m)\), we obtain the bilinear equation (119) together with its Gram-type determinant solution from (123). The proof is complete. **Remark 3.1**.: Eq. (118) is actually the fully discrete 2DTL equation, while eq. (119) is the discrete modified KP equation. As the above proof shows, they are equivalent to the discrete KP equation via reparameterization.
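Proposition 1 also lends itself to a direct numerical check. The following sketch is my own illustration, not part of the paper: it takes \(d_{ij}=1\), as in (124), with randomly chosen \(p_{i}\), \(q_{j}\), \(c_{ij}\), evaluates the Gram-type \(\tau\)-function (120), and confirms that the residuals of the bilinear equations (118)-(119) vanish to rounding error for \(N=1,2\).

```python
import numpy as np

def tau(n, k, l, m, p, q, C, a, b, c):
    """Gram-type tau-function of eq. (120), taking d_ij = 1 as in eq. (124)."""
    N = len(p)
    G = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            G[i, j] = (C[i, j] + 1.0 / (p[i] + q[j])
                       * (-p[i] / q[j]) ** n
                       * ((1 - a * p[i]) / (1 + a * q[j])) ** (-k)
                       * ((1 - b / p[i]) / (1 + b / q[j])) ** (-l)
                       * ((1 - c * p[i]) / (1 + c * q[j])) ** m)
    return np.linalg.det(G)

rng = np.random.default_rng(1)
a, b, c = 0.3, 0.2, 0.4            # small enough that every base above stays positive
max_res = 0.0
for N in (1, 2):
    p = rng.uniform(1.0, 2.0, N)
    q = rng.uniform(1.0, 2.0, N)
    C = rng.uniform(0.5, 1.5, (N, N))
    t = lambda n, k, l, m: tau(n, k, l, m, p, q, C, a, b, c)
    for (n, k, l, m) in [(0, 0, 0, 1), (1, 2, 1, 1)]:
        # relative residual of the bilinear equation (118)
        r1 = (t(n, k, l + 1, m) * t(n, k, l, m - 1)
              - b * c * t(n + 1, k, l, m - 1) * t(n - 1, k, l + 1, m)
              - (1 - b * c) * t(n, k, l, m) * t(n, k, l + 1, m - 1))
        r1 /= 1.0 + abs(t(n, k, l, m) * t(n, k, l + 1, m - 1))
        # relative residual of the bilinear equation (119)
        r2 = (t(n + 1, k, l, m - 1) * t(n, k + 1, l, m)
              - (a / c) * t(n + 1, k + 1, l, m) * t(n, k, l, m - 1)
              - (1 - a / c) * t(n, k, l, m) * t(n + 1, k + 1, l, m - 1))
        r2 /= 1.0 + abs(t(n, k, l, m) * t(n + 1, k + 1, l, m - 1))
        max_res = max(max_res, abs(r1), abs(r2))

print(max_res)  # zero up to rounding
```

The function name `tau`, the sampled parameter ranges, and the tested lattice points \((n,k,l,m)\) are all illustrative choices.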
As shown in this section, the discrete analogue of the gsG equation is constructed from the combination of (118) and (119). By applying a 2-reduction condition, \(q_{i}=p_{i}\) for the Gram-type determinant solution or \(q_{i}=-p_{i}\) for the Casorati-type, we have \(\tau_{n}\backsim\tau_{n+2}\). Here \(\backsim\) means the two \(\tau\)-functions are equivalent up to a constant multiple. In addition, by defining
\[f_{k}^{l}=\tau_{0}(k,l,0),\,\tilde{f}_{k}^{l}=\tau_{1}(k,l,0),\,g_{k}^{l}=\tau_{0}(k,l,1),\,\tilde{g}_{k}^{l}=\tau_{1}(k,l,1), \tag{125}\]
we can obtain the following equations
\[f_{k}^{l}g_{k}^{l+1}-bc\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}=(1-bc)f_{k}^{l+1}g_{k}^{l}, \tag{126}\]
\[\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}-bcf_{k}^{l}g_{k}^{l+1}=(1-bc)\tilde{f}_{k}^{l+1}\tilde{g}_{k}^{l}, \tag{127}\]
\[f_{k}^{l}\tilde{g}_{k+1}^{l}-ac^{-1}\tilde{f}_{k}^{l}g_{k+1}^{l}=(1-ac^{-1})f_{k+1}^{l}\tilde{g}_{k}^{l}, \tag{128}\]
\[\tilde{f}_{k}^{l}g_{k+1}^{l}-ac^{-1}f_{k}^{l}\tilde{g}_{k+1}^{l}=(1-ac^{-1})\tilde{f}_{k+1}^{l}g_{k}^{l}. \tag{129}\]
Introducing four intermediate variable transformations
\[\sigma_{k}^{l}=2\mathrm{i}\ln\frac{\tilde{f}_{k}^{l}}{f_{k}^{l}},\,\tilde{\sigma}_{k}^{l}=2\mathrm{i}\ln\frac{\tilde{g}_{k}^{l}}{g_{k}^{l}}, \tag{130}\]
\[\theta_{k}^{l}=\ln\frac{f_{k}^{l}}{g_{k}^{l}},\,\tilde{\theta}_{k}^{l}=\ln\frac{\tilde{f}_{k}^{l}}{\tilde{g}_{k}^{l}}, \tag{131}\]
and then dividing (126) and (127) by \(\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}\) and \(\tilde{f}_{k}^{l+1}\tilde{g}_{k}^{l}\), respectively, leads to
\[e^{\frac{\mathrm{i}}{2}(\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l+1})}-bc=(1-bc)\frac{\tilde{f}_{k}^{l+1}\tilde{g}_{k}^{l}}{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}}e^{\frac{\mathrm{i}}{2}(\sigma_{k}^{l+1}+\tilde{\sigma}_{k}^{l})}, \tag{132}\]
\[\frac{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}}{\tilde{f}_{k}^{l+1}\tilde{g}_{k}^{l}}\left(1-bc\,e^{\frac{\mathrm{i}}{2}(\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l+1})}\right)=1-bc.
\tag{133}\]
Eliminating \(\frac{\tilde{f}_{k}^{l+1}\tilde{g}_{k}^{l}}{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}}\), we get
\[\frac{1}{bc}\sin\frac{\sigma_{k}^{l+1}-\tilde{\sigma}_{k}^{l+1}-\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l}}{4}=\sin\frac{\sigma_{k}^{l+1}+\tilde{\sigma}_{k}^{l+1}+\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l}}{4}. \tag{134}\]
Meanwhile, if we divide (126) and (127) by \(f_{k}^{l}g_{k}^{l+1}\) and \(\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}\), respectively, we obtain
\[1-bc\frac{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}}{f_{k}^{l}g_{k}^{l+1}}=(1-bc)e^{\theta_{k}^{l+1}-\theta_{k}^{l}}, \tag{135}\]
\[1-bc\frac{f_{k}^{l}g_{k}^{l+1}}{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l+1}}=(1-bc)e^{\tilde{\theta}_{k}^{l+1}-\tilde{\theta}_{k}^{l}}, \tag{136}\]
which can be transformed into
\[\cosh\frac{\theta_{k}^{l+1}-\theta_{k}^{l}+\tilde{\theta}_{k}^{l+1}-\tilde{\theta}_{k}^{l}}{2}-bc\sinh\frac{\theta_{k}^{l+1}-\theta_{k}^{l}+\tilde{\theta}_{k}^{l+1}-\tilde{\theta}_{k}^{l}}{2}=\cosh\frac{\theta_{k}^{l+1}-\theta_{k}^{l}+\tilde{\theta}_{k}^{l}-\tilde{\theta}_{k}^{l+1}}{2}. \tag{137}\]
Similarly, from (128) and (129), we obtain
\[ac^{-1}\sin\frac{\sigma_{k+1}^{l}-\tilde{\sigma}_{k+1}^{l}+\sigma_{k}^{l}-\tilde{\sigma}_{k}^{l}}{4}=\sin\frac{\sigma_{k+1}^{l}+\tilde{\sigma}_{k+1}^{l}-\sigma_{k}^{l}-\tilde{\sigma}_{k}^{l}}{4}, \tag{138}\]
\[ac^{-1}\cosh\frac{\theta_{k+1}^{l}-\theta_{k}^{l}+\tilde{\theta}_{k+1}^{l}-\tilde{\theta}_{k}^{l}}{2}-\sinh\frac{\theta_{k+1}^{l}-\theta_{k}^{l}+\tilde{\theta}_{k+1}^{l}-\tilde{\theta}_{k}^{l}}{2}=ac^{-1}\cosh\frac{\theta_{k+1}^{l}+\theta_{k}^{l}-\tilde{\theta}_{k}^{l}-\tilde{\theta}_{k+1}^{l}}{2}.
\tag{139}\]

### Full discretization of the gsG equation with \(\nu=-1\)

By choosing particular values of the phase constants,
\[c_{ij}=\mathrm{i}\delta_{ij}, \tag{140}\]
for the Gram-type solution, or
\[d_{i}=\mathrm{i}c_{i}, \tag{141}\]
for the Casorati-type solution, we can make \(\tau_{n}\) and \(\tau_{n+1}\) complex conjugate to each other, which means
\[\tilde{f}_{k}^{l}\asymp\bar{f}_{k}^{l},\ \tilde{g}_{k}^{l}\asymp\bar{g}_{k}^{l}. \tag{142}\]
Here \(\bar{f}\) denotes the complex conjugate of \(f\). Then, similar to the continuous case, we introduce the discrete hodograph transformation and dependent variable transformations
\[u_{k}^{l}=\frac{1}{2}(\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l})=\mathrm{i}\ln\frac{\bar{f}_{k}^{l}\bar{g}_{k}^{l}}{f_{k}^{l}g_{k}^{l}}, \tag{143}\]
\[\phi_{k}^{l}=\frac{1}{2}(\sigma_{k}^{l}-\tilde{\sigma}_{k}^{l})=\mathrm{i}\ln\frac{\bar{f}_{k}^{l}g_{k}^{l}}{f_{k}^{l}\bar{g}_{k}^{l}}, \tag{144}\]
\[x_{k}^{l}=2kac^{-1}+2lbc-\theta_{k}^{l}-\tilde{\theta}_{k}^{l}=2kac^{-1}+2lbc+\ln\frac{g_{k}^{l}\bar{g}_{k}^{l}}{f_{k}^{l}\bar{f}_{k}^{l}}.
\tag{145}\] The fully discrete analogue of the gsG equation with \(\nu=-1\) is given by the following theorem: **Theorem 3.1**.: The fully discrete analogue of the gsG equation with \(\nu=-1\) is of the form \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^ {l}}{4}=\Delta_{k}^{l}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l }}{4}, \tag{146}\] \[(b^{2}c^{2}-1)\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x _{k}^{l}-4bc+4\chi_{1}}{2}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{l+1}+x_{ k}^{l}}{2}\] \[=-b^{2}c^{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k} ^{l}}{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}, \tag{147}\] with \[\Delta_{k}^{l}=\frac{\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k} ^{l+1}+x_{k+1}^{l}-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}}{\sqrt{b^{2}c^{2}-1}\sinh \frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^{l}-4bc+4\chi_{1}}{4}}, \tag{148}\] \[\sinh\chi_{1}=\frac{1}{\sqrt{b^{2}c^{2}-1}},\ \sinh\chi_{2}=\frac{a}{ \sqrt{c^{2}-a^{2}}}. \tag{149}\] Here \(u_{k}^{l},\ x_{k}^{l}\) are defined in (143) and (145). The \(\tau\)-functions \(f_{k}^{l}\) and \(g_{k}^{l}\) are defined in (125) with restriction (140) for Gram-type determinants, or with restriction (141) for Casorati-type determinants in Proposition 1. Moreover, two conserved quantities in the full-discrete gsG equation read as \[I_{k}^{l}=(c^{2}-a^{2})\sinh^{2}\frac{x_{k+1}^{l}-x_{k}^{l}-2ac^ {-1}+2\chi_{2}}{2}+c^{2}\sin^{2}\frac{u_{k+1}^{l}-u_{k}^{l}}{2}=a^{2}, \tag{150}\] \[J_{k}^{l}=\frac{b^{2}c^{2}-1}{b^{2}}\sinh^{2}\frac{x_{k}^{l+1}-x _{k}^{l}-2bc+2\chi_{1}}{2}+c^{2}\sin^{2}\frac{u_{k}^{l}+u_{k}^{l+1}}{2}=\frac{ 1}{b^{2}}. 
\tag{151}\]
Proof.: Note that we can rewrite (134), (137), (138) and (139) as
\[\frac{1}{b}\sin\frac{\phi_{k}^{l+1}-\phi_{k}^{l}}{2}=c\sin\frac{u_{k}^{l+1}+u_{k}^{l}}{2}, \tag{152}\]
\[\frac{1}{b}\cos\frac{\phi_{k}^{l+1}-\phi_{k}^{l}}{2}=\frac{\sqrt{b^{2}c^{2}-1}}{b}\sinh\frac{x_{k}^{l+1}-x_{k}^{l}-2bc+2\chi_{1}}{2},\ \sinh\chi_{1}=\frac{1}{\sqrt{b^{2}c^{2}-1}}, \tag{153}\]
\[a\sin\frac{\phi_{k}^{l}+\phi_{k+1}^{l}}{2}=c\sin\frac{u_{k+1}^{l}-u_{k}^{l}}{2}, \tag{154}\]
\[a\cos\frac{\phi_{k}^{l}+\phi_{k+1}^{l}}{2}=\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l}-x_{k}^{l}-2ac^{-1}+2\chi_{2}}{2},\ \sinh\chi_{2}=\frac{a}{\sqrt{c^{2}-a^{2}}}. \tag{155}\]
By making a shift of \(k\to k+1\) in (152), then adding and subtracting with (152), we obtain, respectively,
\[\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{4}=bc\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}, \tag{156}\]
\[\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l}}{4}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{4}=bc\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}.
\tag{157}\] Similarly, from (153)-(155), one can obtain \[\sqrt{b^{2}c^{2}-1}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+ 1}-x_{k}^{l}-4bc+4\chi_{1}}{4}\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{l+1}+ x_{k}^{l}}{4}\] \[=\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k }^{l}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{ 4}, \tag{158}\] \[\sqrt{b^{2}c^{2}-1}\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+ 1}-x_{k}^{l}-4bc+4\chi_{1}}{4}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{l+1}+ x_{k}^{l}}{4}\] \[=-\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k }^{l}}{4}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{ 4}, \tag{159}\] \[a\sin\frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi_{k}^{l+1}+\phi_{k}^ {l}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l}}{4}\] \[=c\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4} \cos\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{160}\] \[a\cos\frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi_{k}^{l+1}+\phi_{k }^{l}}{4}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l}}{4}\] \[=c\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4} \sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{161}\] and \[\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_{k+1}^{l }-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{ l+1}+x_{k}^{l}}{4}\] \[=a\cos\frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi_{k}^{l+1}+\phi_{ k}^{l}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l}}{4}, \tag{162}\] \[\sqrt{c^{2}-a^{2}}\cosh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_{k+1}^{l }-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^ {l+1}+x_{k}^{l}}{4}\] \[=-a\sin\frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi_{k}^{l+1}+\phi_ {k}^{l}}{4}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k}^{l }}{4}, \tag{163}\] respectively. Thus, eqs. 
(156) and (161) give \[\frac{1}{ab}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k} ^{l}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{4}\] \[=\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\cos \frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi_{k}^{l+1}+\phi_{k}^{l}}{4}. \tag{164}\] Eqs. (158) and (162) lead to \[a\sqrt{b^{2}c^{2}-1}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^ {l+1}-x_{k}^{l}-4bc+4\chi_{1}}{4}\cos\frac{\phi_{k+1}^{l+1}+\phi_{k+1}^{l}+\phi _{k}^{l+1}+\phi_{k}^{l}}{4}\] \[=\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_{k+1}^ {l}-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}\cos\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}- \phi_{k}^{l+1}+\phi_{k}^{l}}{4}. \tag{165}\] A substitution of (165) into (164) leads to \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k} ^{l}}{4}=\Delta_{k}^{l}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l }}{4}, \tag{166}\] with \[\Delta_{k}^{l}=\frac{\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_ {k+1}^{l}-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}}{\sqrt{b^{2}c^{2}-1}\sinh\frac{x_{k +1}^{l+1}-x_{k}^{l+1}+x_{k}^{l+1}-x_{k}^{l}-4bc+4\chi_{1}}{4}}. \tag{167}\] On the other hand, by multiplying (156) and (157), (158) and (159), we obtain, respectively \[b^{2}c^{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^ {l}}{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}\] \[=\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k }^{l}}{2}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{ 2}, \tag{168}\] \[(b^{2}c^{2}-1)\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^ {l}-4bc+4\chi_{1}}{2}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{l+1}+x_{k}^{l}}{ 2}\] \[=-\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}+\phi_{k}^{l+1}-\phi_{k} ^{l}}{2}\sin\frac{\phi_{k+1}^{l+1}-\phi_{k+1}^{l}-\phi_{k}^{l+1}+\phi_{k}^{l}}{ 2}, \tag{169}\] which leads to exactly (147) by eliminating the right side of the equations. 
Meanwhile, from (152)-(155), we have \[J_{k}^{l} =\frac{b^{2}c^{2}-1}{b^{2}}\sinh^{2}\frac{x_{k}^{l+1}-x_{k}^{l}-2 bc+2\chi_{1}}{2}+c^{2}\sin^{2}\frac{u_{k}^{l}+u_{k}^{l+1}}{2}=\frac{1}{b^{2}}, \tag{170}\] \[I_{k}^{l} =(c^{2}-a^{2})\sinh^{2}\frac{x_{k+1}^{l}-x_{k}^{l}-2ac^{-1}+2 \chi_{2}}{2}+c^{2}\sin^{2}\frac{u_{k+1}^{l}-u_{k}^{l}}{2}=a^{2}. \tag{171}\] Here \(a^{2}\) and \(\frac{1}{b^{2}}\) are constants, thus equations (170) and (171) actually give conserved quantities. The proof is complete. ### Fully discretization of the gsG equation with \(\nu=1\) In order to construct the fully discrete analogue of the gsG equation with \(\nu=1\), we choose the restriction as \[c=\lambda\mathrm{i},\,c_{ij}=\mathrm{i}\sqrt{\frac{1-cp_{i}}{1+cq_{j}}}\delta_ {ij}, \tag{172}\] for Gram-type solution, or \[c=\lambda\mathrm{i},\,d_{i}=\mathrm{i}\sqrt{\frac{1-cp_{i}}{1-cq_{i}}}c_{i}, \tag{173}\] for Casorati-type, which implies \[\tilde{f}_{k}^{l}\circ\tilde{g}_{k}^{l},\,\,\tilde{g}_{k}^{l}\circ\tilde{f}_{ k}^{l}. \tag{174}\] Next we introduce dependent variable transformations \[u_{k}^{l} =\frac{1}{2}(\sigma_{k}^{l}+\tilde{\sigma}_{k}^{l})=\mathrm{i} \ln\frac{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l}}{\tilde{f}_{k}^{l}.} \tag{175}\] \[\varphi_{k}^{l} =\frac{1}{2\mathrm{i}}(\sigma_{k}^{l}-\tilde{\sigma}_{k}^{l})=\ln \frac{\tilde{g}_{k}^{l}g_{k}^{l}}{\tilde{f}_{k}^{l}\tilde{f}_{k}^{l}}, \tag{176}\] and \[\tilde{x}_{k}^{l} =2ka\lambda^{-1}-2lb\lambda-\mathrm{i}(\theta_{k}^{l}+\tilde{ \theta}_{k}^{l})=2ka\lambda^{-1}-2lb\lambda+\mathrm{i}\ln\frac{\tilde{f}_{k}^{l }g_{k}^{l}}{\tilde{f}_{k}^{l}\tilde{g}_{k}^{l}}, \tag{177}\] \[\tilde{t}_{k}^{l} =2lb\lambda. \tag{178}\] We can construct the fully discrete analogue of the gsG equation with \(\nu=1\) through the following theorem. 
**Theorem 3.2**.: The fully discrete analogue of the gsG equation with \(\nu=1\) is of the form \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}=\tilde {\Delta}_{k}^{l}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{179}\] \[(1+b^{2}\lambda^{2})\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+ 1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{2}\sin \frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_ {k}^{l}}{2}\] \[=b^{2}\lambda^{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+ u_{k}^{l}}{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}, \tag{180}\] with \[\tilde{\Delta}^{l}_{k}=\frac{\sqrt{a^{2}+\lambda^{2}}\sin\frac{\tilde{x }^{l+1}_{k+1}-\tilde{x}^{l+1}_{k}+\tilde{x}^{l}_{k+1}-\tilde{x}^{l}_{k}-4a \lambda^{-1}+4\omega_{2}}{4}}{\sqrt{1+b^{2}\lambda^{2}}\sin\frac{\tilde{x}^{l+1 }_{k+1}-\tilde{x}^{l}_{k+1}+\tilde{x}^{l+1}_{k}-\tilde{x}^{l+1}_{k}+\tilde{x}^{ l+1}_{k}-\tilde{x}^{l}_{k}+4b\lambda+4\omega_{1}}{4}}, \tag{181}\] \[\sin\omega_{1}=\frac{1}{\sqrt{b^{2}\lambda^{2}+1}},\ \sin\omega_{2}=\frac{a}{\sqrt{a^{2}+\lambda^{2}}}. \tag{182}\] Here \(u^{l}_{k},\ x^{l}_{k}\) are defined in (175) and (177). The \(\tau\)-functions \(f^{l}_{k}\) and \(g^{l}_{k}\) are defined in (125) with restriction (172) for Gram-type determinants, or with restriction (173) for Casorati-type determinants in Proposition 1. 
Moreover, there are two conserved quantities in the full-discrete gsG equation read as \[\tilde{I}^{l}_{k}=\left(a\cos\frac{\tilde{x}^{l}_{k+1}-\tilde{x}^ {l}_{k}-2a\lambda^{-1}}{2}+\lambda\sin\frac{\tilde{x}^{l}_{k+1}-\tilde{x}^{l}_ {k}-2a\lambda^{-1}}{2}\right)^{2}-\lambda^{2}\sin^{2}\frac{u^{l}_{k+1}-u^{l}_ {k}}{2}=a^{2}, \tag{183}\] \[\tilde{J}^{l}_{k}=\left(\frac{1}{\lambda}\cos\frac{\tilde{x}^{l+ 1}_{k}-\tilde{x}^{l}_{k}+2b\lambda}{2}+\lambda\sin\frac{\tilde{x}^{l+1}_{k}- \tilde{x}^{l}_{k}+2b\lambda}{2}\right)^{2}-\lambda^{2}\sin^{2}\frac{u^{l+1}_{ k}+u^{l}_{k}}{2}=\frac{1}{b^{2}}. \tag{184}\] **Remark 3.2**.: Fully discrete analogues of the generalized sG equation with \(\nu=1\) can also be transformed into the case with \(\nu=-1\) through \[\phi^{l}_{k}=\mathrm{i}\varphi^{l}_{k},\ \tilde{x}^{l}_{k}=\mathrm{i}x^{l}_{k}, \ \tilde{t}=-\mathrm{i}t,\ c=\lambda\mathrm{i}. \tag{185}\] The detail proof of Theorem 3.2 is similar to the case with \(\nu=-1\), which is given in Appendix A. ### Reduction to the discrete sG and discrete SP equation In this section, we mainly investigate the reduction from the discrete gsG equation to the discrete sG equation [16, 17, 18] and the discrete SP equation [7]. The \(\tau\)-functions, as well as the variable transformations of the discrete sG equation and the discrete SP equation, can be derived from those of the discrete gsG equation. The parameter \(c\) (or \(\lambda\)) also plays an important role in the reduction of the discrete case. #### 3.3.1 Reduction to the discrete sG equation **(I) From the discrete gsG equation with \(\nu=-1\) to the discrete sG equation.** In this part of reduction, we take \(c\in\mathbb{R}\) as a small parameter. 
Then the elements of the \(\tau\)-function in Theorem 3.1 can be written as \[m^{(n)}_{ij}(k,l,1) =\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p _{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{ i}^{-1}}{1+bp_{j}^{-1}}\right)^{-l}\left(\frac{1-cp_{i}}{1+cp_{j}}\right)\] \[=\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p _{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{ i}^{-1}}{1+bp_{j}^{-1}}\right)^{-l}(1-(p_{i}+p_{j})c)+O(c^{2})\] \[=m^{(n)}_{ij}(k,l,0)-(p_{i}+p_{j})m^{(n)}_{ij}(k,l,0)c+O(c^{2}). \tag{186}\] We can rewrite the dependent variable transformations and define new variable as \[u^{l}_{k}=\mathrm{i}\ln\frac{\bar{f}^{l}_{k}\bar{g}^{l}_{k}}{f^{ l}_{k}g^{l}_{k}}=2\mathrm{i}\ln\frac{\bar{f}^{l}_{k}}{f^{l}_{k}}+O(c), \tag{187}\] \[\phi^{l}_{k}=\mathrm{i}\ln\frac{\bar{f}^{l}_{k}g^{l}_{k}}{f^{l}_ {k}\bar{g}^{l}_{k}}=\mathrm{i}\ln\frac{\bar{f}^{l}_{k}f^{l}_{k}}{f^{l}_{k}\bar {f}^{l}_{k}}+O(c)=O(c),\] (188) \[\hat{x}^{l}_{k}=cx^{l}_{k}=2ka+2lbc^{2}+c\ln\frac{g^{l}_{k}\bar{ g}^{l}_{k}}{f^{l}_{k}\bar{f}^{l}_{k}}=2ka+O(c^{2}),\] (189) \[\hat{t}^{l}_{k}=c^{-1}t^{l}_{k}=2lb. \tag{190}\] In the limit of \(c\to 0\), (187)-(189) lead to \[u_{k}^{l}=2\mathrm{i}\ln\frac{\tilde{f}_{k}^{l}}{\tilde{f}_{k}^{l}},\ \phi_{k}^{l}=0,\ \hat{x}_{k}^{l}=2ka. \tag{191}\] The definition of \(u_{k}^{l}\) corresponds to the dependent transformation of the discrete sG equation. Substituting (187)-(189) into (146) and (148) and taking \(c\to 0\), one can obtain \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}=a\sin \frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{192}\] which is just the discrete sG equation [16, 17, 18]. **(II) From the discrete gsG equation with \(\nu=1\) to the discrete sG equation.** In this part of reduction, we take \(\lambda\in\mathbb{R}\) as a small parameter. 
Here we have \[m_{ij}^{(n)}(k,l,1) =\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i} \lambda q_{j}}}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p_{j}} \right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{-1 }}{1+bp_{j}^{-1}}\right)^{-l}\left(\frac{1-\mathrm{i}\lambda p_{i}}{1+ \mathrm{i}\lambda p_{j}}\right)\] \[=\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{ p_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{-1 }}{1+bp_{j}^{-1}}\right)^{-l}+O(\lambda)\] \[=\hat{m}_{ij}^{(n)}(k,l,0)+O(\lambda), \tag{193}\] \[m_{ij}^{(n)}(k,l,0) =\mathrm{i}\sqrt{\frac{1-\mathrm{i}\lambda p_{i}}{1+\mathrm{i} \lambda q_{j}}}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p_{j}} \right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{-1 }}{1+bp_{j}^{-1}}\right)^{-l}\] \[=\hat{m}_{ij}^{(n)}(k,l,0)+O(\lambda). \tag{194}\] We can rewrite the dependent variable transformations and define new variable as \[u_{k}^{l}=2\mathrm{i}\ln\frac{\tilde{\tilde{f}}_{k}^{l}}{\tilde{f}_{k}^{l}}+O (\lambda),\ \varphi_{k}^{l}=O(\lambda),\ \tilde{x}_{k}^{l}=\lambda\tilde{x}_{k}^{l}=2ka+O(\lambda^{2}),\ \tilde{t}_{k}^{l}=\frac{\tilde{t}_{k}^{l}}{\lambda}=2lb. \tag{195}\] In the limit of \(\lambda\to 0\), eqs. (195) and (179) converge to \[u_{k}^{l}=2\mathrm{i}\ln\frac{\tilde{\tilde{f}}_{k}^{l}}{\tilde{f}_{k}^{l}}, \ \varphi_{k}^{l}=0,\,\tilde{x}_{k}^{l}=2ka,\ \tilde{t}_{k}^{l}=2lb, \tag{196}\] \[\frac{1}{ab}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}=\sin \frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{197}\] respectively. The latter is exactly the fully discrete sG equation. #### 3.3.2 Reduction to the discrete SP equation **(I) From the discrete gsG equation with \(\nu=-1\) to the discrete SP equation with \(\sigma=-1\).** In this part of reduction, we take \(c\) as a big parameter (or \(\epsilon=\frac{1}{c}\) small). 
Let us introduce a new auxiliary parameter \(s\), and redefine the matrix elements of the \(\tau\) function as \[\tilde{m}_{ij}^{(n)}(k,l,m)=\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(- \frac{p_{i}}{p_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k} \left(\frac{1-bp_{i}^{-1}}{1+bp_{j}^{-1}}\right)^{-l}\left(\frac{1-cp_{i}}{1+ cp_{j}}\right)^{m}e^{\left(\frac{1}{2p_{i}}+\frac{1}{2p_{j}}\right)s}. \tag{198}\] It is obviously that \(\tau_{n}(k,l,m)=\left|\tilde{m}_{ij}^{n}(k,l,m)\right|_{1\leqslant i,j\leqslant N}\) is still the solution of (118)-(119). If we take \(c\) as a big parameter (or \(\epsilon=\frac{1}{c}\) small), the elements of the \(\tau\) function can be written as \[\tilde{m}_{ij}^{(n)}(k,l,1) =\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{ p_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{- 1}}{1+bp_{j}^{-1}}\right)^{-l}\left(\frac{1-cp_{i}}{1+cp_{j}}\right)^{1}e^{ \left(\frac{1}{2p_{i}}+\frac{1}{2p_{j}}\right)s}\] \[=\mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{ p_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{- 1}}{1+bp_{j}^{-1}}\right)^{-l}e^{\left(\frac{1}{2p_{i}}+\frac{1}{2p_{j}} \right)s}\left(-\frac{p_{i}}{p_{j}}+\frac{p_{i}+p_{j}}{p_{j}^{2}}\epsilon \right)+O(\epsilon^{2})\] \[=m_{ij}^{(n+1)}(k,l,0)-2\epsilon\frac{\mathrm{d}}{\mathrm{d}s}m_{ ij}^{(n+1)}(k,l,0)+O(\epsilon^{2}), \tag{199}\] from which one can obtain \[g_{k}^{l} =\tau_{0}(k,l,1)=\bar{f}_{k}^{l}-2\epsilon\frac{\mathrm{d}}{ \mathrm{d}s}\bar{f}_{k}^{l}+O(\epsilon^{2}), \tag{200}\] \[\bar{g}_{k}^{l} =\tau_{1}(k,l,1)=f_{k}^{l}-2\epsilon\frac{\mathrm{d}}{\mathrm{d}s }f_{k}^{l}+O(\epsilon^{2}). 
\tag{201}\] We rewrite the dependent variable transformations and introduce new variable as \[\epsilon\hat{u}_{k}^{l} =u_{k}^{l}=\mathrm{i}\ln\frac{\bar{f}_{k}^{l}g_{k}^{l}}{f_{k}^{l} g_{k}^{l}}=\mathrm{i}\ln\frac{\bar{f}_{k}^{l}(f_{k}^{l}-2\epsilon\frac{\mathrm{d} }{\mathrm{d}s}\bar{f}_{k}^{l})}{f_{k}^{l}(\bar{f}_{k}^{l}-2\epsilon\frac{ \mathrm{d}}{\mathrm{d}s}\bar{f}_{k}^{l})}+O(\epsilon^{2}), \tag{202}\] \[\phi_{k}^{l} =\mathrm{i}\ln\frac{\bar{f}_{k}^{l}g_{k}^{l}}{\bar{f}_{k}^{l} \bar{g}_{k}^{l}}=\mathrm{i}\ln\frac{\bar{f}_{k}^{l}(\bar{f}_{k}^{l}-2\epsilon \frac{\mathrm{d}}{\mathrm{d}s}\bar{f}_{k}^{l})}{f_{k}^{l}(f_{k}^{l}-2\epsilon \frac{\mathrm{d}}{\mathrm{d}s}\bar{f}_{k}^{l})}+O(\epsilon^{2}),\] (203) \[\epsilon\hat{x}_{k}^{l} =\frac{2lb}{\epsilon}=x_{k}^{l}=2ka\epsilon+\frac{2lb}{\epsilon}+ \ln\frac{(\bar{f}_{k}^{l}-2\epsilon\frac{\mathrm{d}}{\mathrm{d}s}\bar{f}_{k}^ {l})(f_{k}^{l}-2\epsilon\frac{\mathrm{d}}{\mathrm{d}s}\bar{f}_{k}^{l})}{f_{k} ^{l}\bar{f}_{k}^{l}}+O(\epsilon^{2}),\] (204) \[\hat{t}_{k}^{l} =\epsilon t_{k}^{l}=2lb. \tag{205}\] In the limit of \(\epsilon\to 0\), (202)-(204) lead to \[\hat{u}_{k}^{l} =2\mathrm{i}\left(\ln\frac{\bar{f}_{k}^{l}}{\bar{f}_{k}^{l}} \right)_{s}, \tag{206}\] \[\phi_{k}^{l} =2\mathrm{i}\ln\frac{\bar{f}_{k}^{l}}{\bar{f}_{k}^{l}},\] (207) \[\hat{x}_{k}^{l} =2ka-2(\ln(f_{k}^{l}\bar{f}_{k}^{l}))_{s}. 
\tag{208}\] Substituting (202)-(204) into (146)-(148) and taking \(\epsilon\to 0\), one can obtain \[(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l}+\hat{x}_{k}^{l+1}-\hat{x}_ {k}^{l}+\frac{4}{b})(\hat{u}_{k+1}^{l+1}-\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l+1}+ \hat{u}_{k}^{l})\] \[=(\hat{x}_{k+1}^{l+1}-\hat{x}_{k}^{l+1}+\hat{x}_{k+1}^{l}-\hat{x }_{k}^{l})(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l}+\hat{u}_{k}^{l+1}+\hat{u}_{k} ^{l}), \tag{209}\] \[(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l}+\hat{x}_{k}^{l+1}-\hat{x }_{k}^{l}+\frac{4}{b})(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l}-\hat{x}_{k}^{l+1}+ \hat{x}_{k}^{l})\] \[=-(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l}+\hat{u}_{k}^{l+1}+\hat{u} _{k}^{l})(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l+1}-\hat{u}_{k} ^{l}), \tag{210}\] which lead to the discrete SP equations. In addition, conserved quantities \(I_{k}^{l}\) and \(J_{k}^{l}\) are recast into \[J_{k}^{l} =\left(\frac{1}{b}+\frac{\hat{x}_{k}^{l+1}-\hat{x}_{k}^{l}}{2} \right)^{2}+\left(\frac{\hat{u}_{k}^{l}+\hat{u}_{k}^{l+1}}{2}\right)^{2}, \tag{211}\] \[I_{k}^{l} =\left(\frac{\hat{x}_{k+1}^{l}-\hat{x}_{k}^{l}}{2}\right)^{2}+ \left(\frac{\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l}}{2}\right)^{2}, \tag{212}\] which correspond to the conserved quantities of the discrete SP equation derived in [7]. **(II) From the discrete gsG equation with \(\nu=1\) to the discrete SP equation with \(\sigma=1\).** Here we give a discrete analog of the SP equation with \(\sigma=1\) through the variable transformation and the scaling limit from the discrete gsG equation with \(\nu=1\). Similar to the case with \(\nu=-1\), we take \(\epsilon=\frac{1}{\lambda}\) as a small parameter. 
Introducing an auxiliary parameter \(s\), and redefining the matrix elements of the \(\tau\) function as shown in (198), we have \[\tilde{m}_{ij}^{(n)}(k,l,0)\propto\hat{m}_{ij}^{(n+1)}(k,l)-\mathrm{i}\epsilon \frac{\mathrm{d}}{\mathrm{d}s}\hat{m}_{ij}^{(n+1)}(k,l)+O(\epsilon^{2}), \tag{213}\] \[\tilde{m}_{ij}^{(n+1)}(k,l,0)\propto\hat{m}_{ij}^{(n)}(k,l)-\mathrm{i}\epsilon \frac{\mathrm{d}}{\mathrm{d}s}\hat{m}_{ij}^{(n)}(k,l)+O(\epsilon^{2}), \tag{214}\] where \[\hat{m}_{ij}^{(n)}(k,l)=\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{ p_{j}}\right)^{n}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}\left(\frac{1-bp_{i}^{-1 }}{1+bp_{j}^{-1}}\right)^{-l}e^{\left(\frac{1}{bp_{i}}+\frac{1}{bp_{j}}\right) s}. \tag{215}\] Thus one can obtain \[f_{k}^{l}=\hat{f}_{k}^{l}-\mathrm{i}\epsilon\frac{\mathrm{d}}{ \mathrm{d}\tau}\hat{f}_{k}^{l}+O(\epsilon^{2}),\;\vec{f}_{k}^{l}=\hat{f}_{k}^{ l}+\mathrm{i}\epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\hat{f}_{k}^{l}+O( \epsilon^{2}), \tag{216}\] \[g_{k}^{l}=\hat{g}_{k}^{l}+\mathrm{i}\epsilon\frac{\mathrm{d}}{ \mathrm{d}\tau}\hat{g}_{k}^{l}+O(\epsilon^{2}),\;\vec{g}_{k}^{l}=\hat{g}_{k}^{ l}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{g}_{k}^{l}+O(\epsilon^{2}),\] (217) \[\hat{f}_{k}^{l}=\left|\hat{m}_{ij}^{(n+1)}(k,l)\right|_{1\leqslant i,j\leqslant N},\;\hat{g}_{k}^{l}=\left|\hat{m}_{ij}^{(n)}(k,l)\right|_{1 \leqslant i,j\leqslant N}. \tag{218}\] Similar to the continuou, we rewrite the dependent variable transformations and introduce new variables, then under the limit \(\epsilon\to 0\), we have \[\hat{u}_{k}^{l}=2\left(\ln\frac{\hat{g}_{k}^{l}}{\hat{f}_{k}^{l}}\right)_{s},\;\varphi_{k}^{l}=2\ln\frac{\hat{g}_{k}^{l}}{\hat{f}_{k}^{l}},\;\hat{x}_{k}^ {l}=2ka-2(\ln(\hat{f}_{k}^{l}\hat{g}_{k}^{l}))_{s},\;\hat{t}_{k}^{l}=2lb. 
\tag{219}\] Moreover, one can propose a fully discrete analogue of the SP equation with \(\sigma=1\) \[(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l+1}+\hat{x}_{k}^{l+1}-\hat {x}_{k}^{l}+\frac{4}{b})(\hat{u}_{k+1}^{l+1}-\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l+ 1}+\hat{u}_{k}^{l})\] \[=(\hat{x}_{k+1}^{l+1}-\hat{x}_{k}^{l+1}+\hat{x}_{k+1}^{l}-\hat{x }_{k}^{l})(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l}+\hat{u}_{k}^{l+1}+\hat{u}_{k} ^{l}), \tag{220}\] \[(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l+1}+\hat{x}_{k}^{l+1}-\hat{x }_{k}^{l}+\frac{4}{b})(\hat{x}_{k+1}^{l+1}-\hat{x}_{k+1}^{l}-\hat{x}_{k}^{l+1} +\hat{x}_{k}^{l})\] \[=(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l+1}+\hat{u}_{k}^{l+1}+\hat {u}_{k}^{l})(\hat{u}_{k+1}^{l+1}+\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l+1}-\hat{u}_{ k}^{l}), \tag{221}\] and conserved quantities \(\tilde{I}_{k}^{l}\) and \(\tilde{J}_{k}^{l}\) are recast into \[\tilde{J}_{k}^{l}=\left(\frac{1}{b}+\frac{\hat{x}_{k}^{l+1}-\hat {x}_{k}^{l}}{2}\right)^{2}-\left(\frac{\hat{u}_{k}^{l}+\hat{u}_{k}^{l+1}}{2} \right)^{2}, \tag{222}\] \[\tilde{I}_{k}^{l}=(\frac{\hat{x}_{k+1}^{l}-\hat{x}_{k}^{l}}{2})^ {2}-\left(\frac{\hat{u}_{k+1}^{l}-\hat{u}_{k}^{l}}{2}\right)^{2}. \tag{223}\] Here \(N\)-soliton solutions of the full-discrete SP equation with \(\sigma=1\) also exhibits the singular nature. ## 4 The semi-discrete gsG equation ### From the fully discrete gsG equation to the semi-discrete gsG equation In this section, we demonstrate that the proposed fully discrete gsG equation with \(\nu=-1\) converges to the semi-discrete equation we obtained in [5] in the continuous limit \(b\to 0\). 
Moreover, we give a semi-discrete gsG equation with \(\nu=1\) through the same continuous limit from the fully discrete gsG equation with \(\nu=1\) #### 4.1.1 The semi-discrete gsG equation with \(\nu=-1\) Recall that \[\Delta_{k}^{l} =\frac{\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l+1}+x_{ k+1}^{l}-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{\sqrt{b^{2}c^{2}-1}\sinh\frac{x_{k+1}^{l+1}-x_{ k+1}^{l+1}+x_{k}^{l+1}-x_{k}^{l}-4bc+4\chi_{1}}{4}}}{4}\] \[=\frac{\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_{ k+1}^{l+1}-x_{k}^{l}-4ac^{-1}+4\chi_{2}}{\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l+1}+x_{ k}^{l+1}-x_{k}^{l}-4bc}{4}+bc\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^{l }-4bc}{4}}}{4}. \tag{224}\] Obviously, as \(b\to 0\), we have \[\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^{l}-4bc}{4} +bc\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^{l}-4bc}{4}\to 1, \tag{225}\] \[\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k+1}^{l+1}-x_{k}^{l+1}+x_{k+1}^{l} -x_{k}^{l}-4ac^{-1}+4\chi_{2}}{4}\rightarrow\sqrt{c^{2}-a^{2}}\sinh\frac{x_{k +1}-x_{k}-2ac^{-1}+2\chi_{2}}{2}. \tag{226}\] Note the relation (150) holds. Thus we have \[\Delta_{k}^{l}\rightarrow\sqrt{a^{2}-c^{2}\sin^{2}\frac{u_{k+1}-u_{k}}{2}}= \frac{\Delta_{k}}{2}. \tag{227}\] Furthermore, we can easily verify that \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4} \rightarrow\frac{\mathrm{d}}{\mathrm{d}\tau}\frac{u_{k+1}-u_{k}}{2}, \tag{228}\] and \[\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4} \rightarrow\sin\frac{u_{k+1}+u_{k}}{2}. \tag{229}\] Thus equation (146) converges to \[\frac{\mathrm{d}}{\mathrm{d}\tau}\left(u_{k+1}-u_{k}\right)= \Delta_{k}\sin\frac{u_{k+1}+u_{k}}{2}, \tag{230}\] \[\Delta_{k}=\sqrt{4a^{2}-4c^{2}\sin^{2}\frac{u_{k+1}-u_{k}}{2}}, \tag{231}\] which are actually part of the semi-discrete gsG equation with \(\nu=-1\) (see eqs. (3.47) and (3.49) in [5]). 
Here we used \(\frac{f^{l+1}-f^{l}}{2b}\rightarrow\frac{\mathrm{d}}{\mathrm{d}\tau}f\) as \(b\to 0\). For equation (147), the continuous limit \(b\to 0\) leads to \[(b^{2}c^{2}-1)\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{ k}^{l}-4bc+4\chi_{1}}{2}\] \[= (b^{2}c^{2}+1)\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{ k}^{l}-4bc}{2}+2bc\cosh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}+x_{k}^{l+1}-x_{k}^{l}-4bc}{2}\] \[\sim (b^{2}c^{2}+1)\left[\left(\frac{\mathrm{d}}{\mathrm{d}\tau}(x_{k} +x_{k+1})-2c\right)b+\mathcal{O}(b^{2})\right]+2bc+\mathcal{O}(b^{3})\] \[\sim \frac{\mathrm{d}}{\mathrm{d}\tau}(x_{k}+x_{k+1})b+\mathcal{O}(b^{ 3}), \tag{232}\] \[\sinh\frac{x_{k+1}^{l+1}-x_{k+1}^{l}-x_{k}^{l+1}+x_{k}^{l}}{2}\sim\frac{ \mathrm{d}}{\mathrm{d}\tau}(x_{k+1}-x_{k})b+\mathcal{O}(b^{3}). \tag{233}\] Note that \[\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{2}\sin \frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}\] \[= \frac{1}{2}\left(\cos(u_{k}^{l+1}+u_{k}^{l})-\cos(u_{k+1}^{l+1}+u_ {k+1}^{l})\right)\] \[= \cos^{2}\frac{u_{k}^{l+1}+u_{k}^{l}}{2}-\cos^{2}\frac{u_{k+1}^{l+1 }+u_{k+1}^{l}}{2}\] \[= (\cos\frac{u_{k}^{l+1}+u_{k}^{l}}{2}+\cos\frac{u_{k+1}^{l+1}+u_{k +1}^{l}}{2})(\cos\frac{u_{k}^{l+1}+u_{k}^{l}}{2}-\cos\frac{u_{k+1}^{l+1}+u_{k +1}^{l}}{2}), \tag{234}\] and \[bc\left(\cos\frac{u_{k}^{l+1}+u_{k}^{l}}{2}+\cos\frac{u_{k+1}^{l +1}+u_{k+1}^{l}}{2}\right)\] \[= \sqrt{b^{2}c^{2}-1+(b^{2}c^{2}-1)\sinh^{2}\frac{x_{k}^{l+1}-x_{k} ^{l}-2bc+2\chi_{1}}{2}}+\sqrt{b^{2}c^{2}-1+(b^{2}c^{2}-1)\sinh^{2}\frac{x_{k+1 }^{l+1}-x_{k+1}^{l}-2bc+2\chi_{1}}{2}}\] \[\sim \frac{\mathrm{d}}{\mathrm{d}\tau}(x_{k}+x_{k+1})b+\mathcal{O}(b ^{3}). \tag{235}\] If we divide both sides of (147) by \(b^{2}\frac{\mathrm{d}}{\mathrm{d}\tau}(x_{k}+x_{k+1})\) and take \(b\to 0\), we arrive at \[\frac{\mathrm{d}}{\mathrm{d}\tau}(x_{k+1}-x_{k})=c(\cos u_{k+1}-\cos u_{k}), \tag{236}\] Eqs. (230), (231) and (236) are the semi-discrete analog of the gsG equation we proposed in [5]. 
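The product-to-sum manipulation in (234) can be checked numerically. Below is a minimal sketch; the arrays `P` and `Q` are shorthand introduced here for the four-point sums \(u_{k+1}^{l+1}+u_{k+1}^{l}\) and \(u_{k}^{l+1}+u_{k}^{l}\):

```python
import numpy as np

# P stands for u_{k+1}^{l+1} + u_{k+1}^{l}, Q for u_k^{l+1} + u_k^{l}
rng = np.random.default_rng(0)
P = rng.uniform(-3.0, 3.0, 200)
Q = rng.uniform(-3.0, 3.0, 200)

lhs = np.sin((P + Q) / 2) * np.sin((P - Q) / 2)   # left-hand side of (234)
mid = (np.cos(Q) - np.cos(P)) / 2                 # product-to-sum form
rhs = np.cos(Q / 2) ** 2 - np.cos(P / 2) ** 2     # half-angle form

max_err = max(np.abs(lhs - mid).max(), np.abs(mid - rhs).max())
```

All three expressions agree to machine precision, confirming the chain of equalities in (234).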
It can be easily verified that the \(\tau\)-function and the variable transformations converge to those in the semi-discrete gsG equation with \(\nu=-1\) \[u_{k}(\tau)=\mathrm{i}\ln\frac{\bar{f}_{k}\bar{g}_{k}}{f_{k}g_{k }},\;\phi_{k}(\tau)=\mathrm{i}\ln\frac{g_{k}\bar{f}_{k}}{f_{k}\bar{g}_{k}}, \tag{237}\] \[x_{k}(\tau)=2kac^{-1}+c\tau+\ln\frac{\bar{g}_{k}g_{k}}{\bar{f}_{ k}\bar{f}_{k}},\;t=c\tau,\] (238) \[f_{k}=\tau_{00}(k),\,g_{k}=\tau_{01}(k), \tag{239}\] with \[\tau_{n,m}(\frac{\tau}{2},k)=\left|m_{ij}^{n,m}(k)\right|_{N\times N}=\left| \mathrm{i}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p_{j}}\right)^{ n}\left(\frac{1-cp_{i}}{1+cp_{j}}\right)^{m}\left(\frac{1-ap_{i}}{1+ap_{j}} \right)^{-k}e^{\xi_{i}+\eta_{j}}\right|_{N\times N},\] \[\xi_{i}=\frac{\tau}{2p_{i}}+\xi_{i0},\quad\eta_{j}=\frac{\tau}{2p_{j}}+\eta_{ j0},\] or \[\tau_{nm}(\frac{\tau}{2},k)=\left|\phi_{(n+j-1)}^{(i)}(k,l,m)\right|_{1\leqslant i,j\leqslant N}, \tag{240}\] with \[\phi_{(n)}^{(i)}(k,l,m)=p_{i}^{n}\left(1-ap_{i}\right)^{-k}\left(1-cp_{i} \right)^{m}e^{\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}\left(1+ap _{i}\right)^{-k}\left(1+cp_{i}\right)^{m}e^{-\frac{1}{2p_{i}}\tau+\eta_{i0}}.\] #### 4.1.2 The semi-discrete gsG equation with \(\nu=1\) With the continuous limit \(b\to 0\) and the similar procedure in 4.1.1, one can obtain the following theorem. **Theorem 4.1**.: An integrable semi-discrete analogue of the gsG equation with \(\nu=1\) is of the form \[\frac{\mathrm{d}}{\mathrm{d}\tau}\left(u_{k+1}-u_{k}\right)=\tilde{ \Delta}_{k}\sin\frac{u_{k+1}+u_{k}}{2}, \tag{241}\] \[\frac{\mathrm{d}\delta_{k}}{\mathrm{d}\tau}=\lambda(\cos u_{k}- \cos u_{k+1}), \tag{242}\] where the lattice parameter \(\tilde{\Delta}_{k}\) is a function depending on \((k,\tau)\) defined by \[\tilde{\Delta}_{k}=\sqrt{4a^{2}+4\lambda^{2}\sin^{2}\frac{u_{k+1}-u_{k}}{2}}. 
\tag{243}\] Moreover, the \(N\)-soliton solution is given by \[u_{k}(\tau) =\mathrm{i}\ln\frac{\bar{f}_{k}\bar{g}_{k}}{f_{k}g_{k}}, \tag{244}\] \[x_{k}(\tau) =2ka\lambda^{-1}-\lambda\tau+\mathrm{i}\ln\frac{\bar{f}_{k}g_{k}}{f_{k}\bar{g}_{k}},\ t=\lambda\tau, \tag{245}\] \[\varphi_{k}(\tau) =\ln\frac{g_{k}\bar{g}_{k}}{f_{k}\bar{f}_{k}}, \tag{246}\] where \(f_{k},g_{k},\bar{f}_{k}\) and \(\bar{g}_{k}\) are \(\tau\)-functions defined by \[f_{k}=\tau_{00}(k),\,g_{k}=\tau_{01}(k), \tag{247}\] either with the Gram-type determinant \[\tau_{n,m}(\frac{\tau}{2},k)=\left|m_{ij}^{n,m}(k)\right|_{N\times N}=\left|\mathrm{i}\sqrt{\frac{1-\lambda\mathrm{i}p_{i}}{1+\lambda\mathrm{i}p_{j}}}\delta_{ij}+\frac{1}{p_{i}+p_{j}}\left(-\frac{p_{i}}{p_{j}}\right)^{n}\left(\frac{1-\lambda\mathrm{i}p_{i}}{1+\lambda\mathrm{i}p_{j}}\right)^{m}\left(\frac{1-ap_{i}}{1+ap_{j}}\right)^{-k}e^{\xi_{i}+\eta_{j}}\right|_{N\times N},\] and \[\xi_{i}=\frac{\tau}{2p_{i}}+\xi_{i0},\quad\eta_{j}=\frac{\tau}{2p_{j}}+\eta_{j0},\] or with the Casorati-type determinant \[\tau_{nm}(\frac{\tau}{2},k)=\left|\phi_{(n+j-1)}^{(i)}(k,m)\right|_{1\leqslant i,j\leqslant N}, \tag{248}\] \[\phi_{(n)}^{(i)}(k,m)=p_{i}^{n}\left(1-ap_{i}\right)^{-k}\left(1-cp_{i}\right)^{m}e^{\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}\left(\frac{1-cp_{i}}{1+cp_{i}}\right)^{\frac{1}{2}}(-p_{i})^{n}\left(1+ap_{i}\right)^{-k}\left(1+cp_{i}\right)^{m}e^{-\frac{1}{2p_{i}}\tau+\eta_{i0}}.\] **Remark 4.1**.: The semi-discrete analogue of the generalized sG equation with \(\nu=-1\) can be transformed into the case with \(\nu=1\) proposed here through \[\phi_{k}=\mathrm{i}\varphi_{k},\ \tilde{x}_{k}=\mathrm{i}x_{k},\ \tilde{t}=-\mathrm{i}t,\ c=\lambda\mathrm{i}. \tag{249}\] By a proof similar to that in [5], the continuous limit of the semi-discrete analogue proposed in this paper is the gsG equation with \(\nu=1\). However, their solutions are quite dissimilar, as we will illustrate in Section 5.
### From the semi-discrete 2DTL equation to the semi-discrete gsG equation with \(\nu=1\)
We proposed two integrable semi-discrete gsG equations with \(\nu=-1\) from the semi-discrete 2DTL equation in [5]. In this part, we show that the semi-discrete gsG equation with \(\nu=1\) can also be obtained from the semi-discrete 2DTL equation through some reductions and an appropriate definition of the discrete hodograph transformation. We start with the following semi-discrete 2DTL equations \[\left(\frac{1}{a}D_{x_{-1}}-1\right)\tau_{n,m}(k+1)\cdot\tau_{n,m}(k)+\tau_{n+1,m}(k+1)\tau_{n-1,m}(k)=0, \tag{250}\] \[\left(\frac{1}{c}D_{x_{-1}}-1\right)\tau_{n,m}(k)\cdot\tau_{n,m+1}(k)+\tau_{n+1,m}(k)\tau_{n-1,m+1}(k)=0. \tag{251}\] Here \(a\) is the spatial discretization step. Eq. (250) can also be viewed as a Bäcklund transformation (BT) for the 2DTL equations, in the sense that if \(\tau_{n,m}(k)\) is a solution to the 2DTL equations (35), so is \(\tau_{n,m}(k+1)\), while Eq. (251) is the BT linking the solution \(\tau_{n,m}(k)\) of Eq. (250) to \(\tau_{n,m+1}(k)\).
**Lemma 4.1**.: The bilinear equations (250)-(251) admit the following Casorati determinant solutions \[\tau_{n,m}\left(x_{-1},k\right)=\left|\begin{array}{cccc}\phi_{n,m}^{(1)}(k)&\phi_{n+1,m}^{(1)}(k)&\cdots&\phi_{n+N-1,m}^{(1)}(k)\\ \phi_{n,m}^{(2)}(k)&\phi_{n+1,m}^{(2)}(k)&\cdots&\phi_{n+N-1,m}^{(2)}(k)\\ \cdots&\cdots&\cdots&\cdots\\ \phi_{n,m}^{(N)}(k)&\phi_{n+1,m}^{(N)}(k)&\cdots&\phi_{n+N-1,m}^{(N)}(k)\\ \end{array}\right|, \tag{252}\] where \[\phi_{n,m}^{(i)}(k)=c_{i}p_{i}^{n}\left(1-cp_{i}\right)^{m}\left(1-ap_{i}\right)^{-k}e^{\xi_{i}}+d_{i}q_{i}^{n}\left(1-cq_{i}\right)^{m}\left(1-aq_{i}\right)^{-k}e^{\eta_{i}}, \tag{253}\] with \[\xi_{i}={p_{i}}^{-1}x_{-1}+\xi_{i0},\quad\eta_{i}={q_{i}}^{-1}x_{-1}+\eta_{i0},\] and Gram-type determinant solutions \[\tau_{n,m}(x_{-1},k)=\left|m_{ij}^{n,m}(k)\right|_{N\times N}=\left|c_{ij}+\frac{1}{p_{i}+q_{j}}\left(-\frac{p_{i}}{q_{j}}\right)^{n}\left(\frac{1-cp_{i}}{1+cq_{j}}\right)^{m}\left(\frac{1-ap_{i}}{1+aq_{j}}\right)^{-k}e^{\xi_{i}+\eta_{j}}\right|_{N\times N}, \tag{254}\] with \[\xi_{i}=p_{i}^{-1}x_{-1}+\xi_{i0},\quad\eta_{j}=q_{j}^{-1}x_{-1}+\eta_{j0}. \tag{255}\] Here \(p_{i},q_{i},\xi_{i0}\) and \(\eta_{i0}\) are arbitrary parameters which can take either real or complex values. Applying reductions similar to those in the continuous case, each of the \(\tau\)-functions satisfies the following relations \[\tau_{n+1,0}\propto\bar{\tau}_{n,1},\,\tau_{n+1,1}\propto\bar{\tau}_{n,0}.
\tag{256}\] By putting \(\tau=2x_{-1}\), \(\tau_{00}(k)=f_{k},\,\,\tau_{01}(k)=g_{k}\), (250)-(251) can be converted into \[\left(\frac{2}{a}D_{\tau}-1\right)f_{k+1}\cdot f_{k}+\bar{g}_{k+1 }\bar{g}_{k}=0, \tag{257}\] \[\left(\frac{2}{a}D_{\tau}-1\right)\bar{f}_{k+1}\cdot\bar{f}_{k}+g_ {k+1}g_{k}=0,\] (258) \[\left(\frac{2}{a}D_{\tau}-1\right)g_{k+1}\cdot g_{k}+\bar{f}_{k+1 }\cdot\bar{f}_{k}=0,\] (259) \[\left(\frac{2}{a}D_{\tau}-1\right)\bar{g}_{k+1}\cdot\bar{g}_{k}+f_ {k+1}\cdot f_{k}=0,\] (260) \[(-2\mathrm{i}\lambda^{-1}D_{\tau}-1)f_{k}\cdot g_{k}+\bar{f}_{k} \bar{g}_{k}=0,\] (261) \[(2\mathrm{i}\lambda^{-1}D_{\tau}-1)\bar{f}_{k}\cdot\bar{g}_{k}+f _{k}g_{k}=0. \tag{262}\] Now we can rewrite the bilinear equations (257)-(262) as \[\frac{2}{a}\left(\ln\frac{f_{k+1}}{f_{k}}\right)_{\tau}-1=-\frac{ \bar{g}_{k+1}\bar{g}_{k}}{f_{k+1}f_{k}}, \tag{263}\] \[\frac{2}{a}\left(\ln\frac{\bar{f}_{k+1}}{\bar{f}_{k}}\right)_{\tau} -1=-\frac{g_{k+1}g_{k}}{f_{k+1}f_{k}},\] (264) \[\frac{2}{a}\left(\ln\frac{g_{k+1}}{g_{k}}\right)_{\tau}-1=-\frac{ \bar{f}_{k+1}\bar{f}_{k}}{g_{k+1}g_{k}},\] (265) \[\frac{2}{a}\left(\ln\frac{\bar{g}_{k+1}}{\bar{g}_{k}}\right)_{\tau} -1=-\frac{f_{k+1}f_{k}}{\bar{g}_{k+1}\bar{g}_{k}},\] (266) \[2\mathrm{i}\lambda^{-1}\left(\ln\frac{f_{k}}{g_{k}}\right)_{\tau} +1=\frac{\bar{f}_{k}\bar{g}_{k}}{f_{k}g_{k}},\] (267) \[2\mathrm{i}\lambda^{-1}\left(\ln\frac{\bar{f}_{k}}{\bar{g}_{k}} \right)_{\tau}-1=-\frac{f_{k}g_{k}}{\bar{f}_{k}\bar{g}_{k}}. 
\tag{268}\] Introducing two intermediate variables \[\sigma_{k}(\tau)=2\mathrm{i}\ln\frac{\bar{g}_{k}(\tau)}{f_{k}(\tau)}, \tag{269}\] \[\sigma_{k}^{\prime}(\tau)=2\mathrm{i}\ln\frac{\bar{f}_{k}(\tau)}{g_{k}(\tau)}, \tag{270}\] one arrives at a pair of semi-discrete sG equations \[\frac{1}{2a}\left(\sigma_{k+1}-\sigma_{k}\right)_{\tau}=\sin\left(\frac{\sigma_{k+1}+\sigma_{k}}{2}\right), \tag{271}\] \[\frac{1}{2a}\left(\sigma_{k+1}^{\prime}-\sigma_{k}^{\prime}\right)_{\tau}=\sin\left(\frac{\sigma_{k+1}^{\prime}+\sigma_{k}^{\prime}}{2}\right). \tag{272}\] Next we introduce the dependent variable transformations \[u_{k}(\tau)=\frac{1}{2}\left(\sigma_{k}+\sigma_{k}^{\prime}\right)=\mathrm{i}\ln\frac{\bar{f}_{k}\bar{g}_{k}}{f_{k}g_{k}},\ \varphi_{k}(\tau)=\frac{1}{2\mathrm{i}}\left(\sigma_{k}-\sigma_{k}^{\prime}\right)=\ln\frac{g_{k}\bar{g}_{k}}{f_{k}\bar{f}_{k}}. \tag{273}\] **Proposition 2**.: \[\frac{\mathrm{d}\varphi_{k}}{\mathrm{d}\tau}=\lambda\sin u_{k}, \tag{274}\] \[a\sinh\left(\frac{\varphi_{k+1}+\varphi_{k}}{2}\right)=\lambda\sin\left(\frac{u_{k+1}-u_{k}}{2}\right). \tag{275}\] A similar proof of this proposition can be found in [5]; we omit the details here. We define a discrete hodograph transformation \[x_{k}=2ka\lambda^{-1}-\lambda\tau+\mathrm{i}\ln\frac{\bar{f}_{k}g_{k}}{f_{k}\bar{g}_{k}}, \tag{276}\] then the nonuniform mesh can be derived as \[\delta_{k}=x_{k+1}-x_{k}=2a\lambda^{-1}+\mathrm{i}\ln\frac{\bar{f}_{k+1}g_{k+1}f_{k}\bar{g}_{k}}{f_{k+1}\bar{g}_{k+1}\bar{f}_{k}g_{k}}.
\tag{277}\] Taking the derivative with respect to \(\tau\) results in \[\frac{\mathrm{d}\delta_{k}}{\mathrm{d}\tau} =\mathrm{i}\left(\ln\frac{\bar{f}_{k+1}}{\bar{g}_{k+1}}+\ln\frac{ \bar{f}_{k}}{g_{k}}-\ln\frac{\bar{f}_{k+1}}{g_{k+1}}-\ln\frac{\bar{f}_{k}}{ \bar{g}_{k}}\right)_{\tau}\] \[=\frac{\lambda}{2}\left(\frac{f_{k}g_{k}}{\bar{f}_{k}\bar{g}_{k}} +\frac{\bar{f}_{k}\bar{g}_{k}}{f_{k}g_{k}}-\frac{f_{k+1}g_{k+1}}{\bar{f}_{k+1} \bar{g}_{k+1}}-\frac{\bar{f}_{k+1}\bar{g}_{k+1}}{f_{k+1}g_{k+1}}\right)\] \[=\lambda(\cos u_{k}-\cos u_{k+1}). \tag{278}\] \[\frac{\mathrm{d}}{\mathrm{d}\tau}\left(u_{k+1}-u_{k}\right) =\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\sigma_{k+1}+ \sigma_{k+1}^{\prime}-\sigma_{k}-\sigma_{k}^{\prime}\right)\] \[=2a\sin\frac{u_{k}+u_{k+1}}{2}\cosh\frac{\varphi_{k}+\varphi_{k+1 }}{2}\] \[=\sin\frac{u_{k}+u_{k+1}}{2}\sqrt{4a^{2}+4\lambda^{2}\sin^{2} \frac{u_{k+1}-u_{k}}{2}}. \tag{279}\] Therefore, we can propose an integrable semi-discrete gsG equation with \(\nu=1\) which is just the same as the analog we obtained from the fully discrete gsG equation in Theorem 4.1. #### 4.2.1 Another integrable semi-discrete generalized sine-Gordon equation with \(\nu=1\) If we define a discrete hodograph transformation \[\delta_{k}=2a\lambda^{-1}\cosh\left(\frac{\varphi_{k}+\varphi_{k+1}}{2}\right). 
\tag{280}\] Then, we have \[\frac{\mathrm{d}}{\mathrm{d}\tau}(u_{k+1}-u_{k}) =\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}\tau}(\sigma_{k+1}+\sigma_{k+1}^{\prime}-\sigma_{k}-\sigma_{k}^{\prime})\] \[=a\left[\sin\left(\frac{\sigma_{k}+\sigma_{k+1}}{2}\right)+\sin\left(\frac{\sigma_{k}^{\prime}+\sigma_{k+1}^{\prime}}{2}\right)\right]\] \[=2a\sin\left(\frac{u_{k}+u_{k+1}}{2}\right)\cosh\frac{\varphi_{k}+\varphi_{k+1}}{2}\] \[=\lambda\delta_{k}\sin\left(\frac{u_{k}+u_{k+1}}{2}\right).\] On the other hand, taking the derivative of \(\delta_{k}\) with respect to \(\tau\) results in \[\frac{\mathrm{d}\delta_{k}}{\mathrm{d}\tau} =a\lambda^{-1}\sinh\left(\frac{\varphi_{k}+\varphi_{k+1}}{2}\right)(\varphi_{k}+\varphi_{k+1})_{\tau}\] \[=\lambda\sin\left(\frac{u_{k+1}-u_{k}}{2}\right)(\sin u_{k}+\sin u_{k+1})\] \[=\frac{\lambda}{2}\cos\left(\frac{3u_{k}-u_{k+1}}{2}\right)-\frac{\lambda}{2}\cos\left(\frac{3u_{k+1}-u_{k}}{2}\right)\] \[=-\lambda\cos\left(\frac{u_{k+1}-u_{k}}{2}\right)(\cos u_{k+1}-\cos u_{k}).\] Summarizing the above results, we obtain an alternative integrable semi-discrete analogue of the gsG equation (2) by the following theorem. **Theorem 4.2**.: An alternative semi-discrete analogue of the gsG equation (2) is of the form \[\frac{\mathrm{d}}{\mathrm{d}\tau}(u_{k+1}-u_{k})=\lambda\delta_{k}\sin\left(\frac{u_{k}+u_{k+1}}{2}\right), \tag{281}\] \[\frac{\mathrm{d}\delta_{k}}{\mathrm{d}\tau}=\lambda\cos\left(\frac{u_{k+1}-u_{k}}{2}\right)(\cos u_{k}-\cos u_{k+1}), \tag{282}\] where the lattice parameter \(\delta_{k}\) is a function depending on \((k,\tau)\) defined by \[\delta_{k}=2a\lambda^{-1}\cosh\left(\frac{\varphi_{k}+\varphi_{k+1}}{2}\right), \tag{283}\] and \(t=\lambda\tau\).
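The trigonometric chain used in the computation of \(\frac{\mathrm{d}\delta_{k}}{\mathrm{d}\tau}\) can be confirmed numerically. A minimal sketch, with `u1` and `u2` standing in for \(u_k\) and \(u_{k+1}\) (and the overall factor \(\lambda\) dropped, since it is common to all three expressions):

```python
import numpy as np

rng = np.random.default_rng(1)
u1 = rng.uniform(-np.pi, np.pi, 200)   # plays the role of u_k
u2 = rng.uniform(-np.pi, np.pi, 200)   # plays the role of u_{k+1}

lhs = np.sin((u2 - u1) / 2) * (np.sin(u1) + np.sin(u2))
mid = 0.5 * np.cos((3 * u1 - u2) / 2) - 0.5 * np.cos((3 * u2 - u1) / 2)
rhs = -np.cos((u2 - u1) / 2) * (np.cos(u2) - np.cos(u1))

max_err = max(np.abs(lhs - mid).max(), np.abs(mid - rhs).max())
```

The three lines of the chain agree to machine precision.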
### Reduction to the semi-discrete sG and semi-discrete SP equations
Similar to the continuous case and the discrete case, we demonstrate that the semi-discrete gsG equations reduce to the semi-discrete sG equation [15] and the semi-discrete SP equation [7] under appropriate variable transformations and scaling limits.
#### 4.3.1 Reduction to the semi-discrete sG equation
**(I) From the semi-discrete gsG equation with \(\nu=-1\) to the semi-discrete sG equation.** Similar to the continuous case and the discrete case, we rewrite the elements of the \(\tau\)-function as \[\phi_{n}^{(i)}(k,1) =p_{i}^{n}(1-ap_{i})^{-k}(1-cp_{i})e^{\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}(1+ap_{i})^{-k}\left(1+cp_{i}\right)e^{-\frac{1}{2p_{i}}\tau+\eta_{i0}}\] \[=\phi_{n}^{(i)}(k,0)-c\phi_{n+1}^{(i)}(k,0). \tag{284}\] Thus we have \[u_{k} =2\mathrm{i}\ln\frac{\bar{f}_{k}}{f_{k}}+O(c), \tag{285}\] \[\phi_{k} =O(c), \tag{286}\] \[\hat{x}_{k} =cx_{k}=2ka+O(c^{2}), \tag{287}\] \[\hat{t} =c^{-1}t=\tau. \tag{288}\] We can then rewrite the semi-discrete analogue of the gsG equation with \(\nu=-1\) proposed in [5] as \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(u_{k+1}-u_{k}) =\sqrt{4a^{2}-4c^{2}\sin^{2}\left(\frac{u_{k+1}-u_{k}}{2}\right)}\sin\left(\frac{u_{k+1}+u_{k}}{2}\right), \tag{289}\] \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{x}_{k+1}-\hat{x}_{k}) =c^{2}(\cos u_{k+1}-\cos u_{k}). \tag{290}\] With the scaling limit \(c\to 0\), equations (289)-(290) lead to \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(u_{k+1}-u_{k}) =2a\sin\left(\frac{u_{k+1}+u_{k}}{2}\right), \tag{291}\] \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{x}_{k+1}-\hat{x}_{k}) =0. \tag{292}\] Here equation (291) is just the semi-discrete sG equation [15]. Additionally, the solutions also reduce to those for the semi-discrete sG equation [31, 32, 33].
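As a sanity check on (291), one can verify numerically that a one-kink ansatz \(u_{k}=4\arctan e^{k\mu+\omega\hat{t}}\) solves it once \(\omega=a\coth(\mu/2)\). Both the ansatz and this dispersion relation are our own illustrative assumption, not taken from the text; a quick sketch:

```python
import numpy as np

def u(k, t, mu, omega):
    # one-kink ansatz u_k(t) = 4*arctan(exp(k*mu + omega*t))  (illustrative)
    return 4.0 * np.arctan(np.exp(k * mu + omega * t))

a, mu = 0.3, 0.7
omega = a / np.tanh(mu / 2.0)   # assumed dispersion relation fixing the kink speed

h = 1e-6
max_res = 0.0
for k in range(-4, 5):
    for t in (-1.2, 0.0, 0.9):
        # central difference in t of u_{k+1} - u_k, compared with the RHS of (291)
        lhs = (u(k + 1, t + h, mu, omega) - u(k, t + h, mu, omega)
               - u(k + 1, t - h, mu, omega) + u(k, t - h, mu, omega)) / (2 * h)
        rhs = 2 * a * np.sin((u(k + 1, t, mu, omega) + u(k, t, mu, omega)) / 2)
        max_res = max(max_res, abs(lhs - rhs))
```

The residual is at the level of the finite-difference error, so the ansatz satisfies (291) exactly for this choice of \(\omega\).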
**(II) From the semi-discrete gsG equation with \(\nu=1\) to the semi-discrete sG equation.** In this case, the relations between the \(\tau\)-functions are the same as in the semi-discrete gsG equation with \(\nu=-1\), so we have \[u_{k}=2\mathrm{i}\ln\frac{\bar{\hat{f}}_{k}}{\hat{f}_{k}}+O(\lambda),\ \phi_{k}=O(\lambda),\ \hat{x}_{k}=\lambda x_{k}=2ka+O(\lambda^{2}),\ \hat{t}=\lambda^{-1}t=\tau. \tag{293}\] One can rewrite the semi-discrete analogue of the gsG equation with \(\nu=1\) as \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}\left(u_{k+1}-u_{k}\right) =\sqrt{4a^{2}+4\lambda^{2}\sin^{2}\frac{u_{k+1}-u_{k}}{2}}\sin\frac{u_{k+1}+u_{k}}{2}, \tag{294}\] \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}\left(\hat{x}_{k+1}-\hat{x}_{k}\right) =\lambda^{2}(\cos u_{k}-\cos u_{k+1}). \tag{295}\] Taking \(\lambda\to 0\), one can also arrive at the semi-discrete sG equation and its determinant solutions.
#### 4.3.2 Reduction to the semi-discrete SP equation
**(I) From the semi-discrete gsG equation with \(\nu=-1\) to the semi-discrete SP equation with \(\sigma=-1\).** We have \[\phi_{n}^{(i)}(k,1) =p_{i}^{n}(1-ap_{i})^{-k}(1-cp_{i})e^{\frac{1}{2p_{i}}\tau+\xi_{i0}}+\mathrm{i}(-p_{i})^{n}(1+ap_{i})^{-k}\left(1+cp_{i}\right)e^{-\frac{1}{2p_{i}}\tau+\eta_{i0}}\] \[\propto\phi_{n+1}^{(i)}(k,0)-2\epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\phi_{n+1}^{(i)}(k,0), \tag{296}\] and \[g_{k}\propto\bar{f}_{k}-2\epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\bar{f}_{k}+O(\epsilon^{2}),\ \bar{g}_{k}\propto f_{k}-2\epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}f_{k}+O(\epsilon^{2}), \tag{297}\] \[\hat{u}_{k}=\frac{u_{k}}{\epsilon}=2\mathrm{i}\left(\ln\frac{\bar{f}_{k}}{f_{k}}\right)_{\tau}+O(\epsilon),\ \phi_{k}=2\mathrm{i}\ln\frac{\bar{f}_{k}}{f_{k}}+O(\epsilon), \tag{298}\] \[\hat{x}_{k}=2ka-2(\ln(f_{k}\bar{f}_{k}))_{\tau}+O(\epsilon),\ \hat{t}=\tau.
\tag{299}\] The semi-discrete gsG equation with \(\nu=-1\) can be recast as \[\epsilon\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{u}_{k+1}-\hat{u}_{k}) =\sqrt{4a^{2}-\frac{4}{\epsilon^{2}}\sin^{2}\left(\frac{\epsilon(\hat{u}_{k+1}-\hat{u}_{k})}{2}\right)}\sin\left(\frac{\epsilon(\hat{u}_{k+1}+\hat{u}_{k})}{2}\right), \tag{300}\] \[\epsilon\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{x}_{k+1}-\hat{x}_{k}) =\frac{1}{\epsilon}(\cos(\epsilon\hat{u}_{k+1})-\cos(\epsilon\hat{u}_{k})). \tag{301}\] Dividing both sides of (300)-(301) by \(\epsilon\) and taking \(\epsilon\to 0\) in (297)-(301), we arrive at \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{u}_{k+1}-\hat{u}_{k}) =\sqrt{4a^{2}-(\hat{u}_{k+1}-\hat{u}_{k})^{2}}\frac{\hat{u}_{k+1}+\hat{u}_{k}}{2}, \tag{302}\] \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{x}_{k+1}-\hat{x}_{k}) =\frac{\hat{u}_{k}^{2}-\hat{u}_{k+1}^{2}}{2}, \tag{303}\] which is nothing but the semi-discrete SP equation proposed in [7] by taking \(X_{k}=\hat{x}_{k}/2\). **(II) From the semi-discrete gsG equation with \(\nu=1\) to the semi-discrete SP equation with \(\sigma=1\).** With an analysis similar to that in the continuous case and the discrete case, we can construct the semi-discrete SP equation with \(\sigma=1\).
By defining \[\hat{\phi}_{n}^{(i)}(k)=p_{i}^{n}(1-ap_{i})^{-k}e^{\frac{1}{2p_{i}}\tau+\xi_{ i0}}+(-p_{i})^{n}(1+ap_{i})^{-k}e^{-\frac{1}{2p_{i}}\tau+\eta_{i0}}, \tag{304}\] and \[\hat{f}_{k}=\left|\begin{array}{cccc}\hat{\phi}_{1}^{(1)}(k)&\hat{\phi}_{2 }^{(1)}(k)&\cdots&\hat{\phi}_{N}^{(1)}(k)\\ \hat{\phi}_{1}^{(2)}(k)&\hat{\phi}_{2}^{(2)}(k)&\cdots&\hat{\phi}_{N}^{(2)}(k )\\ \ldots&\ldots&\ldots&\ldots\\ \hat{\phi}_{1}^{(N)}(k)&\hat{\phi}_{2}^{(N)}(k)&\cdots&\hat{\phi}_{n+N}^{(N)} (k)\end{array}\right|,\ \hat{g}_{k}=\left|\begin{array}{cccc}\hat{\phi}_{0}^{(1)}(k)&\hat{\phi}_{1}^ {(1)}(k)&\cdots&\hat{\phi}_{N-1}^{(1)}(k)\\ \hat{\phi}_{0}^{(2)}(k)&\hat{\phi}_{1}^{(2)}(k)&\cdots&\hat{\phi}_{N-1}^{(2)} (k)\\ \ldots&\ldots&\ldots&\ldots\\ \hat{\phi}_{0}^{(N)}(k)&\hat{\phi}_{1}^{(N)}(k)&\cdots&\hat{\phi}_{N-1}^{(N)} (k)\end{array}\right|, \tag{305}\] we know the expansion of the \(\tau\)-functions for the semi-discrete gsG equation with \(\nu=1\) are \[\phi_{n}^{(i)}(k,0)\propto\hat{\phi}_{n+1}^{(i)}(k)-\mathrm{i} \epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\hat{\phi}_{n+1}^{(i)}(k)+O(\epsilon^{ 2}), \tag{306}\] \[\phi_{n}^{(i)}(k,1)\propto\hat{\phi}_{n}^{(i)}(k)-\mathrm{i} \epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\hat{\phi}_{n}^{(i)}(k)+O(\epsilon^{ 2}),\] (307) \[f_{k}=\hat{f}_{k}-\mathrm{i}\epsilon\frac{\mathrm{d}}{\mathrm{d} \tau}\hat{f}_{k}+O(\epsilon^{2}),\ \bar{f}_{k}=\hat{f}_{k}+\mathrm{i}\epsilon\frac{\mathrm{d}}{\mathrm{d}\tau}\hat{f} _{k}+O(\epsilon^{2}),\] (308) \[g_{k}=\hat{g}_{k}+\mathrm{i}\epsilon\frac{\mathrm{d}}{\mathrm{d} \tau}\hat{g}_{k}+O(\epsilon^{2}),\ \bar{g}_{k}=\hat{g}_{k}-\mathrm{i}\epsilon\frac{\partial}{\partial\tau}\hat{g}_{k} +O(\epsilon^{2}). 
\tag{309}\] The semi-discrete dependent transformation and the corresponding variables can be defined, similarly to the continuous case, as \[\hat{u}_{k}=2\left(\ln\frac{\hat{g}_{k}}{\hat{f}_{k}}\right)_{\tau},\ \varphi_{k}=2\ln\frac{\hat{g}_{k}}{\hat{f}_{k}},\ \hat{x}_{k}=2ka-2(\ln\hat{f}_{k}\hat{g}_{k})_{\tau},\ \hat{t}=\tau, \tag{310}\] under the scaling limit \(\epsilon\to 0\). In addition, the semi-discrete SP equation with \(\sigma=1\) can be derived from the semi-discrete gsG equation with \(\nu=1\): \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{u}_{k+1}-\hat{u}_{k}) =\sqrt{4a^{2}+(\hat{u}_{k+1}-\hat{u}_{k})^{2}}\frac{\hat{u}_{k+1}+\hat{u}_{k}}{2}, \tag{311}\] \[\frac{\mathrm{d}}{\mathrm{d}\hat{t}}(\hat{x}_{k+1}-\hat{x}_{k}) =\frac{\hat{u}_{k+1}^{2}-\hat{u}_{k}^{2}}{2}. \tag{312}\] Similarly to the continuous equation (95), solutions of the semi-discrete SP equation with \(\sigma=1\) develop singularities. Here we take the one-soliton solution as an example. The \(\tau\)-functions for the one-soliton solutions are given by \[\hat{f}_{k} =p_{1}(1-ap_{1})^{-k}e^{\frac{1}{2p_{1}}\tau+\xi_{10}}-p_{1}(1+ap_{1})^{-k}e^{-\frac{1}{2p_{1}}\tau+\eta_{10}}\propto 1-\left(\frac{1-ap_{1}}{1+ap_{1}}\right)^{-k}e^{p_{1}^{-1}\tau+\xi_{10}-\eta_{10}}, \tag{313}\] \[\hat{g}_{k} =(1-ap_{1})^{-k}e^{\frac{1}{2p_{1}}\tau+\xi_{10}}+(1+ap_{1})^{-k}e^{-\frac{1}{2p_{1}}\tau+\eta_{10}}\propto 1+\left(\frac{1-ap_{1}}{1+ap_{1}}\right)^{-k}e^{p_{1}^{-1}\tau+\xi_{10}-\eta_{10}}. \tag{314}\] One can express the one-soliton solution as \[\hat{u}_{k} =-\frac{2}{p_{1}}\frac{1}{\sinh\left(p_{1}^{-1}\tau+\xi_{10}-\eta_{10}+k\ln\frac{1+ap_{1}}{1-ap_{1}}\right)}, \tag{315}\] \[\hat{x}_{k} =2ka-\frac{2}{p_{1}}\frac{1}{\tanh\left(p_{1}^{-1}\tau+\xi_{10}-\eta_{10}+k\ln\frac{1+ap_{1}}{1-ap_{1}}\right)}-\frac{2}{p_{1}}. \tag{316}\] When \(|\hat{x}_{k}|\) tends to infinity, \(\hat{u}_{k}\) diverges. A more detailed description of the \(N\)-soliton solutions will be reported elsewhere.
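As a partial numerical check, the one-soliton (315)-(316) can be verified against the \(\hat{x}\)-part (312) of the semi-discrete SP equation with \(\sigma=1\). A sketch; the sample parameter values are illustrative, and the phase constant \(\xi_{10}-\eta_{10}\) is absorbed into a single shift `eta0` introduced here:

```python
import numpy as np

a, p, eta0 = 0.2, 0.6, 0.3          # illustrative parameters with |a*p| < 1
mu = np.log((1 + a * p) / (1 - a * p))

def eta(k, tau):
    # argument of sinh/tanh in (315)-(316); eta0 absorbs xi_10 - eta_10
    return tau / p + eta0 + k * mu

def u_hat(k, tau):
    return -2.0 / (p * np.sinh(eta(k, tau)))

def x_hat(k, tau):
    return 2 * k * a - 2.0 / (p * np.tanh(eta(k, tau))) - 2.0 / p

h = 1e-6
max_res = 0.0
for k in range(0, 4):
    for tau in (0.5, 1.1, 2.0):     # chosen so eta stays away from zero
        # central difference of x_{k+1} - x_k, compared with the RHS of (312)
        lhs = (x_hat(k + 1, tau + h) - x_hat(k, tau + h)
               - x_hat(k + 1, tau - h) + x_hat(k, tau - h)) / (2 * h)
        rhs = (u_hat(k + 1, tau) ** 2 - u_hat(k, tau) ** 2) / 2
        max_res = max(max_res, abs(lhs - rhs))
```

The residual reduces to finite-difference noise, confirming (312) for this solution away from its singular points.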
## 5 One- and two-soliton solutions
### One-soliton solution
The \(\tau\)-functions for the one-soliton solutions of the gsG equation (2) are given by \[f =e^{\xi_{1}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}e^{\eta_{1}}\propto 1+\mathrm{i}\sqrt{\frac{1-\mathrm{i}p}{1+\mathrm{i}p}}e^{-\zeta}, \tag{317}\] \[g =(1-\mathrm{i}p_{1})\,e^{\xi_{1}}+\mathrm{i}\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}\,(1+\mathrm{i}p_{1})\,e^{\eta_{1}}\propto 1+\mathrm{i}\sqrt{\frac{1+\mathrm{i}p}{1-\mathrm{i}p}}e^{-\zeta}, \tag{318}\] with \[\zeta=py+\frac{\tau}{p}+\zeta_{0}. \tag{319}\] Here we set \(p=p_{1}\) and \(\lambda=1\) for simplicity. Thus, we are able to obtain the parametric form of the one-soliton solution \[u=\mathrm{i}\ln\frac{\bar{f}\bar{g}}{fg}=-2\arctan(\sqrt{1+p^{2}}\sinh\zeta)+\pi, \tag{320}\] \[X=x+c_{1}t+2\arctan p=\frac{\zeta}{p}+2\arctan(p\tanh\zeta), \tag{321}\] where \(c_{1}=1+\frac{1}{p^{2}}\) is the velocity of the soliton. By setting \(p\to-p\), one can see that the one-soliton solution presented above is equivalent to the solution in [4]. This relates to the fact that if \(u\) solves equation (2), then so do the functions \(\pm u+2\pi n\) (\(n\): integer). A detailed analysis of the solution is given in [4], so we omit it here. For the semi-discrete gsG equation with \(\nu=1\), the \(\tau\)-functions are \[f_{k}\propto 1+\mathrm{i}\sqrt{\frac{1-\mathrm{i}p}{1+\mathrm{i}p}}\left(\frac{1-ap}{1+ap}\right)^{k}e^{-\theta},\quad g_{k}\propto 1+\mathrm{i}\sqrt{\frac{1+\mathrm{i}p}{1-\mathrm{i}p}}\left(\frac{1-ap}{1+ap}\right)^{k}e^{-\theta}, \tag{322}\] with \(\theta=p^{-1}\tau+\theta_{0}\). Then the one-soliton solution can be expressed as \[u_{k}=-2\arctan(\sqrt{1+p^{2}}\sinh\zeta(k))+\pi, \tag{323}\] \[X_{k}=x_{k}+c_{1}t+2\arctan p=2ka+\frac{1}{p^{2}}t+2\arctan(p\tanh\zeta(k)), \tag{324}\] where \(\zeta(k)=k\ln\frac{1+ap}{1-ap}+\theta\).
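One can check numerically that the semi-discrete one-soliton (323)-(324) satisfies the system (241)-(242) of Theorem 4.1 with \(\lambda=1\). A sketch; the parameter values and the constant `theta0` are illustrative, and \(\delta_{k}=x_{k+1}-x_{k}=X_{k+1}-X_{k}\) is computed from (324) since the terms \(c_{1}t+2\arctan p\) cancel in the difference:

```python
import numpy as np

a, p, theta0 = 0.5, 0.5, 0.2        # illustrative parameters, |a*p| < 1; lambda = 1
q = np.sqrt(1 + p * p)
mu = np.log((1 + a * p) / (1 - a * p))

def zeta(k, tau):
    return k * mu + tau / p + theta0

def u(k, tau):                       # eq. (323)
    return -2 * np.arctan(q * np.sinh(zeta(k, tau))) + np.pi

def X(k, tau):                       # eq. (324), with t = tau since lambda = 1
    return 2 * k * a + tau / p**2 + 2 * np.arctan(p * np.tanh(zeta(k, tau)))

h = 1e-6
res1 = res2 = 0.0
for k in range(-3, 4):
    for tau in (-0.8, 0.0, 1.3):
        # eq. (241): d/dtau (u_{k+1} - u_k) = Delta_k * sin((u_{k+1}+u_k)/2)
        du = (u(k + 1, tau + h) - u(k, tau + h)
              - u(k + 1, tau - h) + u(k, tau - h)) / (2 * h)
        Delta = np.sqrt(4 * a * a + 4 * np.sin((u(k + 1, tau) - u(k, tau)) / 2) ** 2)
        res1 = max(res1, abs(du - Delta * np.sin((u(k + 1, tau) + u(k, tau)) / 2)))
        # eq. (242): d/dtau delta_k = cos(u_k) - cos(u_{k+1})
        ddelta = (X(k + 1, tau + h) - X(k, tau + h)
                  - X(k + 1, tau - h) + X(k, tau - h)) / (2 * h)
        res2 = max(res2, abs(ddelta - (np.cos(u(k, tau)) - np.cos(u(k + 1, tau)))))
```

Both residuals stay at the level of the finite-difference error, so (323)-(324) solve the system exactly.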
By taking \(a=0.5\), Figure 1 displays kink and anti-kink solutions for the semi-discrete gsG equation, and Figure 2 shows the solutions for different values of \(p\). Figure 3 compares the kink and anti-kink solutions of the semi-discrete gsG equation to those of the gsG equation. Similar to the continuous case, if \(p<0\) (\(p>0\)), the solution \(u_{k}\) represents a kink (anti-kink) solution. The value of \(|p|\) has a positive correlation with the amplitude of \(v_{k}\equiv\frac{u_{k+1}-u_{k}}{\delta_{k}}\). For the fully discrete gsG equation with \(\nu=1\), the \(\tau\)-functions are \[f_{k}^{l}\propto 1+\mathrm{i}\sqrt{\frac{1-\mathrm{i}p}{1+\mathrm{i}p}}\left(\frac{1-ap}{1+ap}\right)^{k}\left(\frac{1-bp^{-1}}{1+bp^{-1}}\right)^{l},\ g_{k}^{l}\propto 1+\mathrm{i}\sqrt{\frac{1+\mathrm{i}p}{1-\mathrm{i}p}}\left(\frac{1-ap}{1+ap}\right)^{k}\left(\frac{1-bp^{-1}}{1+bp^{-1}}\right)^{l}. \tag{325}\] Then the one-soliton solution can be expressed as \[u_{k}^{l} =-2\arctan(\sqrt{1+p^{2}}\sinh\zeta(k,l))+\pi, \tag{326}\] \[x_{k} =2ka-2lb+2\arctan(p\tanh\zeta(k,l))-2\arctan p, \tag{327}\] where \(\zeta(k,l)=k\ln\frac{1+ap}{1-ap}+l\ln\frac{1+bp^{-1}}{1-bp^{-1}}\). For \(a=0.5\) and \(b=0.1\), Figure 4 shows kink and anti-kink solutions for the fully discrete gsG equation with \(\nu=1\). For \(\nu=-1\), one-soliton solutions of the fully discrete gsG equation may be multi-valued. The \(\tau\)-functions can be expressed as \[f_{k}^{l}\propto 1+\mathrm{i}\left(\frac{1-ap}{1+ap}\right)^{k}\left(\frac{1-bp^{-1}}{1+bp^{-1}}\right)^{l},\ g_{k}^{l}\propto 1+\mathrm{i}\frac{1+p}{1-p}\left(\frac{1-ap}{1+ap}\right)^{k}\left(\frac{1-bp^{-1}}{1+bp^{-1}}\right)^{l}.
\tag{328}\] The one-soliton solution can therefore be written as \[u_{k}^{l} =-2\arctan(\sinh\zeta(k,l)-p\cosh\zeta(k,l))+\pi, \tag{329}\] \[x_{k}^{l} =2ka+2lb+\ln\left(\frac{1+p^{2}}{\left(1-p\right)^{2}}-\frac{2p}{\left(1-p\right)^{2}}\tanh\zeta(k,l)\right), \tag{330}\] where \(\zeta(k,l)=k\ln\frac{1+ap}{1-ap}+l\ln\frac{1+bp^{-1}}{1-bp^{-1}}\).
Figure 4: Kink and anti-kink solution \(u_{k}^{l}\) for the fully discrete gsG equation with \(\nu=1\). (a) \(p=-0.5\), (b) \(p=0.5\).
Figure 3: Comparison between the kink and anti-kink solutions of the gsG equation (solid line) and the semi-discrete gsG equation (dots) at \(t=0\). (a) \(p=-0.5\), (b) \(p=0.5\).
Figure 5 shows one-soliton solutions to the fully discrete gsG equation with \(\nu=-1\) for \(a=0.1\) and \(b=0.1\). Figure 5(a) illustrates that when \(p=-0.4\), \(u\) is a single-valued kink solution because \(x_{k+1}^{l}-x_{k}^{l}>0\). Figure 5(b) presents the irregular kink solution reported in [4], which exhibits a three-valued characteristic at \(p=-0.9\), whereas for \(p=-1.3\), \(u\) becomes a loop soliton (see Figure 5(c)).
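The single-valued versus multi-valued behavior seen in Figure 5 can be probed directly from (330): for \(a=b=0.1\) the mesh \(x_{k+1}^{l}-x_{k}^{l}\) stays positive at \(p=-0.4\) but changes sign at \(p=-1.3\). A minimal numerical sketch (here \(l=0\) is fixed for simplicity):

```python
import numpy as np

a, b, l = 0.1, 0.1, 0

def x(k, p):
    # eq. (330) with zeta(k, l) = k*ln((1+ap)/(1-ap)) + l*ln((1+b/p)/(1-b/p))
    zeta = k * np.log((1 + a * p) / (1 - a * p)) + l * np.log((1 + b / p) / (1 - b / p))
    return (2 * k * a + 2 * l * b
            + np.log((1 + p * p) / (1 - p) ** 2 - 2 * p / (1 - p) ** 2 * np.tanh(zeta)))

ks = np.arange(-30, 31)

# p = -0.4: the mesh x_{k+1} - x_k stays positive, so u is a single-valued kink
dx_regular = np.diff(x(ks, -0.4))
# p = -1.3: the mesh changes sign, producing the multi-valued (loop) profile
dx_loop = np.diff(x(ks, -1.3))

regular_ok = bool(np.all(dx_regular > 0))
loop_folds = bool(np.any(dx_loop < 0))
```

This reproduces the regimes of Figures 5(a) and 5(c) without plotting.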
### Two-soliton solutions
The \(\tau\)-functions for the two-soliton solutions of the continuous and discrete gsG equations are given by **(I) the gsG equation with \(\nu=1\)** \[f\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}e^{-\zeta_{1}}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}e^{-\zeta_{2}}+\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}e^{-\zeta_{1}-\zeta_{2}}, \tag{331}\] \[g\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}e^{-\zeta_{1}}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}e^{-\zeta_{2}}+\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}e^{-\zeta_{1}-\zeta_{2}}, \tag{332}\] with \[\zeta_{i}=p_{i}y+\frac{\tau}{p_{i}}+\zeta_{i0},\ i=1,2, \tag{333}\] **(II) the semi-discrete gsG equation with \(\nu=1\)** \[f_{k}\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}z_{1}^{k}e^{-\theta_{1}}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}z_{2}^{k}e^{-\theta_{2}}+\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}(z_{1}z_{2})^{k}e^{-\theta_{1}-\theta_{2}}, \tag{334}\] \[g_{k}\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}z_{1}^{k}e^{-\theta_{1}}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}z_{2}^{k}e^{-\theta_{2}}+\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}(z_{1}z_{2})^{k}e^{-\theta_{1}-\theta_{2}}, \tag{335}\] with \[z_{i}=\frac{1-ap_{i}}{1+ap_{i}},\ \theta_{i}=\frac{\tau}{p_{i}}+\theta_{i0},\ i=1,2, \tag{336}\] **(III) the fully discrete gsG equation
with \(\nu=1\)** \[f_{k}^{l}\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}z_{1}^{k}w_{1}^{l}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}z_{2}^{k}w_{2}^{l}+\sqrt{\frac{1-\mathrm{i}p_{1}}{1+\mathrm{i}p_{1}}}\sqrt{\frac{1-\mathrm{i}p_{2}}{1+\mathrm{i}p_{2}}}(z_{1}z_{2})^{k}(w_{1}w_{2})^{l}, \tag{337}\] \[g_{k}^{l}\propto 1+\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}z_{1}^{k}w_{1}^{l}-\mathrm{i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}z_{2}^{k}w_{2}^{l}+\sqrt{\frac{1+\mathrm{i}p_{1}}{1-\mathrm{i}p_{1}}}\sqrt{\frac{1+\mathrm{i}p_{2}}{1-\mathrm{i}p_{2}}}(z_{1}z_{2})^{k}(w_{1}w_{2})^{l}, \tag{338}\] with \[z_{i}=\frac{1-ap_{i}}{1+ap_{i}},\ w_{i}=\frac{1-bp_{i}^{-1}}{1+bp_{i}^{-1}},\ i=1,2, \tag{339}\]
Figure 5: Regular kink, irregular kink and loop soliton solution \(u_{k}^{l}\) for the fully discrete gsG equation with \(\nu=-1\). (a) \(p=-0.4\), (b) \(p=-0.9\), (c) \(p=-1.3\).
**(IV) the fully discrete gsG equation with \(\nu=-1\)** \[f_{k}^{l}\propto 1+{\rm i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}z_{1}^{k}w_{1}^{l}-{\rm i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}z_{2}^{k}w_{2}^{l}+(z_{1}z_{2})^{k}(w_{1}w_{2})^{l}, \tag{340}\] \[g_{k}^{l}\propto 1+{\rm i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\frac{1+p_{1}}{1-p_{1}}z_{1}^{k}w_{1}^{l}-{\rm i}\frac{p_{1}+p_{2}}{p_{2}-p_{1}}\frac{1+p_{2}}{1-p_{2}}z_{2}^{k}w_{2}^{l}+\frac{1+p_{1}}{1-p_{1}}\frac{1+p_{2}}{1-p_{2}}(z_{1}z_{2})^{k}(w_{1}w_{2})^{l}, \tag{341}\] with \[z_{i}=\frac{1-ap_{i}}{1+ap_{i}},\ w_{i}=\frac{1-bp_{i}^{-1}}{1+bp_{i}^{-1}},\ i=1,2. \tag{342}\] In [3, 4], collisions between several types of one-soliton solutions for the continuous gsG equation with \(\nu=\pm 1\) are shown. Collisions of solutions are similar in the discrete case, so we omit them here. Instead, we demonstrate a different type of solution, known as breathers.
As pointed out in [3, 4] and Theorem 2.1, if we set \(p_{1}=\tilde{p}_{2}\), one can obtain breather solutions. Figure 6 displays the comparison between the breather solutions of the gsG and the semi-discrete gsG equation with \(\nu=1\) for \(a=0.1,\ p_{1}=-0.3+0.5{\rm i},\ p_{2}=-0.3-0.5{\rm i}\), from which one can see that the breather solution of the semi-discrete gsG equation agrees very well with that of the gsG equation. Figure 7 shows that such breather solutions also appear in the fully discrete gsG equations with \(\nu=\pm 1\).
## 6 Conclusion
In this paper, we have successfully proposed integrable semi-discrete and fully discrete analogues of a generalized sine-Gordon equation. Determinant formulations for the \(N\)-soliton solutions, encompassing multi-kink solitons and multi-breather solutions, have been derived for both the continuous and discrete versions of the gsG equations.
Figure 6: Comparison between the breather solutions of the gsG (solid line) and the semi-discrete gsG equation (dots) with \(\nu=1\) for \(a=0.1,\ p_{1}=-0.3+0.5{\rm i},\ p_{2}=-0.3-0.5{\rm i}\).
We have also investigated reductions from the gsG equation to the sG equation and the SP equation, both in continuous and discrete cases. Notably, we have demonstrated the essential role of the Bäcklund transformation of bilinear equations and its parameters in the construction and reduction processes. However, certain aspects still remain unknown. Firstly, it is crucial to determine the Lax pairs associated with the semi-discrete and fully discrete gsG equations presented in this study. Given the close relation between the bilinear forms of these equations and those of the 2DTL equation, derived from the discrete KP equation, it is natural to explore their connections within the context of Lax pairs. Identification of Lax pairs would enable further investigations into these discrete gsG equations.
Recent attention has been focused on multi-component integrable systems, such as the integrable vector sine-Gordon equation [37, 38, 39, 40] and the multi-component short pulse equation [8, 41]. Given that the gsG equation lies between the SP equation and the sG equation, it is reasonable to propose multi-component gsG equations by establishing connections with the SP equation and the sG equation. These intriguing questions will be addressed in future studies.

## Acknowledgement

G. Yu is supported by the National Natural Science Foundation of China (Grant no. 12175155), the Shanghai Frontier Research Institute for Modern Analysis, and the Fundamental Research Funds for the Central Universities. B.F. Feng's work is supported by the U.S. Department of Defense (DoD), Air Force Office of Scientific Research (AFOSR) under grant No. W911NF2010276.

## Appendix A Proof of Theorem 3.2

Proof.: We can rewrite (134) and (137)-(139) as \[\frac{1}{b}\sinh\frac{\varphi_{k}^{l+1}-\varphi_{k}^{l}}{2}=\lambda\sin\frac{u_{k}^{l+1}+u_{k}^{l}}{2}, \tag{343}\] \[\frac{1}{b}\cosh\frac{\varphi_{k}^{l+1}-\varphi_{k}^{l}}{2}=\frac{\sqrt{b^{2}\lambda^{2}+1}}{b}\sin\frac{\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+2b\lambda+2\omega_{1}}{2},\ \sin\omega_{1}=\frac{1}{\sqrt{b^{2}\lambda^{2}+1}}, \tag{344}\] \[a\sinh\frac{\varphi_{k}^{l}+\varphi_{k+1}^{l}}{2}=d\sin\frac{u_{k+1}^{l}-u_{k}^{l}}{2}, \tag{345}\] \[a\cosh\frac{\varphi_{k}^{l}+\varphi_{k+1}^{l}}{2}=\sqrt{a^{2}+\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-2a\lambda^{-1}+2\omega_{2}}{2},\ \sin\omega_{2}=\frac{a}{\sqrt{a^{2}+\lambda^{2}}}.
\tag{346}\] Shifting \(k\to k+1\) in (343) and then adding the result to, and subtracting it from, (343), one obtains \[\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\] \[=bd\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}, \tag{347}\] \[\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\] \[=bd\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}. \tag{348}\] Similarly, from equations (344)-(346), we arrive at \[\sqrt{1+b^{2}\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{4}\cos\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{4}\] \[=\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}, \tag{349}\] \[\sqrt{1+b^{2}\lambda^{2}}\cos\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{4}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{4}\] \[=\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}, \tag{350}\] \[a\sinh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\] \[=d\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}\cos\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{351}\]
\[a\cosh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}\] \[=d\cos\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{4}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{352}\] and \[\sqrt{a^{2}+\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-4a\lambda^{-1}+4\omega_{2}}{4}\cos\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{4}\] \[=a\cosh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}, \tag{353}\] \[\sqrt{a^{2}+\lambda^{2}}\cos\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-4a\lambda^{-1}+4\omega_{2}}{4}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{4}\] \[=a\sinh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{4}, \tag{354}\] respectively. Note that equations (347) and (352) give \[\frac{1}{ab}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\] \[=\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}\cosh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}.
\tag{355}\] Equations (349) and (353) lead to \[a\sqrt{1+b^{2}\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{4}\cosh\frac{\varphi_{k+1}^{l+1}+\varphi_{k+1}^{l}+\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}\] \[=\sqrt{a^{2}+\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-4a\lambda^{-1}+4\omega_{2}}{4}\cosh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{4}. \tag{356}\] Substituting (356) into (355), we have \[\frac{1}{b}\sin\frac{u_{k+1}^{l+1}-u_{k+1}^{l}-u_{k}^{l+1}+u_{k}^{l}}{4}=\tilde{\Delta}_{k}^{l}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{4}, \tag{357}\] with \[\tilde{\Delta}_{k}^{l}=\frac{\sqrt{a^{2}+\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-4a\lambda^{-1}+4\omega_{2}}{4}}{\sqrt{1+b^{2}\lambda^{2}}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{4}}.
\tag{358}\] Subsequently, by multiplying equations (347) and (348), and equations (349) and (350), we obtain \[b^{2}\lambda^{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}\] \[=\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{2}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{2}, \tag{359}\] \[(1+b^{2}\lambda^{2})\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{2}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{2}\] \[=\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}+\varphi_{k}^{l+1}-\varphi_{k}^{l}}{2}\sinh\frac{\varphi_{k+1}^{l+1}-\varphi_{k+1}^{l}-\varphi_{k}^{l+1}+\varphi_{k}^{l}}{2}, \tag{360}\] which can be recast to \[(1+b^{2}\lambda^{2})\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}+\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+4b\lambda+4\omega_{1}}{2}\sin\frac{\tilde{x}_{k+1}^{l+1}-\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l+1}+\tilde{x}_{k}^{l}}{2}\] \[=b^{2}\lambda^{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}+u_{k}^{l+1}+u_{k}^{l}}{2}\sin\frac{u_{k+1}^{l+1}+u_{k+1}^{l}-u_{k}^{l+1}-u_{k}^{l}}{2}. \tag{361}\] Then we obtain the fully discrete gsG equation with \(\nu=1\). In addition, from (343)-(346), we know \[\tilde{J}_{k}^{l}=\left(\frac{1}{b}\cos\frac{\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+2b\lambda}{2}+\lambda\sin\frac{\tilde{x}_{k}^{l+1}-\tilde{x}_{k}^{l}+2b\lambda}{2}\right)^{2}-\lambda^{2}\sin^{2}\frac{u_{k}^{l+1}+u_{k}^{l}}{2}=\frac{1}{b^{2}}, \tag{362}\] \[\tilde{I}_{k}^{l}=\left(a\cos\frac{\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-2a\lambda^{-1}}{2}+\lambda\sin\frac{\tilde{x}_{k+1}^{l}-\tilde{x}_{k}^{l}-2a\lambda^{-1}}{2}\right)^{2}-d^{2}\sin^{2}\frac{u_{k+1}^{l}-u_{k}^{l}}{2}=a^{2}. \tag{363}\] \(\tilde{I}_{k}^{l}\) and \(\tilde{J}_{k}^{l}\) are conserved quantities because \(a^{2}\) and \(\frac{1}{b^{2}}\) are constants.
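The proof above rests on two elementary facts: the hyperbolic sum-to-product identities used when the shifted relations are added and subtracted, and the phase-shift identity behind the conserved quantity \(\tilde{J}_{k}^{l}\) in (362), namely \(\frac{\sqrt{1+b^{2}\lambda^{2}}}{b}\sin(\theta+\omega_{1})=\lambda\sin\theta+\frac{1}{b}\cos\theta\) on the branch with \(\cos\omega_{1}=b\lambda/\sqrt{1+b^{2}\lambda^{2}}>0\). A quick numeric spot-check (the values of \(A\), \(B\), \(b\), \(\lambda\), \(\theta\) below are arbitrary assumptions):

```python
import math

A, B = 0.7, -0.3
# sum-to-product identities used when adding/subtracting shifted relations
lhs1 = math.sinh(A) + math.sinh(B)
rhs1 = 2 * math.sinh((A + B) / 2) * math.cosh((A - B) / 2)
lhs2 = math.cosh(A) - math.cosh(B)
rhs2 = 2 * math.sinh((A + B) / 2) * math.sinh((A - B) / 2)

# phase-shift identity behind (362): with sin(w1) = 1/sqrt(1 + b^2 lam^2)
# (so cos(w1) = b*lam/sqrt(1 + b^2 lam^2) on this branch),
# sqrt(1 + b^2 lam^2)/b * sin(theta + w1) = lam*sin(theta) + cos(theta)/b
b, lam, theta = 0.2, 1.5, 0.9
w1 = math.asin(1 / math.sqrt(1 + b**2 * lam**2))
lhs3 = math.sqrt(1 + b**2 * lam**2) / b * math.sin(theta + w1)
rhs3 = lam * math.sin(theta) + math.cos(theta) / b
```

Both sides agree to floating-point precision, which is exactly the cancellation that makes \(\tilde{J}_{k}^{l}=1/b^{2}\) an identity.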
2301.02309
The cluster decomposition of the configurational energy of multicomponent alloys
Lattice models parameterized using first-principles calculations constitute an effective framework to simulate the thermodynamic behavior of physical systems. The cluster expansion method is a flexible lattice-based method used extensively in the study of multicomponent alloys. Yet despite its prevalent use, a well-defined understanding of expansion terms has remained elusive. In this letter, we introduce the cluster decomposition as a unique and basis-agnostic decomposition of any general function of the atomic configuration in a crystal. We demonstrate that cluster expansions constructed from arbitrary orthonormal basis sets are all representations of the same cluster decomposition. We show how the norms of expansion coefficients associated with the same crystallographic orbit are invariant to changes between orthonormal bases. Based on its uniqueness and orthogonality properties, we identify the cluster decomposition as an invariant ANOVA decomposition. We leverage these results to illustrate how functional analysis of variance and sensitivity analysis can be used to directly interpret interactions among species and gain insight into computed thermodynamic properties. The work we present in this letter opens new directions for parameter estimation, interpretation, and use of applied lattice models on well-established mathematical and statistical grounds.
Luis Barroso-Luque, Gerbrand Ceder
2023-01-05T21:47:05Z
http://arxiv.org/abs/2301.02309v1
# The cluster decomposition of the configurational energy of multicomponent alloys ###### Abstract Lattice models parameterized using first-principles calculations constitute an effective framework to simulate the thermodynamic behavior of physical systems. The cluster expansion method is a flexible lattice-based method used extensively in the study of multicomponent alloys. Yet despite its prevalent use, a well-defined understanding of expansion terms has remained elusive. In this letter, we introduce the _cluster decomposition_ as a unique and basis-agnostic decomposition of any general function of the atomic configuration in a crystal. We demonstrate that cluster expansions constructed from arbitrary orthonormal basis sets are all representations of the same cluster decomposition. We show how the norms of expansion coefficients associated with the same crystallographic orbit are invariant to changes between orthonormal bases. Based on its uniqueness and orthogonality properties, we identify the cluster decomposition as an invariant ANOVA decomposition. We leverage these results to illustrate how functional analysis of variance and sensitivity analysis can be used to directly interpret interactions among species and gain insight into computed thermodynamic properties. The work we present in this letter opens new directions for parameter estimation, interpretation, and use of applied lattice models on well-established mathematical and statistical grounds. Computational methods based on lattice models are used extensively in the applied physical sciences. Parameterized lattice models are actively used in materials science to study metallic alloys [1; 2; 3], semi-conductors [4; 5], super-ionic conductors [6; 7], battery electrodes [8], and surface catalysis [9]. The cluster expansion (CE) method constitutes a mathematical formalism for the representation and parameterization of generalized lattice models [10; 11; 12]. 
The CE method coupled with Monte Carlo (MC) sampling has become an established technique to compute thermodynamic properties of multi-component crystals [13; 14]. Recent advancements have introduced generative models as alternative ways to compute free energies [15; 16]. Additionally, the underlying mathematical structure and formalism of the CE have been used to develop methodological extensions to parameterize functions of continuous degrees of freedom [17; 18; 19; 20] that can also be used to represent vector and tensor material properties [21; 22], and even capture full potential energy landscapes [23]. The core of the CE method is the expansion of a function of configurational variables distributed on a lattice. The expansion is expressed in terms of _correlation functions_, which are constructed by averaging over functions that act on symmetrically equivalent clusters of sites and so ensure that the symmetries of the physical system are respected. Formally, the mathematical formalism of the CE constitutes a harmonic expansion of functions over a tensor product domain [24]. Using correlation functions that operate over small subsets of variables permits tractable parameterization and calculations of complex properties. Intuitively, such a formalism leads to expansions that are generalizations of the Ising model [25; 26], \[H(\mathbf{\sigma})=\sum_{\beta}J_{\beta}\sum_{\alpha\in\beta}\prod_{i\in[N]}\phi_{ \alpha_{i}}(\mathbf{\sigma}_{i}), \tag{1}\] where \(\mathbf{\sigma}\) is a string of occupation variables that represent the chemical species residing on each of \(N\) sites; \(\alpha\) are multi-indices of length equal to \(N\); \(\beta\) are sets of symmetrically equivalent multi-indices; \(J_{\beta}\) are expansion coefficients. The site functions \(\phi_{\alpha_{i}}\) are taken from basis sets spanning the single variable function space over the corresponding occupation variable \(\mathbf{\sigma}_{i}\). 
The product of site functions over all sites is referred to as a product function or a _cluster basis function_[14], which we write compactly as \(\Phi_{\alpha}\). The resemblance to the Ising model is evident when considering binary occupation variables; for which the monomials \(\phi_{0}=1\) and \(\phi_{1}(\mathbf{\sigma}_{i})=\pm 1\) can be used as a site basis. For the case of an arbitrary number of components \(\mathbf{\sigma}_{i}\in\Omega_{i}\), the requirements are simply that the constant function \(\phi_{0}=1\) is included and that the basis is orthonormal under the following inner product [27; 28], \[\langle\phi_{j},\phi_{k}\rangle=\sum_{\mathbf{\sigma}_{i}\in\Omega_{i}}\rho_{i}( \mathbf{\sigma}_{i})\phi_{j}(\mathbf{\sigma}_{i})\phi_{k}(\mathbf{\sigma}_{i}) \tag{2}\] where \(\rho_{i}(\mathbf{\sigma}_{i})\) is an a-priori probability measure over the allowed values of \(\mathbf{\sigma}_{i}\in\Omega_{i}\). The inner product in Equation 2 can be interpreted as the expected value in the non-interacting limit. A uniform probability measure is most often used, but generally, it can be equal to the concentration of chemical species in the non-interacting limit [28]. We will call a site basis that satisfies the above two requirements a _standard site basis_. By including \(\phi_{0}=1\) in all site bases, Equation 1 is a _hierarchical_ expansion, where each function \(\Phi_{\alpha}\) has as an effective domain the occupation variables of a cluster of sites \(S\) given by the _support_ of its multi-index \(\text{supp}(\alpha)\). 
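To make Equation 2 concrete, the sketch below builds a standard site basis for a ternary site by Gram-Schmidt orthonormalization and verifies orthonormality under the weighted inner product. The monomial starting set \(\{1,\sigma,\sigma^{2}\}\) and the uniform measure \(\rho_{i}=1/3\) are assumed choices; any rotation of the non-constant functions would serve equally well.

```python
import numpy as np

# Build a standard site basis for a ternary site (Omega_i = {0, 1, 2}) by
# Gram-Schmidt on the monomials {1, s, s^2} under the uniform measure
# rho_i = 1/3, following the inner product of Eq. (2).
s = np.arange(3).astype(float)
rho = np.full(3, 1 / 3)
V = np.stack([np.ones(3), s, s**2])
phi = []
for v in V:
    for u in phi:
        v = v - np.sum(rho * u * v) * u      # project out earlier functions
    phi.append(v / np.sqrt(np.sum(rho * v * v)))
phi = np.array(phi)                          # rows: phi_0 = 1, phi_1, phi_2

G = phi * rho @ phi.T                        # Gram matrix under Eq. (2)
```

The Gram matrix comes out as the identity, and the first basis function is the constant \(\phi_{0}=1\), so both requirements for a standard site basis are met.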
Leveraging this hierarchical framework, the cluster functions can be written solely in terms of clusters of sites \(S=\text{supp}(\alpha)\) and the nonzero entries of the multi-indices, which we call _contracted multi-indices_ \(\widehat{\alpha}\), \[\Phi_{\alpha}(\mathbf{\sigma})=\Phi_{\widehat{\alpha}}(\mathbf{\sigma}_{S})=\prod_{i=1}^{|S|}\phi_{\widehat{\alpha}_{i}}(\mathbf{\sigma}_{S_{i}}) \tag{3}\] Expression 3 makes the effective domain of cluster functions explicit. Additionally, Equation 3 separates the functional form of a cluster function from the particular cluster of sites it acts on: cluster functions that operate on symmetrically equivalent clusters have the same functional form (indicated by \(\widehat{\alpha}\)) but differ in their effective domain (indicated by \(S\)). We refer to cluster functions that are constructed using a standard site basis as _Fourier cluster functions_, and a resulting expansion as a _Fourier CE_. The requirement that site basis functions be orthonormal ensures that the resulting set of cluster functions is itself orthonormal [10; 24]. However, orthonormality is not a strict requirement, since a set of cluster functions \(\Phi_{\alpha}\) based on any site basis will span the space of functions over configuration [24]. In fact, there exist many applications of CE methodology that use non-orthogonal basis sets [29; 30; 31; 32; 33]. Insightful connections to renowned classical lattice models exist for both non-orthogonal and Fourier CEs. A binary Fourier CE is a direct generalization of the Ising model to higher-degree interactions. Similarly, a binary CE using indicator functions \(\phi_{1}=\mathbf{1}_{\sigma}\) is a generalization of the _lattice gas_ model, or a generalization of the Potts model when an overcomplete frame representation is used [32; 33].
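A minimal sketch of the product structure in Equation 3: cluster basis functions on a two-site ternary cluster are built as products of site functions, and the resulting set of nine functions is verified to be orthonormal under the uniform product measure. The trigonometric site basis used here is one assumed choice of standard basis.

```python
import numpy as np
from itertools import product

# Trigonometric standard basis on a ternary site (orthonormal under the
# uniform measure rho = 1/3); this particular choice is an assumption.
def phi(j, s):
    return [1.0,
            np.sqrt(2) * np.cos(2 * np.pi * s / 3),
            np.sqrt(2) * np.sin(2 * np.pi * s / 3)][j]

# Cluster basis function on a 2-site cluster, Eq. (3): a product of site
# functions indexed by the multi-index alpha = (a1, a2).
def Phi(alpha, sigma):
    return np.prod([phi(a, s) for a, s in zip(alpha, sigma)])

alphas = list(product(range(3), repeat=2))     # all multi-indices
configs = list(product(range(3), repeat=2))    # all 9 configurations
M = np.array([[Phi(a, c) for c in configs] for a in alphas])
G = M * (1 / 9) @ M.T      # inner products under the uniform product measure
```

Because the site functions are orthonormal site by site, the tensor-product set inherits orthonormality, which is the content of the remark following Equation 3.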
Such connections to classical lattice models have been used by practitioners to evaluate the spatial decay of interactions [28; 34] and to analyze the effects of specific species and their interactions on the total energy [7; 12; 33] by examining the fitted expansion coefficients. However, for complex systems with three or more components, coefficient values depend non-trivially on the particular choice among numerous possible basis sets,[35] and direct interpretation of coefficients leaning on intuition from the Ising or lattice gas models can be precarious and ambiguous. In this letter, we show that a Fourier CE can be expressed as a unique basis agnostic decomposition which we call the _cluster decomposition_. The cluster decomposition is related to well-established expansions of random variables known as ANOVA or Sobol decompositions [36] among other names [37; 38]. Moreover, the cluster decomposition has analytic properties that lead to a deeper understanding of the structure and interpretation of expansion terms. We then illustrate a practical use case of the cluster decomposition based on related concepts from functional analysis of variance (fANOVA) and sensitivity analysis (SA) as a means to gain mathematically rigorous insight from CE and MC simulations of real materials. Let us first motivate the search for a basis-agnostic representation of a CE from a geometric observation. By virtue of their aforementioned properties, it follows that standard site basis sets are related by rotations about the hyperplane normal to the constant function \(\phi_{0}=1\). This observation is illustrated graphically for a ternary site space in Figure 1a. Any standard site basis must include two orthogonal basis functions that lie on the plane orthogonal to \(\phi_{0}\). 
The geometry of standard site bases implies that the change of basis matrix (CBM) \(M\) between two resulting Fourier cluster basis sets is given by products of site basis rotations, \[M_{\gamma,\alpha}=\left(\prod_{i}^{N}R_{\alpha_{i},\gamma_{i}}\right)\delta_{ \text{supp}(\gamma)\,\text{supp}(\alpha)} \tag{4}\] where \(\gamma\) and \(\alpha\) are multi-indices for two Fourier cluster basis sets. The CBM is block-diagonal--any term connecting cluster functions of symmetrically distinct clusters are zero. Further, since the CBM is also unitary, it follows that the blocks themselves are unitary, implying that the norm of expansion coefficients within each block is conserved. A visualization of the block-diagonal CBM between two Fourier cluster bases of a ternary system including up to quadruplet terms is shown in Figure 1b. To continue, we define _reduced correlation functions_ as the average of cluster functions over symmetrically equivalent contracted multi-indices \(\widehat{\alpha}\), \[\widehat{\Theta}_{\beta}(\mathbf{\sigma}_{S})=\frac{1}{\widehat{m}_{\beta}}\sum_{ \widehat{\alpha}\in\widehat{\beta}}\Phi_{\widehat{\alpha}}(\mathbf{\sigma}_{S}), \tag{5}\] where \(\widehat{\beta}\) is a set (orbit) of symmetrically equivalent contracted multi-indices \(\widehat{\alpha}\), i.e. symmetrically equivalent site function permutations over a fixed cluster of sites \(S\). \(\widehat{m}_{\beta}\) is the total number of contracted multi-indices in \(\widehat{\beta}\). Figure 1: (a) Geometry of standard site basis sets for a ternary site space. Two standard site basis sets related by a rotation of \(2\pi/3\) are shown. Both basis sets include the constant \(\phi_{0}\). (b) Change of basis matrix relating the two different sets of Fourier cluster basis functions up to quadruplets constructed using the site basis sets in (a). 
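The block structure of the CBM can be checked numerically. In the sketch below, two standard ternary site bases related by a rotation fixing \(\phi_{0}\) are constructed (the rotation angle is an arbitrary assumption), and the induced change of basis for two-site cluster functions, the Kronecker product \(R\otimes R\), is verified to be unitary, so coefficient norms within each block are conserved as claimed.

```python
import numpy as np

# Two standard ternary site bases related by a rotation about phi_0 = 1.
# Rows of B0: phi_0, phi_1, phi_2 evaluated on s = 0, 1, 2 (trigonometric
# basis); the rotation angle t = 0.7 is an arbitrary assumption.
s = 2 * np.pi * np.arange(3) / 3
B0 = np.stack([np.ones(3), np.sqrt(2) * np.cos(s), np.sqrt(2) * np.sin(s)])
t = 0.7
R = np.array([[1, 0, 0],
              [0, np.cos(t), -np.sin(t)],
              [0, np.sin(t),  np.cos(t)]])   # fixes the constant function
B1 = R @ B0                                  # the rotated standard basis

# Site-level rotations compose into the CBM for two-site cluster functions
# (Eq. 4); for a fixed pair cluster this is the Kronecker product R (x) R.
CBM = np.kron(R, R)
```

Since `CBM` is orthogonal, any vector of expansion coefficients for the pair block keeps its Euclidean norm under the basis change, which is the invariance used to define the effective cluster weights below.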
Using reduced correlation functions we rewrite Equation 1 as follows, \[H(\mathbf{\sigma})=\sum_{B}\sum_{\widehat{\beta}\in\widehat{L}(B)}\widehat{m}_{\beta}J_{\beta}\sum_{S\in B}\widehat{\Theta}_{\beta}(\mathbf{\sigma}_{S}), \tag{6}\] where \(B\) are orbits of symmetrically equivalent clusters of sites \(S\subseteq[N]\); and \(\widehat{L}(B)\) are sets of orbits of contracted multi-indices \(\widehat{\beta}\), which represent symmetrically distinct labelings over the sites in the clusters \(S\in B\). The two inner sums in Equation 6 are independent and can be re-arranged to obtain a far more physically intuitive many-body expansion as follows, \[H(\mathbf{\sigma})=\sum_{B}\sum_{S\in B}\widehat{H}_{B}(\mathbf{\sigma}_{S}) \tag{7}\] where the \(n\)-body terms \(\widehat{H}_{B}(\mathbf{\sigma}_{S})\) account for the energy originating from the interactions amongst the species residing on the clusters \(S\in B\). For clusters \(S\) with more than one site, \(|S|>1\), we call these terms _cluster interactions_. Following the original CE formalism, Equation 7 can also be written as a density by using averages of cluster interactions \(\widehat{H}_{B}\) over symmetrically equivalent clusters \(S\in B\), \[H(\mathbf{\sigma}) =N\sum_{B}m_{B}\left(\frac{1}{m_{B}N}\sum_{S\in B}\widehat{H}_{B}(\mathbf{\sigma}_{S})\right)\] \[=N\sum_{B}m_{B}H_{B}(\mathbf{\sigma}), \tag{8}\] we will refer to the terms \(H_{B}\) with \(|S|>1\) for all \(S\in B\) as _mean cluster interactions_, and as _composition effects_ for point clusters (\(|S|=1\)). Equations 7 and 8 are the _cluster decomposition_ of the Hamiltonian \(H(\mathbf{\sigma})\). Note that although such an expression can be obtained for any choice of site basis--orthogonal or not--a true cluster decomposition is obtained from a Fourier CE only. This distinction is fundamental since CE expansions using non-orthogonal basis sets will not have the analytical properties that we describe in the remainder of this letter.
It follows directly from our previous analysis of the geometry of Fourier cluster functions that cluster interactions are invariant to a change of standard basis, i.e. they are invariant to arbitrary rotations orthogonal to \(\phi_{0}\). As a result, the norm of the cluster interactions, \[||\widehat{H}_{B}||_{2}^{2}=\sum_{\widehat{\beta}\in\widehat{L}(B)}\widehat{m}_{\beta}J_{\beta}^{2} \tag{9}\] is invariant to the choice of standard site basis. In line with CE and discrete Fourier expansion terminology, we will call the squared norm of a cluster interaction \(||\widehat{H}_{B}||_{2}^{2}\) the _effective cluster weight_ of a cluster \(S\in B\). In addition, we define the _total cluster weight_ as the effective cluster weight multiplied by the multiplicity of its orbit, \(m_{B}||\widehat{H}_{B}||_{2}^{2}\). Cluster interactions have the following significant mathematical properties [39]:

1. \(\langle H_{B}\rangle=0\) (zero mean);
2. \(\langle H_{B},H_{D}\rangle=0\) for \(B\neq D\) (orthogonality);
3. \(\langle H_{B},F_{\mathcal{D}}\rangle=0\) for any set of orbits \(\mathcal{D}\) such that \(B\notin\mathcal{D}\) and any function \(F_{\mathcal{D}}\) that can be expanded using Fourier basis functions \(\Phi_{\alpha}\) with \(\text{supp}(\alpha)\in D\) for \(D\in\mathcal{D}\) (irreducibility).

From properties (1) and (2) it follows that the cluster decomposition of \(H\) is _unique_[40]; meaning there exists one and only one set of cluster interactions \(\widehat{H}_{B}\) for any given Hamiltonian \(H\). Equivalently, property (1) implies that Equations 7 and 8 are _ANOVA-representations_ of \(H(\mathbf{\sigma})\)[36; 40]. In fact, re-written in such a form, a CE using a standard basis is nothing more than an fANOVA representation, in which by symmetry, interactions among equivalent clusters \(S\in B\) are given by the same function \(H_{B}\).
By this consideration, using a cluster decomposition as an effective Hamiltonian to define a Boltzmann distribution can be thought of as log-density ANOVA estimation of a probabilistic graphical model [41; 42; 43]. Using the construction of ANOVA representations, we can now obtain a much deeper understanding of the terms in a CE. Precisely, ANOVA terms are constructed from hierarchical inclusion-exclusion of means conditioned on the occupancy of clusters. For example, it is already known from the original CE formalism [10] that the constant term is equal to the mean of the Hamiltonian, \(J_{\emptyset}=H_{\emptyset}=\langle H(\mathbf{\sigma})\rangle\). In the statistics literature, \(J_{\emptyset}\) is usually referred to as the _grand mean_[44]. The single-site terms \(\widehat{H}_{P}(\mathbf{\sigma}_{i})\) are the difference between the mean conditioned on the \(i\)-th site and the grand mean, \(\widehat{H}_{P}(\mathbf{\sigma}_{i})=\langle H(\mathbf{\sigma})\mid\mathbf{\sigma}_{i}\rangle-\langle H(\mathbf{\sigma})\rangle\). The point terms of an ANOVA representation are called _main effects_[44]. The main effects are the mean contribution that a specific species \(\mathbf{\sigma}_{i}\) residing on the \(i\)-th site makes to the total energy. The average of _main effects_ in the cluster decomposition (a term \(H_{P}\) in Equation 8) represents the portion of the Hamiltonian that depends on composition only. The remaining terms involving clusters \(S\) with more than one site are known as _interactions_[44], motivating our terminology.
A cluster interaction \(\widehat{H}_{B}(\mathbf{\sigma}_{S})\) of cluster \(S\) is computed as the mean conditioned on the sites in cluster \(S\), minus the cluster interactions of all its subclusters \(T\subset S\), \[\widehat{H}_{B}(\mathbf{\sigma_{S}})=\langle H(\mathbf{\sigma})\mid\mathbf{\sigma}_{S}\rangle-\sum_{T\subset S}\widehat{H}_{C}(\mathbf{\sigma_{T}}) \tag{10}\] Equation 10 clarifies the meaning of a cluster interaction as the average contribution to the total energy coming solely from a single cluster \(S\in B\) and none of its subclusters. Accordingly, we see that the terms in the cluster decomposition represent _energetic interactions_ among species occupying the sites of a cluster that are not captured by any lower-order interactions. Figure 2a shows a visualization of the main-effect, nearest-neighbor pair, and triplet cluster interactions as Cartesian tensors for a cluster decomposition of a CrCoNi alloy. In our presentation so far, we started with a representation of a cluster decomposition using a CE with a standard basis. However, since the cluster decomposition is basis agnostic, we can discard the concept of a basis altogether. In fact, in the fANOVA and related literature, a function is simply decomposed into its ANOVA representation by directly appealing to Equation 10 [36; 40]. This approach has been used in concurrent work [45], presenting an axiomatic exposition of the cluster expansion and the cluster decomposition, which is in essence equivalent to the formalism of tensor product fANOVA decompositions [42].
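Equation 10 can be made concrete with a toy example. The sketch below decomposes a random function of two ternary occupation variables (purely illustrative data, not a fitted Hamiltonian) into grand mean, main effects, and a pair interaction via conditional means, and checks the zero-mean and exact-reconstruction properties.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3))            # toy H(s1, s2) on two ternary sites

grand = H.mean()                       # J_0: grand mean
main1 = H.mean(axis=1) - grand         # <H | s1> - <H>  (main effect, site 1)
main2 = H.mean(axis=0) - grand         # <H | s2> - <H>  (main effect, site 2)
# Eq. (10) for the pair cluster: conditional mean minus all subcluster terms
pair = H - grand - main1[:, None] - main2[None, :]

recon = grand + main1[:, None] + main2[None, :] + pair
```

Each term averages to zero over each of its arguments, and the terms sum back to `H` exactly, which is the uniqueness property of the ANOVA representation.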
As the name _analysis of variance_ suggests, a cluster decomposition also comprises a decomposition of the variance of a Hamiltonian \(H\) under the a-priori non-interacting product measure \(P(\mathbf{\sigma})=\prod_{i}\rho_{i}(\mathbf{\sigma}_{i})\)[40], \[\text{Var}[H(\mathbf{\sigma})] =\sum_{B\neq\emptyset}\sum_{S\in B}\text{Var}[\widehat{H}_{B}(\mathbf{\sigma}_{S})] \tag{11}\] \[=N\sum_{B\neq\emptyset}m_{B}||\widehat{H}_{B}||_{2}^{2} \tag{12}\] where we used the fact that the variance of each cluster interaction is equal to its cluster weight, \(\text{Var}[\widehat{H}_{B}(\mathbf{\sigma}_{S})]=||\widehat{H}_{B}||_{2}^{2}\). Further, by using Equation 10, we see that the effective cluster weights are the associated conditional variance with all lower-order variances subtracted, i.e. the variance that can be attributed to a single cluster only and to none of its sub-clusters, \[\text{Var}[\widehat{H}_{B}(\mathbf{\sigma}_{S})]=\text{Var}[H(\mathbf{\sigma})\mid\mathbf{\sigma}_{S}]-\sum_{T\subset S}\text{Var}[H(\mathbf{\sigma})\mid\mathbf{\sigma}_{T}] \tag{13}\] Apart from providing a formal characterization of expansion terms, the cluster decomposition provides motivation and interpretations for the choice of regularization used when fitting. For example, Ridge regularization can be interpreted as setting an upper cutoff to the total variance. Tikhonov regularization can be used to more finely set variance cutoffs for specific cluster interaction terms. Recently proposed group-wise regularization [11] can be directly motivated as a judicious form to regularize cluster interactions \(H_{B}\) by weighing coefficients with their permutation multiplicities \(\widehat{m}_{\beta}\).
Finally, estimation algorithms with hierarchical inclusion/exclusion of clusters [11; 46; 47; 48; 13] can be motivated by appealing to statistical concepts of _hierarchically well-formulated_ models [49] that satisfy _marginality constraints_[50], or that abide by _heredity principles_ which satisfy either strong or weak hierarchy constraints [51; 52]. In addition, the cluster decomposition allows one to formally rank the importance of the contribution of each cluster interaction following the prescription of Sobol's sensitivity indices [36]. Accordingly, we define the effective cluster sensitivity index \(\widehat{\tau}_{B}\) as the fraction of the total variance of \(H\) carried by the interactions of a cluster \(S\in B\), \[\widehat{\tau}_{B}=\frac{\text{Var}[\widehat{H}_{B}(\mathbf{\sigma}_{S})]}{\text{Var}[H(\mathbf{\sigma})]} \tag{14}\] Similarly, we define the cluster sensitivity index \(\tau_{B}\) as the normalized fraction of the total variance of \(H(\mathbf{\sigma})\) contributed by the cluster interaction \(\widehat{H}_{B}\) of all clusters \(S\in B\), \(\tau_{B}=m_{B}\widehat{\tau}_{B}\) per normalizing unit. Cluster sensitivity indices provide a mathematically formal route for evaluating trends in the strength of interactions. Cluster sensitivity indices can be directly computed from a CE by using Equation 9 for the cluster weights.

Figure 2: (a) Visualization of the main-effect (point), nearest-neighbor pair, and triplet cluster interactions as tensors for a cluster decomposition of the configuration energy of a NiCoCr alloy, using a fit that includes pairs and triplets with diameters up to 9\(\AA\) and 4.3\(\AA\), respectively. (b) Cluster sensitivity indices of two fitted NiCoCr cluster decompositions (one including only pairs, and another including pairs and triplets) sorted by cluster diameter. Effective (total) cluster indices are shown with solid (translucent) colors.
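Equations 13 and 14 can be sketched on a toy two-site function (illustrative random data, uniform a-priori measure): conditional variances give the cluster variances, and the resulting sensitivity indices sum to one, reflecting the variance decomposition of Equations 11-12.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(3, 3))            # toy H(s1, s2) on two ternary sites

V1 = H.mean(axis=1).var()              # Var[<H | s1>]  (point cluster, site 1)
V2 = H.mean(axis=0).var()              # Var[<H | s2>]  (point cluster, site 2)
Vtot = H.var()                         # conditioning on all sites recovers H
Vpair = Vtot - V1 - V2                 # Eq. (13) for the pair cluster
tau = np.array([V1, V2, Vpair]) / Vtot  # sensitivity indices, Eq. (14)
```

By the orthogonality of the decomposition terms, `Vpair` is non-negative and the three indices in `tau` partition the total variance.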
Figure 2b shows cluster sensitivity indices for the interactions of two fitted cluster decompositions of a CrCoNi alloy. As a basic example demonstrating practical use cases of the cluster decomposition, we fit two cluster expansions of a CrCoNi medium entropy alloy. Our approach follows a recent study of the CrCoNi alloy [34] using a cluster expansion and Wang-Landau sampling [53]. Following previous work, we fit two expansions [34]: a less accurate expansion (in terms of cross-validation error) that includes pair terms only (pair fit), and a more accurate expansion including pairs and triplets (triplet fit). We only reproduce previous results as an illustration and do not attempt to make any novel scientific claims about this particular alloy. The energy contributions from interactions of specific species can be obtained by directly inspecting cluster interactions. Figure 2a shows the main effect, pair, and triplet cluster interactions included in the triplet-fit cluster decomposition. We can readily determine which interactions are favorable (negative) and which are unfavorable (positive) based on the color map. The relative trend of the nearest-neighbor interactions obtained directly from the cluster decomposition agrees with previous results obtained via an ad-hoc, over-complete and less accurate nearest-neighbor pair model [34]. The interactions shown in Figure 2a are of different orders of magnitude: the main effect contributions are of eV magnitude, and higher-degree interactions are of meV magnitude. We can identify the most important cluster interactions, rank their importance, and compare different fits on rigorous grounds by using the corresponding cluster sensitivity indices, as shown in Figure 2b. In both fits, the first two pair interactions are the most important (largest sensitivity), with significant contributions coming from triplet interactions in the triplet fit.
As a further illustration of insights that can be obtained from the cluster decomposition, we computed the nearest-neighbor pair short-range order (SRO), internal energy, and heat capacity of the CrCoNi alloy from an equiatomic canonical Wang-Landau density of states using a 216-atom supercell. Figure 3 shows the computed values for the pair fit and the triplet fit, as well as a truncated expansion including only the pair interactions from the triplet fit (triplet interactions removed). Comparing the nearest-neighbor pair energies and SRO results in Figure 3 for the two decompositions that include only pair interactions, we observe that the overall SRO and total internal energy trends are set predominantly by the first and second nearest-neighbor pair interactions (those with the highest cluster sensitivity in Figure 2b). However, based on the triplet fit, we can conclude that triplet terms reduce the fraction of energy attributed to pair terms, tune the SRO values, and raise the transition temperature. To delve deeper, one could inspect the triplet interaction values to better understand their role in tuning the ordering transition. These results agree with those reported previously [34]; however, by using the cluster decomposition, we have shown how the results can be substantiated with a mathematically formal analysis. We believe that substantially more insight, use cases, and parameter estimation methods beyond what has been presented here can be developed using the cluster decomposition and its formal statistical properties.

Figure 3: Nearest-neighbor pair probabilities, cluster energies and total internal energy, normalized heat capacity, and nearest-neighbor cluster interactions. The cluster energies of nearest neighbors are plotted with a solid blue curve, the total energy with a solid red curve, and the remaining cluster energies with dashed curves. Pair fit results (top), triplet fit results (bottom, solid), truncated fit (bottom, translucent/dot-dash).
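The canonical quantities above follow from a Wang-Landau density of states by reweighting: \(Z(T)=\sum_{E}g(E)e^{-E/k_{B}T}\), \(U=\langle E\rangle\), and \(C=(\langle E^{2}\rangle-\langle E\rangle^{2})/k_{B}T^{2}\). A minimal sketch with a toy \(\ln g(E)\) (not the actual CrCoNi density of states):

```python
import numpy as np

kB = 1.0  # reduced units

# Hypothetical density of states over a discrete energy grid, as produced by
# a Wang-Landau run (the algorithm estimates ln g(E), so we work in logs).
E = np.linspace(-1.0, 1.0, 401)
ln_g = -(E ** 2) * 50.0 + 100.0  # toy log-density of states

def canonical(E, ln_g, T):
    """Internal energy and heat capacity at temperature T from ln g(E)."""
    ln_w = ln_g - E / (kB * T)
    ln_w -= ln_w.max()            # stabilize the exponentials
    w = np.exp(ln_w)
    Z = w.sum()
    E_mean = (w * E).sum() / Z
    E2_mean = (w * E ** 2).sum() / Z
    C = (E2_mean - E_mean ** 2) / (kB * T ** 2)
    return E_mean, C

U_lo, C_lo = canonical(E, ln_g, T=0.1)
U_hi, C_hi = canonical(E, ln_g, T=0.5)
```

Subtracting the maximum of `ln_w` before exponentiating is the standard trick that keeps the reweighting numerically stable for realistic (very large) `ln g` values.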
The statistical literature is rife with analysis techniques and methodology--such as log-density ANOVA models [42; 43] and sensitivity analysis [36; 54]--that can be directly leveraged in applications using parameterized lattice models. Several methods already exist in the statistics literature that can be used for direct estimation of cluster interactions and cluster indices in a fully basis-agnostic manner [36; 41; 43; 54]. Moreover, the formalism of the cluster decomposition is not limited to scalar functions of discrete degrees of freedom as presented here. In fact, a cluster decomposition can be obtained for any representation of a scalar-, vector-, or tensor-valued function over a tensor-product space domain by following the same prescription we have presented. Thus, related expansions and generalizations [17; 19; 22] can be suitably recast as cluster decompositions, opening the door to continued and significant developments based on rigorously established mathematical and statistical grounds. An implementation of the cluster decomposition and all code used in this work is available at Ref. [55]. This work was primarily funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Materials Project program KC23MP). This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0020531.
2305.07786
Rationalizing Euclidean Assemblies of Hard Polyhedra from Tessellations in Curved Space
Entropic self-assembly is governed by the shape of the constituent particles, yet a priori prediction of crystal structures from particle shape alone is non-trivial for anything but the simplest of space-filling shapes. At the same time, most polyhedra are not space-filling due to geometric constraints, but these constraints can be relaxed or even eliminated by sufficiently curving space. We show using Monte Carlo simulations that the majority of hard Platonic shapes self-assemble entropically into space-filling crystals when constrained to the surface volume of a 3-sphere. As we gradually decrease curvature to "flatten" space and compare the local morphologies of crystals assembling in curved and flat space, we show that the Euclidean assemblies can be categorized either as remnants of tessellations in curved space (tetrahedra and dodecahedra) or non-tessellation-based assemblies caused by large-scale geometric frustration (octahedra and icosahedra).
Philipp W. A. Schönhöfer, Kai Sun, Xiaoming Mao, Sharon C. Glotzer
2023-05-12T22:23:24Z
http://arxiv.org/abs/2305.07786v1
# Rationalizing Euclidean Assemblies of Hard Polyhedra from Tessellations in Curved Space ###### Abstract Entropic self-assembly is governed by the shape of the constituent particles, yet _a priori_ prediction of crystal structures from particle shape alone is non-trivial for anything but the simplest of space-filling shapes. At the same time, most polyhedra are not space filling due to geometric constraints, but these constraints can be relaxed or even eliminated by sufficiently curving space. We show using Monte Carlo simulations that the majority of hard Platonic shapes self-assemble entropically into space-filling crystals when constrained to the surface volume of a 3-sphere. As we gradually decrease curvature to "flatten" space and compare the local morphologies of crystals assembling in curved and flat space, we show that the Euclidean assemblies can be categorized either as remnants of tessellations in curved space (tetrahedra and dodecahedra) or non-tessellation-based assemblies caused by large-scale geometric frustration (octahedra and icosahedra). _Introduction_ - Particle shape has become an important design parameter in material science [1; 2; 3], colloidal self-assembly [4; 5; 6] and granular matter [7; 8; 9]. One example of the importance of shape is systems of hard particles, which, due solely to entropy maximization, can self-assemble into a zoo of different colloidal crystals simply by changing, even subtly, particle shape [10]. Although methods exist to inversely design particle shapes likely to self-assemble into targeted crystalline structures [11; 12; 13; 14], it is non-trivial to predict those structures from particle shape, other than through molecular simulation. This challenge becomes apparent even for the simplest polyhedra, the Platonic solids. Particle shape directly determines both their assemblies [10; 15] - in which entropy is maximized - and _packings_[10] - in which density is maximized. 
However, assemblies and packings are the same only for some Platonic solids and polyhedra in general [16]. Cubes are one example where the self-assembled simple cubic (SC) crystal structure and densest packing (a space-filling SC crystal) coincide [17]. Hard octahedra and icosahedra do not fill 3D space, but they, too, maximize entropy in crystals that coincide with their densest packing structures: a rhombohedral and face-centered cubic (FCC) structure, respectively [10; 18]. However, hard dodecahedra self-assemble into a 20-particle unit cell \(\beta\)-Manganese rotator crystal [10] with two distinct local particle environments instead of its densest packing structure, FCC [18]. Likewise, hard tetrahedra famously form quasicrystals [19; 20] with a myriad of different particle environments instead of the putative densest packing structure with a unit cell comprised of four tetrahedra arranged in a double-dimer structure [19; 21]. Evidently, densest packings, at least in Euclidean space, cannot serve as indicators to predict self-assembly [22; 23]. But what of curved space? Polyhedra fail to self-assemble their densest packing structures when they fail to resolve global geometric constraints that prevent the polyhedra from maximizing entropy locally as well as globally [24]. For example, entropy is maximized for cubes when cube faces are aligned, a motif consistent with the SC densest packing, and thus no geometric constraints arise during assembly. Inspired by studies of Frank-Kasper phases [25; 26], glasses[27; 28; 29; 30; 31], tetrahelix sheets [32] and liquid crystal blue phases [33; 34; 35; 36], we investigate in this manuscript if and how hard particle assemblies are related to space filling tessellations of curved space. 
We hypothesize that, if we can find a suitable space with curvature \(K\) that permits a shape to tessellate, then the shape will self-assemble into a crystal based on the tessellation in that space because the tessellating arrangement will maximize entropy. By subsequently flattening the space and monitoring the defects that arise in the process, we posit that we will gain predictive information on the likely structure of the Euclidean (3D) assembly. We first tested our hypothesis by determining if, in positively curved space, any of the five Platonic solids self-assemble into entropy-maximizing, tessellating 4-polytopes with no global geometric frustration. We performed hard particle Monte Carlo (HPMC) simulations and show that tetrahedra, dodecahedra and octahedra self-assemble into their corresponding 4-polytopes, each in a differently curved space. We then geometrically frustrated the assemblies by simultaneously increasing the 3-sphere radius \(R\)=\(K^{-1}\) while keeping the packing fraction constant, thereby flattening the curved spaces. By comparing the local environments of particles assembled in curved and in flat space, we show that the geometric incommensurability that prevents particles from forming entropically favorable tessellations manifests itself in two different ways as we gradually flatten space. Interestingly, the Euclidean assemblies of tetrahedra and of dodecahedra still exhibit signs of their 3-sphere tessellations. The geometric frustration suffered by the tessellating assembly as curved space is flattened is resolved by the appearance of defects, leading to an assembly with free volume distributed non-uniformly through the structure. In contrast, we find that the assemblies of octahedra and of icosahedra have a large curvature mismatch between the curved spaces they tessellate and Euclidean space, and thus their 3D assemblies are not related to defect-ridden tessellations from curved space.
Instead, these shapes resolve the geometric frustration in Euclidean space by maximizing entropy uniformly among all the particles, resulting in colloidal crystals of considerably less complexity than those assembled by tetrahedra and by dodecahedra. _Self-Assembly of 4-polytopes_ - The family of regular 4-polytopes can be identified as tessellations of the 3D positively curved volume of a 3-sphere. We performed HPMC simulations of the self-assembly of \(N\)=600 tetrahedra, \(N\)=120 dodecahedra and \(N\)=24 octahedra with circumsphere diameter \(\sigma\) confined to the 3-sphere into the 4-polytopes corresponding to the 600-cell consisting of 600 tetrahedral cells, the 120-cell with 120 dodecahedral cells and the 24-cell with 24 octahedral cells, respectively. There exist even more tessellations of the three Platonic solids both in hyperbolic and spherical space, but these three tessellations deviate in curvature the least from flat Euclidean space, which makes them the strongest candidates for a comparison with self-assembled 3D structures. Because cubes already tessellate Euclidean space, and icosahedral tessellations exist only in hyperbolic space, we do not simulate these two shapes [37]. All self-assembly simulations were carried out at constant pressure and constant \(N\). Fig. 1d shows equations of state for all three shapes. The data indicates a first-order transition from the disordered fluid phase into a crystalline phase for the tetrahedron and dodecahedron systems. Although a first-order transition is not evident in the octahedron data, we suspect this is simply due to the necessarily small system size [38; 39]. The crystal structures that self-assemble above the transition pressure (or corresponding density) are quantified by two different types of radial distribution functions (RDF; see Fig. 1b and c).
The RDF \(g_{c}(r)\) quantifies spatial correlations between particle centroids, and develops peaks that coincide with the characteristic geodesic distances between cells of the ideal 4-polytopes. Similarly, the RDF \(g_{v}(r)\) calculated from the polyhedron vertices develops peaks that fit the dual lattices of the 600-cell (dual: 120-cell), 120-cell (dual: 600-cell) and 24-cell (self-dual). Moreover, we observe that during the formation of the 120-cell, the dodecahedron particles first achieve translational order before they align their orientations, indicating a transition from the isotropic phase into a plastic 120-cell and then into a 120-cell crystal. By further increasing the density in the 24-octahedra and 120-dodecahedra systems, the peaks of \(g_{c}(r)\) and \(g_{v}(r)\) narrow into delta functions, indicating space-filling ideal packings. Also, the stereographic projections of the self-assembled structures reveal the formation of the tessellations (see Fig. 1a and Movies 1-3). We were unable to compress the 600 tetrahedra into the perfect tiling of the 3-sphere; instead, we observe a 600-cell with void defects and interstitials at a maximum density \(\rho\)=0.96. _Defect stabilized 600-cell_ - To determine why the perfect 600-cell tessellation of tetrahedra does not assemble at high densities, we performed additional simulations with slightly lower and higher numbers of particles (see Fig. 1e).

Figure 1: Self-assembly of 600 hard tetrahedra into the 600-cell (top row), 120 hard dodecahedra into the 120-cell (center row) and 24 hard octahedra into the 24-cell (bottom row) on the 3-sphere. a) Stereographic projections of the ideal 4-polytopes and the densest assembled configuration obtained via MC simulations. Particles that are highly deformed by the stereographic projection are outlined by their edges for better visualization. b) Normalized radial distribution functions at different densities \(\rho\) with regard to the center positions and c) the vertex positions of the particles. d) Space curvature vs. pressure calculations during the phase transition. e) Highest densities obtained from self-assembly simulations at different numbers of particles \(N\).

Whereas the octahedron and dodecahedron 3-sphere systems are the most densely packed for the ideal number of particles that correspond to their 4-polytopes, the tetrahedron systems create the lowest local density for \(N\)=585 when defects are present (see Movie 4). The critical pressures at the phase transition and the free energy calculations in Fig. 2 also indicate that the 600-cell spherical lattice is stabilized by impurities at the transition with \(N\)=585. This stabilization of the crystalline phase via vacancies is similar to the equilibrium SC phase of hard cubes [17], where cubes form linear arrays that can slide along each other, adding another entropic contribution and leading to the stabilization of the SC crystal via the inclusion of void defects. Analogously, the 600-cell can be separated into 20 linear arrays known as tetrahelix loops, indicating a similar sliding mechanism [40] (see Fig. 2b). Despite our free energy calculations, which suggest that the hard tetrahedron system should eliminate the vacancies at higher densities as hard cubes do in Euclidean space, the hard tetrahedron system confined to the 3-sphere is configurationally trapped. Our compression scheme, which does not allow temporary overlaps [41], is therefore unable to eliminate defects integrated into the crystal structure after its initial assembly, resulting in a defected 600-cell. _Bending into flat space_ - To study how assemblies in Euclidean space resolve geometrical incompatibilities so that particles can arrange into entropically favored configurations, we frustrated the 24-, 120- and 600-cell structures from curved space into Euclidean space.
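The peaks of \(g_{c}(r)\) sit at geodesic, not Euclidean, separations: for centroids embedded in \(\mathbb{R}^{4}\) on a 3-sphere of radius \(R\), the geodesic distance is \(d_{ij}=R\arccos(\mathbf{x}_{i}\cdot\mathbf{x}_{j}/R^{2})\). A minimal sketch of the pair-distance histogram underlying such an RDF, using random non-interacting points as stand-ins for particle centroids:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 3.0  # 3-sphere radius, R = 1/K

# Hypothetical particle centroids: uniform points on the 3-sphere, obtained
# by rescaling isotropic 4D Gaussian vectors to radius R.
x = rng.normal(size=(500, 4))
x *= R / np.linalg.norm(x, axis=1, keepdims=True)

# Pairwise geodesic distances d_ij = R * arccos(x_i . x_j / R^2); clipping
# guards against round-off pushing the cosine slightly outside [-1, 1].
cosang = np.clip(x @ x.T / R ** 2, -1.0, 1.0)
d = R * np.arccos(cosang)

# Histogram of unique pair distances: the unnormalized ingredient of g_c(r).
iu = np.triu_indices_from(d, k=1)
hist, edges = np.histogram(d[iu], bins=60, range=(0.0, np.pi * R))
```

Note that geodesic distances on the 3-sphere are bounded by \(\pi R\) (the antipodal separation), so the RDF support shrinks as the space is flattened less.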
Specifically, we increased the number of particles on the 3-sphere, which simultaneously decreases the space curvature \(K\)=\(\left(\frac{2\pi^{2}\rho_{N}}{N}\right)^{\frac{1}{3}}\) at a constant number density \(\rho_{N}\). As the flattening systems gradually incorporate more particles, the 3-sphere radius deviates further from the ideal value that allows for tessellation. We performed all simulations with \(N_{\rm oct}\)\(\in\)[26, 480], \(N_{\rm dod}\)\(\in\)[125, 1000] and \(N_{\rm tet}\)\(\in\)[620, 1000]. We quantified the assemblies locally by calculating a set of Minkowski-weighted Steinhardt order parameters (SOP) \(q_{4}\), \(q_{5}\), \(q_{6}\), \(q_{8}\), \(q_{10}\) and \(q_{12}\) [42, 43] in Fig. 3 (see SI). If the number of polyhedra fits the number that can tessellate the 3-sphere perfectly, the particles achieve a local environment that is in accordance with the 4-polytope configuration once they enter the ordered state. When we slightly increase the number of particles in all three systems, local environments are introduced that deviate from their ideal 4-polytope arrangements. Consequently, the region of typical particle environments expands in the 6-dimensional SOP space, while most particles keep 4-polytope-like environments (see Fig. 3). For the dodecahedron and tetrahedron systems, this region of local environments characterized by SOPs remains consistent even for a large number of added particles (i.e., considerable flattening) and can also be identified as the typical environments of their representative ideal and thermalized self-assembled structures in Euclidean space: the \(\beta\)-Manganese structure for dodecahedra and the quasicrystal for tetrahedra. Moreover, the development of the local environments with decreasing \(K\) indicates why these systems feature multiple local particle arrangements in Euclidean space. The unit cell of the \(\beta\)-Manganese crystal lattice, for example, contains 20 atoms with two unique Wyckoff sites \(8c\) and \(12d\) and, hence, two different local environments.

Figure 3: Minkowski order parameter of hard a) dodecahedron, b) tetrahedron and c) octahedron assemblies on the 3-sphere at \(\rho\)=0.65 when flattening into Euclidean space. Each circle represents a particle environment obtained from the simulation and is compared to crystal structures in Euclidean space. The white outline indicates the region within which 80% of the typical local particle environments of the self-assembled structures in Euclidean space lie (\(\beta\)-Manganese for dodecahedra with Wyckoff sites 8\(c\) and 12\(d\), quasicrystal for tetrahedra and rhombohedral crystals for octahedra). For the tetrahedra we use the (3, 4, 3\({}^{2}\), 4) quasicrystal approximant of the dodecagonal quasicrystal as a reference, as both structures feature equivalent local environments [19]. d) Comparison between the Minkowski order parameter of hard icosahedra in their densest packings (flat space: FCC, hyperbolic space: icosahedral honeycomb) and a self-assembled FCC structure in Euclidean space.

Figure 2: a) Normalized radial distribution functions at the highest obtained density for 560, 580 and 600 hard tetrahedra on the 3-sphere. b) Stereographic projections of tetrahelix loops extracted from an ideal 600-cell. c) Pressure calculations during the phase transition for different numbers of hard tetrahedra on the 3-sphere. The inset plot shows the critical densities at the phase transition. d) Vacancy concentration with the lowest per-particle free energy at different packing fractions. We used the Frenkel-Ladd method (see SI) to calculate the free energy difference \(\Delta F\) relative to an Einstein crystal. e) Free energy difference at different vacancy concentrations for \(\rho\)=0.5. The red arrows indicate the relation between d) and e).
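For reference, the unweighted Steinhardt order parameter \(q_{l}\) can be computed without explicit spherical harmonics via the addition theorem, \(q_{l}^{2}=\frac{1}{N_{b}^{2}}\sum_{b,b'}P_{l}(\hat{\mathbf{b}}\cdot\hat{\mathbf{b}}')\). The sketch below evaluates \(q_{4}\) and \(q_{6}\) for a perfect simple-cubic bond environment; note that the Minkowski-weighted variant used above [42, 43] additionally weights each bond by its Voronoi facet area, which this plain form omits:

```python
import numpy as np
from numpy.polynomial import legendre

def steinhardt_q(l, bonds):
    # q_l^2 = (1/N_b^2) * sum over all bond pairs of P_l(cos theta), by the
    # spherical-harmonic addition theorem (unweighted Steinhardt form).
    b = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
    cos = np.clip(b @ b.T, -1.0, 1.0)
    coeffs = np.zeros(l + 1)
    coeffs[l] = 1.0                     # selects the Legendre polynomial P_l
    return np.sqrt(legendre.legval(cos, coeffs).sum()) / len(b)

# Bond vectors of a perfect simple-cubic neighborhood (6 nearest neighbors).
sc_bonds = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                     [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
q4 = steinhardt_q(4, sc_bonds)  # sqrt(7/12) ~ 0.764 for simple cubic
q6 = steinhardt_q(6, sc_bonds)  # sqrt(1/8)  ~ 0.354 for simple cubic
```

These reference values are what make SOP vectors usable as coordinates: each ideal structure (4-polytope cell, Wyckoff site, crystal) occupies a characteristic point in \((q_{4},\dots,q_{12})\) space.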
The Wyckoff sites are located in two different regions within the SOP space, where, remarkably, \(8c\) is close to the ideal 120-cell environment for hard dodecahedra. By assigning each environment during the flattening process to one of the Wyckoff positions depending on its distance in SOP space (see Fig. 4), we identify \(8c\) as inherited from the 3-sphere tessellation, whereas \(12d\) is a disclination integrated into the 120-cell structure. In assemblies with a small number of added dodecahedra, \(N\)\(\in\)[120, 200], the particles arrange mostly in an \(8c\)-like local environment, with only a few particles accumulating around the \(12d\) environment. By flattening space further, more defects arise, which is in accordance with the increase in \(12d\)-like environments. The fraction of \(8c\)-like particles converges towards a value between the ideal ratio of sites in a \(\beta\)-Manganese crystal, \(\frac{N_{8c}}{N}\)\(=\)\(\frac{8}{20}\)=0.4, and a ratio obtained from a self-assembled \(\beta\)-Manganese crystal, \(\frac{N_{8c}}{N}\)\(=\)0.31\(\pm\)0.02. Similarly, the multiple environments in the quasicrystal of hard tetrahedra can be interpreted as defects. Even when comparing the SOPs between a quasicrystal and the self-assembled 600-cell structure with void defects, we detect that the local environments match (see Fig. SI1). By flattening space, more and more particles obtain the quasicrystalline environments. We therefore argue that the quasicrystal results from the entropic gain of integrating different defects into the 600-cell, which allows for the development of a variety of different local environments that we interpret as vacancies and interstitials in the 600-cell. However, hard octahedron systems with many added particles \(N_{\rm{total}}\)\(>\)50 paint a different picture.
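Assigning each particle environment to the nearest Wyckoff reference in SOP space is a nearest-centroid classification. A minimal sketch with made-up 6D centroids and synthetic environments (the numeric vectors are placeholders, not the actual \(8c\)/\(12d\) SOP values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy reference centroids in the 6D SOP space (q4, q5, q6, q8, q10, q12),
# standing in for the ideal 8c and 12d environments of beta-Mn (made up).
ref_8c = np.array([0.19, 0.30, 0.57, 0.40, 0.10, 0.30])
ref_12d = np.array([0.05, 0.45, 0.35, 0.25, 0.30, 0.15])

def assign(envs, c0, c1):
    """Label each SOP vector by its nearest reference centroid (0 or 1)."""
    d0 = np.linalg.norm(envs - c0, axis=1)
    d1 = np.linalg.norm(envs - c1, axis=1)
    return (d1 < d0).astype(int)  # 0 -> 8c-like, 1 -> 12d-like

# Synthetic environments scattered tightly around the two centroids,
# in the ideal beta-Mn ratio of 8 : 12 sites per 20-atom unit cell.
envs = np.vstack([ref_8c + 0.02 * rng.normal(size=(40, 6)),
                  ref_12d + 0.02 * rng.normal(size=(60, 6))])
labels = assign(envs, ref_8c, ref_12d)
frac_8c = 1.0 - labels.mean()  # recovers ~0.4 for this synthetic data
```

With well-separated centroids and small scatter, the recovered \(8c\) fraction matches the generating ratio; in the real data the scatter is larger, which is why the converged fraction lands between the ideal 0.4 and the thermalized 0.31.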
Here, the octahedron assembly must overcome a larger curvature difference to flatten the 24-cell (\(\Delta K_{24}\)\(\approx\)\(1.571\sigma^{-1}\)) compared to the 120-cell (\(\Delta K_{120}\)\(\approx\)\(0.776\sigma^{-1}\)) or 600-cell (\(\Delta K_{600}\)\(\approx\)\(0.764\sigma^{-1}\)). Therefore, the strategy of entropy compartmentalization [44, 45] by adding defects to the 24-cell eventually becomes less efficient than maximizing entropy by forming crystals with only one type of local environment, reminiscent of the rhombohedral crystal. This phenomenon draws similarities to frustration escape in geometrically frustrated assemblies (GFAs) of deformable particles with open boundaries [46, 47, 48, 49]. Although dominated not by entropy but by energetic contributions such as particle deformation, binding between building blocks, and boundary energy terms, GFAs also feature an incompatibility between the locally preferred order and global constraints. Similar to the space curvature in our hard particle assemblies, the particle shape rigidity in GFAs dictates whether it is energetically more favorable for the system to accumulate stresses without losing locally preferred order (rigid self-limiting regime) or to escape the frustration by deforming the particles (soft bulk regime). Hence, the self-assembled structure of hard octahedra is not based on a 3-sphere tessellation. Instead, the assembly is more related to the densest packing in Euclidean space, which indicates how to accommodate global and local geometric frustration uniformly. Although we do not perform a similar computational study with icosahedra, we observe in Fig. 3d that the typical local environments of the self-assembled FCC crystal of icosahedra show the same characteristics as the octahedron system.
Within the SOP space, the icosahedra sit in between an ideal FCC and the ideal neighborhood of the icosahedral honeycomb (IH) tessellation of hyperbolic 3-space, but are clearly detached from the latter. This suggests that, like the octahedron system, the occurrence of the FCC crystal in hard icosahedron systems is also caused by the large difference in curvature between the IH and Euclidean space. As in the octahedron system, the frustration of being geometrically restricted from tessellating Euclidean space is distributed uniformly. Therefore, we surmise that the FCC phase is not related to the IH tessellation in hyperbolic space, but rather to the densest packing in Euclidean space. _Conclusion_ - In this Letter, we related the assembly of complex colloidal crystal structures of hard polyhedra to tessellations in curved space. By performing MC simulations of hard tetrahedra, octahedra and dodecahedra on positively curved 3-spheres, we showed that the particles thermodynamically self-assemble their 4-polytope tessellations. This observation indicates that the particles will adopt their locally optimal configurations based on tessellations if no geometrical restrictions are present, such as those that occur for these systems when they attempt to crystallize in Euclidean space.

Figure 4: Ratio of hard dodecahedra in assemblies on the 3-sphere with local environments closer to the \(8c\) rather than the \(12d\) Wyckoff site of the ideal \(\beta\)-Manganese structure in SOP space. The same data is used as in Fig. 3b. The orange dotted line refers to the ideal ratio of \(8c\) particles in the \(\beta\)-Manganese crystal, \(\left(\frac{N_{8c}}{N}\right)_{\rm{ideal}}\)\(=\)0.4. The red dashed line refers to the ratio obtained when we apply the same calculations to a self-assembled \(\beta\)-Manganese structure of hard dodecahedra in Euclidean space, \(\left(\frac{N_{8c}}{N}\right)_{\rm{sa}}\)\(=\)0.31\(\pm\)0.02.
Moreover, we observed by flattening space that the equilibrium colloidal crystal structures of the Platonic solids in Euclidean space can be separated into two categories. The first category includes entropy-maximizing, self-assembled structures in flat space that can be understood as remnants of perfect tessellations in curved space. The dodecagonal quasicrystal reported in hard tetrahedron assemblies can be traced back to the 600-cell and its low entropic cost of incorporating defects into the crystal. Likewise, the \(\beta\)-Manganese configuration of hard dodecahedra stems from the 120-cell, with one Wyckoff site of the \(\beta\)-Manganese crystal identified as a local environment native to the 120-cell tessellation, and the other Wyckoff site identified as a defect of the ideal 120-cell structure. The second category includes self-assembled crystals in 3D space that do not stem from tessellations in curved space, such as the rhombohedral crystal of hard octahedra or the FCC crystal of hard icosahedra. Their corresponding space-filling tessellations require spaces with considerably larger curvature than the tessellations of the tetrahedron or dodecahedron. Consequently, maximizing entropy by introducing defects is only efficient when the system is slightly flattened. In Euclidean space, these assemblies instead adopt a geometric compromise where entropy maximization is achieved uniformly through a single type of local environment. Although we focused only on hard shapes so far, our findings likely also apply to systems with enthalpic contributions, considering the mathematical description of Frank-Kasper phases as disclinated 600-cells [25; 26].
Hence, introducing additional degrees of freedom that help resolve geometric frustration - such as shape deformability or flexible particle bonds - might allow us to broaden the curvature window in which assemblies based on non-Euclidean crystals minimize free energy, and guide the prediction of new self-assembly structures. ## Acknowledgements The authors thank Nicholas Kotov, Nan Cheng and Francesco Serafin for helpful discussions. X.M. and K.S. were supported in part by the Office of Naval Research (MURI N00014-20-1-2479), and by the National Science Foundation (NSF PHY-1748958); P.S. and S.G. were supported by a grant from the Simons Foundation (256297, SCG). This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562; XSEDE award DMR 140129. Computational resources and services were also supported by Advanced Research Computing at the University of Michigan, Ann Arbor. ## References * [1] J.A. Champion, Y.K. Katare, and S. Mitragotri. Particle shape: a new design parameter for micro- and nanoscale drug delivery carriers. _J. Control. Release_, 121(1-2):3-9, 2007. * [2] J. Lee, K.H. Ku, J. Kim, Y.J. Lee, S.G. Jang, and B.J. Kim. Light-responsive, shape-switchable block copolymer particles. _J. Am. Chem. Soc._, 141(38):15348-15355, 2019. * [3] J. Wu, C. Ruan, Y. Ma, Y. Wang, and Y. Luo. Vital role of hydroxyapatite particle shape in regulating the porosity and mechanical properties of the sintered scaffolds. _J. Mater. Sci. Technol._, 34(3):503-507, 2018. * [4] M.R. Jones, R.J. Macfarlane, B. Lee, J. Zhang, K.L. Young, A.J. Senesi, and C.A. Mirkin. DNA-nanoparticle superlattices formed from anisotropic building blocks. _Nat. Mater._, 9(11):913-917, 2010. * [5] Y. Zhang, F. Lu, K.G. Yager, D. van der Lelie, and O. Gang. A general strategy for the DNA-mediated self-assembly of functional nanoparticles into heterogeneous systems. _Nat. Nanotechnol._, 8(11):865-872, 2013.
* [6] Z.J. Urbach, S.S. Park, S.L. Weigand, J.E. Rix, B. Lee, and C.A. Mirkin. Probing the consequences of cubic particle shape and applied field on colloidal crystal engineering with DNA. _Angew. Chem._, 133(8):4111-4115, 2021. * [7] I. Zuriguel and T. Mullin. The role of particle shape on the stress distribution in a sandpile. _Proc. Royal Soc. A_, 464(2089):99-116, 2008. * [8] S. Wegner, R. Stannarius, A. Boese, G. Rose, B. Szabo, E. Somfai, and T. Borzsonyi. Effects of grain shape on packing and dilatancy of sheared granular materials. _Soft Matter_, 10(28):5157-5167, 2014. * [9] K.A. Murphy, K.A. Dahmen, and H.M. Jaeger. Transforming mesoscale granular plasticity through particle shape. _Phys. Rev. X_, 9(1):011014, 2019. * [10] P.F. Damasceno, M. Engel, and S.C. Glotzer. Predictive self-assembly of polyhedra into complex structures. _Science_, 337(6093):453-457, 2012. * [11] G. van Anders, D. Klotsa, A.S. Karas, P.M. Dodd, and S.C. Glotzer. Digital alchemy for materials design: Colloids and beyond. _ACS Nano_, 9(10):9542-9553, 2015. * [12] Y. Geng, G. van Anders, P.M. Dodd, J. Dshemuchadse, and S.C. Glotzer. Engineering entropy for the inverse design of colloidal crystals from hard shapes. _Sci. Adv._, 5(7):eaaw0514, 2019. * [13] M.Z. Miskin, G. Khaira, J.J. de Pablo, and H.M. Jaeger. Turning statistical physics models into materials design engines. _Proc. Nat. Ac. Sci._, 113(1):34-39, 2016. * [14] G.M. Coli, E. Boattini, L. Filion, and M. Dijkstra. Inverse design of soft materials via a deep learning-based evolutionary strategy. _Sci. Adv._, 8(3):eabj6731, 2022. * [15] T. Vo and S.C. Glotzer. A theory of entropic bonding. _Proc. Nat. Ac. Sci._, 119(4):e2116414119, 2022. * [16] B.A. Schultz, P.F. Damasceno, M. Engel, and S.C. Glotzer. Symmetry considerations for the targeted assembly of entropically stabilized colloidal crystals via voronoi particles. _ACS Nano_, 9(3):2336-2344, 2015. * [17] F. Smallenburg, L. Filion, M. Marechal, and M. Dijkstra.
Vacancy-stabilized crystalline order in hard cubes. _Proc. Nat. Ac. Sci._, 109(44):17886-17890, 2012. * [18] S. Torquato and Y. Jiao. Dense packings of the Platonic and Archimedean solids. _Nature_, 460(7257):876-879, 2009. * [19] A. Haji-Akbari, M. Engel, and S.C. Glotzer. Phase diagram of hard tetrahedra. _J. Chem. Phys._, 135(19):194101, 2011. * [20] A. Haji-Akbari, M. Engel, and S.C. Glotzer. Degenerate quasicrystal of hard triangular bipyramids. _Phys. Rev. Lett._, 107(21):215702, 2011. * [21] E.R. Chen, M. Engel, and S.C. Glotzer. Dense crystalline dimer packings of regular tetrahedra. _Disc. Comput. Geom._, 44(2):253-280, 2010. * [22] E.R. Chen, D. Klotsa, M. Engel, P.F. Damasceno, and S.C. Glotzer. Complexity in surfaces of densest packings for families of polyhedra. _Phys. Rev. X_, 4(1):011024, 2014. * [23] D. Klotsa, E.R. Chen, M. Engel, and S.C. Glotzer. Intermediate crystalline structures of colloids in shape space. _Soft Matter_, 14(43):8692-8697, 2018. * [24] G. van Anders, D. Klotsa, N.K. Ahmed, M. Engel, and S.C. Glotzer. Understanding shape entropy through local dense packing. _Proc. Nat. Ac. Sci._, 111(45):E4812-E4821, 2014. * [25] M. Kleman. Curved crystals, defects and disorder. _Adv. Phys._, 38(6):605-667, 1989. * [26] A. Travesset. Nanoparticle superlattices as quasi-Frank-Kasper phases. _Phys. Rev. Lett._, 119(11):115701, 2017. * [27] D.R. Nelson. Order, frustration, and defects in liquids and glasses. _Phys. Rev. B_, 28(10):5515, 1983. * [28] D.R. Nelson. Liquids and glasses in spaces of incommensurate curvature. _Phys. Rev. Lett._, 50(13):982, 1983. * [29] R. Mosseri and J.F. Sadoc. Hierarchical structure of defects in non-crystalline sphere packings. _J. Phys. Lett._, 45(17):827-832, 1984. * [30] G. Venkataraman and D. Sahoo. Curved space and amorphous structures. Part I: Geometric models. _Contemp. Phys._, 26(6):579-615, 1985. * [31] J.P. Straley. Crystallization in curved three-dimensional space. _Phys. Rev. B_, 30(11):6592, 1984.
* [32] F. Serafin, J. Lu, N. Kotov, K. Sun, and X. Mao. Frustrated self-assembly of non-euclidean crystals of nanoparticles. _Nat. Commun._, 12(1):1-11, 2021. * [33] J.P. Sethna, D.C. Wright, and N.D. Mermin. Relieving cholesteric frustration: the blue phase in a curved space. _Phys. Rev. Lett._, 51(6):467, 1983. * [34] J.P. Sethna. Frustration, curvature, and defect lines in metallic glasses and the cholesteric blue phase. _Phys. Rev. B_, 31(10):6278, 1985. * [35] B.G. Chen, P.J. Ackerman, G.P. Alexander, R.D. Kamien, and I.I. Smalyukh. Generating the hopf fibration experimentally in nematic liquid crystals. _Phys. Rev. lett._, 110(23):237801, 2013. * [36] J.-F. Sadoc, R. Mosseri, and J.V. Selinger. Liquid crystal director fields in three-dimensional non-euclidean geometries. _New J. Phys._, 22(9):093036, 2020. * [37] A first attempt to perform simulations in hyperbolic space [2] investigated only hard spheres. * [38] K. Binder and D.P. Landau. Finite-size scaling at first-order phase transitions. _Phys. Rev. B_, 30(3):1477, 1984. * [39] V. Privman and M.E. Fisher. Finite-size effects at first-order transitions. In _Current Physics-Sources and Comments_, volume 2, pages 149-181. Elsevier, 1988. * [40] J.F. Sadoc. Helices and helix packings derived from the {3, 3, 5} polytope. _Euro. Phys. J. E_, 5(1):575-582, 2001. * [41] A. Haji-Akbari, M. Engel, A.S. Keys, X. Zheng, R.G. Petschek, P. Palffy-Mubroya, and S.C. Glotzer. Disordered, quasicrystalline and crystalline phases of densely packed tetrahedra. _Nature_, 462(7274):773-777, 2009. * [42] P.J. Steinhardt, D.R. Nelson, and M. Ronchetti. Bond-orientational order in liquids and glasses. _Phys. Rev. B_, 28(2):784, 1983. * [43] W. Mickel, S.C. Kapfer, G.E. Schroder-Turk, and K. Mecke. Shortcomings of the bond orientational order parameters for the analysis of disordered particulate matter. _J. Chem. Phys._, 138(4):044501, 2013. * [44] T.C. Moore, J.A. Anderson, and S.C. Glotzer. 
Shape-driven entropic self-assembly of an open, reconfigurable, binary host-guest colloidal crystal. _Soft Matter_, 17(10):2840-2848, 2021. * [45] S. Lee, T. Vo, and S.C. Glotzer. Entropy compartmentalization stabilizes open host-guest colloidal clathrates. _Nat. Chem._, page (in press), 2023. * [46] D.M. Hall and G.M. Grason. How geometric frustration shapes twisted fibres, inside and out: competing morphologies of chiral filament assembly. _Interface Focus_, 7(4):20160140, 2017. * [47] M.F. Hagan and G.M. Grason. Equilibrium mechanisms of self-limiting assembly. _Rev. Mod. Phys._, 93(2):025008, 2021. * [48] I.R. Spivack, D.M. Hall, and G.M. Grason. Stress accumulation versus shape flattening in frustrated, warped-jigsaw particle assemblies. _New J. Phys._, 24(6):063023, 2022. * [49] D.M. Hall, M.J. Stevens, and G.M. Grason. Building blocks of non-euclidean ribbons: size-controlled self-assembly via discrete frustrated particles. _Soft Matter_, 19(5):858-881, 2023. **Supplementary information:** **Rationalizing Euclidean Assemblies of Hard Polyhedra from Tessellations in Curved Space** Philipp W. A. Schonhofer\({}^{1}\), Kai Sun\({}^{2}\), Xiaoming Mao\({}^{2}\), and Sharon C. Glotzer\({}^{1,2,3}\)\({}^{*}\) _Department of Chemical Engineering, University of Michigan, Ann Arbor, Michigan 48109, USA._ _Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA. and_ _Biointerfaces Institute, University of Michigan, Ann Arbor, Michigan 48109, USA._ **Methods** We perform Monte Carlo simulations of hard polyhedra with circumsphere diameter \(\sigma\) in positively curved three-dimensional space. We model the 3-dimensional space with constant curvature \(K\) as a spherical confinement in four dimensions such that all points \({\bf r}=(w,x,y,z)\) of the spherical polyhedra are embedded in the surface volume of a hypersphere with radius \(R=\frac{1}{K}\) \[R^{2}=w^{2}+x^{2}+y^{2}+z^{2}. 
\tag{1}\] Computationally, these simulations are realized by implementing hyperspherical boundary conditions into HOOMD-blue v2.9 [1] and representing both the position and orientation of each polyhedron by a pair of quaternions. The algorithm allows for three Monte Carlo moves: translation via parallel transport along geodesic great-circle lines, local rotation around a randomly chosen axis, and reflections at a face of the polyhedron. The exact scheme is detailed in Ref. [2]. A configurational change within a Monte Carlo step is accepted based on the excluded volume between particles. Hence, to determine the overlap between two polyhedra we consider geodesic distances on the hypersphere surface volume instead of 4-dimensional Euclidean distances. This means that the whole bodies of the polyhedra live on the hypersphere surface. Consequently, the edges and faces spanned by the vertices all lie on geodesic lines and geodesic planes, respectively, and are curved according to \(R\). Numerically, we check for intersections between particles using a XenoCollide algorithm scheme [3] modified to positively curved spaces by applying spherical instead of Euclidean trigonometry. The maximum translational and rotational step is chosen such that around 50% of change attempts are accepted. For our simulations we perform multiple sets of \(NVT\)-simulations. Starting from a set number of particles \(N\), we begin the simulations at a high hypersphere radius \(R=N^{\frac{1}{3}}\) such that the polyhedra are in an isotropic gas phase. After every 1000th Monte Carlo step we attempt to decrease \(R\) by a factor of 0.999 to slowly increase both the packing fraction and the curvature of the space. The attempt is successful if no overlaps have been detected, and the simulation continues with the new radius. Otherwise, the system is reset to the radius before the attempt.
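The hyperspherical bookkeeping behind these moves can be sketched in a few lines. This is a minimal illustration under my own naming (not the actual HOOMD-blue implementation): geodesic separations follow from the 4D dot product, and a radius-shrink attempt simply rescales the embedded coordinates so that the points stay on the hypersphere.

```python
import numpy as np

def geodesic_distance(r1, r2, R):
    """Great-circle (geodesic) distance between two points embedded
    on the surface of a 4D hypersphere of radius R."""
    # Clip to guard against round-off pushing the cosine outside [-1, 1].
    c = np.clip(np.dot(r1, r2) / R**2, -1.0, 1.0)
    return R * np.arccos(c)

def attempt_compression(R, positions, shrink=0.999):
    """Sketch of one radius-shrink attempt: rescale the hypersphere and
    all embedded points; the caller accepts the move only if no
    particle overlaps are detected afterwards."""
    R_new = R * shrink
    positions_new = positions * (R_new / R)  # points stay on the sphere
    return R_new, positions_new

R = 10.0
p = R * np.array([1.0, 0.0, 0.0, 0.0])
q = R * np.array([0.0, 1.0, 0.0, 0.0])
# Orthogonal embedding vectors are a quarter great circle apart: (pi/2) * R.
print(geodesic_distance(p, q, R))
```

Overlap detection itself (the XenoCollide step) is far more involved and is omitted here.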
At every increment of \(\Delta R=0.2\sigma\) the system is equilibrated for \(1\times 10^{6}\) Monte Carlo steps to avoid frustration during the process. These compression steps are repeated until 100 consecutive shrinking attempts have failed. For each number of particles, we ran 5 replica simulations. In addition to the \(NVT\)-simulations we also run sets of isobaric simulations to determine the phase transition of polyhedra from the isotropic gas phase into ordered crystal structures. For the volume move attempts we increase or decrease the radius of the hypersphere. We choose the maximum radius move size such that roughly 50% of volume move attempts are successful. We start from a low pressure \(P=1\) and increase the pressure by \(\Delta P=0.5\) every \(2\times 10^{6}\) steps. During the last \(1\times 10^{6}\) Monte Carlo steps on each pressure level we record the hypersphere radius, from which we determine the phase transitions.

**Free energy calculations**

To calculate the per-particle free energies \(\frac{\beta F}{N}\) of the 600-cell with defects we use the Frenkel-Ladd method [4; 5]: \[\frac{\beta F}{N}=\frac{\beta F^{\text{Ein}}(\lambda_{m})}{N}+\frac{\beta\Delta F}{N} \tag{2}\] We compare the different systems to a non-interacting Einstein crystal \(\frac{\beta F^{\text{Ein}}(\lambda_{m})}{N}\) where each particle is harmonically bonded to a site of the ideal 600-cell, both in terms of position and orientation, with a high coupling strength \(10000\epsilon\). The free energy difference \[\frac{\beta\Delta F}{N}=-\frac{1}{N}\log\left(\frac{N_{L}!}{(N_{L}-N)!N!}\right)-\frac{\beta}{N}\int_{0}^{\lambda_{m}}\left\langle\frac{\partial U^{\rm ext}(\lambda)}{\partial\lambda}\right\rangle{\rm d}\lambda \tag{3}\] consists of a combinatorial first term, which takes all possible positions of the vacancies into account, and a second, external term.
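The combinatorial term of Eq. (3) is the logarithm of a binomial coefficient: the number of ways to place \(N\) particles on the \(N_L\) sites of the ideal lattice. A small sketch (function name mine) evaluates it stably via the log-gamma function rather than computing factorials directly:

```python
import math

def vacancy_entropy_term(N_L, N):
    """Combinatorial contribution -(1/N) * ln( N_L! / ((N_L - N)! N!) )
    to beta*F/N: the number of ways to distribute N particles over the
    N_L lattice sites of the ideal structure (the rest are vacancies)."""
    log_binom = (math.lgamma(N_L + 1)
                 - math.lgamma(N_L - N + 1)
                 - math.lgamma(N + 1))
    return -log_binom / N

# The 600-cell has 120 vertices; a single vacancy leaves N = 119
# occupied sites, giving -(1/119) * ln(120).
print(vacancy_entropy_term(120, 119))
```

With no vacancies (\(N = N_L\)) the binomial coefficient is 1 and the term vanishes, as it should.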
We calculate the external term via numerical integration of the harmonic bond potentials \[\beta U^{\rm ext}(\lambda)=\lambda\sum_{i=1}^{N}\left(\frac{1}{\sigma^{2}}|{\bf x}_{i}-{\bf r}_{0}|^{2}+(1-{\bf u}_{i}\cdot{\bf q}({\bf r}_{0}))^{2}\right) \tag{4}\] where \({\bf x}_{i}\) and \({\bf u}_{i}\) are the position and orientation quaternion of the \(i\)-th particle, and \({\bf r}_{0}\) and \({\bf q}({\bf r}_{0})\) are the position and orientation quaternion of the closest site of the ideal 600-cell structure. For \({\bf q}({\bf r}_{0})\) we consider the symmetries of the particle shape.

**Steinhardt order parameter**

To determine the local environments within the polyhedra system we calculate the rotationally invariant Minkowski-weighted Steinhardt order parameters \(q_{l}\) with \(l\in\{3,4,5,6,10,12\}\) for each particle \(i\) [6; 7]. \[q_{l}(i)=\sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}|q_{lm}(i)|^{2}} \tag{5}\] The quantity \(q_{lm}\) comprises a weighted sum over the spherical harmonics \(Y_{lm}\) between particles \(i\) and \(j\), \[q_{lm}(i)=\frac{1}{N_{b}}\sum_{j=1}^{N_{b}}w_{ij}Y_{lm}(\theta_{ij},\phi_{ij}) \tag{6}\] with the number of neighbors \(N_{b}\), weights \(w_{ij}\), and the polar angles \(\theta_{ij}\) and \(\phi_{ij}\) of the bond between \(i\) and \(j\). To calculate the polar angles from the 4-dimensional position data of the particles, we map all neighbor particles \(j\) to the flat 3-dimensional space that is tangential to the hypersphere and touches it at the position of particle \(i\). Afterwards, we determine the angles according to the bond vector \({\bf r}_{ij}\) and the orientation of particle \(i\). The polar angles are the same on the hypersphere and in the tangential space, as the mapping is conformal and preserves all angles between directed vectors.
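Equations (5) and (6) can be evaluated directly with SciPy's spherical harmonics. Below is a hedged sketch under my own naming: the weights are assumed normalized to sum to one, so that uniform weights \(1/N_b\) recover the standard unweighted Steinhardt average. A simple-cubic bond environment serves as a sanity check, since it has the textbook value \(q_4=\sqrt{7/12}\approx 0.764\).

```python
import numpy as np

try:
    # SciPy >= 1.15: sph_harm_y(l, m, polar, azimuth)
    from scipy.special import sph_harm_y
    def Y_lm(m, l, polar, azimuth):
        return sph_harm_y(l, m, polar, azimuth)
except ImportError:
    # older SciPy: sph_harm(m, l, azimuth, polar)
    from scipy.special import sph_harm
    def Y_lm(m, l, polar, azimuth):
        return sph_harm(m, l, azimuth, polar)

def steinhardt_q(l, bonds, weights=None):
    """Rotationally invariant q_l for one particle from its bond vectors.
    `bonds` is an (N_b, 3) array of neighbor directions; `weights`
    (e.g. Voronoi facet-area fractions) must sum to 1 and default to the
    uniform 1/N_b of the standard Steinhardt definition."""
    bonds = np.asarray(bonds, dtype=float)
    if weights is None:
        weights = np.full(len(bonds), 1.0 / len(bonds))
    r = np.linalg.norm(bonds, axis=1)
    polar = np.arccos(bonds[:, 2] / r)               # theta in [0, pi]
    azimuth = np.arctan2(bonds[:, 1], bonds[:, 0])   # phi in (-pi, pi]
    q2 = 0.0
    for m in range(-l, l + 1):
        q_lm = np.sum(weights * Y_lm(m, l, polar, azimuth))
        q2 += abs(q_lm) ** 2
    return np.sqrt(4.0 * np.pi / (2 * l + 1) * q2)

# Six neighbors along the Cartesian axes: a simple-cubic environment.
sc = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
               [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
print(steinhardt_q(4, sc))  # -> sqrt(7/12) ~ 0.7638
```

The tangent-space projection used in the paper only changes how the polar angles are obtained; once the angles are in hand, the evaluation above applies unchanged.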
For the Minkowski-weighted Steinhardt order parameters the weights are defined as \(w_{ij}=\frac{A_{ij}}{A_{i}}\), where \(A_{i}\) is the surface area of the Voronoi polyhedron around particle \(i\) and \(A_{ij}\) is the area of the face between particles \(i\) and \(j\). We construct the Voronoi tessellation on the hypersphere by calculating the 4-dimensional convex hull around the central position of each particle. As the particles live on the surface volume of the hypersphere, the 4D convex hull is equivalent to a Delaunay tetrahedralization in spherical space. The dual lattice of the Delaunay tetrahedralization then corresponds to the Voronoi tessellation.
2310.12883
A Markovian dynamics for C. elegans behavior across scales
Antonio C. Costa, Tosif Ahamed, David Jordan, Greg J. Stephens
2023-10-19T16:36:35Z
http://arxiv.org/abs/2310.12883v2
# A Markovian dynamics for _C. elegans_ behavior across scales

###### Abstract

How do we capture the breadth of behavior in animal movement, from rapid body twitches to aging? Using high-resolution videos of the nematode worm _C. elegans_, we show that a single dynamics connects posture-scale fluctuations with trajectory diffusion, and longer-lived behavioral states. We take short posture sequences as an instantaneous behavioral measure, fixing the sequence length for maximal prediction. Within the space of posture sequences we construct a fine-scale, maximum entropy partition so that transitions among microstates define a high-fidelity Markov model, which we also use as a means of principled coarse-graining. We translate these dynamics into movement using resistive force theory, capturing the statistical properties of foraging trajectories. Predictive across scales, we leverage the longest-lived eigenvectors of the inferred Markov chain to perform a top-down subdivision of the worm's foraging behavior, revealing both "runs-and-pirouettes" as well as previously uncharacterized finer-scale behaviors. We use our model to investigate the relevance of these fine-scale behaviors for foraging success, recovering a trade-off between local and global search strategies.

Footnote †: Current address: Laboratoire de Physique de l'École normale supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université Paris Cité, F-75005 Paris, France

## Significance Statement

Complex phenotypes, such as an animal's behavior, generally depend on an overwhelming number of processes that span a vast range of scales. While there is no reason that behavioral dynamics permit simple models, by subsuming inherent nonlinearities and memory into maximally-predictive microstates, we find one for _C. elegans_ foraging.
The resulting "Markov worm" is effectively indistinguishable from real worm motion across a range of timescales, and we can decompose our model dynamics both to recover and discover behavioral states. Employing a simple form of substrate interactions, we connect postures to trajectories, illuminating how worms explore the environment. In more complex organisms, our approach can also link behaviors across time, from rapid muscular control to neuromodulation.

## Introduction

From molecular motors contracting muscles, to neurons processing an ever-changing environment, or the large-scale diffusion of hormones and other neuromodulatory chemicals, animal behavior arises from biological activity across innumerable spatial and temporal scales. With an instantaneous snapshot of all of these variables, the future behavioral state of the animal would be uniquely defined, a biological setting for the demon of Laplace (see e.g. [1]). Of course, such an approach is practically unrealizable. We are limited to a much smaller set of observations, and the unobserved degrees of freedom will generally induce non-Markovianity, or memory, into the dynamics of the variables that we do measure [2; 3]. In animal behavior, interpretations of this memory guide our understanding of the complexity of the process [4; 5]. But what if we could use our observations to construct memory-full state variables that admit predictive, yet minimal-memory dynamics [6; 7]? The construction of such dynamics appears daunting. We may even conclude that this is impossible, if it were not for the fact that it is done routinely in physical systems. Indeed, it is often the case that a subset of observable functions is enough to capture behavior at a particular scale. Hydrodynamics, for example, can be formulated with effective variables such as fluid velocity, density, or temperature, with their memory coming only from the previous state.
In behavior, we expect the emergent reconstructed dynamics to be generally high-dimensional in order to account for the multitude of unobserved mechanisms. Yet our approach also suggests a principled coarse-graining. Since the dynamics of the reconstructed states are Markovian, the emergent timescales of the (nonlinear) system are naturally ordered by the eigenvalue spectrum of a _linear_ evolution operator, or transition matrix in the case of discrete states. The eigenvectors associated with the gaps in the spectrum indicate slow collective modes and provide natural targets for coarse-graining. In the hydrodynamic example of \(\sim 10^{23}\) interacting molecules, these modes are the effective variables. Here we seek such Markov dynamics from the time series of posture in the foraging behavior of the nematode worm _C. elegans_, an important model organism in genetics and neuroscience [8; 9; 10]. For both the worm and animals generally, the collection of high-resolution behavioral data has been greatly accelerated by advancing techniques for pose estimation via machine vision [11; 12; 13; 14; 15], combined with computational and imaging improvements. Such measurement advances demand new behavioral understanding: analyses, models, and theory of posture-scale dynamics [16; 4; 17]. We implement a principled, generally-applicable framework which combines delay embedding with Markov modeling [6]. In this approach, we seek to overcome the partial observability of behavioral dynamics; variables which influence behavior but are instantaneously hidden become apparent over time and Markov predictability provides the quantitative measure of a self-determined system. Posture itself is a very complicated function of its underlying biological variables. In such situations, an initial expansion of dimensionality can simplify computations like function estimation and classification. 
We thus trade the complex modeling of a low-dimensional time series for the simpler modeling of a much higher-dimensional state space: the encoding of the unobserved degrees of freedom through time delays drastically simplifies our theory, leading to a powerful yet simple description of the emergent nonlinear Markovian dynamics. While Markov approaches have an extensive history, perhaps most familiarly in Markov Chain Monte Carlo sampling of _equilibrium_ distributions [18], substantially less attention has focused on a Markov encoding of actual _dynamics_, especially with a large number of states. Importantly, we note that there is no guarantee that our approach will work; for example, the number of necessary delays may be computationally prohibitive. But even this "failure" would provide important information about the memory of the system. On the other hand, if we are successful, we will be left with a finite set of observables that are approximately self-determined, measurable, and whose dynamics span the timescales that are relevant to the phenomena of interest. Such observables are likely to be biologically meaningful. We find state variables for worm behavior that exhibit Markovian evolution across the multiple timescales of _C. elegans_ foraging behavior: from fine-scale posture movements to "run-and-pirouette" strategies. Additionally, the macroscopic variables we reveal are not some baroque non-physical mathematical functions but rather correspond to interpretable behavioral motifs. We rediscover canonical behaviors from the rich history of _C. elegans_ ethomics, as well as describe new ones. Each of these motifs is associated with its own characteristic timescale, and with them we provide a new hierarchical subdivision of behavior. We show how the dynamics of these macroscopic variables can be propagated through a model of the organism's physical interaction with the environment to accurately predict locomotion from posture.
Finally, we dissect the function of these behavioral motifs by investigating their relation to the exploration and exploitation of food sources.

## Short-Time Behaviors as Maximally-Predictive Posture Sequences

On a 2D agar plate, worms move by making dorsoventral bends along their bodies [19]. At the shortest timescales (\(\sim 1\,s\)) these traveling waves along the body give rise to forward, backward, and turning locomotion [20; 21; 22]. We show here that this organization emerges naturally from short posture sequences, formally a delay embedding [23; 24; 25; 26; 27]. We employ a previously analyzed dataset [28] composed of 35-minute recordings of 12 lab-strain N2 worms freely moving on an agar plate, sampled at \(\delta t=1/16\,\)s. From high-resolution videos, we measure the worm's body posture using a rich, low-dimensional representation of the centerline, expressed as five "eigenworm" coefficients \(\vec{a}=[a_{1},a_{2},a_{3},a_{4},a_{5}]\in\mathbb{R}^{5}\) [20; 28], Fig. 1(a). We construct a maximally predictive sequence space [6] by stacking \(K\) delays of the posture time series, and increasing \(K\) until we have maximized the predictability of the resulting dynamics, as measured by the entropy rate, Fig. 1(b). To estimate the entropy rate, we partition each sequence space (indexed by \(K\)) into \(N\) microstates using k-means clustering so that the worm's posture dynamics now appear as transitions between microstates, a Markov chain. We approximate the entropy rate as that of the inferred Markov chain and choose the largest \(N\) before finite-size sampling reduces the estimated entropy\({}^{1}\). This "maximum entropy" partitioning requires a large number of microstates but enables our model to be maximally expressive.
After \(K\approx 8\) frames = 0.5 s the entropy rate curves start to collapse, and we set \(K^{*}=11\) frames = 0.6875 s to define the maximally-predictive sequence space \(X_{K^{*}}\); lengthening the sequences does not increase predictability, Fig. S1(a). At \(K^{*}\) we choose a partition of size \(N^{*}=1000\).

Footnote 1: The entropy rate of the Markov chain should not be confused with the Kolmogorov-Sinai (KS) entropy, which is an intrinsic property of the dynamics. Indeed, the KS entropy can also be estimated using our approach, yielding accurate estimates that agree with the sum of positive Lyapunov exponents [6; 22].

We visualize \(X_{K^{*}}\) by projecting into two dimensions using the UMAP manifold learning algorithm [29] (see Methods), Fig. 1(c). Color-coding according to the worm's body wave phase velocity \(\omega=-\frac{1}{2\pi}\frac{d}{dt}\left[\tan^{-1}(a_{2}/a_{1})\right]\) [20], Fig. 1(c, left), and overall curvature (obtained by summing the tangent angles along the body), \(\gamma=\sum_{i}\theta_{i}\), reveals that distinct short-time behavioral motifs, corresponding to forward, reversal, and turning movements, naturally correspond to different regions of the maximally predictive state space, Fig. 1(c, right). In other words, while the instantaneous posture itself \(\vec{a}(t)\) is not enough to disentangle different behaviors, a point in the sequence space \(X_{K^{*}}(t)\) uniquely corresponds to a particular short-time behavioral motif.

## The Markov worm

To better understand behavioral dynamics in the posture sequence space \(X_{K^{*}}\) we trade individual trajectories for the evolution of the probability density, formally akin to the choice of a Langevin (trajectory) vs. a Fokker-Planck (ensemble) perspective [30; 31], and here we briefly sketch the mathematical framework.
We expect the maximally predictive sequences to evolve according to \(\frac{d}{dt}X_{K^{*}}=\Phi(X_{K^{*}})\), where \(\Phi(X_{K^{*}})\) is a nonlinear noisy function. The corresponding evolution of the probability densities \(\rho_{t}=\rho(X_{K^{*}},t)\) is \(\frac{d}{dt}\rho_{t}=\mathcal{L}\rho_{t}\), where \(\mathcal{L}\) is a differential operator that depends on the exact form of \(\Phi\). For a finite time step we have \(\rho_{t+\tau}=e^{\mathcal{L}\tau}\rho_{t}\). Importantly, even when the trajectories evolve non-linearly, the probability dynamics are linear. We approximate this linear ensemble evolution \(e^{\mathcal{L}\tau}\) as a finite-dimensional Markov chain with \(N\) microstates \(s_{i}\) determined by partitioning the space of posture sequences, and a transition matrix \(P(s_{j}(t+\tau)|s_{i}(t))\equiv P_{ij}(\tau)\approx e^{\mathcal{L}\tau}\) constructed by counting transitions between microstates \(s_{i}(t)\) and \(s_{j}(t+\tau)\) after a delay \(\tau\)[6], \[p_{j}(t+\tau)=P_{ij}(\tau)\,p_{i}(t), \tag{1}\] where \(p_{i}(t)\) is the probability of observing state \(s_{i}\) at time \(t\) and we sum over repeated indices. Note that \(P\) is a stochastic matrix so that each transition probability \(P_{ij}\geq 0\) and \(\sum_{j}P_{ij}=1\) for all microstates \(i\). For the worm's posture dynamics, as described in the previous section, we set the number of microstates as \(N=N^{*}=1000\) so as to maximize the amount of information with respect to the partitioning, Fig. 1(b). We set the transition time as \(\tau^{*}=0.75\,\)s so that the relaxation times of the long-lived dynamics are approximately independent of \(\tau\), as rigorously true in a Markov process (see Fig. S1(b,c) and Methods). Despite the conceptual simplicity of our Markov chain model, Eq.1, we show that it accurately predicts _C. elegans_ foraging behavior across scales, from fine-scale posture movements to long time scale transitions between behavioral states, Fig. 2. 
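A toy version of this pipeline — delay embedding, a k-means microstate partition, a row-stochastic transition matrix, and the entropy rate of the resulting chain — can be sketched as follows. A synthetic body-wave signal and a tiny partition stand in for the paper's eigenworm data and \(N^{*}=1000\) microstates; all names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_embed(x, K):
    """Stack K delays: row t is (x_t, x_{t+1}, ..., x_{t+K-1})."""
    return np.stack([x[i:len(x) - K + i + 1] for i in range(K)], axis=1)

def kmeans(X, n_clusters, n_iter=50):
    """Minimal k-means, standing in for the paper's partitioning."""
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def transition_matrix(labels, n_states, tau=1):
    """Row-stochastic P_ij estimated from microstate transitions."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(labels[:-tau], labels[tau:]):
        C[i, j] += 1
    C += 1e-12  # regularize empty rows
    return C / C.sum(axis=1, keepdims=True)

def entropy_rate(P, pi):
    """h = -sum_i pi_i sum_j P_ij log P_ij (nats per step)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    return -np.sum(pi * plogp.sum(axis=1))

# Toy posture signal: a noisy body wave standing in for one
# "eigenworm" coefficient a_1(t), sampled at 16 Hz.
t = np.arange(4000) / 16.0
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(len(t))

X = delay_embed(x, K=11)            # maximally predictive sequences
labels = kmeans(X, n_clusters=20)   # microstate partition
P = transition_matrix(labels, 20)
pi = np.bincount(labels, minlength=20) / len(labels)  # empirical occupation
print(entropy_rate(P, pi))
```

In the paper's procedure both \(K\) and \(N\) are scanned, with \(K\) chosen where the entropy rate saturates and \(N\) as large as finite sampling allows; here they are fixed for brevity.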
## Predicting behavior across scales

Starting from the initial state of an individual worm, we simulate symbolic sequences by sampling from the conditional probability distribution \(P^{w}(s_{j}|\hat{s}_{i})\), Fig. 2(a), where \(\hat{s}_{i}\) is the current microstate, \(s_{j}\) are all possible future microstates after a time scale \(\tau^{*}\), and \(P^{w}(s_{j}|\hat{s}_{i})\) is the \(i\)-th row of the transition matrix inferred for worm \(w\). The result is a sequence of microstates with the same duration as the worm trajectories, but with a sampling time \(\delta t=\tau^{*}\). From each microstate we can then obtain a nearly continuous time series of "eigenworm" coefficients \(\{\vec{a}(t)\}\) through the sequence of \(K^{*}\) postures in each state \(X_{K^{*}}\) (note that \(K^{*}\) and \(\tau^{*}\) are quite close in this case). These dynamics are effectively diffusive in the space of posture sequences: hopping between microstates according to the Markov dynamics, followed by random selection from the set of posture sequences \(X_{K^{*}}^{\dagger}\) within each visited microstate \(s_{i}\). The posture time series generated through this procedure are nearly indistinguishable from the data, Fig. S2(a) and SI Movie 1. Quantitatively, the autocorrelation functions of the simulated time series, Fig. 2(b), capture the correlations observed in the data, and the distribution of mode coefficients agrees with the steady-state distribution, Fig. S3. In addition to the fine-scale posture dynamics, our model also predicts the rate at which forward movements are interrupted by biologically relevant behaviors [32] such as reversals, dorsal turns or ventral turns (identified by thresholding the body wave phase velocity [20] and the overall body curvature, see Methods), Fig. 2(c).
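The simulation step — resampling the next microstate from the corresponding row of the transition matrix — reduces to a few lines. A hypothetical 3-state chain stands in for the inferred \(P^{w}\); names are my own.

```python
import numpy as np

def simulate_chain(P, s0, n_steps, rng):
    """Sample a microstate sequence by drawing each step from the row
    of the transition matrix indexed by the current state."""
    states = np.empty(n_steps + 1, dtype=int)
    states[0] = s0
    n = P.shape[0]
    for t in range(n_steps):
        states[t + 1] = rng.choice(n, p=P[states[t]])
    return states

# Hypothetical 3-state chain with one "sticky" state 0.
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.25, 0.25, 0.50]])
rng = np.random.default_rng(1)
seq = simulate_chain(P, s0=0, n_steps=2000, rng=rng)
occupancy = np.bincount(seq, minlength=3) / len(seq)
print(occupancy)  # approaches the chain's stationary distribution
```

In the full model, each sampled microstate is then expanded back into its stored \(K^{*}\)-frame posture sequence to yield a continuous time series of eigenworm coefficients.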
Finally, at larger spatio-temporal scales the foraging random walk can be coarsely split into forward "runs" interrupted by sharp "pirouettes": sequences of reversals and turns used by the worm to reorient itself [33]. Here we identify "runs" and "pirouettes" directly from posture dynamics by using the inferred transition matrix to identify stereotyped sequences of states (see the section "Coarse-graining behavior through ensemble dynamics" below). As illustrated in the inset of Fig. 2(d), the identified states split the trajectory into "runs" and "pirouettes". We estimate the kinetic transition rates from runs-to-pirouettes \(\kappa_{R\to P}\) and from pirouettes-to-runs \(\kappa_{P\to R}\) and find close agreement between data and simulations across worms, Fig. 2(d).

## Posture to Path

The accuracy of our Markov dynamics suggests the intriguing possibility that we may also recover the properties of foraging trajectories, i.e. the motility of the worm in 2D space. With such a bridge we could, for the first time, connect the neuromechanical control of posture with movement strategies such as optimal search. To do so, however, we must connect posture deformations with movement in the environment. Following previous work [34], we approximate the interaction between the worm's body and the viscous agar surface through resistive force theory (RFT) [35]. This phenomenological approach assumes that each segment along the body experiences independent drag forces. Despite its simplicity, this approximation has been successfully applied to predict the motility of various organisms in viscous fluids [36; 37; 38] and granular materials [39]. To propel the worm, we first reconstruct the skeleton positions in each frame \(\mathbf{x}_{i}(t)\) from the instantaneous tangent angles \(\theta_{i}(t)\) along each body segment \(i\), Fig. 4(a-left).
From these we derive the worm-centric velocities \(\mathbf{v}_{i}(t)=\mathbf{x}_{i}(t+1)-\mathbf{x}_{i}(t)\) and displacements with respect to the center-of-mass position \(\mathbf{x}_{\text{CM}}\), \(\Delta\mathbf{x}_{i}(t)=\mathbf{x}_{i}(t)-\mathbf{x}_{\text{CM}}(t)\). This results in an expression for the underlying velocities at each body segment as a function of the measured worm-centric \(\mathbf{v}\) and \(\Delta\mathbf{x}\) and unknown overall translational \(\tilde{\mathbf{V}}(t)\) and angular \(\tilde{\boldsymbol{\Omega}}(t)\) velocities. As in [34], we use linear resistive force theory to decompose the force acting on each body segment into tangent and normal components \(\tilde{\mathbf{F}}_{i}(t)=\alpha_{t}\tilde{v}_{i}^{t}\hat{t}+\alpha_{n}\tilde{v}_{i}^{n}\hat{n}\), with drag coefficients \(\alpha_{t}\) and \(\alpha_{n}\), Fig. 4(a-middle). Using this approximation, we can then recover the unknown underlying velocities \(\tilde{\mathbf{v}}_{i}(t)\) by imposing a zero net force and torque condition. The only free parameter in this model is the ratio between the normal and tangential drag coefficients \(\alpha=\alpha_{n}/\alpha_{t}\), which we infer by minimizing the distance between the reconstructed centroid trajectories and the real data (see Methods), Fig. S4(a). In agreement with the results of Keaveny et al. [34], we find that in such food-free conditions, the value of \(\alpha\) that optimizes the reconstruction of centroid trajectories is \(\alpha^{*}=30\,(29,31)\). Using such \(\alpha^{*}\), we then reconstruct the centroid path corresponding to the posture time series simulated according to our Markov chain, and show that these qualitatively resemble real worm trajectories, Fig. 3(a-right).
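Because the drag force is linear in the segment velocities, the zero net force and zero net torque conditions are linear in the unknowns \((\mathbf{V},\Omega)\) and reduce to a single 3×3 solve. Below is a minimal 2D sketch under my own discretization and naming (not the paper's implementation); \(\alpha_t\) is set to 1 so that only the ratio \(\alpha=\alpha_n/\alpha_t\) enters.

```python
import numpy as np

def rft_velocities(dx, v, tangents, alpha=30.0):
    """Recover the overall translation V and rotation Omega of the body
    by imposing zero net resistive force and zero net torque.
    dx: segment positions relative to the center of mass, shape (n, 2)
    v:  worm-frame segment velocities, shape (n, 2)
    tangents: unit tangent vectors, shape (n, 2); alpha = alpha_n/alpha_t."""
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    # Per-segment drag tensor D_i = t t^T + alpha * n n^T (alpha_t = 1).
    D = (tangents[:, :, None] * tangents[:, None, :]
         + alpha * normals[:, :, None] * normals[:, None, :])
    # Lab-frame velocity of segment i: u_i = v_i + V + Omega * J dx_i.
    Jdx = np.stack([-dx[:, 1], dx[:, 0]], axis=1)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for Di, vi, dxi, Jdxi in zip(D, v, dx, Jdx):
        # Force balance rows: sum_i D_i (v_i + V + Omega J dx_i) = 0.
        A[:2, :2] += Di
        A[:2, 2] += Di @ Jdxi
        b[:2] -= Di @ vi
        # Torque balance row: sum_i (dx_i x D_i u_i)_z = 0.
        A[2, :2] += dxi[0] * Di[1] - dxi[1] * Di[0]
        DJ = Di @ Jdxi
        A[2, 2] += dxi[0] * DJ[1] - dxi[1] * DJ[0]
        Dv = Di @ vi
        b[2] -= dxi[0] * Dv[1] - dxi[1] * Dv[0]
    Vx, Vy, Omega = np.linalg.solve(A, b)
    return np.array([Vx, Vy]), Omega

# Sanity check: a straight rod whose segments all slide tangentially with
# worm-frame velocity (1, 0) must yield V = (-1, 0) and Omega = 0 (the
# recovered translation exactly cancels the internal sliding).
xs = np.linspace(-1.0, 1.0, 21)
dx = np.stack([xs, np.zeros_like(xs)], axis=1)
tangents = np.tile([1.0, 0.0], (21, 1))
v = np.tile([1.0, 0.0], (21, 1))
print(rft_velocities(dx, v, tangents))
```

Integrating the recovered \((\mathbf{V},\Omega)\) over time then yields the centroid path and heading corresponding to a given posture sequence.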
To further quantify the similarity between the centroid trajectories reconstructed from posture simulations and the data, we estimate the mean squared displacement \(\text{MSD}(\tau)=\langle(\mathbf{x}_{\text{CM}}(t+\tau)-\mathbf{x}_{\text{CM}}(t))^{2}\rangle\), which exhibits a transition between super-diffusive (nearly ballistic) and diffusive behavior between \(10\,\mathrm{s}\) and \(100\,\mathrm{s}\) [40; 41; 42; 43], Fig. 3(b-left), Fig. S4(b). The foraging trajectories corresponding to the operator-based simulations accurately capture the MSD across a wide range of scales, including the ballistic-to-diffusive transition. To further assess the quality of the simulations, we estimate an effective diffusion coefficient by fitting \(\text{MSD}=4D\tau\) in the linear regime\({}^{2}\) and find that, across worms, the resulting diffusion coefficients obtained from simulations closely follow the data, Fig. 3(b-right). Our results demonstrate that it is possible to go from microscopic posture dynamics to diffusive properties in a living organism.

Footnote 2: We note that on longer time scales the MSD exhibits the behavior of a confined random walk due to the rigid boundaries of the agar plate, which makes it non-trivial to accurately estimate the diffusion coefficient [44]. We fit the diffusion coefficient in the regime \(\tau\in[60,100]\), which corresponds to a time scale within which finite-size effects are negligible and the mean squared displacement is approximately a linear function of \(\tau\), \(\text{MSD}\sim\tau\).

## Coarse-graining behavior through ensemble dynamics

As highlighted in Fig. 2, _C. elegans_ foraging behavior exhibits multiple time scales: from the body waves that define short-time behaviors (e.g., forward, reversal, turns), to longer-time sequences (e.g., run, pirouette) that the worm uses to navigate its environment.
Typically, these longer-time sequences have been identified phenomenologically by setting thresholds on heuristically defined quantities, as was done in Fig. 2(c). Here, we show that it is possible to reveal the multiple scales of _C. elegans_ locomotion directly from the posture dynamics. Intuitively, stereotyped behaviors correspond to regions of the behavioral space that the animal visits often. In an analogy with statistical mechanics, we can imagine behavior as evolving on a complex potential landscape, where each well corresponds to a particular stereotyped behavior, and the barrier heights set the transition time scales. Such a picture emerges naturally when analyzing the dynamics through an ensemble approach, and we leverage our inferred Markov chain to directly identify metastable behaviors.

### "Run-and-Pirouette"

The eigenvalues of the transition matrix provide direct access to the long time-scale properties of the dynamics, even when these are not directly apparent from the original trajectories or the equations of motion. The real parts of the eigenvalues \(\{\lambda_{i}\}\) of \(P_{ij}(\tau^{*})\) characterize the exponential relaxation to the steady state, \[\Lambda_{i}^{-1}=\frac{-\tau^{*}}{\log\text{Re}(\lambda_{i})}. \tag{2}\] The spectrum of relaxation times is shown in Fig. 4(a-right), and exhibits an isolated, longer-lived mode with \(\Lambda_{2}^{-1}=2.68\,(2.11,3.27)\,\mathrm{s}\). Although there are \(\approx 10\) significant modes, beyond the first mode all others are indistinguishable from each other. To assess significance, we obtain a noise floor (horizontal line in Fig. 4(a-right)) by shuffling the symbolic sequence, re-estimating the transition matrix, and computing its first nontrivial eigenvalue. In the limit of infinite data, there is only one surviving nonzero eigenvalue, corresponding to the steady-state distribution (infinite relaxation time).
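The implied relaxation times of Eq. (2) can be read directly off the spectrum of any row-stochastic matrix; a minimal numpy sketch (the function name is ours, and it assumes the relevant eigenvalues have positive real part):

```python
import numpy as np

def implied_timescales(P, tau, n_modes=1):
    """Implied relaxation times, Eq. (2): Lambda_i^{-1} = -tau / log Re(lambda_i),
    for the leading nontrivial eigenvalues of a row-stochastic matrix P."""
    lams = np.sort(np.linalg.eigvals(P).real)[::-1]
    # drop the stationary eigenvalue lambda_1 = 1, keep the next n_modes
    return -tau / np.log(lams[1:n_modes + 1])
```

For example, a symmetric two-state chain with switching probability 0.1 has eigenvalues \(\{1, 0.8\}\), giving a single relaxation time \(-\tau/\log 0.8\).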
The fact that the second largest eigenvalue is nonzero in the shuffle reveals finite-size effects that result in small deviations from the invariant density. The eigenvectors corresponding to long-lived dynamics reveal reaction coordinates that capture transitions between "macroscopic" metastable sets [45; 46]. In Fig. 4(a-right), these are groups of microstates that transition more often within rather than between groups. The structure of these sets and the kinetics between them offer a principled coarse-graining, which is not imposed but rather follows directly from the ensemble dynamics. As in [6], we identify metastable sets through spectral analysis of the time-symmetric (reversibilized) transition matrix \(P_{r}\) (see Methods) [47; 48], whose second eigenvector \(\phi_{2}\) provides an _optimal_ subdivision of the state space into almost invariant sets [49]. To elucidate the meaning of the slow mode, we use \(\phi_{2}\) to color-code the maximally-predictive sequence space, Fig. 1(c,d); positive values (blue) generally align with negative phase velocities and large dorsal and ventral curvatures indicative of "pirouettes", while negative values (red) correspond to positive phase velocities and low curvatures, indicative of "forward runs". In the inset we show an example 10-minute-long centroid trajectory color-coded by the projection along \(\phi_{2}\). Negative projections occur during "runs", while positive values are found during abrupt reorientation events composed of sequences of reversals and turns. We thus obtain a slow reaction coordinate that captures the dynamics along a "run-and-pirouette" axis. The remaining eigenfunctions also reveal interpretable features of worm behavior, Fig. S5, albeit on a faster timescale. To achieve a principled, data-driven, coarse-graining of the slow dynamics, we search along \(\phi_{2}\) for a single threshold that maximizes the metastability of both resulting coarse-grained sets (see Methods) [6], Fig.
S6, and we identify the resulting two macrostates as "run" and "pirouette". In Fig. 4(b-right) we show that the complementary cumulative distribution of the resulting run lengths \(1-P(t_{\text{state}}<t)\) is roughly characterized by two time scales, fit by a sum of exponential functions and in excellent agreement with previous phenomenological observations [33]. In addition, these transition timescales are related to the timescale of relaxation to the steady state distribution as \(\hat{\Lambda}^{-1}=1/(\tau_{1}^{-1}+\tau_{2}^{-1})=3.313\,(2.985,3.709)\,\text{s}\) [31], which agrees with the relaxation times of the transition matrix within statistical accuracy, Fig. 4(a-right). In our analysis, "run-and-pirouette" kinetics emerge directly from worm-centric posture dynamics, without any positional information.

### "Run(s)-and-Pirouette(s)"

While dividing the dynamics along \(\phi_{2}\) identifies the longest-lived states, splitting the effective free energy landscape along its highest energy barrier, what if there are important additional states within each metastable set? We use the transition matrix to perform a sequential subdivision of the posture embedding, revealing finer-scale states [50] (see Methods), Fig. 4(c). At each step, the metastable state with the largest measure is subdivided along the first nontrivial eigenvector of the reversibilized transition matrix conditioned solely on the microstates within the metastable state. This yields a subdivision of the posture embedding that obeys the structure of the free-energy landscape; at each iteration, we subdivide the system along the largest energy barrier within the highest-measure basin. We note that our subdivision process proceeds _from the longest-lived states down_ rather than from the shortest-lived states up, where the latter is more common in behavior coarse-graining approaches [51, 52, 53]. In the foraging behavior of _C.
elegans_, beyond the initial division into runs and pirouettes (which we denote as "macroscopic" states), we further subdivide the dynamics into 7 "mesoscopic" interpretable states: 4 distinct run states and 3 subdivisions of the pirouette state, Fig. 4(c), S7. The run state essentially splits into two fast states and two slower states, which can be distinguished either by the wave length of the body, or by having a particular bias towards the dorsal or ventral sides: the dorsally-biased slow state is akin to a headcasting state [54], while the ventrally-biased state is akin to a "dwelling" state [55, 56, 57], with incoherent head and tail movements and no propagating wave [13]. On the other hand, the pirouette state neatly splits into dorsal turns, deep ventral \(\delta\)-turns, and reversals followed by shallow \(\Omega\)-turns. These mesoscopic states that decorate the worm's foraging behavioral landscape are short-lived, with a characteristic time scale of \(\langle\tau_{\text{dwell}}\rangle=1.65\,(1.63,1.68)\,\text{s}\approx 2\tau^{*}\), Fig. 4(c-right). The transition diagram between them, Fig. 4(c-right,inset), reveals the fine-scale organization of the worms' foraging strategy. Further subdivisions result in even shorter-lived states, which are increasingly challenging to interpret.

## V Exploring the role of the mesoscopic states

In the data analyzed here, worms were grown in a food-rich environment, but then placed on food-free agar plates and allowed to move without restrictions. Under these conditions, the worm's behavior has been qualitatively described as foraging [58, 59]. We apply our approach to better understand the role of the mesoscopic states in the worm's search for food. We use the Markov model to simulate _in silico_ worms that are forced to remain in a particular mesoscopic state and the posture-to-path framework to investigate the properties of the trajectories resulting from the posture dynamics in each of the states.
We can simulate trajectories that are much longer than those observed in the data, Fig. 4(c-right), allowing us to dissect how different states produce distinct large-scale tracks. In Fig. 5(a), we show simulated \(10\,\text{min}\) long trajectories for each of the 7 mesoscopic states. Notably, the difference in posture wavelengths exhibited by the two fast "run" states, Fig. S7(b), results in dramatically different trajectories, with the longer wavelength state (fast wide runs) resulting in overall straighter paths, and the shorter wavelength state (fast narrow runs) resulting in ventrally-biased curved trajectories with a diameter that is several times the body length and a period orders of magnitude longer than the body wave period, Fig. S8. Interestingly, the dorsally-biased slow state also results in loopy trajectories, but with a shorter diameter and faster recurrence time. In addition, the ventrally-biased "dwelling"-like slow state [55, 56, 57] with its frequent head retractions results in a denser sampling of a local patch. Finally, the three "pirouette" states result in a denser sampling of space and a reduced centroid displacement. We next interrogate the efficiency of each of the 7 mesoscopic states at encountering food uniformly distributed within a disc of radius \(r\) around the initial position, Fig. 5(b), a simple but informative condition. We find that the "pirouette" states as well as the slow "run" states are most efficient at finding food at shorter distances, while at larger distances the two fast "run" states perform best. Such a differential use of behaviors is also seen in nature. Upon encountering food, _C. elegans_, as well as many other species, engage in area restricted search, which is characterized by shorter paths and a high frequency of large angle turns [42; 58; 60; 61; 62; 63; 64; 65]. Conversely, upon removal from food, _C. 
elegans_ lowers its turning rate [66; 41] to engage in global search or long distance travel [62; 63; 64; 65; 58]. Remarkably, we find that instead of only using the most efficient behavioral state ("fast wide runs"), worms engage in a strategy that employs each mesoscopic state in a proportion that closely matches the relative efficiency of the different states at finding food uniformly distributed in a large patch (several body lengths), Fig. 5(c). This "probability matching" behavior has been observed across several species, including humans (see, e.g., [67; 68; 69; 70; 71; 72]), and emerges naturally in "multi-armed bandit" situations in which agents must decide among different actions that yield variable amounts of reward without knowing _a priori_ the relative reward of each action (see, e.g., [73]).

## Discussion

We combine maximally-predictive short posture sequences with a Markov chain model to bridge disparate scales in the foraging dynamics of the nematode worm _C. elegans_. Rather than seeking low-dimensional descriptions of the data directly (e.g. [74; 75; 76; 42; 77]), we instead first _expand_ in representation complexity: enlarging the variable of interest to include time in the form of posture _sequences_ and constructing a maximum entropy partition to capture as much predictive information as possible. This expansion both in time and number of microstates is similar in spirit to that currently found in large language models, though our conceptual approach is dramatically simpler. The maximally-predictive sequence space combines worm postures from roughly a quarter of the duration of a typical body wave, in agreement with previous work [22]. On longer timescales, the posture-based "run-and-pirouette" navigation strategy [78; 33] derived from the inferred Markov dynamics provides an accurate and principled coarse-graining of foraging behavior, disentangling motions that are confounded by centroid-derived measurements (see e.g. [79]).
This is particularly evident in our subdivision of the behavioral space. For example, we identify distinct "run" gaits that exhibit comparable centroid speeds, but are clearly distinguishable by the posture dynamics. Additionally, our top-down subdivision of behavior reflects the hierarchy of timescales in _C. elegans_ foraging behavior [54]. Our approach systematically identifies such a control hierarchy from behavioral recordings alone, connecting posture timescales to "run-and-pirouette" kinetics. It will be interesting to investigate how the mesoscopic states identified here are controlled by the nervous system of the worm, and recent advances in experimental techniques that permit simultaneous neural and behavioral imaging in _C. elegans_ provide an exciting path toward such discoveries [80; 81; 82; 83; 84]. The power of our modeling approach is in its simplicity; we bridge scales using a simple but effective Markov model, and this is only possible by recognizing and exploiting the mutual dependence between modeling and representation. Instead of directly modeling the posture time series (which can require higher-order and highly non-linear terms, see e.g. [20]), we search for maximally predictive states such that a simpler Markovian description can nevertheless accurately predict behavior. These emergent Markov dynamics offer a promising and powerful demonstration of quantitative connections across the hierarchy of movement behavior generally exhibited by all organisms [76; 85]. By finely partitioning the space of posture sequences, we encode continuous nonlinear dynamics through a Markov chain with a large number of states. 
This is analogous to building a hidden Markov model (HMM), but one in which the "hidden" states are actually observable (through time delays of our observations), and for which there is a one-to-one correspondence between "hidden" states and emitted symbols: each observation in the posture sequence \(\vec{a}(t)\) uniquely determines the state \(X_{K^{\star}}(t)\). While HMMs are commonly used in behavioral analysis (see e.g. [13; 86]), they are rarely built with so many states and with the goal of correctly predicting dynamics. In particular, most approaches employ a small number of discrete behavioral states, where the number of states is a hyperparameter of the model and the discretization is not unique. In contrast, we let the data _reveal_ the "hidden" states through time delays, and set the discretization so as to maximize predictive information. In this sense, the HMM we build is unique: by revealing the _hidden_ dynamics through time delays, the "hidden" states are uniquely determined by the observations, making the HMM unifilar [87]. In other words, the "hidden" states themselves have a very definite meaning in our approach: we effectively group together "pasts" that have equal predictability over the future up to an \(\epsilon\)-resolution (set by the number of partitions \(N\sim\epsilon^{-D_{\rm emb}}\), where \(D_{\rm emb}\) is the intrinsic embedding dimension of the dynamics), approximating the system's _causal states_[88]. This set of states together with the resulting Markov chain effectively constitutes an \(\epsilon\)-machine [89], the minimal maximally-predictive machine. Any other HMM in which hidden states are not _causal_ returns models that severely overestimate the complexity of the dynamics. In addition, even though we start with a large transition matrix, we can coarse-grain it by identifying which states commonly follow each other in time to generate stereotyped sequences. 
In this way, instead of imposing discrete states from the start (as is common with HMM approaches), we first identify a large number of predictive causal states and only then leverage the resulting Markovian dynamics to identify coarse-grained stereotyped behaviors. Our information theoretic framework also frees us from the constraint of linearity that is commonly imposed in graphical models applied to animal behavior (such as autoregressive hidden Markov models [90]). In particular, while the stereotyped states found through such models are encoded by linear dynamics, the states we identify can exhibit much more complex nonlinear dynamics, allowing us to capture longer time-scale structures in behavior. Are Markov models enough to capture the richness of animal behavior more universally? It is important to distinguish between two sources of non-Markovianity. The first one is general and is simply induced by the fact that time series data are typically only a partial observation of the full dynamical state [6, 7]. Projecting the full unobserved dynamics onto a subset of observable degrees of freedom inherently results in non-Markovian dynamics for the measured variables [91, 92, 2, 3]. Such an "under-embedding" might result in apparent memory when naively constructing behavioral states. The second source of non-Markovianity, which is less trivial and likely ubiquitous in behavior, derives from the fact that there may be "hidden" latent variables that modulate behavior over timescales comparable to the measurement time. In this case, the steady-state distribution itself is changing slowly over time, rendering the dynamics explicitly non-stationary [93, 5]. A relevant demonstration of non-stationarity is provided by the adaptive changes in pirouette rate seen in the behavior of _C. elegans_ upon removal from a food-rich environment [41, 42, 58, 62]. This adaptation is present in the data we analyze and is not captured by our Markov model, Fig. S9. 
To characterize such non-ergodic latent variables requires explicitly time-dependent Markov models, which we leave for future work. We note, however, that our coarse-graining can be easily extended to capture non-stationary dynamics through the discovery of \(\tau\)-dependent coherent sets that identify _moving_ regions of the state space that remain coherent within a time scale \(\tau\) [94, 95, 96, 97, 98, 99]. Particularly interesting future directions include the analysis of even longer dynamics in _C. elegans_ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112], where we expect to be able to extract longer-lived behavioral strategies, such as the minutes-long transitions between "roaming" and "dwelling" states in food-rich environments [55, 56, 57]. Our modeling approach can also be used as a means to obtain a deeper understanding of the effects of genetic, neural, or environmental perturbations on the multiple scales of _C. elegans_ behavior. Indeed, the inferred transition matrices are a powerful phenotype that encapsulates multiple scales of _C. elegans_ foraging behavior, and has the power to reveal how behavior is affected by a given perturbation. Of particular relevance for the study of long timescales in behavior would be to focus on mutations that impair neuromodulatory pathways and are thus likely to impact the spectrum of relaxation times of the inferred Markov chain. The effectiveness of our Markov model at capturing the nonlinear dynamics of _C. elegans_ body pose, combined with the ability to translate those spatiotemporal dynamics into movement, have allowed us to investigate how different behavioral states result in distinct ways of exploring the environment at much larger scales. Our analysis recovered the two main foraging modes exhibited by _C.
elegans_: one that combines different pirouette states and slow runs resulting in a local search, and another one that mostly leverages fast run states to search for food more globally [62, 63, 42, 58]. We also discovered that the relative use of different behavioral states closely follows the relative efficiency of each state in food discovery. In fact, instead of using the behavioral state that would maximize its chances of finding food in its environment, worms match their strategy with the relative efficiency of each state. Interestingly, such a strategy, termed probability matching [71], or Thompson sampling [103], is a well-studied heuristic solution for the multi-armed bandit problem, a game in which different actions have variable rewards that are _a priori_ unknown to the player, whose goal is to maximize total pay-out. Evidence for probability matching in decision making tasks has been previously demonstrated in experiments in animals and humans [104, 105], and is an active area of research in cognitive science of decision making [106]. While this strategy seems "irrational" in the context of maximizing reward in a fixed environment, that is not the condition in which worms have evolved: in ecologically-relevant situations the environment changes over time, rendering the distribution of rewards non-stationary and the subsequent sampling events correlated. Interestingly, reinforcement learning agents that have been evolved in a changing environment also develop probability matching strategies [107]. In addition, it has been shown that optimal Bayesian learners engage in probability matching when they expect sampling events to have temporal dependencies [108], as is the case in most ecologically relevant scenarios in which samples of the environment are not independent, but exhibit temporal correlations (at least as a result of the actions taken by the agent). 
This suggests that probability matching may reflect an exploration-exploitation tradeoff that robustly maximizes reward in an ever-changing environment. Our results indicate that _C. elegans_ may implement such a heuristic in its foraging strategy. If the worm is indeed probability matching, it may have a way of storing its estimate of the current probability of success for each strategy (which may be reflected in the dynamics itself). It will be fascinating to look for signatures of this in situations where we can experimentally adjust the pay-out probabilities. It may also be possible in a simple organism like _C. elegans_ to estimate the metabolic costs of utilizing each behavioral state [109]. The multitude of genetic tools [110], the ability to image neurons in behaving animals [80, 81, 82, 83, 84] and to quantify behavior using our methods may make _C. elegans_ an ideal system to look inside of an organism performing the dynamic loop of experiencing the world, making decisions based on those observations and its internal model of the world, and updating that internal model based on the outcomes of those decisions and their effect on the environment.

## Methods

**Software and data availability:** Code for reproducing our results is publicly available: [https://github.com/AntonioCCosta/markov_worm/](https://github.com/AntonioCCosta/markov_worm/). Data can be found in [111].

**_C. elegans_ foraging dataset:** We used a previously-analyzed dataset [20], in which N2-strain _C. elegans_ were imaged at \(f=32\,\mathrm{Hz}\) with a video tracking microscope on a food-free plate and downsampled to \(f=16\,\mathrm{Hz}\) to incorporate coiled postures [28]. Worms were grown at \(20^{\circ}\mathrm{C}\) under standard conditions [112]. Before imaging, worms were removed from bacteria-stream agar plates using a platinum worm pick, and rinsed from _E. coli_ by letting them swim for \(1\,\mathrm{min}\) in NGM buffer.
They were then transferred to an assay plate (\(9\,\mathrm{cm}\) Petri dish) that contained a copper ring (\(5.1\,\mathrm{cm}\) inner diameter) pressed into the agar surface, preventing the worm from reaching the side of the plate. Recording started approximately \(5\,\mathrm{min}\) after the transfer and lasted for \(2100\,\mathrm{s}\), for a total of \(T=33600\,\mathrm{frames}\). Each frame is converted into a 5-dimensional "eigenworm" representation \(\vec{a}(t)\) by projecting the local tangent angles along the worm's centerline onto an "eigenworm" basis [20], Fig. 1(a).

**Maximally predictive states:** Given the measurement time series, \(\vec{a}(t)\), with \(t\in\{\delta t,\ldots,T\delta t\}\) and \(\vec{a}\in\mathbb{R}^{5}\), we build a trajectory matrix by stacking \(K\) time-shifted copies of \(\vec{a}\), yielding a \((T-K)\times Kd\) matrix \(X_{K}\). For each \(K\), we partition the candidate state space and estimate the entropy rate of the associated Markov chain (see below). We choose \(K^{*}\) such that \(\partial_{K}h(K^{*})\sim 0\), which defines \(X_{K^{*}}\) as the maximally predictive states [6], Fig. S1(a).

**State space partitioning:** We partition the state space into \(N\) Voronoi cells, \(s_{i},i\in\{1,\ldots,N\}\), through k-means clustering with a k-means++ initialization using scikit-learn [113].

**Transition matrix estimation:** We build a finite-dimensional approximation of the Perron-Frobenius operator using an Ulam-Galerkin discretization [46]. In practice, given \(T\) observations, a set of \(N\) partitions, and a transition time \(\tau\), we compute \[C_{ij}(\tau)=\sum_{t=0}^{T-\tau}\zeta_{i}(X_{K^{*}}(t))\zeta_{j}(X_{K^{*}}(t+\tau)),\] where \(\zeta_{i}(x)\) are the Ulam basis functions, which are characteristic functions \[\zeta_{i}(x)=\begin{cases}1,&\text{for }x\in s_{i}\\ 0,&\text{otherwise}\end{cases}\] set by the k-means clustering.
The maximum likelihood estimator of the transition matrix is obtained by simply row-normalizing the count matrix, \[P_{ij}(\tau)=\frac{C_{ij}(\tau)}{\sum_{j}C_{ij}(\tau)},\] which yields an approximation of the Perron-Frobenius operator.

**Invariant density estimation:** Given a transition matrix \(P\), the invariant density is obtained through the left eigenvector of the non-degenerate eigenvalue \(1\) of \(P\), \(\pi P=\pi\): \(\pi_{i}\) is the probability of finding the system in a partition \(s_{i}\).

**Short-time entropy rate estimation:** Given a number of partitions \(N\) and a sampling time scale \(\tau=\delta t\), we estimate the Markov transition matrix \(P\) and the corresponding invariant density \(\pi\) as detailed above and compute the short-time entropy rate as \[h=-\frac{1}{\delta t}\sum_{ij}\pi_{i}P_{ij}\log P_{ij} \tag{3}\]

**Two-dimensional UMAP embedding:** We use the UMAP embedding [29] as a tool to visualize the maximally predictive states of _C. elegans_ posture dynamics. In a nutshell, the UMAP algorithm searches for a low-dimensional representation of the data that preserves its topological structure. We use a publicly available implementation of the algorithm found in [https://github.com/lmcinnes/umap](https://github.com/lmcinnes/umap), within which we chose the Chebyshev distance metric to compute distances in the high-dimensional space, n_neighbors=50 nearest neighbors and min_dist=0.05 as the minimum distance.

**Matrix diagonalization:** The high dimensionality and the sparsity of the transition matrices for large \(N\) result in numerical errors when using a naive estimator for the full spectrum of eigenvalues. In addition, since we are interested in the longest-lived dynamics, we focus on finding only the \(n_{\text{modes}}\) largest-magnitude real eigenvalues using the ARPACK [114] algorithm.
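The estimators above (count matrix, row normalization, invariant density, and the entropy rate of Eq. (3)) can be sketched in a few lines of numpy. The function names are ours, and this is a dense-matrix illustration, whereas the actual transition matrices are large and sparse:

```python
import numpy as np

def transition_matrix(labels, N, tau):
    """Maximum-likelihood transition matrix from a symbolic sequence:
    count transitions i -> j separated by tau frames, then row-normalize."""
    C = np.zeros((N, N))
    np.add.at(C, (labels[:-tau], labels[tau:]), 1.0)
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

def invariant_density(P):
    """Left eigenvector of P with eigenvalue 1, normalized to a probability."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(evals.real)])
    return pi / pi.sum()

def entropy_rate(P, pi, dt):
    """Short-time entropy rate, Eq. (3): h = -(1/dt) sum_ij pi_i P_ij log P_ij."""
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)  # 0 log 0 := 0
    return -np.sum(pi[:, None] * P * logP) / dt
```

For example, a strictly alternating sequence 0, 1, 0, 1, ... yields \(P=\begin{psmallmatrix}0&1\\1&0\end{psmallmatrix}\), \(\pi=(1/2,1/2)\), and \(h=0\), since the dynamics are fully predictable.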
**Choice of transition time \(\tau^{*}\):** We choose \(\tau^{*}\) such that the resulting Markovian dynamics approximate the long-term behavior of the system accurately, as in [6]. In practice, we find the shortest transition time scale after which the inferred implied relaxation times reach a plateau, Fig. S1(b,c). For \(\tau\) too short, the approximation of the operator yields a transition matrix that is nearly the identity (due to the finite size of the partitions and the too-short transition time), which results in degenerate eigenvalues close to \(\lambda\sim 1\): an artifact of the discretization and not reflective of the underlying dynamics. For \(\tau\) too large, the transition probabilities become indistinguishable from noisy estimates of the invariant density, which results in a single surviving eigenvalue \(\lambda_{1}=1\) while the remaining eigenvalues converge to a noise floor resulting from a finite sampling of the invariant density. Between such regimes, we find a region with the largest time scale separation, which also corresponds to the regime for which the longest relaxation times, Eq. (2), are robust to the choice of \(\tau\), Fig. S1(b,c). For further discussion see [6].

**_C. elegans_ posture simulations:** At each iteration, we sample from the conditional distribution given by the inferred Markov chain \(P(s_{j}(t+\tau^{*})|s_{i}(t))\) to generate a symbolic sequence sampled on a timescale \(\tau^{*}\). We then randomly sample a state space point \(X_{K^{*}}\) within the partition \(s_{i}\), and unfold it to obtain a sequence of postures \(\vec{a}_{t:t+K^{*}}\) at each \(\tau^{*}\).
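Sampling the symbolic sequence from the inferred chain is straightforward; a minimal sketch (the function name is ours):

```python
import numpy as np

def simulate_symbols(P, s0, n_steps, rng=None):
    """Generate a symbolic microstate sequence by repeatedly sampling the
    next state from the transition probabilities P[s_current]."""
    if rng is None:
        rng = np.random.default_rng()
    seq = np.empty(n_steps, dtype=int)
    seq[0] = s0
    for t in range(1, n_steps):
        seq[t] = rng.choice(P.shape[0], p=P[seq[t - 1]])
    return seq
```

Each sampled microstate is then "unfolded" by drawing a stored posture sequence \(\vec{a}_{t:t+K^{*}}\) from the corresponding partition.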
We can thus generate artificial posture time series with the same duration as the experimental time series (35 minutes), but with a missing frame every \(\tau^{*}\) frames (the gap between \(K^{*}\) and \(\tau^{*}\)), which we interpolate across using a cubic spline with scipy's interpolate package [115], and smooth with a cubic polynomial and a window size of 11 frames using the signal.savgol_filter function from SciPy [115]. We then take the simulated \(\vec{a}(t)\) time series and transform it back to the tangent angles at each body segment \(\theta_{i}(t)\) using the "eigenworms" [20].

**Estimating the rate of reversals, dorsal and ventral turn events:** Reversal events were identified as segments in which the absolute value of the worms' overall curvature \(\gamma(t)=\sum_{i}\theta_{i}(t)\) was \(|\gamma|<3\times 10^{-4}\,\mathrm{rad}\) and the body wave phase velocity \(\omega(t)=-\frac{1}{2\pi}\frac{d}{dt}\left[\tan^{-1}(a_{2}(t)/a_{1}(t))\right]\) [20] was \(\omega<-0.2\,\mathrm{cycles\,s^{-1}}\) for at least 0.5 s. Ventral and dorsal turns were identified as segments where the overall body curvature was either \(\gamma<-3.5\times 10^{-4}\,\mathrm{rad}\) or \(\gamma>3.5\times 10^{-4}\,\mathrm{rad}\), respectively, for at least 0.5 s.

**Resistive force theory simulations:** We recover the rigid body motion from the tangent angle time series using linear resistive force theory, as in [34].
We approximate the forces acting independently on each body segment as \[\tilde{\mathbf{F}}_{i}(t)=\alpha_{t}\tilde{v}_{i}^{t}\hat{t}+\alpha_{n}\tilde{v}_{i}^{n}\hat{n},\] where \(\tilde{v}_{i}^{t,n}\) are the tangent and normal components of the velocity at each segment \(i\), which can be written in terms of the velocity and displacements measured after subtracting the overall rigid body motion, \[\tilde{\mathbf{v}}_{i}(t)=\mathbf{v}_{i}(t)+\tilde{\mathbf{V}}(t)+\tilde{\mathbf{\Omega}}(t)\times\Delta\mathbf{x}_{i}(t).\] Then, by imposing a zero net-force and net-torque condition at each frame, \[\sum_{i}\tilde{\mathbf{F}}_{i} =0\] \[\sum_{i}\tilde{\mathbf{F}}_{i}\times\Delta\mathbf{x}_{i} =0,\] we obtain a system of linear equations that for a given \(\alpha=\alpha_{n}/\alpha_{t}\) can be solved for the components of the worm's velocity \(\tilde{\mathbf{V}}(t)\) and angular velocity \(\tilde{\mathbf{\Omega}}(t)\) [34]. From these we can integrate the path taken by the worm's body to obtain a reconstructed \(\tilde{\mathbf{x}}_{\text{CM}}(t)\). We optimize the single free parameter \(\alpha\) by comparing the reconstructed trajectories with the real worm trajectories \(\mathbf{x}_{\text{CM}}^{\text{data}}\), Fig. S4(a). In particular, we minimize the maximum distance between 100 s trajectories randomly sampled from the dataset, \(L(\alpha)=\max(\|\tilde{\mathbf{x}}_{\text{CM}}(t)-\mathbf{x}_{\text{CM}}^{\text{data}}(t)\|_{2})\), \(t\in[t_{0},t_{0}+100\,\mathrm{s}]\). To minimize \(L(\alpha)\) we use the Nelder-Mead algorithm through the scipy.optimize library of SciPy [115]. The software to translate posture into path can be found at [https://github.com/AntonioCCosta/markov_worm/](https://github.com/AntonioCCosta/markov_worm/), and closely follows the implementation of [34].

**Metastable states:** Metastable states correspond to collections of short-time movements that typically follow each other in time to give rise to stereotyped sequences.
Leveraging our previous work [6], we search for metastable states along the slowest mode of the reversibilized dynamics [47]. As shown in [49], the second eigenvector \(\phi_{2}\) of a time-reversibilized transition matrix \(P_{r}\) provides an _optimal_ subdivision of the state space into almost invariant sets. In practice, we estimate \(P_{r}\) as \[P_{r}(\tau)=\frac{P(\tau)+P(-\tau)}{2}, \tag{4}\] where \[P_{ij}(-\tau)=\frac{\pi_{j}P_{ji}(\tau)}{\pi_{i}}\] is the stochastic matrix governing the time-reversal of the Markov chain. The first non-trivial (\(\lambda<1\)) right eigenvector of \(P_{r}\), \(\phi_{2}\), allows us to define macrostates as collections of microstates \(s_{i}\), \[S^{+}(\phi_{2}^{c})\coloneqq\bigcup_{i:\phi_{2}\geq\phi_{2}^{c}}s_{i},\,S^{-}(\phi_{2}^{c})\coloneqq\bigcup_{i:\phi_{2}\leq\phi_{2}^{c}}s_{i},\] where \(\phi_{2}^{c}\) is a threshold that is chosen to maximize the metastability of a set. We measure the metastability of each set \(S\) by estimating how much of the probability density remains in \(S\) after a time scale \(\tau\), \[\chi_{\pi,\tau}(S)=\frac{\sum_{i,j\in S}\pi_{i}P_{ij}(\tau)}{\sum_{i\in S}\pi_{i}}.\] To estimate the overall measure of metastability across both sets \(S^{+}\) and \(S^{-}\), we define \[\chi(\phi_{2}^{c})=\min\left\{\chi_{\pi,\tau^{*}}(S^{+}),\chi_{\pi,\tau^{*}}(S^{-})\right\}, \tag{5}\] which we maximize with respect to \(\phi_{2}^{c}\). Metastable states are then defined with respect to the sign of \(\phi_{2}-\phi_{2}^{c}\). See [6] for further details and applications to known dynamical systems. In Fig. S6 we show the overall coherence measure as a function of \(\phi_{2}^{c}\) for the worm data.

**Operator-based state space subdivision:** We leverage the notion of relatively coherent sets [50] to subdivide the state space.
However, instead of subdividing both metastable states at each iteration \(k\), we identify the state with the largest measure \(S_{k}^{*}\) and build a new transition matrix only with partitions belonging to that state, \[P_{S_{k}^{*}}(\tau)=p(s_{j}(t+\tau)|s_{i}(t)),\,i,j\in S_{k}^{*}.\] From \(P_{S_{k}^{*}}\) we proceed as before: we compute the stationary distribution of \(S_{k}^{*}\) through the first left eigenvector of \(P_{S_{k}^{*}}\), \(\pi_{i}^{*}\), build the corresponding reversibilized transition matrix \(P_{r,S_{k}^{*}}\) and identify relatively metastable states through its first non-trivial eigenvector by maximizing Eq. (5), where \(\pi_{i}\) and \(P_{ij}(\tau)\) are replaced by their relative counterparts \(\pi_{i}^{*}\) and \(P_{S_{k}^{*}}\). **Simulating posture-to-path within mesoscopic behavioral states:** To generate a centroid trajectory within a given state, we construct a transition matrix among the partitions corresponding to each of the mesoscopic states identified in Fig. 4(c). We then proceed as in Figs. 2 and 3 to generate both posture time series and centroid trajectories. We first generate a symbolic sequence by sampling states according to the corresponding transition probability matrix \(\hat{s}_{j}(t+\tau)\sim P_{S}(s_{j}|\hat{s}_{i}(t))\), \(i,j\in S\). From the symbolic sequence, we then sample a time series segment \(\vec{a}_{t:t+K^{*}}\) within each sampled partition, and use resistive force theory to translate the resulting \(\theta(t)\) time series into locomotion. In this way, we can simulate posture and centroid trajectories for _in silico_ worms that are forced to remain within a particular mesoscopic behavioral state for an arbitrary amount of time.
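The first step of this simulation, sampling a symbolic sequence restricted to one mesoscopic state, can be sketched as follows; the transition matrix and partition labels below are illustrative placeholders, not values from the analysis.

```python
import numpy as np

def sample_symbols(P_S, states, s0, n_steps, rng):
    """Sample a symbolic sequence from a transition matrix restricted to
    the partitions of one mesoscopic state. In the full pipeline each
    sampled symbol is then expanded into a posture segment and passed
    through resistive force theory to obtain a centroid trajectory."""
    index = {s: k for k, s in enumerate(states)}
    seq = [s0]
    for _ in range(n_steps):
        row = P_S[index[seq[-1]]]
        seq.append(states[rng.choice(len(states), p=row)])
    return seq

# Illustrative 2-partition state with hypothetical partition labels.
rng = np.random.default_rng(0)
P_S = np.array([[0.9, 0.1],
                [0.2, 0.8]])
seq = sample_symbols(P_S, states=[5, 7], s0=5, n_steps=200, rng=rng)
```

Because the chain is restricted to the partitions of a single state, the simulated worm never leaves that mesoscopic behavior, however long the sequence runs.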
**Probability of finding food as a function of distance and behavioral state:** We estimate the likelihood of finding food in a given radius \(r\) by estimating the fraction of the area within a disc of radius \(r\) covered by the worm's body during \(100\,\mathrm{s}\) trajectories, taking the worm's width to be \(5\%\) of its length. We then normalize these area fractions by the total across states, obtaining the \(p(\mathrm{food}|r,\mathrm{state})\) shown in Fig. 5(b). ## Acknowledgements We thank Massimo Vergassola and Federica Ferretti for comments. This work was supported by OIST Graduate University (TA, GJS), a program grant from the Netherlands Organization for Scientific Research (AC, GJS), by the Herchel Smith Fund (DJ), and by Vrije Universiteit Amsterdam (AC, GJS). GJS acknowledges useful (in-person!) discussions at the Aspen Center for Physics, which is supported by National Science Foundation Grant PHY-1607611.
2306.10972
Understanding the Challenges of Deploying Live-Traceability Solutions
Software traceability is the process of establishing and maintaining relationships between artifacts in a software system. This process is crucial to many engineering processes, particularly for safety critical projects; however, it is labor-intensive and error-prone. Automated traceability has been a long awaited tool for project managers of these systems, and due to the semantic similarities between linked artifacts, NLP techniques, such as transformer models, may be leveraged to accomplish this task. SAFA.ai is a startup focusing on fine-tuning project-specific models that deliver automated traceability in a near real-time environment. The following paper describes the challenges that characterize commercializing software traceability and highlights possible future directions.
Alberto D. Rodriguez, Katherine R. Dearstyne, Jane Cleland-Huang
2023-06-19T14:34:16Z
http://arxiv.org/abs/2306.10972v1
# Understanding the Challenges of Deploying Live-Traceability Solutions ###### Abstract Software traceability is the process of establishing and maintaining relationships between artifacts in a software system. This process is crucial to many engineering processes, particularly for safety critical projects; however, it is labor-intensive and error-prone. Automated traceability has been a long awaited tool for project managers of these systems, and due to the semantic similarities between linked artifacts, NLP techniques, such as transformer models, may be leveraged to accomplish this task. SAFA.ai is a startup focusing on fine-tuning project-specific models that deliver automated traceability in a near real-time environment. The following paper describes the challenges that characterize commercializing software traceability and highlights possible future directions. ## 1 Introduction Software traceability is a critical task for many software systems and entails linking high-level software artifacts (e.g. requirements or safety-goals) to their fulfillment, often in the form of design definitions, source code or test-cases (Gotel and Finkelstein, 1994; Cleland-Huang et al., 2014; Torkar et al., 2012). By creating accurate trace links in a project, engineers can analyze how the introduction of a change in the system might impact existing components (Hamilton and Beeby, 1991). These links also aid in verifying the completeness of a project and provide clarity into the rationale behind an artifact's inception (Ramesh and Edwards, 1992). Specifically, trace links might demonstrate that all requirements have been fulfilled or highlight which requirements are addressed by specific design decisions. In many safety-critical systems, traceability is required by governing bodies to assure their safety. Unfortunately, creating and maintaining trace links is an effort-intensive, time-consuming, error-prone task which may impede the development process (Hayes et al., 2006).
As a result, many projects are left with incomplete and inaccurate trace links (Mahmoud et al., 2012). Since most linked artifacts share semantic similarities, natural language processing techniques may be leveraged to support engineers in identifying missing trace links, reducing both the time and cost required for the task (Antoniol et al., 2002; Dekhtyar et al., 2007; Guo et al., 2017; Lin et al., 2021). Inspired by this goal, SAFA ("Software Artifact Forest Analysis") (Rodriguez et al., 2022) is an emerging traceability tool based on recent research demonstrating the success of BERT and other bi-directional transformers at predicting trace links (Lin et al., 2021). Despite the promising results of these models in the research setting, we have encountered challenges in deploying SAFA on commercial projects. These challenges have included the lack of data availability, the necessity of domain-specific knowledge, the poor quality of many datasets, and expectations of immediate results from engineers. Furthermore, many safety-critical domains require near-perfect recall when using NLP techniques to automatically generate links, which cannot be consistently achieved by the current models. Therefore, for the focus of this paper, we explore each of these problems in depth within the area of traceability and highlight insights into future directions. Although there remain challenges that must still be overcome, SAFA shows promise in bringing the vision of "ubiquitous traceability" to fulfillment at last [10].

Figure 1: Example sub-tree from the SAFA platform, linking high-level requirements to low-level requirements in the CM1 dataset. The solid blue line is an **established** relationship as determined by the project engineers. The yellow lines are relationships **predicted** by SAFA, where the dotted line represents a prediction that has not yet been manually approved while the solid line is an approved link.
In this vision, traceability will occur automatically alongside the engineering processes, incurring little cost and allowing engineers to focus on those tasks that are most significant to them. By integrating the latest advances in NLP into the platform, SAFA can reduce the burden of traceability and accelerate engineering processes. ## 2 NLP for Automated Traceability There has long been a need for developer tools focusing on the traceability of software artifacts [1], and this need has only grown as these systems have become more complex, distributed, and collaborative. Previous approaches to automated traceability included vector- and topic-based information retrieval techniques such as the vector-space model (VSM), latent semantic indexing (LSI), and latent Dirichlet allocation (LDA) [12, 13]. However, these approaches have many shortcomings. For example, VSM fails to correctly link artifacts that relate to one another through synonyms rather than identical terms, referred to as the _term-mismatch problem_ [14]. Although topic-based techniques can overcome instances of the term-mismatch problem by translating artifacts into their latent space, this translation loses some of the necessary information required for tracing, resulting in VSM generally outperforming these approaches [1, 13]. Interestingly, VSM and topic-based techniques were shown to bring orthogonal information about the trace links, as combining their similarity scores helped to mitigate their individual problems [1]. Nevertheless, performance still fell short of the accuracy needed for commercial applications, and it became evident that a model would have to understand synonyms and the relationships between the words in a sentence, and be tuned to best understand the specific vocabulary of a project.
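The term-mismatch problem can be seen in a minimal VSM sketch. The tf-idf weighting below is a simplified scheme for illustration (not the exact weighting of any cited tool), and the requirement and artifact texts are invented examples.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Minimal VSM: term frequency weighted by a smoothed idf (a
    simplified scheme for illustration, not scikit-learn's exact Tf-Idf)."""
    n = len(docs)
    toks = [d.lower().split() for d in docs]
    df = Counter(t for d in toks for t in set(d))
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in toks]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented example artifacts: the second shares terms with the requirement,
# the third expresses the same idea purely through synonyms.
req = "the system shall log every failed login attempt"
art_shared = "record failed login attempt to the audit log"
art_synonym = "persist unsuccessful authentication events"

vecs = tfidf_vectors([req, art_shared, art_synonym])
```

Here `cosine(vecs[0], vecs[1])` is well above zero, while `cosine(vecs[0], vecs[2])` is exactly zero despite the semantic match: the term-mismatch problem in miniature.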
Classical statistical methods were not equipped to overcome these challenges; however, deep learning models, like word2vec (Mikolov et al., 2013), re-invigorated the community by showing that models could create contextualized embeddings for sentences, hinting that it could be possible to capture all this information. The advances in deep learning caused researchers to re-think what was possible for software traceability. \begin{table} \begin{tabular}{|c|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Name (domain)** & **Description** & **Candidates** & **True Links** \\ \hline CM1 (Embedded System) & Requirements for a data processing unit for the then NASA Metrics Data Program. Contains high (22) and low-level requirements (53) and prepared by researchers at the University of Kentucky [11]. & 1,166 & 45 \\ \hline Medical Infusion Pump (Healthcare) & Software system implementing a medical infusion pump extracted from [13]. Contains trace links between system components (21) and regulatory requirements (126). & 2,778 & 132 \\ \hline iTrust (Healthcare) & An electronic health record (EHR) system created as part of a course at North Carolina State University [12]. Contains requirements (131) and JSP code modules [22]. & 29,606 & 534 \\ \hline Dronology (UAV) & A system for managing the navigation of UAVs and their communication to the ground control station. Contains requirements (55), designs (99), and java code (458). Prepared by researchers at the University of Notre Dame [10]. & 5,445 & 58 \\ D-NL D-PL & Subset containing only designs and java code. & 45,342 & 232 \\ \hline \end{tabular} \end{table} Table 1: Descriptions of the datasets evaluated in the study. Candidate links represent all the potential combinations between the source and target artifacts. True links represent the number of those combinations that are positively linked; the remaining ones are considered unlinked. All datasets were extracted from coest.org.
First, the use of RNNs (recurrent neural networks) and Bi-GRUs (Bidirectional Gated Recurrent Units) was explored and shown to outperform previous baselines like VSM or LSI (Guo et al., 2017). As exciting as these results were, these models were data-hungry, requiring large amounts of training data to outperform the baselines. Finally, the introduction of transformer-based models, such as Google's BERT model and OpenAI's GPT models, resulted in an additional performance leap, showing the potential for breaking through the previous glass ceiling (Feng et al., 2020; Lin et al., 2021, 2022). One particular strength of these models was their ability to be fine-tuned to specific projects and domains, finally enabling achievement of highly accurate results (Lin et al., 2021, 2022). This is the focus of SAFA: fine-tuning state-of-the-art traceability models to create a real-time project-management environment with live-traceability (Rodriguez et al., 2022). ## 3 Current Performance Demonstration In order to demonstrate the current performance of NLP techniques across traceability datasets, we utilize 5 different datasets, spanning four domains as shown in Table 1. To these datasets, we apply VSM (Salton et al., 1975), an un-pretrained bert-base-uncased model (Devlin et al., 2019), and one of two pre-trained BERT models (Lin et al., 2021, 2022) depending on whether the dataset traces between natural language artifacts or between natural language artifacts and source code. A detailed description of each model can be found in Table 2. All candidate trace links in each dataset are randomly split into three parts. 35% of the data was used for training (train), 10% was used for validation (e.g. performing early stopping), and 55% was used for the final evaluation. We run each dataset-model combination across three different random seeds to obtain three unique combinations of the data splits.
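The random split above can be sketched as follows; the 35/10/55 proportions come from the text, while the splitting code itself is our illustration, not the paper's implementation.

```python
import random

def split_candidates(links, seed):
    """Randomly split candidate links 35/10/55 into train / validation /
    evaluation sets, mirroring the protocol described above (a sketch)."""
    rng = random.Random(seed)
    shuffled = links[:]          # avoid mutating the caller's list
    rng.shuffle(shuffled)
    n_train = round(0.35 * len(shuffled))
    n_val = round(0.10 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, evaluation = split_candidates(list(range(1000)), seed=0)
```

Seeding the generator makes each of the three splits reproducible, so the same random-seed experiment can be re-run per dataset-model combination.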
We trained all transformer models for 20 epochs with a batch size of 4, performing gradient accumulation for 16 steps. All the models produce similarity scores between source-target artifact combinations ranging from 0 to 1. We chose to train with less than 50% of the data to simulate projects that need to produce trace links during the development process. To evaluate model performance, we calculate the Mean Average Precision (MAP) and max F2 scores as described in Lin et al. (2022). These metrics are commonly used in the traceability community and were selected because MAP favors the precision of the model while F2 favors recall. All results are displayed in Table 3, and an example of the pre-trained model predictions in the SAFA platform is shown in Figure 1. \begin{table} \begin{tabular}{|c|c|p{284.5pt}|} \hline **Name** & **Model Id** & **Description** \\ \hline nl-bert & _ANONYMOUS/nlbert_ & Bert-base-uncased model pre-trained on tracing commit messages to related issues. Traces between two sets of natural language artifacts (Lin et al., 2021). \\ \hline pl-bert & _ANONYMOUS/plbert_ & RoBERTa-base model pre-trained on docstring to code snippets and git commit to commit code. Traces between natural language and programming language artifacts. \\ \hline bert-base-uncased & _bert-base-uncased_ & The original BERT model released by Google in 2019 (Devlin et al., 2019). \\ \hline VSM & N/A & Uses the scikit-learn Tf-Idf vectorizer and related utilities for calculating similarity scores (Pedregosa et al., 2011). \\ \hline \end{tabular} \end{table} Table 2: Description of the deep learning and classical models used in the evaluation. Model Id is the name of the model in the HuggingFace model repository (https://huggingface.co). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Dataset** & **Model** & **MAP** & **F2** & **Train Time** \\ \hline CM1 & VSM & 71.4 & 46.4 & **\textless{1s}** \\ CM1 & BERT-BASE & 60.8 & 45.8 & 12.1m \\ CM1 & NL-BERT & **72.3** & **57.2** & 12.2m \\ \hline MIP & VSM & **100** & 38.9 & **\textless{1s}** \\ MIP & BERT-BASE & **100** & **100** & 28m \\ MIP & NL-BERT & **100** & **100** & 27.7m \\ \hline D-NL & VSM & 78.0 & 58.7 & **\textless{1s}** \\ D-NL & BERT-BASE & 68.4 & 51.8 & 55.5m \\ D-NL & NL-BERT & **86.5** & **64.9** & 55.9m \\ \hline D-PL & VSM & 21.6 & 14.4 & **\textless{1s}** \\ D-PL & BERT-BASE & 39.4 & 39.2 & 8h \\ D-PL & PL-BERT & **51.6** & **46.7** & 8.5h \\ \hline iTrust & VSM & 28.4 & 24.7 & **\textless{1s}** \\ iTrust & BERT-BASE & 72.1 & 65.6 & 4.5h \\ iTrust & PL-BERT & **78.7** & **69.3** & 4.7h \\ \hline \end{tabular} \end{table} Table 3: The average performance across three random seeds for each model on all datasets. These results are calculated on the evaluation dataset. ## 4 Problems Encountered ### Data-Availability As with many research areas within software engineering, automated traceability suffers from the lack of available data to train models.
The ideal dataset would contain a complete software project across multiple layers of artifacts with high quality trace links, but few such datasets are open source. This problem is only intensified when focusing on safety-critical systems, where accurate traceability is often required by regulatory bodies. To protect proprietary data, many potential industry partners are only willing to provide example projects under the condition that the data remains on their premises, requiring that all models be trained using the companies' own computing resources. This unfortunately limits the resources which can be dedicated to model training and predicting, inhibiting a traceability tool from being able to guarantee real-time results as described in Section 4.3. Furthermore, many industry projects in non-safety-critical domains do not contain any trace links at all due to the additional time required to trace artifacts throughout the development process. However, these companies often desire traceability after project completion to validate that all requirements have been met. This presented a challenge, as fine-tuning models on example links from the target project is important in allowing the models to reach satisfactory performance levels (Lin et al., 2021). Previous research has achieved some degree of success by performing transfer learning using commits linked to issues in git repositories, allowing the model to better learn the task of traceability (Lin et al., 2022). However, the performance of such models in a few- or zero-shot task is highly variable and cannot reach the level of performance of the fine-tuned models. One possible approach to increase the available links for fine-tuning is to use VSM to make predictions over a subset of the data. Since VSM is able to obtain competitive MAP scores on many natural language datasets, its predictions would likely capture a significant number of true trace links.
Afterwards, an engineer could manually review predictions with the highest similarity scores, and the resulting trace links could be fed to the model as training data, hopefully boosting model performance on the remaining links. We plan to explore this approach with a pilot customer in the upcoming months. Recently, the NLP community has seen great improvements in few-shot learning problems, as demonstrated by GPT3 (Brown et al., 2020). There have also been some encouraging examples of few-shot learning within requirements engineering (Alhoshan et al., 2023). In the future, we are eager to see how these recent advancements might improve trace link predictions in projects where there is insufficient training data. ### Domain-Specific Knowledge Since requirements and other software artifacts tend to include highly technical, domain-specific jargon, models pre-trained on only a general corpus of text are often unable to perform well on traceability tasks (Lin et al., 2021). For this reason, domain-specific pre-training and/or transfer-learning are especially important to these models. Due to the similarity of GitHub repositories to the task of tracing, domain-specific projects make particularly good transfer-learning material. This motivated us to explore open-source repositories within one of the domains most in need of traceability, namely robotics. Unfortunately, our search turned up only 8,790 public repositories on the topic of 'robotics'. This contrasted starkly with Keras, one of the datasets on which the model was shown to perform best (Lin et al., 2022). Keras included tags for 'data-science' and 'machine-learning', which each contain over 30K search results. This suggests that supplemental training strategies and data may be required for domains that are not as well-represented by open-source public repositories.
To better highlight the problem of domain-specific vocabulary, we provide an examination of a trace link (Figure 2) from the data slice of CM1 presented in the Section 1. The high level requirement details that the _DPU-TMALI_, a telescope module access library, should utilize the status register, _SCM_DCI_SR_, to decode errors and place them on a queue for another component, _DPU-CCM_. The lower level requirement describes how the _DPU-CCM_ module checks its queue for errors and how those errors are forwarded to the control unit, _DPU-SCUI_, before being sent to the ground station. The explanation was generated by reading the specification for the requirements (noa, 2003) and took about one hour for a knowledgeable traceability researcher to sufficiently understand. While constructing or vetting trace links would likely take domain-experts far less time, and be simpler in less-complex projects, this example highlights the complexity of reviewing a single candidate trace link. In future work, we plan on exploring how to make this process more efficient by leveraging different trace link explanation techniques based on either knowledge graphs (Liu et al., 2020), Grad-CAM equivalents (Gorski et al., 2020), or through interactive visualizations of the model's attention (Vig, 2022). ### Training and Prediction Time As mentioned earlier, many companies require that models are trained on the companies' own resources so as to ensure that the data is kept private. This limits the amount of computing resources which might be dedicated to the task, thus emphasizing the importance of efficient training and prediction times for a model. Previously, Google's bert-base-uncased showed the most promise for the traceability task (Lin et al., 2022), but a model of this size can take up to days to train on large industrial-sized datasets. For example, it took an average of 8 hours to train a bert-base model with 4 NVIDIA T4 GPUs on the largest dataset (cf. 
Section 3 for more details). Although much of the pre-training and transfer-learning can be completed prior to handing off a model to a company, the fine-tuning and prediction times can still be quite lengthy, and many companies expect immediate results. Interestingly, VSM, which scales linearly with an increasing number of trace links, is able to achieve high MAP scores for many natural language datasets. Due to its efficiency, there may be benefits to using VSM in domains where high accuracy is less critical and fast predictions are prioritized. In addition, using knowledge distillation or smaller transformer architectures has been shown to reduce required computational resources while still providing quality results (Tay et al., 2022). We are actively exploring how these can be applied to traceability. ### Model Performance Although models have been shown to reach high F2 and MAP scores in some instances, the results are highly variable across datasets. For example, within our experiment results in Table 3, there is a difference of 48.4% MAP between the lowest (D-PL) and highest (MIP) performing datasets. This has several likely root causes, including the lack of quality trace links and the domain-specific jargon, as discussed in Sections 4.1-4.2. Additionally, there is a variable level of quality across datasets, and automated traceability can be a difficult, if not impossible, task on lower quality datasets. For example, requirements that are ambiguous or source code that contains poorly named variables and methods can cause a model to fail. Furthermore, during fine-tuning of the model, all artifacts which are not explicitly linked in the project are assumed to be negative links (not traced). In reality, however, these may be true, yet missing, links, and a large number of mis-labeled negative labels may prohibit the model from being able to learn correct patterns in the data.

Figure 2: Explanation of trace link from CM1 slice.
We can clearly see the effects of mis-classified negative labels on the results of CM1. Across random splits of the data, CM1 has the most significant variance with regard to the pre-trained model performance, ranging from a MAP as low as 63% all the way up to 82%. It also under-performs in comparison to all other datasets aside from D-PL. Upon further inspection, we found 26 artifacts in the dataset which had no ground truth links, hereafter referred to as orphan artifacts. Since the expectation of the CM1 project is for all higher-level requirements to be fulfilled by lower-level requirements, orphan artifacts are likely a sign of forgotten links. When we removed them, performance was boosted to 79% MAP. Notably, this new average is quite close to the highest MAP previously seen across random seeds. We speculate that this variance was due to the number of orphans falling into the training set, where a large number of mis-labeled links made it difficult for the model to pick up on meaningful patterns. Indeed, the run of the model without orphans resulted in more consistency across dataset splits, as can be seen in Figure 3. Inspired by this finding, we also discovered that D-PL, the dataset with the lowest MAP, had over 300 orphan artifacts. Due to time constraints, we were unable to re-run this experiment on the dataset, but we plan to explore how this might improve performance in the future. Although we can detect some forgotten links in the dataset by identifying orphan artifacts, there are likely other missing links that might be challenging to identify. We are presently investigating how we might automatically prune links that are likely mis-labeled (Pleiss et al., 2020) or make the models more invariant to noise (Abdar et al., 2021). ## 5 Conclusions Clearly, automated traceability continues to present challenges that must be faced during the commercialization of the SAFA platform.
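The orphan-artifact check described in the Model Performance section is straightforward to implement. This sketch assumes artifacts are identified by ids and ground-truth links are (source, target) pairs; that representation is ours for illustration, not SAFA's internal one.

```python
def find_orphans(artifacts, true_links):
    """Return artifacts that appear in no ground-truth link ("orphans").
    `artifacts` is an iterable of ids and `true_links` a list of
    (source_id, target_id) pairs -- an illustrative representation."""
    linked = {a for pair in true_links for a in pair}
    return sorted(a for a in artifacts if a not in linked)
```

Running such a check before fine-tuning flags artifacts whose unlinked status is suspicious, so they can be reviewed or excluded rather than treated as a mass of negative labels.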
Nevertheless, there are many potential advancements that might be made by employing state-of-the-art NLP techniques, some of which have been identified in this paper. By collaborating with researchers within NLP, automated traceability tools, such as SAFA, have the potential to reach new levels of performance. Ultimately, this may facilitate a transformation of engineering processes into the long-awaited ideal of ubiquitous traceability. ### Limitations The main limitation of our paper is the low number of datasets used throughout our evaluations. This was primarily chosen due to time constraints, but limited data availability is a problem as described in Section 4.1. To improve the breadth of our analysis, we attempted to use projects with diverse artifacts across four domains. The artifact types encompass requirements, design definitions, use cases, java code, and JSP code. However, to make more definite conclusions, we hope to utilize additional data in future works. Further limitations include the lack of reproducibility of our experiment, as our code is intellectual property of SAFA and cannot be released to the public. The datasets used are, however, open source and can be found at [http://coest.org](http://coest.org). ### Ethics Statement Ethical considerations for the lack of open source code must consider that successful commercialization projects are a viable path for impacting the current practice of safety critical systems. As SAFA is still a pre-seed company, our go-to-market plan does not (at least initially) include open sourcing our code base. We plan on releasing different domain-specific models that the community can benefit from. Currently, we have published two models used in this paper: [https://huggingface.co/ANONYMOUS/nl-bert](https://huggingface.co/ANONYMOUS/nl-bert) and [https://huggingface.co/ANONYMOUS/pl-bert](https://huggingface.co/ANONYMOUS/pl-bert).

Figure 3: The range of scores across three random seeds for each dataset.
2308.06464
A One-dimensional HEVC video steganalysis method using the Optimality of Predicted Motion Vectors
Among steganalysis techniques, detection against motion vector (MV) domain-based video steganography in High Efficiency Video Coding (HEVC) standard remains a hot and challenging issue. For the purpose of improving the detection performance, this paper proposes a steganalysis feature based on the optimality of predicted MVs with a dimension of one. Firstly, we point out that the motion vector prediction (MVP) of the prediction unit (PU) encoded using the Advanced Motion Vector Prediction (AMVP) technique satisfies the local optimality in the cover video. Secondly, we analyze that in HEVC video, message embedding either using MVP index or motion vector differences (MVD) may destroy the above optimality of MVP. And then, we define the optimal rate of MVP in HEVC video as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results show that the proposed optimal rate of MVP for all cover videos is 100\%, while the optimal rate of MVP for all stego videos is less than 100\%. Therefore, the proposed steganalysis scheme can accurately distinguish between cover videos and stego videos, and it can be efficiently applied to practical scenarios with no model training and low computational complexity.
Jun Li, Minqing Zhang, Ke Niu, Yingnan Zhang, Xiaoyuan Yang
2023-08-12T04:51:04Z
http://arxiv.org/abs/2308.06464v1
# A One-dimensional HEVC video steganalysis method using the Optimality of Predicted Motion Vectors ###### Abstract Among steganalysis techniques, detection against motion vector (MV) domain-based video steganography in High Efficiency Video Coding (HEVC) standard remains a hot and challenging issue. For the purpose of improving the detection performance, this paper proposes a steganalysis feature based on the optimality of predicted MVs with a dimension of one. Firstly, we point out that the motion vector prediction (MVP) of the prediction unit (PU) encoded using the Advanced Motion Vector Prediction (AMVP) technique satisfies the local optimality in the cover video. Secondly, we analyze that in HEVC video, message embedding either using MVP index or motion vector differences (MVD) may destroy the above optimality of MVP. And then, we define the optimal rate of MVP in HEVC video as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results show that the proposed optimal rate of MVP for all cover videos is 100%, while the optimal rate of MVP for all stego videos is less than 100%. Therefore, the proposed steganalysis scheme can accurately distinguish between cover videos and stego videos, and it can be efficiently applied to practical scenarios with no model training and low computational complexity. Video Steganography, Video Steganalysis, Motion Vector prediction, Motion Vector Difference, Advanced Motion Vector Prediction, Local optimality. ## I Introduction Steganography aims to embed secret messages in multimedia such as pictures, audio, and video without arousing suspicion, thus enabling covert communication. On the other hand, the purpose of its adversary steganalysis is to detect the presence of embedded secret messages in ordinary media.
Video is the ideal cover for steganography, and there are different steganography methods according to the embedding location [1] in video, mainly intra-frame prediction modes [2, 3, 4], inter-frame prediction modes [5, 6, 7, 8], MVs [9, 10, 11], transformation coefficients [12, 13], etc. Since many MVs are available for message embedding in video coding, many methods are based on the MV domain. Thus, MV-based steganalysis techniques are a current research hotspot. With the gradual popularization and application of the HEVC standard [14], the research of MV-based video steganography and steganalysis techniques based on HEVC is particularly important. Yang et al. [15] proposed a steganography method based on MV space coding for HEVC. They gave the construction and coding process of the MV space. They defined the mapping relationship between the set of MVs and the points in the space, which can achieve the effect of embedding a 2N+1 binary number by changing at most one component among N MV components. Guo et al. [10] first counted the motion trend of each frame and established a Motion Trend Based (MTB) mapping strategy between the MV and the binary bitstream, and then used the Sum of Transform Difference (SATD) difference before and after the MV modification as steganographic distortion for message embedding. Hu et al. [16] first proposed a new steganography method, Steganography by Advanced Motion Vector Prediction (SAMVP), using the AMVP technique in HEVC. SAMVP uses the MVP index in the AMVP technique of inter-frame prediction as the embedding cover, which has a sizeable embedding capacity and is lossless. Liu et al. [17] proposed the Adaptive-SAMVP (A-SAMVP) based on SAMVP by defining the cost function and combining it with Syndrome Trellis Code (STC) [18].
Since AMVP encodes MVs by MVP index values and MVDs, A-SAMVP embeds the information in the index values of the candidate list and uses the code rate difference between the two candidate MVPs to define the cost function, which improves the overall performance of the algorithm. MV-based steganography algorithms modify the MV and its associated information, which inevitably destroys the optimality of specific parameters in the video coding process, so some traditional H.264/AVC-based steganalysis methods are still effective to some extent in HEVC, such as Adding or Subtracting One (AoSO) [19], Near Perfect Estimation for Local Optimality (NPELO) [20], and Motion Vector Consistency (MVC) [21]. Nevertheless, to improve the detection efficiency against steganographic algorithms for HEVC, researchers have tried to design steganalysis features by combining the characteristics of HEVC. Shanableh et al. [22] extended the idea of the MVC approach to HEVC. They redefined the concept of a block group based on the coding depth according to the characteristics of the HEVC standard and proposed feature sets based on MV non-consistency. Huang et al. [23] introduced convolutional neural networks to MV-domain video steganalysis based on the HEVC standard and proposed the Video Steganalysis Residual Network (VSRNet) structure. The method constructs independent VSRNet sub-networks for different embedding rates and finally connects all sub-network structures to form a quantitative steganalysis convolutional neural network. Based on VSRNet, they further introduced information such as selection-channel awareness [24] and MVDs [25] to improve the performance of steganalysis. In the new type of MV modification strategy [16, 17] based on the HEVC standard, it is possible to modify only the MVP index without changing the MV itself, so traditional MV-based steganalysis features are ineffective against this new type of steganography algorithm.
However, if the MVP index is modified, the local optimality of the MVP in the candidate list may be destroyed. Based on this observation, Liu et al. [26] constructed steganalysis features based on the local optimality of the MVP candidate list and the MV, and proposed the Local Optimality in Candidate List (LOCL) method, which effectively improves the detection performance in HEVC. However, existing MV-based video steganalysis methods still have some significant shortcomings. Firstly, current methods ignore the disturbance caused by MV steganography to the local optimality of the MVP in HEVC, which leads to low detection effectiveness. Secondly, existing methods are based on machine learning models that require a significant amount of training to achieve an ideal detection model. These trained steganalysis models often have low robustness, as they tend to exhibit noticeable performance degradation in the presence of cover or algorithm mismatches. Based on the above analysis, this paper focuses on the local optimality of the MVP candidate list in HEVC and fully explores the statistical differences before and after message embedding to design the steganalysis feature. First, either the traditional steganography of modifying MVDs or the new steganography of modifying MVP indexes may perturb the local optimality of the MVP. Second, we propose a steganalysis feature with a dimension of only one based on the local optimality of the MVP, defined as the optimal rate of the MVP in HEVC codestreams. The optimal rate of the MVP is 100% in all cover videos and below 100% in all stego videos. Based on this feature, we can accurately determine whether or not a video has been modified by steganography. The main contributions of this paper can be summarized as follows: 1. We show that message embedding, whether based on the MVP index or the MVD, may disturb the optimality of the MVP in the AMVP technique. 2.
The optimal rate of the MVP, with a dimension of one, is defined as the steganalysis feature, which is the lowest dimension among existing MV-domain steganalysis features. 3. The proposed scheme does not require model training, so our method has the advantages of low computational complexity and applicability to practical scenarios. The rest of the paper is organized as follows. The second part introduces the basic knowledge of the AMVP technique. The third part analyzes the effect of message embedding based on the MVP index and the MVD on the optimality of the MVP, and defines the optimal rate of MVP as a feature for steganalysis. It is then proved theoretically that the optimal rate is 100% in cover videos and below 100% in stego videos. The experimental results and analysis are given in the fourth part. Finally, the paper is concluded. ## II Preliminaries ### _The Technology of Advanced Motion Vector Prediction_ AMVP is an MV prediction technique for inter-frame encoding proposed in HEVC. AMVP uses the correlation of MVs in the spatial and temporal domains to build a list of candidate MVPs (including \(mvp_{0}\) and \(mvp_{1}\)) for the current coding Prediction Unit (PU). The optimal MVP \(mvp_{idx}\), \(idx\in\{0,1\}\), is selected from the candidate list, and the final optimal MV \(mv\) is obtained by whole-pixel and sub-pixel motion estimation starting from the \(mvp\). Then the MVD \(mvd\) is obtained by \[mvd=mv-mvp_{idx}. \tag{1}\] \(mvd\) is finally encoded using 0-th order Exp-Golomb codes [14]. The decoder recovers the \(mv\) of the current PU by building the same list of candidate MVPs; it only needs the index value \(idx\) of the \(mvp\) in the candidate list and the \(mvd\), so that the recovered \(mv=mvd+mvp_{idx}\). ### _The Local Optimality of the MVP_ HEVC adopts the Lagrangian optimization algorithm to achieve encoding control in selecting the optimal MVP from the candidate list.
The definition of the Lagrangian rate-distortion cost is as follows: \[J_{motion}(mv)=D+\lambda*R, \tag{2}\] where \(D\) represents the pixel distortion caused by encoding using the current \(mv\). The distortion \(D\) is usually calculated using the Sum of Absolute Differences (SAD) or the Hadamard-transform-based Sum of Absolute Transformed Differences (SATD). \(\lambda\) is a Lagrangian parameter that controls the balance between bit rate and distortion. \(R\) represents the number of bits required to encode the current \(mv\), which is actually the number of bits required to encode the \(mvd\) and the MVP index \(idx\): \[R=Bits(mvd)+Bits(idx)=Bits(mvd)+1, \tag{3}\] where \(Bits(idx)=1\) is the number of bits required to encode the \(idx\), and \(Bits(mvd)\) is the number of bits required to encode the \(mvd\) using 0-th order Exp-Golomb codes. According to the Lagrangian optimization model, without loss of generality, assuming that the optimal MVP selected by the encoder in the candidate list is \(mvp_{idx}\), then \(mvp_{idx}\) must satisfy **the local optimality of the MVP**: \[J_{motion}(mvp_{idx})\leq J_{motion}(mvp_{\overline{idx}}), \tag{4}\] where \(\overline{idx}\) represents the value in the set \(\{0,1\}\) that is different from \(idx\). Formula (4) means the rate-distortion cost of the MVP corresponding to index \(idx\) must be the smallest in the candidate list. Since the optimal MV has already been determined by the encoder when the MVP is finally confirmed, the reference block is fixed; therefore, the distortion \(D\) of the two candidate MVPs is the same, and the local optimality of the MVP in Formula (4) can be simplified as: \[R(mvp_{idx})\leq R(mvp_{\overline{idx}}), \tag{5}\] that is to say, the number of bits encoding the optimal MVP \(mvp_{idx}\) is no greater than that of the other candidate \(mvp_{\overline{idx}}\).
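To make the bit-count comparison in Formula (5) concrete, the following Python sketch (our own illustration, not the authors' implementation) computes \(Bits(mvd)\) under the common signed 0-th order Exp-Golomb mapping (\(0\to 0\), \(1\to 1\), \(-1\to 2\), \(2\to 3\), \(\dots\)) and checks the local optimality of a chosen candidate; the function names are ours.

```python
def exp_golomb_bits(v: int) -> int:
    """Bit length of one signed value under 0-th order Exp-Golomb
    (assumed signed mapping: 0 -> 0, 1 -> 1, -1 -> 2, 2 -> 3, ...)."""
    code_num = 2 * v - 1 if v > 0 else -2 * v
    # An Exp-Golomb code for code_num k occupies 2*floor(log2(k+1)) + 1 bits.
    return 2 * ((code_num + 1).bit_length() - 1) + 1

def rate_bits(mv, mvp):
    """R = Bits(mvd) + Bits(idx), with Bits(idx) = 1, as in Formula (3)."""
    mvd = (mv[0] - mvp[0], mv[1] - mvp[1])
    return exp_golomb_bits(mvd[0]) + exp_golomb_bits(mvd[1]) + 1

def is_locally_optimal(mv, candidates, idx):
    """Formula (5): the chosen MVP must need no more bits than the other."""
    return rate_bits(mv, candidates[idx]) <= rate_bits(mv, candidates[1 - idx])
```

For instance, with \(mv=(3,9)\) and candidates \((3,8)\) and \((3,9)\), `is_locally_optimal` holds for `idx=1` but not for `idx=0`, since a zero MVD component costs 1 bit and a \(\pm 1\) component costs 3 bits under this mapping.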
## III The Proposed Steganalysis Method In this section, we first analyze the security risk of HEVC steganography methods using the MVP index and the MVD, i.e., both of them can perturb the local optimality of the MVP. Then a steganalysis feature is designed based on the optimality of the MVP in AMVP. ### _Motion Vector Domain based Steganography in HEVC_ Based on the analysis in the previous section, the Lagrangian rate-distortion optimization model first finds the optimal \(mvp\) from the candidate list of MVPs and then finds the optimal \(mv\) by motion estimation. Thus the selected \(mvp\) is optimal in the rate-distortion sense in the candidate list. MV-based steganography in HEVC can use the MVP index \(idx\) or the MVD \(mvd\) as cover. The effect of these two embedding methods on the optimality of the MVP is analyzed below. #### III-A1 Using the Index of MVP for Message Embedding Each PU encoded with the AMVP technique has an MVP index \(idx\), \(idx\in\{0,1\}\). SAMVP [16] and A-SAMVP [17] are new steganographic approaches with \(idx\) as the cover. In SAMVP, when the secret message bit is the same as \(idx\), the information of the corresponding PU block does not have to be modified. When the secret message bit differs from \(idx\), the value of \(idx\) must be modified to \(\overline{idx}\), and then the corresponding \(mvp\) is also modified. According to \(mvd=mv-mvp\), since \(mvp\) is changed while \(mv\) remains unchanged, \(mvd\) correspondingly needs to be modified. According to the analysis of Formula (2), firstly, as \(mv\) is unchanged, the corresponding best matching block is not changed, so there is no change in pixel distortion \(D\), i.e., no visual distortion from this message embedding; secondly, the bit rate has changed, mainly due to the change in the number of bits required for \(mvd\) encoding using 0-th order Exp-Golomb codes. Therefore, although SAMVP is lossless in visual quality, it may increase the bit rate.
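The idx-based embedding just described (the secret bit carried by the MVP index, \(mv\) kept unchanged, \(mvd\) recomputed) can be sketched as follows. This is an illustrative simplification of SAMVP, not the authors' implementation, and the helper names are ours.

```python
def embed_bit_in_idx(mv, candidates, idx, secret_bit):
    """SAMVP-style embedding sketch: the MVP index itself carries the
    secret bit; mv stays unchanged, so mvd is recomputed against the
    (possibly different) predictor per mvd = mv - mvp (Formula (1))."""
    new_idx = secret_bit            # idx in {0, 1} directly encodes one bit
    mvp = candidates[new_idx]
    mvd = (mv[0] - mvp[0], mv[1] - mvp[1])
    return new_idx, mvd             # what gets written to the bitstream

def decode_mv(candidates, idx, mvd):
    """Decoder side: mv = mvd + mvp_idx (Formula (1) rearranged)."""
    mvp = candidates[idx]
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])
```

With \(mv=(3,9)\) and candidates \((3,8)\), \((3,9)\), embedding bit 0 flips \(idx\) from 1 to 0 and changes \(mvd\) from \((0,0)\) to \((0,1)\), yet `decode_mv` still recovers \((3,9)\): the reconstruction is unchanged, but the longer MVD code raises the bit rate.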
To reduce the impact of embedding operations on the bit rate, A-SAMVP constructs an adaptive steganographic method using STC. The scheme improves the performance of the steganography method by taking the difference in bit rate before and after message embedding as the cost function. Although the schemes in SAMVP and A-SAMVP can be lossless in visual quality, an obvious security risk exists. According to the Lagrangian optimization model, the encoder must satisfy the local optimality in Formula (4) after selecting the optimal \(mvp_{idx}\) from the MVP candidate list. Therefore, if \(mvp_{idx}\) is artificially modified to \(mvp_{\overline{idx}}\), there will be evident modification traces at the decoder. Fig. 1\((a),(b)\) show the scenarios where the local optimality of the MVP is corrupted due to message embedding using \(idx\). Fig. 1(a) shows the normal case before message embedding. The optimal \(mv\) obtained by motion estimation is \((3,9)\), the two candidates \(mvp_{0}\) and \(mvp_{1}\) in the MVP candidate list are \((3,8)\) and \((3,9)\), respectively, and the corresponding \(mvd\) are \((0,1)\) and \((0,0)\), respectively. Calculated according to Formula (3), the numbers of encoding bits corresponding to the two MVPs are \(5\) and \(3\), respectively. So the optimal MVP index is \(idx=1\), i.e., \(mvp_{1}\) will be selected as the optimal MVP. Fig. 1(b) shows the situation after message embedding. Assuming that \(idx\) changes from \(1\) to \(0\), \(mvp_{0}\) is selected for AMVP, and the numbers of bits needed to encode the two candidate MVPs at the decoding side are still \(5\) and \(3\), respectively. As a result, the optimal MVP index \(idx\) should theoretically be \(1\) and \(mvp_{1}\) should be selected as the optimal MVP. However, in practice, the \(idx\) obtained by the decoder is \(0\), thus destroying the optimality of the MVP.
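The decoder-side check that exposes this kind of embedding can be sketched in Python using the candidate MVPs of Fig. 1 and the bit count of Formula (3); we assume a standard signed 0-th order Exp-Golomb length, and the function names are illustrative.

```python
def bits_signed_eg0(v: int) -> int:
    """Signed 0-th order Exp-Golomb bit length (assumed mapping
    0 -> 0, 1 -> 1, -1 -> 2, ...)."""
    k = 2 * v - 1 if v > 0 else -2 * v
    return 2 * ((k + 1).bit_length() - 1) + 1

def decoder_rate(mvd):
    """Formula (3): Bits(mvd) plus 1 bit for the MVP index."""
    return bits_signed_eg0(mvd[0]) + bits_signed_eg0(mvd[1]) + 1

def mvp_is_optimal(candidates, idx, mvd):
    """Reconstruct mv from the decoded (idx, mvd), re-derive both
    candidates' rates, and check Formula (5)."""
    mv = (mvd[0] + candidates[idx][0], mvd[1] + candidates[idx][1])
    rates = [decoder_rate((mv[0] - c[0], mv[1] - c[1])) for c in candidates]
    return rates[idx] <= rates[1 - idx]

candidates = [(3, 8), (3, 9)]                         # mvp_0, mvp_1 of Fig. 1
print(mvp_is_optimal(candidates, idx=1, mvd=(0, 0)))  # cover, Fig. 1(a): True
print(mvp_is_optimal(candidates, idx=0, mvd=(0, 1)))  # idx flipped, Fig. 1(b): False
print(mvp_is_optimal(candidates, idx=1, mvd=(0, -1))) # mvd modified, Fig. 1(c): False
```

Note that the check needs only the decoded parameters (candidate list, \(idx\), \(mvd\)); no access to the original video is required.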
Based on the above analysis, using the MVP index \(idx\) as the message embedding cover could destroy the MVP's local optimality. #### III-A2 Using the MVD for Message Embedding Using the MVD as a cover for message embedding is a traditional steganography approach [9, 15] in H.264/AVC and HEVC. In HEVC, according to \(mvd=mv-mvp\), since \(mvd\) is modified to \(mvd^{\prime}\) but \(mvp\) remains unchanged, the \(mv\) needs to be modified: \[mv^{\prime}=mvd^{\prime}+mvp, \tag{6}\] that is to say, the optimal matching block corresponding to the current PU has changed. Although these steganography methods do not directly modify the MVP index \(idx\), they may still destroy the MVP's local optimality. This is because the rate-distortion costs of the two candidate MVPs will be changed after the \(mvd\) modification. Fig. 1: An example of the local optimality of the MVP in HEVC. The gray background represents the actual situation of the MVP observed at the decoding end. (a) Cover video: the MVP satisfies local optimality. (b) Stego video obtained by modifying the MVP index \(idx\): the MVP does not satisfy local optimality. (c) Stego video obtained by modifying the \(mvd\): the MVP does not satisfy local optimality. Again, Fig. 1(a) shows the PU of the cover video before embedding, and Fig. 1(c) shows the case after the \(mvd\) of this PU block is modified. Suppose the \(mvd\) changes from \((0,0)\) to \((0,-1)\) after message embedding (usually, only one component is modified, and the modification amplitude is one). At this time, \(idx=1\) remains unchanged, and according to Formula (6), the corresponding \(mv\) at the decoding side will be changed to \((3,8)\). From the decoding end, the \(mvd\) at \(idx=0\) becomes \((0,0)\). According to Formula (3), the numbers of bits required to encode \(mvp_{0}\) and \(mvp_{1}\) are \(3\) and \(5\), respectively.
So the optimal MVP index \(idx\) should theoretically be \(0\), but it is actually \(1\), thus destroying the optimality of the MVP. From the above analysis, steganography methods using the MVD as the cover may also destroy the MVP's optimality. ### _The Proposed One-dimensional Steganalysis Feature based on the Local Optimality of the MVP_ According to the analysis in Section III-A, in the HEVC standard, both the traditional steganography methods using the MVD as cover and the new steganography methods using the MVP index as cover may perturb the local optimality of the MVP. Based on this observation, this paper defines **the optimal rate of MVP** as the steganalysis feature: \[Optimal(mvp)=\frac{\sum\limits_{i=1}^{N}\delta(J_{motion}(mvp_{idx_{i}}),J_{\min})}{N}\times 100\%, \tag{7}\] where \(N\) is the total number of PUs encoded with the AMVP technique in a video sequence, \(\delta\) is a check function with \(\delta(x,y)=1\) when \(x\) equals \(y\) and \(\delta(x,y)=0\) otherwise, \(J_{motion}\) is the Lagrangian rate-distortion cost calculated according to Formula (2), and \(J_{\min}=\min\{J_{motion}(mvp_{idx_{i}}),J_{motion}(mvp_{\overline{idx_{i}}})\}\). **Property 1:** The optimal rate of MVP in a cover video is 100%. **Proof:** According to the HEVC standard, AMVP selects the candidate with the minimum Lagrangian rate-distortion cost in the MVP candidate list \(\{mvp_{0},mvp_{1}\}\) as the optimal MVP for the current PU.
Without loss of generality, assuming that the optimal MVP of a PU is \(mvp_{0}\), then according to the rate-distortion minimization rule, there must be: \[J_{motion}(mvp_{0})\leq J_{motion}(mvp_{1}), \tag{8}\] and then: \[\begin{array}{c}J_{\min}=\min\{J_{motion}(mvp_{0}),J_{motion}(mvp_{1})\}\\ =J_{motion}(mvp_{0}),\end{array} \tag{9}\] so that \(\delta(J_{motion}(mvp_{0}),J_{\min})=1\), and then: \[\begin{array}{c}Optimal(mvp)=\frac{\sum\limits_{i=1}^{N}\delta(J_{motion}(mvp_{0}),J_{\min})}{N}\times 100\%\\ =\frac{\sum\limits_{i=1}^{N}1}{N}\times 100\%=100\%\end{array}, \tag{10}\] finally \(Optimal(mvp)=100\%\), and the proof is completed. **Property 2:** If the local optimality of the MVP of some PUs in a stego video is broken, the optimal rate of the MVP in the stego video is less than 100%. **Proof:** Without loss of generality, it is assumed that AMVP chooses \(mvp_{0}\) as the optimal MVP before message embedding. According to the analysis in Section III-A, message embedding using either the MVP index or the MVD may destroy the local optimality of the MVP. Case (1). For the steganography methods using the MVP index as cover: if the local optimality of the MVP of some PUs is corrupted after embedding, i.e., the optimal MVP selected in the encoder becomes \(mvp_{1}\) after embedding (see Section III-A1), while \(J_{motion}(mvp_{0})<J_{motion}(mvp_{1})\), then \(J_{\min}=\min\{J_{motion}(mvp_{0}),J_{motion}(mvp_{1})\}=J_{motion}(mvp_{0})\), and \(\delta(J_{motion}(mvp_{1}),J_{\min})=0<1\). So the optimal rate of MVP at the decoder is: \[\begin{array}{c}Optimal(mvp)=\frac{\sum\limits_{i=1}^{N}\delta(J_{motion}(mvp_{1}),J_{\min})}{N}\times 100\%\\ <\frac{\sum\limits_{i=1}^{N}1}{N}\times 100\%=100\%\end{array}, \tag{11}\] Case (2). For the steganography methods using the MVD as cover:
If the local optimality of the MVP of some PUs is corrupted after embedding, the optimal MVPs selected by these PUs remain unchanged, but according to the analysis in Section III-A2, the MVDs have changed. Therefore, \(J_{motion}(mvp_{1})<J_{motion}(mvp_{0})\), and \(J_{\min}=\min\{J_{motion}(mvp_{0}),J_{motion}(mvp_{1})\}=J_{motion}(mvp_{1})\). Then \(\delta(J_{motion}(mvp_{0}),J_{\min})=0\), so the optimal rate of MVP at the decoder is: \[\begin{array}{c}Optimal(mvp)=\frac{\sum\limits_{i=1}^{N}\delta(J_{motion}(mvp_{0}),J_{\min})}{N}\times 100\%\\ <\frac{\sum\limits_{i=1}^{N}1}{N}\times 100\%=100\%\end{array}, \tag{12}\] Combining Formulas (11) and (12), the proof of Property 2 is completed. **Corollary 1:** Given a video sequence, if its optimal rate of MVP is less than 100%, the sequence is a stego video. **Proof:** According to Property 1, if the video is a cover video, its optimal rate of MVP must be equal to 100%. Conversely, if its optimal rate of MVP is lower than 100%, the optimality of the MVP of some PUs has been perturbed, which is an abnormal phenomenon. This perturbation comes from message embedding, so the video can be judged as stego. Through the analysis of Properties 1 and 2 and Corollary 1, we can use the optimal rate of MVP \(Optimal(mvp)\) as the steganalysis feature for determining whether an HEVC video sequence has been modified by steganography. The proposed steganalysis process is shown in Fig. 2. First, a given HEVC compressed video sequence is decoded to obtain the decoding parameters. Next, all PUs encoded using the AMVP technique and their corresponding parameters (MVs, MVP candidate lists, etc.) are collected. Then, the optimal rate of MVP \(Optimal(mvp)\) for the video sequence is calculated based on Formula (7). Finally, the value of \(Optimal(mvp)\) is used for judgment.
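A minimal sketch of the feature in Formula (7) and the decision rule of Corollary 1, assuming the decoded parameters of each AMVP PU are available as (candidate list, \(idx\), \(mvd\)) tuples and that the distortion term \(D\) cancels between candidates as argued for Formula (5); all names are illustrative.

```python
def bits_signed_eg0(v: int) -> int:
    """Signed 0-th order Exp-Golomb bit length (assumed mapping)."""
    k = 2 * v - 1 if v > 0 else -2 * v
    return 2 * ((k + 1).bit_length() - 1) + 1

def rate(mvd):
    """Formula (3): Bits(mvd) + 1 bit for the MVP index."""
    return bits_signed_eg0(mvd[0]) + bits_signed_eg0(mvd[1]) + 1

def optimal_rate(pus):
    """Formula (7): percentage of AMVP PUs whose decoded MVP attains the
    minimum rate in its candidate list (D cancels, so only R matters)."""
    hits = 0
    for candidates, idx, mvd in pus:
        mv = (mvd[0] + candidates[idx][0], mvd[1] + candidates[idx][1])
        rates = [rate((mv[0] - c[0], mv[1] - c[1])) for c in candidates]
        hits += rates[idx] == min(rates)
    return 100.0 * hits / len(pus)

def classify(pus):
    """Corollary 1: any PU violating MVP optimality marks the video as stego."""
    return "cover" if optimal_rate(pus) == 100.0 else "stego"
```

A single PU whose MVP index was flipped by embedding already drops `optimal_rate` below 100% and flips the verdict to "stego", which is what makes the one-dimensional feature a decision rule rather than a classifier input.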
If \(Optimal(mvp)=100\%\), the optimal MVPs of all PUs encoded using the AMVP technique are intact, and the video sequence is a normal cover video. If \(Optimal(mvp)<100\%\), the optimal MVPs of some PUs have been damaged, and the video sequence is a stego video. ## IV Experiments and Analysis In this section, several different setups are presented to evaluate the performance of the proposed scheme. ### _Experiments Setup_ #### IV-A1 Video Databases TABLE I shows the two video databases used for the experiments. The database DB1 contains 34 well-known standard test sequences [27] with CIF resolution (352\(\times\)288), and each video sequence is cut to a fixed length by selecting its first 240 frames (so the total number of frames for the experiments is 8160). The other database, DB2, contains 80 standard test sequences with different resolutions (from 416\(\times\)240 to 2560\(\times\)1600), which are downloaded from the internet, and each sequence is cut to a fixed length by selecting its first 100 frames (so the total number of frames is 8000). All the video sequences in DB1 and DB2 are stored in uncompressed file format, with YUV 4:2:0 color space. #### IV-A2 Steganography Methods Three state-of-the-art MV-based steganography methods for HEVC are used for message embedding to evaluate the detectability of video steganalysis in the MV domain. The first is Yang's method [15] (denoted as Tar1), the second is Hu's method [16] (denoted as Tar2), and the last is Liu's method [17] (denoted as Tar3). Due to the different design principles of the above three methods, their embedding capacities are evaluated differently. The embedding strength \(e\) in Tar1 is a decimal in the range 0 to 1, representing the probability of whether the secret information is embedded in a CTU; it is set to 0.1, 0.2, 0.3, 0.4, and 0.5.
The embedding threshold \(T\) in Tar2 is defined as \(T=abs(abs(H_{1})-abs(H_{0}))+abs(abs(V_{1})-abs(V_{0}))\), where \(H_{0}\), \(V_{0}\) represent the horizontal and vertical components of \(mvp_{0}\), \(H_{1}\), \(V_{1}\) represent the horizontal and vertical components of \(mvp_{1}\), and \(abs(x)\) is the absolute value of \(x\). \(T\) is set to 0, 1, 5, 20, and 1000 for the experiments. The embedding capacity for Tar3 is measured in bap (bits per AMVP PU), which is set to 0.1, 0.2, 0.3, 0.4, and 0.5. All the steganography methods are implemented based on the official test model HM16.9 [28]. #### IV-A3 Competitor Steganalysis Methods Two types of competitor steganalysis methods are used for the experiments. The first type is the parallel porting of classical methods proposed for H.264/AVC to HEVC, including AoSO [19] and NPELO [20]. The other type comprises MV-based steganalysis methods proposed for HEVC, including the neural-network-based VSRNet method [23] and the LOCL method [26] based on local optimality in the candidate list. All the steganalysis feature sets are extracted based on the official test model HM16.9. #### IV-A4 Training and Classification It is worth noting that the steganalysis scheme proposed in this article does not require machine learning methods for training and classification, as we can determine whether there is message embedding based on the proposed optimal rate of MVP. To implement training and classification for the various competitor steganalysis approaches, we use a Gaussian-kernel SVM (support vector machine) [29], whose penalty factor C and kernel factor are established via five-fold cross-validation. Additionally, the accuracy rate, which is calculated as the proportion of correctly identified samples to all samples, is used to gauge the effectiveness of the detection process. Ten randomly selected database splits are used to get the final accuracy rate.
Each iteration uses 40% of the cover-stego video pairs for testing, and 60% are randomly selected for training. A desktop computer with a 3.1GHz Intel Core i9 CPU and 64 GB RAM is used to conduct all of the tests. Fig. 2: The steganalysis process of the proposed scheme. ### _The Optimal Rate of MVP for Cover Videos_ This experiment verifies the applicability of Property 1 of Section III-B under different conditions, i.e., the optimal rate of MVP is calculated according to Formula (7) for the cover videos in the DB1 and DB2 databases. To detect the impact of different encoders, the cover videos in this experiment are compressed using two different encoders. The first encoder is HM16.9, and the cover videos are encoded with quantization parameters (QP) of 20, 25, and 30. The GOP (Group Of Pictures) structure used for HM is IPPPPPPPPP...., and the experimental results are shown in TABLE II. Two metrics are reported in the table: the first is the average of the optimal rate of MVP over all videos in the database, and the second is the proportion of videos with a 100% optimal rate among all videos. The experimental data show that in both DB1 and DB2, \(Optimal(mvp)\) has a mean value of 100% at different QPs. The proportion of videos with a 100% optimal rate among all videos is also 100%. The experimental data indicate that among the cover videos encoded with HM, all the PUs encoded with AMVP meet the local optimality of the MVP, i.e., they satisfy Property 1. The second encoder is the efficient x265 [30]. The parameters used for x265 are the same as for HM above. The experimental results are shown in TABLE III; it can be seen that in both DB1 and DB2, \(Optimal(mvp)\) has a mean value of 100% at different QPs. The proportion of videos with a 100% optimal rate among all videos is also 100%. The experimental results indicate that the cover videos compressed by x265 also satisfy Property 1.
The results of the above two experiments show that both the official reference software HM and the optimized high-performance encoder x265 yield an optimal rate of MVP of 100% for cover videos under different encoding parameters, which follows Property 1. ### _The Optimal Rate of MVP for Stego Videos_ This experiment analyzes the detection performance of the proposed method on stego videos. We use different steganography methods for message embedding on the DB1 and DB2 databases, with the HM16.9 encoder and a GOP structure of IPPPPPPP.... The experimental results for the steganography algorithm Tar1 [15] are shown in TABLE IV. Taking the embedding strength \(e=0.1\) at QP=20 in the DB1 database as an example, the average value of \(Optimal(mvp)\) is 99.70%, which means the optimality of the MVP of 0.3% of the PUs is corrupted. The proportion of videos with \(Optimal(mvp)\) equal to 100% among all videos is 0%, which indicates that no video fully satisfies the MVP optimality. Overall, TABLE IV shows that the average value of \(Optimal(mvp)\) is lower than 100% for different databases and QPs, and the proportion of videos with \(Optimal(mvp)\) equal to 100% is 0%, which indicates that the optimality of the MVP of all videos is perturbed. Based on the above analysis, it can be determined that all video sequences in this experiment are stego videos; the reason is that the embedding operation performed by Tar1 on the MVDs destroys the local optimality of the MVP, which is consistent with the theoretical analysis in Section III-A2 and Property 2 in Section III-B. The experimental results for the steganography algorithm Tar2 [16] are shown in TABLE V. For different databases and QPs, the average value of \(Optimal(mvp)\) is 100% when the embedding threshold \(T=0\), and the proportion of videos with \(Optimal(mvp)\) equal to 100% among all videos is likewise 100%.
According to the definition of \(T\) in Section IV-A2, \(T=0\) means that \(mvp_{0}\) is the same as \(mvp_{1}\), so modifying the MVP index does not change the optimality of the MVP. That means when \(T=0\), the scheme of this paper is invalid. In fact, at \(T=0\), the Tar2 algorithm only selects those PUs whose \(mvp_{0}\) is the same as \(mvp_{1}\) for message embedding, and its embedding capacity is smaller. When \(T\neq 0\), the first indicator in the experimental results is not 100%, and the second indicator is 0%, which indicates that the optimality of the MVP of all videos is perturbed. We can determine these videos as stego based on these two statistical indicators. The reason for the above experimental results is that the embedding operation performed by Tar2 on the MVP index destroys the MVP's local optimality, which is consistent with the theoretical analysis in Section III-A1 and Property 2 in Section III-B. The experimental results for the steganography algorithm Tar3 [17] are shown in TABLE VI. The results are similar to those of Tar2, but Tar3 performs adaptive embedding in the MVP index with STC. The experimental results show that the proposed scheme can reliably detect the damage to the local optimality of the MVP. The underlying reason is that the embedding operation performed by Tar3 on the MVP index destroys the local optimality of the MVP, which is also consistent with the theoretical analysis in Section III-A1 and Property 2 in Section III-B. In summary, for the three state-of-the-art steganography methods, the proposed steganalysis feature is invalid only in the case of \(T=0\) for Tar2. Under all other conditions, we can accurately distinguish cover videos from stego videos by whether the optimal rate of MVP is equal to 100%. These experimental findings also verify the correctness of Property 2 and Corollary 1 in Section III-B.
### _Comparison with other Machine Learning-based Steganalysis Methods_ To compare the detection performance of existing steganalysis methods against the above three steganography algorithms, this section uses existing state-of-the-art steganalysis methods to detect the stego videos from Section IV-C. Due to limitations on the length of the paper, we only list some experimental data on database DB1 in TABLE VII. The feature set of AoSO [19] uses the SAD to describe the MV's local optimality. From the experimental data, AoSO has some detection effect on Tar1 because Tar1 is a steganography method that directly modifies the MV and destroys the local optimality of the MV. AoSO is ineffective on Tar2 and Tar3 because these two steganography methods embed messages in the MVP index, and the original MV remains unchanged. NPELO [20] is a steganalysis feature set based on the local optimality of MV rate-distortion. The experimental data show that NPELO performs better in detecting Tar1 than Tar2 and Tar3, for reasons similar to AoSO. The overall performance of NPELO is better than that of AoSO because NPELO considers rate-distortion (including pixel distortion and coding bits), making it more reasonable. VSRNet [23] is a neural-network-based steganalysis method, which is effective in detecting Tar1 but ineffective for Tar2 and Tar3. The experimental results indicate that VSRNet cannot yet capture the perturbations caused by steganography methods that embed messages in the MVP index. LOCL [26] is a feature set that combines NPELO with the optimality of the MVP candidate list, and its performance is better than AoSO, NPELO, and VSRNet overall. However, although LOCL considers the MVP's optimality together with the MV's optimality when designing the features, the optimality of the MVP is still not fully exploited. From the above analysis, the detection accuracy of traditional steganalysis features for the three steganography algorithms is low (mostly below 80%).
In contrast, according to the analysis in Section IV-C, the proposed optimal rate of MVP can perfectly distinguish cover videos from stego videos in most cases. More importantly, the proposed method does not need to train a classifier as in the experiments of this section, so it is more practical and efficient. ### _Applicability to B-Frames_ The proposed method is based on the AMVP technique and can be applied to all videos encoded with the AMVP technique. Therefore, although the inter-frame coding frames used in the previous experiments are P-frames, the method is also theoretically applicable to B-frames. If there are two reference lists for a PU encoded by the AMVP technique in a B-frame, the MVPs on both reference lists satisfy the properties of Section III-B. To verify the applicability of the proposed scheme on B-frames, we use the GOP structure IBBBBBBBBL... for this experiment. The experiment is performed on the DB1 database with the steganography method Tar2 (it is worth noting that Tar1 and Tar3 are also applicable and are not listed in this paper due to space limitations), and the other parameters are the same as in Section IV-C. TABLE VIII shows the statistical results of the optimal rate of MVP for the cover videos and stego videos. As can be seen from the data in the table, the average value of \(Optimal(mvp)\) for the cover videos at different QPs is 100%, and the videos with a 100% optimal rate account for 100% of the whole dataset, indicating that the experimental results for B-frames also satisfy Property 1 in Section III-B. The results at the embedding threshold \(T=0\) are consistent with those at \(T=0\) in TABLE V, again because only the PUs with \(mvp_{0}=mvp_{1}\) are used for message embedding, so the optimality of the MVP is not destroyed.
In contrast, when \(T\neq 0\), \(Optimal(mvp)\) in the experimental results is not 100%, and the proportion of videos with a 100% optimal rate is 0%, which indicates that the optimality of the MVP is perturbed in all videos. In summary, for HEVC videos compressed with B-Frames, except for \(T=0\), the proposed optimal rate of MVP can still effectively distinguish between cover and stego videos. ### _The Complexity Analysis of the Proposed Feature_ To analyze the computational complexity of the proposed scheme, this subsection compares the time required for feature extraction at different QPs. Table IX shows the dimensionality of the four steganalysis features and the average time needed to extract features from one video sequence (see Section IV-C for parameter settings; CIF format, 240 frames). The experiments are run on a desktop computer with a 3.1GHz Intel Core i9 CPU and 64 GB RAM. The data in the table show that the feature dimension of the proposed scheme is only 1, the lowest among all methods. Regarding computational complexity, both AoSO and NPELO need to compute the 1-neighborhood optimality of the MV, so their computational complexity is close. LOCL has the highest complexity because it has to calculate not only the optimality of the MV but also the optimality of the MVP's candidate list. The extraction time of the proposed scheme is only about 1/2 of that of the other algorithms, because the proposed scheme does not need to calculate the 1-neighborhood optimality of the MV, but only the rate-distortion of the two MVPs. In addition, the smaller the QP, the larger the running time of all algorithms. This is because the smaller the QP, the finer the division of coding blocks, the more MVs in the code stream, and the more data to be processed. 
Overall, the computational complexity of the proposed scheme is minimal because the feature dimension is only one, and only the rate-distortion of two MVPs needs to be calculated. More importantly, the proposed scheme does not require extensive machine-learning-based training. Therefore, the proposed method is very efficient and can be applied to practical scenarios. ## V Conclusion The development of video coding standards always aims to reduce redundancy and increase compression performance while ensuring visual quality [14, 31]. In contrast, steganography's fundamental starting point is exploiting data redundancy to embed messages. Therefore, although new coding standards provide more coding elements (e.g., the MVP index in the AMVP technique) for message embedding, the reduction of redundancy also poses challenges to steganography. For example, with the widespread adoption of the HEVC video standard, video steganography and steganalysis based on HEVC have received more and more attention. Although the AMVP technique provides more embedding space for steganography, it also introduces risks. Our analysis shows that embedding messages in either the MVP index or the MVD may perturb the optimality of the MVP. Based on this observation, we design the optimal rate of MVP, a steganalysis feature with a dimension of only one. This feature can accurately distinguish cover videos from stego videos and has the advantage of low complexity. In future work, we will focus on taking advantage of the AMVP technique to embed messages in HEVC while ensuring that the optimality of the MVP is not destroyed, improving the embedding capacity while enhancing security against steganalysis. ## VI Acknowledgements This work is supported by the National Natural Science Foundation of China (Grant No.62272478, No.62202496, No.62102450).
2302.05136
Ponderomotive force due to the intrinsic spin for electrostatic waves in a magnetized plasma
We study the contribution from the electron spin to the ponderomotive force, using a quantum kinetic model including the spin-orbit correction. Specifically, we derive an analytical expression for the ponderomotive force, applicable for electrostatic waves propagating parallel to an external magnetic field. To evaluate the expression, we focus on the case of Langmuir waves and on the case of the spin-resonance wave mode, where the classical and spin contributions to the ponderomotive force are compared. Somewhat surprisingly, dependent on the parameter regime, we find that the spin contribution to the ponderomotive force may dominate for the Langmuir wave, whereas the classical contribution can dominate for the spin resonance mode. Naturally, this does not prevent the opposite case from being the more common one.
Haidar Al-Naseri, Gert Brodin
2023-02-10T09:40:52Z
http://arxiv.org/abs/2302.05136v1
# Ponderomotive force due to the intrinsic spin for electrostatic waves in a magnetized plasma ###### Abstract We study the contribution from the electron spin to the ponderomotive force, using a quantum kinetic model including the spin-orbit correction. Specifically, we derive an analytical expression for the ponderomotive force, applicable for electrostatic waves propagating parallel to an external magnetic field. To evaluate the expression, we focus on the case of Langmuir waves and on the case of the spin-resonance wave mode, where the classical and spin contributions to the ponderomotive force are compared. Somewhat surprisingly, dependent on the parameter regime, we find that the spin contribution to the ponderomotive force may dominate for the Langmuir wave, whereas the classical contribution can dominate for the spin resonance mode. Naturally, this does not prevent the opposite case from being the more common one. ## I Introduction During the last decades, there has been an increasing number of works studying quantum plasma physics; see e.g. the reviews [1; 2; 3; 4; 5] and references therein. The motivation behind these works includes various applications, for example, quantum wells [6], plasmonics [7] and spintronics [8], as well as astrophysics [9; 10], strong field dynamics, and general theoretical interest. As a first rule of thumb, a quantum description of plasmas is needed in the low-temperature high-density regime, as displayed in temperature-density plots made e.g. in Refs. [1; 2]. However, it should be noted that quantum plasma behavior can also be induced by a strong magnetic field, such as in astrophysics (e.g. causing Landau quantization), and by strong laser fields inducing spin-polarization [11; 12]. The ponderomotive force is the main source behind broad classes of nonlinear plasma phenomena. Concrete examples include 
wake-field generation [13; 14], soliton formation [15], self-focusing [16], and the subsequent nonlinear wave collapse [17]. Pioneering work regarding the classical expression for the ponderomotive force in a magnetized plasma was made by Karpman and Washimi [18] based on fluid theory, which was later generalized to include kinetic effects. The generalization of the ponderomotive force in magnetized plasmas to include quantum effects, in particular due to spin, has been made in Refs. [19; 20]. However, these works considered effects due to non-relativistic spin dynamics. Moreover, in Ref. [21], the ponderomotive force due to semi-relativistic spin dynamics in an unmagnetized plasma was calculated. In this work, we calculate the ponderomotive force due to semi-relativistic spin dynamics in magnetized plasmas. To be more specific, we consider electrostatic waves using the kinetic equation derived by Asenjo et al. [22]. First, we study linear electrostatic wave propagation in a magnetized plasma and derive the dispersion relation for electrostatic waves. It should be noted that, even for the case of immobile ions, linearized theory in a magnetized plasma allows, in addition to the common Langmuir mode, a new spin-dependent wave mode referred to as the spin resonance mode. In section II C, we use perturbation theory based on the linear calculations in order to compute the ponderomotive force. Next, in section III, the general result is evaluated, comparing the magnitudes of the classical and spin-dependent contributions. This comparison is split into two parts, depending on whether the linear wave mode is a Langmuir wave or a spin resonance mode. Finally, in section IV, the results are summarized and the conclusions are drawn. ## II Basic equations and derivations In this section, we first present the basic quantum kinetic theory to be used throughout the manuscript. 
The theory is then used to investigate the linearized eigenmodes in a magnetized plasma in an electrostatic field geometry with the wave vector parallel to the external magnetic field. In the next sub-section, we perform nonlinear perturbation theory based on previous results, in order to deduce the ponderomotive force for electrostatic waves. ### Basic equations Different quantum kinetic theories have been put forward in the literature, see e.g. the reviews given in [5]. In particular, two models that have been proven to be equivalent, based on the weakly relativistic limit of the Dirac Hamiltonian, have been derived in Ref. [22] and in Ref. [23]. We will make use of the former formulation, based on a scalar distribution function, where the usual phase space is extended by a dependence on the independent spin-variable [22]. Specifically, we will use the governing equation \[\frac{\partial f}{\partial t}+\Big{[}\frac{\mathbf{p}}{m}+\frac{ \mu}{2mc}\mathbf{E}\times(\mathbf{s}+\nabla_{s})\Big{]}\cdot\nabla_{x}f\\ +q\bigg{(}\mathbf{E}+\frac{1}{c}\Big{[}\frac{\mathbf{p}}{m}+\frac {\mu}{2mc}\mathbf{E}\times(\mathbf{s}+\nabla_{s})\Big{]}\times\mathbf{B}\bigg{)} \cdot\nabla_{p}f\\ +\frac{2\mu}{\hbar}\mathbf{s}\times\Big{(}\mathbf{B}-\frac{ \mathbf{p}\times\mathbf{E}}{2mc}\Big{)}\cdot\nabla_{s}f\\ +\mu\nabla_{x}\bigg{[}(\mathbf{s}+\nabla_{s})\cdot\Big{(}\mathbf{ B}-\frac{\mathbf{p}\times\mathbf{E}}{2mc}\Big{)}\bigg{]}\cdot\nabla_{p}f=0, \tag{1}\] where \(f(x,p,s,t)\) is the quasi-distribution function in phase-space, extended by the independent spin-variable \(\mathbf{s}\), defined to have unit length, \(m\) is the electron mass, \(\mu=\hbar q/2mc\) is the electron magnetic moment and \(q=-e\) is the electron charge. This equation describes the dynamics of an ensemble of spin-1/2 particles in the Hartree approximation, i.e. the derivation applies mean-field theory neglecting correlations and exchange effects. 
While this model contains most dynamical effects related to the electron spin, such as the magnetic dipole force, spin precession, and the spin-orbit interaction, the evolution equation still neglects particle dispersive effects. This is a valid approximation in the regime of relatively long scale-lengths, fulfilling \(\hbar^{2}\nabla_{x}^{2}\nabla_{p}^{2}\ll 1\). Note that we have also omitted the Darwin term in the original kinetic equation derived in Ref. [22], since it is smaller than the other terms in the regime of consideration, with \(\hbar^{2}\nabla_{x}^{2}\ll m^{2}c^{2}\). Furthermore, since the model is semi-relativistic, the relation between \(\mathbf{v}\) and \(\mathbf{p}\) is non-trivial, reading \[\mathbf{v}=\frac{\mathbf{p}}{m}+\frac{3\mu}{2mc}\mathbf{E}\times\mathbf{s} \tag{2}\] This relation is important when the sources in Maxwell's equations are computed. The relations needed to close the system are as follows \[\nabla\cdot\mathbf{E} =4\pi\rho \tag{3}\] \[\nabla\times\mathbf{B} =\frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}+\frac{4\pi}{c} \mathbf{J}, \tag{4}\] where \(\rho\) and \(\mathbf{J}\) are the charge and current density \[\rho =\rho_{f}+\nabla\cdot\mathbf{P} \tag{5}\] \[\mathbf{J} =\mathbf{J}_{f}+\nabla\times\mathbf{M}+\frac{\partial\mathbf{P}} {\partial t}, \tag{6}\] where \[\rho_{f} =q\int d\Omega f \tag{7}\] \[\mathbf{P} =-3\mu\int d\Omega\frac{\mathbf{s}\times\mathbf{p}}{2mc}f\] (8) \[\mathbf{J}_{f} =q\int d\Omega\Big{[}\frac{\mathbf{p}}{m}+\frac{3\mu}{2mc} \mathbf{E}\times\mathbf{s}\Big{]}f\] (9) \[\mathbf{M} =3\mu\int d\Omega\,\mathbf{s}f \tag{10}\] are the free charge density, the polarization, the free current density, and the magnetization, respectively. Here, we have used \(d\Omega=d^{3}pd^{3}xd^{2}s\). In this work, we express the momentum \(\mathbf{p}\) in cylindrical coordinates \((p_{\perp},\phi_{p},p_{z})\), while for the spin \(\mathbf{s}\) we use spherical coordinates \((\phi_{s},\theta_{s})\). 
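The second validity condition can be made concrete: with a characteristic wave number \(k\), \(\hbar^{2}\nabla_{x}^{2}\ll m^{2}c^{2}\) reduces to \(\hbar k/mc\ll 1\), i.e. wavelengths long compared to the reduced Compton wavelength. A minimal numerical sketch (the chosen wave number is an illustrative value, not taken from the paper):

```python
HBAR = 1.054_571_817e-34   # J s
M_E = 9.109_383_7e-31      # kg
C = 2.997_924_58e8         # m/s

def compton_ratio(k):
    """hbar*k/(m*c); the Darwin term is negligible when this is << 1."""
    return HBAR * k / (M_E * C)
```

Even for an X-ray-scale wave number of order \(10^{10}\,\mathrm{m^{-1}}\), the ratio remains well below unity.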
Before we proceed with the analysis, let us point out that the closely related model derived in Ref. [23] does not use spin as an independent variable, but instead has a classical type of (scalar) distribution function for the charge density, and a vector-valued distribution function for the magnetization. The relation between these two models has been described in some detail in Refs. [1; 5]. We stress that although the models are technically different, they have been shown to be formally equivalent. ### Linear theory As a prerequisite to computing the ponderomotive force, we first study the linearized theory. Specifically, we concentrate on electrostatic waves propagating parallel to an external magnetic field. Thus we first divide the distribution function into \(f(x,p,s,t)=f_{0}(p^{2},\theta_{s})+f_{1}(x,p,s,t)\), where \(f_{0}\) is the homogeneous background distribution function (see e.g. Refs. [5; 24] for a discussion of possible background functions) and \(f_{1}\) is the perturbed distribution function. The dependence \(f_{0}(p^{2})\) assures that the momentum dependence of the background is isotropic. For our case of electrostatic waves propagating parallel to an external magnetic field, we have \[\mathbf{E} =E\,\hat{\mathbf{z}}\] \[\mathbf{B} =B_{0}\,\hat{\mathbf{z}}\] \[\mathbf{k} =k\,\hat{\mathbf{z}}.\] To proceed, we use spherical coordinates \(\phi_{s},\theta_{s}\) in spin space, with the length \(|s|=1\). Thus the Cartesian components are written \(\mathbf{s}=(\cos\phi_{s}\sin\theta_{s},\sin\phi_{s}\sin\theta_{s},\cos\theta_{s})\). Next, we linearize Eq. (1) and expand \(f_{1}\) using the following ansatz \[f_{1}=\frac{1}{2\pi}\sum_{n,n^{\prime}=-\infty}^{\infty}g_{n,n^{\prime}}e^{in \varphi_{p}}e^{in^{\prime}\varphi_{s}}e^{i(kz-\omega t)} \tag{11}\] Applying this ansatz to the linearized version of Eq. 
(1), after some algebra we find an explicit expression of \(f_{1}\) in terms of the unperturbed function \(f_{0}\) of the form \[f_{1}=A+B_{+}e^{i(\varphi_{p}-\varphi_{s})}+B_{-}e^{-i(\varphi_{p}-\varphi_{s})} \tag{12}\] where \[A =-\frac{iqE}{\omega-kp_{z}/m}\frac{\partial f_{0}}{\partial p_{z}} \tag{13}\] \[B_{\pm} =-i\frac{q\mu B_{0}E/4mc}{\omega-kp_{z}/m\mp\Delta\omega_{ce}} \Big{(}\sin\theta_{s}+\cos\theta_{s}\frac{\partial}{\partial\theta_{s}}\Big{)} \frac{\partial f_{0}}{\partial p_{\perp}}\] \[\pm i\frac{k\mu Ep_{\perp}/4mc}{\omega-kp_{z}/m\mp\Delta\omega_{ce }}\Big{(}\sin\theta_{s}+\cos\theta_{s}\frac{\partial}{\partial\theta_{s}} \Big{)}\frac{\partial f_{0}}{\partial p_{z}}\] \[+i\frac{\mu Ep_{\perp}/2\hbar mc}{\omega-kp_{z}/m\mp\Delta\omega_ {ce}}\frac{\partial f_{0}}{\partial\theta_{s}}. \tag{14}\] Here, \(\Delta\omega_{ce}=\omega_{cg}-\omega_{ce}\), where \(\omega_{ce}=qB_{0}/m\) is the cyclotron frequency, \(\omega_{cg}=(g/2)\omega_{ce}\) is the spin precession frequency, and \(g\approx 2.002318\) is the electron g-factor. Note that in the classical (\(\hbar\to 0\)) limit, we get \(B_{\pm}=0\) and we recover the standard classical expression for electrostatic Langmuir waves. Next, we calculate the dispersion relation by using Ampère's law Eq. (4), where the total current \(\mathbf{J}\) is given by Eq. (6). In the integration process when calculating the currents, we expand the denominators in Eq. (12) to the first non-vanishing order of \(p_{z}\), as is appropriate for a low or modest temperature. This condition is also necessary to avoid strong wave-particle interaction leading to appreciable wave damping. Moreover, we use the following expression for the background distribution \(f_{0}\) \[f_{0}(p^{2},\theta_{s})=\sum_{\pm}(1\pm\cos\theta_{s})f_{0\pm}(p^{2}), \tag{15}\] where \(f_{0\pm}(p^{2})\) is the unperturbed distribution function for the particles in the spin up/down state. 
Thus we have \(\int d\Omega f_{0\pm}(p^{2})=n_{0\pm}\), where \(n_{0\pm}\) is the number density for the spin up/down state. To carry out the momentum integration, we need to specify the background distribution function \(f_{0\pm}\). For a non-degenerate plasma, where the Fermi temperature is well below the thermodynamic temperature, the appropriate distribution function is the Maxwell-Boltzmann distribution function with a spin-dependent part [24] \[f_{0\pm}=\frac{1}{N_{m}}\,e^{-p^{2}/m^{2}v_{th}^{2}}\,e^{\pm\mu B_{0}/K_{B}T}, \tag{16}\] where \(K_{B}\) is the Boltzmann constant, \(T\) is the temperature, the thermal velocity \(v_{th}\) fulfills \(mv_{th}^{2}/2=K_{B}T\), and \(N_{m}=8m^{3}v_{th}^{3}\pi^{5/2}\cosh\left(\mu B_{0}/K_{B}T\right)\) is the normalization factor. After carrying out the spin- and momentum-integration, we obtain the dispersion relation \[\omega^{2}\Bigg{(}1+\frac{\hbar^{2}\omega_{p}^{2}\Delta\omega_{ce }}{8m^{2}c^{4}}\Bigg{[}\frac{\omega_{ce}}{\omega^{2}-\Delta\omega_{ce}^{2}}+ \frac{k^{2}v_{th}^{2}/2(3\omega^{2}\omega_{ce}+\Delta\omega_{ce}^{2}\omega_{ce })}{(\omega+\Delta\omega_{ce})^{3}(\omega-\Delta\omega_{ce})^{3}}+ \frac{\omega mv_{th}^{2}}{\hbar\Delta\omega_{ce}(\omega^{2}-\Delta\omega_{ce }^{2})}\tanh\frac{\mu B_{0}}{K_{B}T}\\ +\frac{k^{2}v_{th}^{2}\omega}{(\omega+\Delta\omega_{ce})^{2}( \omega-\Delta\omega_{ce})^{2}}\Bigg{]}\Bigg{)}=\omega_{p}^{2}\Big{(}1+\frac{3}{2}\frac {k^{2}v_{th}^{2}}{\omega^{2}}\Big{)} \tag{17}\] where we have used \[\omega_{p}^{2}=\frac{q^{2}}{m}\sum_{\nu}\int d\Omega f_{0\nu}, \tag{18}\] as the definition of the plasma frequency. Taking the classical limit by letting \(\hbar\to 0\) in Eq. (17), most terms disappear and we get the classical Langmuir dispersion relation. While the coefficients in front of the spin-dependent terms are usually small (unless we have very high densities and/or magnetic field strengths), nevertheless the quantum terms can be important for wave-frequencies close to \(\Delta\omega_{ce}\). 
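As a consistency check on Eq. (16), the normalization factor \(N_{m}\) can be verified numerically: in units with \(m=v_{th}=1\), the Gaussian momentum integral gives \(\pi^{3/2}\) and the spin integral of \(1\pm\cos\theta_{s}\) over the unit sphere gives \(4\pi\), so that summing over the two spin states reproduces \(N_{m}=8\pi^{5/2}\cosh(\mu B_{0}/K_{B}T)\). A sketch using simple trapezoidal quadrature:

```python
import math

def momentum_integral(n=100_000, pmax=10.0):
    """4*pi * int_0^inf p^2 exp(-p^2) dp  (should equal pi**1.5)."""
    h = pmax / n
    s = sum((i * h) ** 2 * math.exp(-(i * h) ** 2) for i in range(1, n))
    return 4.0 * math.pi * h * s

def spin_integral(sign=+1, n=100_000):
    """2*pi * int_0^pi (1 + sign*cos(t)) sin(t) dt  (should equal 4*pi)."""
    h = math.pi / n
    s = sum((1.0 + sign * math.cos(i * h)) * math.sin(i * h)
            for i in range(n + 1))
    return 2.0 * math.pi * h * s
```

With these two values, \(\sum_{\pm}\int d\Omega f_{0\pm}=4\pi\cdot\pi^{3/2}(e^{x}+e^{-x})/N_{m}=1\) for \(x=\mu B_{0}/K_{B}T\), confirming the form of \(N_{m}\) (with the density normalized to unity in these units).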
The effects of spin resonances, i.e. frequencies fulfilling \(\omega\approx\Delta\omega_{ce}\), will be explored below. ### The ponderomotive Force The aim of this sub-section is to generalize the linearized treatment to the weakly nonlinear regime, in order to deduce the ponderomotive force for electrostatic waves. For this purpose, we use the following ansatz \[f(x,p,s,t)=f_{0}(p^{2},\theta_{s})+f_{lf}(z,t,p,\theta_{s})\\ +\frac{1}{2}\Big{[}\tilde{f}_{1}(z,t,p,s)e^{ikz-i\omega t}+ \tilde{f}_{1}^{*}(z,t,p,s)e^{-ikz+i\omega t}\Big{]}. \tag{19}\] to calculate the weakly nonlinear low-frequency response to electrostatic waves. Here \(f_{lf}\) is the low-frequency response due to quadratic nonlinearities, \(\tilde{f}_{1}\) represents the slowly varying high-frequency wave and the star denotes the complex conjugate. As usual, "slowly varying" means that the amplitude derivatives are small compared to the rapidly oscillating scale at \((\omega,k)\). Using this ansatz in Eq. (1), keeping terms up to quadratic nonlinearity, and averaging to isolate the low-frequency scale, we obtain \[\Big{[}\partial_{t}+\frac{p_{z}}{m}\partial_{z}\Big{]}f_{lf}=-qE_{lf} \frac{\partial f_{0}}{\partial p_{z}}-\frac{\mu}{2mc}\Big{[}\tilde{\mathbf{E}} \times(\mathbf{s}+\nabla_{s})\Big{]}\cdot\nabla_{x}\tilde{f}_{1}^{*}\\ -\frac{q\tilde{E}}{4}\frac{\partial\tilde{f}_{1}^{*}}{\partial p _{z}}-\frac{q\mu}{8mc}\Big{(}\big{[}\tilde{\mathbf{E}}\times(\mathbf{s}+ \nabla_{s})\big{]}\times\mathbf{B}_{0}\Big{)}\cdot\nabla_{p}\tilde{f}_{1}^{*}\\ +\frac{\mu}{4\hbar mc}\Big{[}\mathbf{s}\times(\mathbf{p}\times\tilde{ \mathbf{E}})\Big{]}\cdot\nabla_{s}\tilde{f}_{1}^{*}+\frac{\mu}{8mc}\nabla\Big{[}( \mathbf{s}+\nabla_{s})\cdot(\mathbf{p}\times\tilde{\mathbf{E}})\Big{]}\cdot \nabla_{p}\tilde{f}_{1}^{*}\\ +c.c. \tag{20}\] The high frequency response \(\tilde{f}_{1}\) is obtained by making the substitution \[\omega\rightarrow\omega+i\partial_{t}\\ k\to k-i\partial_{z}\] in the linear solution for \(f_{1}\) in Eq. 
(12), where \(i\partial_{t}\) and \(i\partial_{z}\) can be treated as small perturbations due to the slowly varying amplitudes. Now, having an implicit expression for \(f_{lf}\), we will calculate the total low-frequency current \[J_{lf}=J_{lf}^{f}+J_{lf}^{p}, \tag{21}\] where \[J_{lf}^{p} =-3\mu\partial_{t}\int d\Omega\frac{p_{\perp}}{2mc}\sin\theta_{s }\big{(}\cos\varphi_{s}\sin\varphi_{p}-\sin\varphi_{s}\cos\varphi_{p}\big{)}f_{lf}\] \[J_{lf}^{f} =q\int d\Omega\frac{p_{z}}{m}f_{lf} \tag{22}\] are the polarization and free low-frequency currents, respectively. Note that the low-frequency free current looks simpler than the expression in Eq. (9) since the current is directed along \(\hat{z}\). Now we want to use the expression for \(f_{lf}\) in Eq. (20) to calculate the current Eq. (22). But since Eq. (20) does not provide an explicit expression for \(f_{lf}\), we need to make some further calculations. We note that for the low-frequency free current in Eq. (22), we have the following relation \[\partial_{t}J_{lf}^{f}+q\int d\Omega\frac{p_{z}^{2}}{m^{2}}\frac{\partial f_{lf }}{\partial z}=q\int d\Omega\frac{p_{z}}{m}\Big{[}\partial_{t}+\frac{p_{z}}{ m}\partial_{z}\Big{]}f_{lf}. \tag{23}\] The term in the square brackets in Eq. (23) is the same as the left-hand side of Eq. (20). However, we still need to deal with the integral on the left-hand side of Eq. (23). Due to its proportionality to \(p_{z}^{2}\), this term is small in the low temperature limit, and we will use this for a perturbative calculation in the next step. Taking the time-derivative of Eq. (23), we get \[\partial_{t}^{2}J_{lf}^{f}\approx q\partial_{t}\int d\Omega\frac{ p_{z}}{m}\Big{[}\partial_{t}+\frac{p_{z}}{m}\partial_{z}\Big{]}f_{lf}\\ -q\partial_{z}\int d\Omega\frac{p_{z}^{2}}{m^{2}}\Big{[}\partial_ {t}+\frac{p_{z}}{m}\partial_{z}\Big{]}f_{lf} \tag{24}\] Note that we added \(p_{z}^{3}/m^{3}\partial_{z}^{2}f_{lf}\) in the last term. 
This term turns out to be a higher-order thermal correction to the rest of the terms, but we added it in order to use the implicit expression for \(f_{lf}\) in Eq. (20). Following the same procedure for the polarization current, we get \[\partial_{t}^{2}J_{lf}^{p}\approx-\frac{3\mu}{2mc}\partial_{t}^{2 }\int d\Omega p_{\perp}\sin\theta_{s}(\cos\varphi_{s}\sin\varphi_{p}-\sin \varphi_{s}\cos\varphi_{p})\Big{[}\partial_{t}+\frac{p_{z}}{m}\partial_{z} \Big{]}f_{lf}\\ +\frac{3\mu}{2mc}\partial_{t}\int d\Omega p_{\perp}\sin\theta_{s} (\cos\varphi_{s}\sin\varphi_{p}-\sin\varphi_{s}\cos\varphi_{p})\frac{p_{z}}{m} \partial_{z}\Big{[}\partial_{t}+\frac{p_{z}}{m}\partial_{z}\Big{]}f_{lf} \tag{25}\] Now we can calculate the integrals in Eq. (24) and Eq. (25). In doing that, we use Eq. (15) for \(f_{0}\). Calculating the \(\phi_{s}\)-, \(\phi_{p}\)- and \(\theta_{s}\)-integrals, we get \[\partial_{t}^{2}J_{lf}^{f}=-2(2\pi)^{2}q^{2}\sum_{\nu}\int p_{\perp} dp_{\perp}dp_{z}\Big{[}\frac{p_{z}}{m}\partial_{t}-\frac{p_{z}^{2}}{m^{2}} \partial_{z}\Big{]}\Bigg{[}E_{lf}\frac{\partial f_{0\nu}}{\partial p_{z}}\\ +i\left|E\right|^{2}\frac{\partial}{\partial p_{z}}\bigg{(}\frac{1 }{\omega-k\frac{p_{z}}{m}-i(\partial_{t}+\frac{p_{z}}{m}\partial_{z})}-\frac{1 }{\omega-k\frac{p_{z}}{m}+i(\partial_{t}+\frac{p_{z}}{m}\partial_{z})}\bigg{)} \frac{\partial f_{0\nu}}{\partial p_{z}}\\ +i\left|E\right|^{2}\frac{\mu B_{0}}{32mc}\frac{\partial}{ \partial p_{\perp}}\sum_{\pm}\bigg{(}\frac{q\mu B_{0}\partial_{p_{\perp}}f_{0 \nu}\mp(k-i\partial_{z})\mu p_{\perp}\partial_{p_{z}}f_{0\nu}+2\nu\mu p_{\perp }/\hbar f_{0\nu}}{\omega-k\frac{p_{z}}{m}-i(\partial_{t}+\frac{p_{z}}{m} \partial_{z})\mp\Delta\omega_{ce}}\\ -\frac{q\mu B_{0}\partial_{p_{\perp}}f_{0\nu}\mp(k+i\partial_{z}) \mu p_{\perp}\partial_{p_{z}}f_{0\nu}+2\nu\mu p_{\perp}/\hbar f_{0\nu}}{ \omega-k\frac{p_{z}}{m}+i(\partial_{t}+\frac{p_{z}}{m}\partial_{z})\mp\Delta \omega_{ce}}\bigg{)}\Bigg{]} \tag{26}\] and 
\[\partial_{t}^{2}J_{lf}^{p}=\frac{(2\pi)^{2}q^{2}\mu^{2}\left|E \right|^{2}\partial_{t}}{4m^{2}c^{2}}\sum_{\nu,\pm}\int p_{\perp}^{2}dp_{\perp} dp_{z}\Big{[}\partial_{t}-\frac{p_{z}}{m}\partial_{z}\Big{]}\frac{\partial}{ \partial p_{z}}\\ \Bigg{[}\frac{\pm B_{0}\partial_{p_{\perp}}f_{0\nu}-(k-i\partial_ {z})p_{\perp}/q\partial_{p_{z}}f_{0\nu}\pm 2\nu p_{\perp}/\hbar qf_{0\nu}}{ \omega-k\frac{p_{z}}{m}-i(\partial_{t}+\frac{p_{z}}{m}\partial_{z})\mp\Delta \omega_{ce}}\\ +\frac{\pm B_{0}\partial_{p_{\perp}}f_{0\nu}-(k+i\partial_{z})p_{ \perp}/q\partial_{p_{z}}f_{0\nu}\pm 2\nu p_{\perp}/\hbar qf_{0\nu}}{\omega-k\frac{p_{z}}{m} +i(\partial_{t}+\frac{p_{z}}{m}\partial_{z})\mp\Delta\omega_{ce}}\Bigg{]} \tag{27}\] Expanding the denominators in Eq. (26) and Eq. (27) to the lowest non-vanishing order in \(p_{z}\) is consistent with the approximation made in Eq. (24) and Eq. (25). Then, we integrate over \(p_{z}\) and \(p_{\perp}\) and use Ampère's law \[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+\omega_{p}^{2}\Big{)} E_{lf}=-\frac{2q\omega_{p}^{2}}{m\omega^{2}}\Bigg{[}1-\frac{7\mu B_{0}\, \hbar\omega^{2}\Delta\omega_{ce}}{64m^{2}c^{3}(\omega^{2}-\Delta\omega_{ce}^{2}) }\Bigg{]}\frac{\partial\left|E\right|^{2}}{\partial z}\\ +\frac{\mu B_{0}\hbar kq}{16m^{3}c^{3}}\,\frac{\omega\Delta\omega _{ce}\omega_{p}^{2}}{(\omega+\Delta\omega_{ce})^{2}(\omega-\Delta\omega_{ce}) ^{2}}\frac{\partial\left|E\right|^{2}}{\partial t}, \tag{28}\] Taking the classical limit \(\hbar\to 0\), we get \[\Big{(}\frac{\partial^{2}}{\partial t^{2}}+\omega_{p}^{2}\Big{)}E_{lf}=- \frac{2q\omega_{p}^{2}}{m\omega^{2}}\frac{\partial\left|E\right|^{2}}{ \partial z}\equiv\frac{qn_{0}}{\epsilon_{0}}f_{p} \tag{29}\] where \(f_{p}\) is defined by the second equality, such that \(f_{p}\) gives us the classical ponderomotive force. Due to the velocity perturbation being parallel to the external magnetic field, the unperturbed magnetic field does not influence the result in the classical case. 
However, quantum mechanically, due to the spin-orbit interaction, there is a contribution that modifies the classical ponderomotive force rather significantly, as seen in Eq. (28). In particular, in addition to a term proportional to the spatial intensity gradient, we get a term proportional to the temporal intensity gradient. Even more importantly, the quantum mechanical terms contain spin resonances, which will be investigated in the next section. ## III Comparison of classical and non-classical contributions to the ponderomotive force The purpose of this section is to illustrate the importance of the spin contributions in Eq. (28) by comparing the new terms to the classical contribution. However, the relative magnitude of the spin terms depends to a considerable degree on the linear wave properties of the electrostatic pulse, as described by the dispersion relation Eq. (17). To simplify the expression for the ponderomotive force in Eq. (28), we use the fact that, to lowest order, the pulse is stationary in a frame moving with the group velocity \(v_{g}=\frac{\partial\omega}{\partial k}\), such that the approximation \[\frac{\partial\left|E\right|^{2}}{\partial t}\simeq-v_{g}\frac{\partial\left|E \right|^{2}}{\partial z} \tag{30}\] can be applied to compare the magnitude of the terms in Eq. (28). Thus, as a prerequisite to studying the ponderomotive force, we need to analyze the linear dispersion relation to deduce the group velocity. While the general behavior of Eq. (17) can be complicated, our analysis is simplified by the fact that for most naturally occurring plasmas, we can treat \(\hbar\omega_{p}/mc^{2}\) and \(\hbar\omega_{c}/mc^{2}\) as small parameters. For this case, which we focus on below, the solutions of Eq. (17) separate into two modes: one mode resembling the classical Langmuir mode to a good approximation, and another mode with a frequency close to the spin resonance, approximately given by \(\omega\simeq\Delta\omega_{ce}\). 
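The group velocity entering Eq. (30) can be made explicit for the approximately classical branch: implicit differentiation of the classical Langmuir dispersion relation \(\omega^{4}=\omega_{p}^{2}\omega^{2}+\tfrac{3}{2}\omega_{p}^{2}k^{2}v_{th}^{2}\) gives \(v_{g}=3\omega_{p}^{2}kv_{th}^{2}/[2\omega(2\omega^{2}-\omega_{p}^{2})]\). A numerical sketch in normalized units (the parameter values are illustrative, not from the paper):

```python
import math

def langmuir_omega(k, omega_p, v_th):
    """Positive root of omega**4 - omega_p**2 * omega**2
    - 1.5 * omega_p**2 * k**2 * v_th**2 = 0 (classical Langmuir branch)."""
    a = omega_p**2
    b = 1.5 * omega_p**2 * k**2 * v_th**2
    return math.sqrt(0.5 * (a + math.sqrt(a**2 + 4.0 * b)))

def group_velocity(k, omega_p, v_th):
    """v_g = d(omega)/dk, the quantity entering Eq. (30)."""
    w = langmuir_omega(k, omega_p, v_th)
    return 3.0 * omega_p**2 * k * v_th**2 / (2.0 * w * (2.0 * w**2 - omega_p**2))
```

A finite-difference cross-check confirms the analytical derivative, and \(\omega\to\omega_{p}\) (with \(v_{g}\to 0\)) is recovered as \(k\to 0\).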
We will simply refer to these modes as the Langmuir mode and the spin resonance mode, respectively. Below we compare the relative contributions of the classical and non-classical terms in Eq. (28) for the Langmuir mode and for the spin resonance mode. ### The Langmuir mode For a case where the linear dispersion is approximately classical, the frequency \(\omega\) cannot be too close to the spin resonance. If the spin resonance is avoided, however, the magnitude of the quantum terms in Eq. (28) will also be somewhat limited. One would expect, perhaps, that the condition for neglecting the spin contribution to Eq. (28) would be the same as for dropping the spin contribution in Eq. (17). As it turns out, however, this is not quite true. On the contrary, it is possible to have a situation where the linear dispersion relation is approximately classical, although the spin terms dominate the expression for the ponderomotive force. This requires an intermediate regime, where the wave frequency is fairly close to the spin resonance, in order for the spin contributions of Eq. (28) to be magnified. Still, the wave frequency must be sufficiently far from the spin resonance, in order not to invalidate the classical approximation of Eq. (17). Firstly, we analyze the linear dispersion relation Eq. (17), comparing the classical terms with the dominant spin term. For the classical Langmuir dispersion relation to hold approximately, we must have the strong inequality \[\frac{\hbar^{2}\omega_{p}^{2}}{8m^{2}c^{4}}\frac{\omega_{ce}}{\tilde{\omega}}\ll 1 \tag{31}\] fulfilled, where \(\tilde{\omega}\equiv\omega-\Delta\omega_{ce}\). Assuming this to hold, we can neglect all of the spin terms in the dispersion relation Eq. (17). However, although we cannot be too close to the spin resonance (as implied by Eq. (31)), we cannot be too far from the resonance either, as otherwise the spin terms will not be significant in Eq. (28). 
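To illustrate how restrictive Eq. (31) actually is, its left-hand side can be evaluated for representative parameters. The sketch below assumes SI units; the density, magnetic field, and frequency offset are illustrative values, not taken from the paper:

```python
HBAR = 1.054_571_817e-34  # J s
M_E = 9.109_383_7e-31     # kg
C = 2.997_924_58e8        # m/s
E_CH = 1.602_176_634e-19  # C
EPS0 = 8.854_187_8128e-12 # F/m

def langmuir_validity(n0, B0, omega_tilde):
    """Left-hand side of Eq. (31):
    (hbar^2 w_p^2 / 8 m^2 c^4) * (w_ce / omega_tilde), which must be
    << 1 for the classical Langmuir dispersion relation to apply."""
    w_p2 = n0 * E_CH**2 / (EPS0 * M_E)
    w_ce = E_CH * B0 / M_E
    return HBAR**2 * w_p2 / (8.0 * M_E**2 * C**4) * (w_ce / omega_tilde)
```

Even at a dense-plasma density of \(10^{28}\,\mathrm{m^{-3}}\) and a \(100\,\mathrm{T}\) field, the condition fails only for frequencies extremely close to the resonance, since the left-hand side grows as \(1/\tilde{\omega}\).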
In practice, for the Langmuir wave mode (and with \(\hbar\omega_{p}/mc^{2}\ll 1\) and \(\hbar\omega_{c}/mc^{2}\ll 1\)), spin terms are significant only for a rather small wave number spectrum narrowly centered around \(k\simeq k_{c}\), where \(k_{c}\) is the critical wave number at which the classical Langmuir dispersion coincides with the spin resonance frequency. Thus, we will here be concerned with wave numbers \(k\simeq k_{c}\), where \(k_{c}\) fulfills: \[\omega^{2}=\omega_{p}^{2}+\frac{3}{2}k_{c}^{2}v_{th}^{2}\approx\Delta\omega_{ ce}^{2}. \tag{32}\] Evaluating the ponderomotive expression Eq. (28) in a narrow wave number spectrum centered around \(k=k_{c}\) (such that \(\omega\), approximately given by the classical Langmuir dispersion relation, is centered around \(\Delta\omega_{ce}\)), we evaluate temporal derivatives according to Eq. (30). More specifically, in Fig. 1, we plot the ratio of the total ponderomotive force and its classical contribution only, for a narrow frequency spectrum surrounding the spin resonance. We assume Eq. (31) to be fulfilled, such that the classical Langmuir dispersion relation can be used to evaluate the group velocity. Moreover, the small deviation of \(k\) from \(k_{c}\) has been neglected in the plot. While the region where Eq. (31) is violated must be discarded from the plot (the region inside the two dashed vertical lines shown in the first panel of Fig. 1), we note that the spin terms of Eq. (28) magnify the ponderomotive force in a wider frequency region than the one that must be excluded. In other words, there is a narrow frequency band where the linear wave properties are classical to a good approximation, but where the nonlinear properties need to be evaluated with the spin terms included. In the second panel of Fig. 1, we see a similar plot, but for a somewhat weaker magnetic field (normalized magnetic field \(B_{n}=\mu B_{0}/mc^{2}=0.01\)), in which case the resonance region becomes slightly narrower. 
The narrowing applies to a much higher degree to the validity condition. Thus, in the second panel of Fig. 1, the region violating the inequality Eq. (31) is too narrow to be displayed. To be concrete, if the two vertical lines were given as in Fig. 1, but for the new parameter values, the vertical lines would be centered too close to the precise resonance at \(\omega_{n}=0\) to be separable in the given resolution. Finally, to show the role of a varying density, three curves for different values of \(R\equiv\omega_{p}/\Delta\omega_{ce}\) are shown in Fig. 2. We can see that the curve with the highest value of \(R\) has the narrowest resonance. Before we turn our attention to the spin resonance mode, we note that for the Langmuir mode, the second spin term (proportional to \(\partial\left|E\right|^{2}/\partial t\)) in Eq. (28) always dominates over the first (proportional to \(\partial\left|E\right|^{2}/\partial z\)), since the resonance is of a higher order for the second term.

Figure 3: The ratio \(\alpha\) of the spin to the classical ponderomotive force is plotted versus the normalized wave number \(k_{n}=kv_{th}/(\omega-\Delta\omega_{ce})\). The solid curve is the first spin term, the star curve is the second spin term and the dashed curve is the total spin force. In the first panel we have \(B_{n}=0.1,R=2\), in the second we have \(B_{n}=0.1,R=0.9\) and in the third \(B_{n}=0.1,R=0.1\).

Figure 4: The ratio \(\alpha\) of the spin to the classical ponderomotive force is plotted versus the normalized wave number \(k_{n}=kv_{th}/(\omega-\Delta\omega_{ce})\). The solid curve is the first spin term, the star curve is the second spin term and the dashed curve is the total spin force. In the first panel we have \(B_{n}=0.1,R=0.9\) and in the second we have \(B_{n}=0.1,R=0.7\).

### The Spin resonance mode Next, we will consider the complementary frequency regime close to the spin resonance where Eq. (31) is violated. 
Specifically, we focus on the long wavelength regime \(\left|kv_{th}/(\omega-\Delta\omega_{ce})\right|\ll 1\), in order to avoid strong Landau damping of the mode. Since we are focusing on the spin resonance mode, we can let \(\omega=\Delta\omega_{ce}\) when evaluating the ponderomotive force terms of Eq. (28), except in the denominators, where, obviously, a more accurate expression must be used. As a prerequisite for further analysis, we calculate the frequency for the mode at \(k=0\), which will deviate slightly from \(\Delta\omega_{ce}\). Approximating the denominators \(1/(\omega^{2}-\Delta\omega_{ce}^{2})\) of Eq. (17) by \(1/[2(\omega-\Delta\omega_{ce})\Delta\omega_{ce}]\), we compute the frequency for \(k=0\) for the spin resonance mode as \[\omega=\Delta\omega_{ce}\left(1+\delta\right) \tag{33}\] where \[\delta=\frac{\hbar^{2}\omega_{p}^{2}\omega_{ce}\Delta\omega_{ce}}{16m^{2}c^{4}\left(\omega_{p}^{2}-\Delta\omega_{ce}^{2}\right)}\left[1+\frac{mv_{th}^{2}}{\hbar\omega_{ce}}\tanh\left(\frac{\mu B_{0}}{k_{B}T}\right)\right] \tag{34}\] We note that \(\delta\ll 1\) holds to a very good approximation for most parameters of physical interest. Next, we consider the spin resonance mode in the long wavelength regime \(\left|kv_{th}/(\omega-\Delta\omega_{ce})\right|\ll 1\), where the terms proportional to \(k^{2}\) of Eq. (17) are small corrections. The dispersion relation can then be approximated by \[\omega=\Delta\omega_{ce}\left[(1+\delta)+\frac{\hbar^{2}\omega_{p}^{2}}{128m^{2}c^{4}}\frac{\omega_{ce}\Delta\omega_{ce}}{\left(\omega_{p}^{2}-\Delta\omega_{ce}^{2}\right)}\frac{k^{2}v_{th}^{2}}{\left(\omega-\Delta\omega_{ce}\right)^{2}}\right] \tag{35}\] Apparently, since \(\delta\ll 1\), and the dispersive term proportional to \(k^{2}\) in Eq. (35) is small in the long wavelength regime, \(\omega\approx\Delta\omega_{ce}\) applies for the spin resonance mode. Using Eq.
(35), we can compute the group velocity for the spin resonance mode, and compare the magnitude of the classical and the spin terms in Eq. (28). The first spin term of Eq. (28) (proportional to \(\partial\left|E\right|^{2}/\partial z\)) and the second spin term (proportional to \(\partial\left|E\right|^{2}/\partial t\)), as well as the sum of both, are plotted in Fig. 3 as a function of normalized wave number \(k_{n}=kv_{th}/(\omega-\Delta\omega_{ce})\), where the validity condition of the plot requires \(k_{n}\ll 1\). All contributions are normalized against the classical ponderomotive force, i.e. a contribution equal to \(-1\) is equal in magnitude to the classical ponderomotive force but has the opposite sign. The classical ponderomotive force for electrostatic fields is always directed from higher intensity to lower intensity, but, as seen in Fig. 3, this does not always hold for the spin contributions. Specifically, the first spin term, which dominates for the longest wavelengths, can have the opposite sign to the classical term. However, while this term can be significant, it cannot be negative enough to revert the direction of the total ponderomotive force. Thus, independently of wave number and parameter values, the total ponderomotive force for electrostatic fields is always directed from higher to lower intensities. One might expect that the spin terms are always important for the spin resonance mode. However, as shown in the first and second panels of Fig. 3, the classical ponderomotive force can be dominant for the longest wavelengths (both spin terms and the sum of them are well below unity). One thing to note when comparing the first and second panels of Fig. 3 is the change of sign of the first spin term. As it turns out, the first spin term has the same sign as \(1-R\). Next, as seen in the third panel of Fig. 3, we note that the relative importance of the ponderomotive force terms is rather sensitive to the plasma density.
Decreasing the density, as captured by the \(R\)-parameter, we see that the first spin term becomes larger than the classical term, and will dominate in the long wavelength regime. In Fig. 3 we have only shown the results for very long wavelengths up to \(k_{n}<10^{-3}\). As the calculations of this section apply up to \(k_{n}<0.1\), the shorter wavelength regime is also of interest. Using the same parameters as in the second panel of Fig. 3, but extending the wave number regime, we see in the first panel of Fig. 4 that the spin part of the ponderomotive force (due to the second term) will dominate for the shorter wavelengths. Decreasing the density further (as in the second panel of Fig. 4), the effects are even more pronounced, as the spin term can be more than a factor 50 larger than the classical term. Due to the scale, it is hard to read off the first spin term, which is small compared to the other terms in both panels of Fig. 4. This term varies little with wave number and is close to 0.1 in the upper panel and 0.45 in the lower panel, for the whole spectrum. ## IV Summary and conclusions In the present paper, we have calculated the ponderomotive force for electrostatic waves propagating in a plasma parallel to an external magnetic field. The calculation has been performed using a quantum kinetic model, including the electron spin dynamics, covering effects such as spin-orbit interaction and Thomas precession. The model is of particular interest for strongly magnetized environments, as can be found in astrophysics. The ponderomotive force is of crucial importance for a large number of nonlinear phenomena, such as soliton formation, self-focusing, wake field generation, and particle acceleration. In the preceding section, we studied the relative magnitude of classical and quantum mechanical contributions to the ponderomotive force. An interesting finding is that many of the preliminary conclusions from linear theory do not translate into the nonlinear regime.
Thus, even if the inequality Eq. (31) is fulfilled, such that the linear dispersion relation agrees to a good approximation with the classical Langmuir dispersion relation, in the vicinity of the spin resonance the quantum terms may still dominate the expression for the ponderomotive force. Similarly, even when the linear mode is a spin resonance mode (given by Eq. (35)), which is quantum mechanical in nature, it may happen that the ponderomotive force is given by the classical expression. However, depending on the plasma parameters and the wave number, it is also possible that the quantum contribution is larger than the classical one by orders of magnitude. Understanding the nonlinear spin dynamics in the simpler case of electrostatic fields is a first step towards understanding more complex nonlinear phenomena, such as spin polarization by intense laser pulses [11; 12]. Moreover, the findings of our paper are a necessary prerequisite for a more detailed analysis of nonlinear phenomena of astrophysical plasmas, in particular accretion discs surrounding objects such as pulsars and magnetars.
2308.08449
Improving CTC-AED model with integrated-CTC and auxiliary loss regularization
Connectionist temporal classification (CTC) and attention-based encoder decoder (AED) joint training has been widely applied in automatic speech recognition (ASR). Unlike most hybrid models that separately calculate the CTC and AED losses, our proposed integrated-CTC utilizes the attention mechanism of AED to guide the output of CTC. In this paper, we employ two fusion methods, namely direct addition of logits (DAL) and preserving the maximum probability (PMP). We achieve dimensional consistency by adaptively affine transforming the attention results to match the dimensions of CTC. To accelerate model convergence and improve accuracy, we introduce auxiliary loss regularization for accelerated convergence. Experimental results demonstrate that the DAL method performs better in attention rescoring, while the PMP method excels in CTC prefix beam search and greedy search.
Daobin Zhu, Xiangdong Su, Hongbin Zhang
2023-08-15T03:31:47Z
http://arxiv.org/abs/2308.08449v1
# Improving CTC-AED model with integrated-CTC and auxiliary loss regularization ###### Abstract Connectionist temporal classification (CTC) and attention-based encoder decoder (AED) joint training has been widely applied in automatic speech recognition (ASR). Unlike most hybrid models that separately calculate the CTC and AED losses, our proposed integrated-CTC utilizes the attention mechanism of AED to guide the output of CTC. In this paper, we employ two fusion methods, namely direct addition of logits (DAL) and preserving the maximum probability (PMP). We achieve dimensional consistency by adaptively affine transforming the attention results to match the dimensions of CTC. To accelerate model convergence and improve accuracy, we introduce auxiliary loss regularization. Experimental results demonstrate that the DAL method performs better in attention rescoring, while the PMP method excels in CTC prefix beam search and greedy search. Keywords: Speech recognition, Hybrid CTC and attention, Two-pass. ## 1 Introduction Due to its outstanding recognition performance, end-to-end (E2E) speech recognition has been increasingly applied in both academic and industrial fields. There are three mainstream E2E models in ASR, namely CTC-based [6, 9], Transducer-based [10, 11, 12], and AED-based models [13, 14, 15, 17, 19]. Based on whether the decoding stage considers the historical information of frame-wise outputs, they can be further classified into autoregressive (AR) [21] and non-autoregressive (NAR) [22, 24, 25, 26] models. AR models typically have higher accuracy but longer decoding time, while NAR models exhibit relatively poorer recognition performance but faster decoding speed. The CTC-based models have a decent decoding speed. However, due to their highly unreasonable assumption of context independence and their lack of language modeling capabilities, these models fail to meet practical requirements. Recently, a hybrid CTC-AED model [33] has been proposed.
This model combines CTC and AED losses, applies dynamic chunk attention, and performs two-pass decoding. The second-pass decoding by AED (AR pattern) significantly improves the accuracy of the first-pass decoding by CTC (NAR pattern). However, in the training stage, the CTC and AED loss functions are separately computed, and the only relationship between CTC and AED is a weighted sum during the calculation of the total loss. To address this limitation, we propose a structure called integrated-CTC, which fuses the results from the attention-based decoder into CTC during the training stage. This approach helps alleviate the inherent weakness of CTC in language modeling to some extent. The AR method (attention mechanism) provides more contextual information to CTC and helps the encoder form richer modeling capabilities. By fusing the results of the AR and NAR methods during the training stage, we achieve a competitive character error rate (CER) in the decoding stage without the need for a rescoring process. Furthermore, we found that using the direct addition of logits (DAL) method achieved a relative character error rate reduction (CERR) of 1% in the results obtained solely from one-pass decoding, compared to the corresponding decoding approach in WeNet [1]. The method of preserving the maximum probability (PMP) achieves a CER of 4.79% through CTC prefix beam search decoding, outperforming the result of 4.93% obtained by the same method in WeNet. The essence of two-pass decoding is to utilize the first pass for rapid decoding and improve accuracy in the second pass. Currently, mainstream two-pass hybrid models do not use a higher-accuracy AR model to correct the output of CTC during the training stage. Our proposed integrated-CTC allows CTC to reference the results from the attention-based decoder during output generation in the training stage.
We achieve dimensional consistency between the attention-based decoder results and CTC outputs by employing the proposed adaptive affine algorithm to scale the dimensions of the AED results to match the dimensions of CTC. Through frame-level correction, the output of CTC is regularized, resulting in fairly good recognition results. The total loss of integrated-CTC is obtained by weighting the integrated-CTC loss and the attention loss. The specific experimental details regarding the impact of loss on CER will be shown in Table 1. Some recent work has focused on the regularization of the CTC loss, aiming to achieve better convergence by modifying the number of CTC layers or the structure of CTC [6, 22, 31]. This regularization of the CTC loss not only avoids significant computational overhead but also significantly reduces the difficulty of model training. To a certain extent, this represents a significant improvement for CTC, as it reduces computational costs and improves accuracy. However, relying solely on the CTC structure for overall improvement has limitations, and in [6], despite the use of a conformer model in the encoder, the CER for AISHELL-1 only reached 5.2%. This is also why we chose the CTC-AED model. Based on the CTC-AED model, we also propose auxiliary loss regularization to help the model achieve better recognition performance and faster convergence speed. Our experimental results show that using the integrated-CTC loss reduces training time by 5% compared to using the official loss in WeNet. The main contributions of this paper are as follows: 1. We propose a simple yet effective training method called integrated-CTC, which influences the output of CTC through the attention mechanism during the training stage. 2. We introduce an algorithm called adaptive affine that dynamically adjusts the dimensions of AED outputs. 3. We propose auxiliary loss regularization to facilitate faster convergence and improve model accuracy. 4. 
We demonstrate the impact of assigning different weights to posterior probabilities in attention and achieve a CER of 4.49% on the AISHELL-1 dataset using two-pass decoding. Furthermore, by using only greedy search decoding, we achieve a CER of 4.79%. ## 2 Hybrid CTC-AED Model with Two-pass Decoding ### Model Architecture The proposed hybrid CTC-AED model [14] with two-pass [28, 29] decoding is illustrated in Fig. 1 (_Left_). The speech input is fed into a shared encoder, attention decoder, and integrated-CTC module. The total loss is computed by combining the integrated-CTC loss and attention loss. The shared encoder produces the vector \(s\), which is then processed by the attention decoder to obtain \(h\). The integrated-CTC module, as shown in Fig. 1 _(Right)_, involves obtaining \(s_{out}\) through CTC and transforming \(h\) into \(h_{out}\) using the adaptive affine algorithm. Finally, \(s_{out}\) and \(h_{out}\) are fused using two fusion methods. Figure 1: _(Left)_ shows the structure of our proposed model. The model consists of three parts: shared encoder, attention decoder, and integrated-CTC. Unlike other hybrid models, we integrate the output of the attention decoder into the CTC to help the CTC decode accurately. _(Right)_ shows the internal construction of integrated-CTC. It consists of three parts: adaptive affine, CTC and the integration algorithm. Among them, adaptive affine is responsible for aligning the output dimension of the attention decoder with that of the CTC and establishing the correspondence between frames. The posterior probability vectors of attention are integrated with the output of CTC by the DAL or PMP algorithm after adaptive affine. #### 2.2.1 Shared Encoder of the Hybrid Model The shared encoder in the hybrid model consists of multiple encoder blocks, responsible for transforming speech features into feature vectors. The encoder can be based on Transformer or Conformer architecture, and in our experiments, we utilize Conformer [8].
The Conformer network primarily comprises relative positional multi-head attention modules, position-wise feed forward modules, and convolution modules. The relative positional encoding in multi-head attention provides robustness and generalizability to inputs of different lengths. The position-wise feed forward network linearly maps inputs at each time step and changes the dimensions of input vectors through the average forward matrix. The convolution module employs causal convolution to ensure that the model does not rely on right-context information. The computation formula for a Conformer block is as follows: \[x_{i}^{\prime}=x_{i}+\frac{1}{2}\mathbf{FFN}(x_{i}), \tag{1}\] \[x_{i}^{\prime\prime}=x_{i}^{\prime}+\mathbf{MHA}(x_{i}^{\prime}), \tag{2}\] \[\hat{x}=x_{i}^{\prime\prime}+\mathbf{Conv}(x_{i}^{\prime\prime}), \tag{3}\] \[y_{i}=\mathbf{Batchnorm}(\hat{x}+\frac{1}{2}\mathbf{FFN}(\hat{x})), \tag{4}\] where \(x_{i}\in\mathbb{R}^{B\times L\times D}\); B, L and D indicate batch size, sequence length and dimension of speech features respectively. The feed forward network (FFN) contains two linear layers and a swish activation function. \(x_{i}^{\prime}\) is successively entered into three modules for calculation and residual connection. After a final layer of FFN, the output is fed to a Batchnorm function. #### 2.2.2 Two-pass Decoder And Auxiliary Loss Regularization The output of the shared encoder yields an output \(\mathbf{O}\) with dimension \(T\times V\), where T represents the number of frames in the audio and V represents the size of the vocabulary. CTC finds the N best paths (e.g., using beam search) with the highest probabilities based on the corresponding decoding algorithm. The attention decoder performs the second-pass decoding. The output \(\mathbf{O}\) is fed into the attention decoder, which performs rescoring in an AR manner.
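The Conformer block wiring of Eqs. (1)-(4) above can be sketched as follows. The sub-modules here are toy stand-ins (random linear maps) for the real FFN, multi-head attention and convolution layers; only the residual connections and the 1/2-scaled FFN branches follow the equations.

```python
import numpy as np

# Sketch of the Conformer block structure of Eqs. (1)-(4).  Shapes and the
# toy sub-modules are assumptions for illustration only.
rng = np.random.default_rng(0)
B, L, D = 2, 5, 8   # batch, sequence length, feature dimension (assumed)

def toy_layer(d, rng):
    # Stand-in for FFN / MHA / Conv: a small random linear map.
    w = 0.1 * rng.standard_normal((d, d))
    return lambda x: x @ w

ffn1, mha, conv, ffn2 = (toy_layer(D, rng) for _ in range(4))

def batchnorm(x, eps=1e-5):
    # Normalize each feature channel over batch and time.
    mu, var = x.mean(axis=(0, 1)), x.var(axis=(0, 1))
    return (x - mu) / np.sqrt(var + eps)

def conformer_block(x):
    x = x + 0.5 * ffn1(x)                # Eq. (1): half-step feed forward
    x = x + mha(x)                       # Eq. (2): multi-head self-attention
    x = x + conv(x)                      # Eq. (3): convolution module
    return batchnorm(x + 0.5 * ffn2(x))  # Eq. (4): second half-step FFN + norm

y = conformer_block(rng.standard_normal((B, L, D)))
print(y.shape)  # residual connections preserve the (B, L, D) shape
```

The point of the sketch is that every sub-module is wrapped in a residual connection, so the block preserves the input shape end to end.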
The conditional probability \(P_{AR}(Y|O)\) can be represented as: \[P_{AR}(Y|O)=P(y_{1}|O)\prod_{i=2}^{L}P(y_{i}|y_{<i},O), \tag{5}\] where \(y_{i}\) represents the i-th token of the predicted sequence with length L. The attention-based decoder uses a Transformer network structure. During the training process, both CTC and the attention-based decoder compute their respective losses, \(L_{CTC^{\prime}}\) and \(L_{AR}\), which are then combined using a weighted sum operation. \(L_{CTC^{\prime}}\) represents the loss after fusion of the output vectors from CTC and the attention decoder, while \(L_{CTC}\) is the loss from CTC without fusion. Additionally, we also tried adding \(L_{CTC}\) to the total loss, but found that it resulted in worse experimental performance. We speculate that too many CTC regularization terms may lead to overfitting, disregarding the intrinsic information in the speech data itself. The loss function can be expressed as follows: \[\mathcal{L}=\alpha\mathcal{L}_{CTC^{\prime}}+(1-\alpha)\mathcal{L}_{AR}, \tag{6}\] where \(\alpha\) is a hyperparameter. In the loss computation, we tested different values of \(\alpha\) ranging from 0.1 to 0.9, and found that 0.5 provided the most stable results. We employed two fusion algorithms to combine the output from the attention-based decoder with the output from CTC. The dimension of \(t\) is \([L,D]\), where L represents the length of the audio and D represents the size of the dictionary. Adding the probabilities of \(y_{AED}\) to \(y_{CTC}\) can help improve the recognition performance of CTC. Additionally, we designed an algorithm named PMP that saves the maximum probability for each frame and sets the probabilities of other positions to 0, aiming to reduce the number of operations. The experimental details will be presented in Section 4.
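The two fusion methods can be illustrated on toy posterior matrices. The shapes, the weight value, and the assumption that the AED output has already been expanded to the CTC frame length are illustrative; this is a sketch, not the paper's implementation.

```python
import numpy as np

# Toy illustration of the two fusion methods on [frames, vocab] posteriors.
# P, V, lam and the random inputs are assumed values for demonstration.
rng = np.random.default_rng(1)
P, V = 4, 6                      # CTC frames, vocabulary size
lam = 0.05                       # fusion weight (best value in Table 1)

y_ctc = rng.random((P, V))
y_aed = rng.random((P, V))       # assumed already expanded to P frames

# DAL: directly add the (scaled) AED posteriors to the CTC posteriors.
y_dal = y_ctc + lam * y_aed

# PMP: keep only the per-frame maximum of y_aed, zero elsewhere, then add.
pmp_mask = np.zeros_like(y_aed)
pmp_mask[np.arange(P), y_aed.argmax(axis=1)] = y_aed.max(axis=1)
y_pmp = y_ctc + lam * pmp_mask

print(y_dal.shape, y_pmp.shape)
```

DAL passes the full AED distribution through (soft-label-like information), while PMP prunes it to a single entry per frame, which is why PMP favors the faster one-pass CTC decoding paths.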
#### 2.2.3 Regularized CTC As mentioned earlier, in most CTC-AED hybrid models (e.g., ESPNet and WeNet), the CTC loss and AED loss are computed separately during training. After passing through the shared encoder, the outputs are decoded by the CTC decoder and AED decoder, and their respective loss functions are weighted and summed to obtain the total loss for updating the model. However, the attention mechanism can help CTC obtain more accurate results, as demonstrated by the need for attention-based second-pass decoding to correct certain erroneous CTC outputs. However, the AR model increases decoding time, and latency is also important in speech recognition. Let us consider an alternative approach. If we use the attention mechanism to guide CTC decoding during training, we can achieve competitive recognition results without the need for second-pass decoding. This means that even with one-pass decoding (e.g., CTC beam search), we can obtain similar recognition results as with second-pass decoding. In the training process, we employ the same multi-objective learning framework as in [23] to improve the accuracy and robustness of the model. Specifically, we combine the fusion CTC loss \(L_{CTC^{\prime}}\) and the attention-based cross-entropy loss \(L_{AR}\). Furthermore, we use the adaptive affine algorithm to match the dimension of the attention-based decoder output \(y_{AED}\) with the dimension of the CTC decoder output \(y_{CTC}\). Here, \(y_{AED}\) is a vector of length L with a dimension of \(V\), while \(y_{CTC}\) is a vector of length \(P\) with a dimension of \(V\). In practice, \(P\) is often longer than \(L\), but the length of the speech sequence is not fixed, so we cannot set the dimension of the linear mapping as a fixed value. Therefore, we save the maximum probability or the complete posterior probability of each frame in \(y_{AED}\).
By expanding the dimension of \(y_{AED}\) to match the dimension of \(y_{CTC}\), mapping \(L\) to \(P\), the final posterior probability of \(y_{CTC}\) is obtained by weighted summation of \(y_{CTC}\) and \(y_{AED_{L\to P}}\). The expression for \(y_{CTC}\) is as follows: \[y_{CTC}\!:=\lambda y_{AED_{L\to P}}+y_{CTC}, \tag{7}\] where \(\lambda\) represents the hyperparameter for the weight of \(y_{AED}\). The detailed experimental results for this part will be presented in Table 1. ### Adaptive Affine Transformation The attention-based decoder and CTC decoder produce outputs of different dimensions, making direct fusion challenging. To address this issue, we propose an adaptive affine transformation algorithm to dynamically adjust the output dimension of the attention-based decoder to match that of the CTC decoder. This algorithm prepares for the subsequent step of using the attention-based decoder to correct the output of the CTC decoder. The output of the attention-based decoder corresponds to the output of the CTC decoder and is linearly combined with it. Algorithm 1 provides the pseudocode for the adaptive affine transformation algorithm. For the output of the CTC decoder, multiple consecutive frames may correspond to a single phoneme, while for the output of the attention-based decoder, one frame corresponds to one phoneme. Therefore, the output of the attention-based decoder needs to be extended to match the dimension of the CTC decoder's output. We uniformly expand the logits of the attention-based decoder to ensure that most frames correspond to the frames of the CTC decoder. In fact, although there are some frames in the output of the attention-based decoder (\(y_{AED}\)) that do not have a one-to-one correspondence with the output of the CTC decoder (\(y_{CTC}\)), they do not have a significant impact on the overall experimental results.
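A minimal sketch of the uniform expansion from \(L\) token frames to \(P\) acoustic frames, followed by the weighted sum of Eq. (7). The floor-index mapping below is one plausible realization of "uniformly expand"; the paper's exact rule may differ, and all shapes are toy assumptions.

```python
import numpy as np

# Sketch of the dimension-matching step and Eq. (7).  The uniform index
# mapping is an assumption about how the L AED frames are stretched over
# the P CTC frames.
rng = np.random.default_rng(2)
L, P, V = 3, 8, 5        # token length, CTC frame length, vocab (toy values)
lam = 0.05               # weight of y_AED in Eq. (7)

y_aed = rng.random((L, V))
y_ctc = rng.random((P, V))

# Map each CTC frame p to token index floor(p * L / P), spreading the
# L token frames evenly over the P acoustic frames.
idx = (np.arange(P) * L) // P
y_aed_expanded = y_aed[idx]             # shape (P, V)

# Eq. (7): y_CTC := lambda * y_AED_{L->P} + y_CTC
y_ctc_fused = lam * y_aed_expanded + y_ctc
print(y_ctc_fused.shape)  # (8, 5)
```

With \(L=3\) and \(P=8\), each token posterior is repeated over two or three consecutive acoustic frames, mirroring the observation that several CTC frames may correspond to a single phoneme.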
The effect of the frames in the output of the attention-based decoder not strictly aligning with the frames of the CTC decoder can be alleviated by assigning weights to the dimension-matched vectors. ## 3 Experiments ### Dataset In our work, all experiments were conducted on AISHELL-1, a dataset recorded by AISHELL and spoken by 400 people for a total of 178 hours. The training set was recorded by 340 people, the test set by 20 people and the validation set by 40 people, each recording an average of 300 sentences. The words spoken by each person are placed in a folder. ### Experimental Setup For all our experiments, we use 80-dimensional log-mel filter bank (FBank) features computed on a 25ms window with a 10ms shift. The modelling unit consists of 4233 characters, of which 4230 are Chinese characters, a \(<\)blank\(>\) character for CTC output, a \(<\)unk\(>\) character for unknown characters, and a \(<\)sos/eos\(>\) for start-of-sentence and end-of-sentence. We also apply speed perturbation with factors 0.9, 1.0 and 1.1 on the whole dataset, and SpecAugment [20] with 2 frequency masks (maximum frequency mask F = 10) and 2 time masks (maximum time mask T = 50). The hybrid model encoder consists of 12 conformer blocks, and the decoder consists of 6 transformer blocks. The multi-head attention has 4 heads. The subsampling network in the convolutional module includes 2D convolutional layers (subsampled by a factor of 4 in frame rate) with a Swish activation function. The kernel size is set to 3, stride is 2, and the number of channels is 256. The hidden units size of the feed forward layer is set to 2048. We used the Adam optimizer with a warm-up of 25,000 steps, and the learning rate scheduling described in [16] to train the models. Additionally, after 210 epochs, we selected the top 30 models based on their loss values and performed parameter averaging to export the best model. To evaluate the inference speed, we measured the real-time factor (RTF) on the test set.
The reported RTF results are based on the test set, not the validation set. ## 4 Results ### Comparison of Different Decoding Coefficients In this section, we compare different values of the parameter \(\lambda\in(0.01,0.09)\) for the two fusion algorithms. The weight \(\lambda\) represents the degree to which the attention-based decoder influences the output of the CTC decoder. A higher value of \(\lambda\) corresponds to a greater influence of the attention-based decoder. As shown in Table 1, it is evident that the optimal value for \(\lambda\) is 0.05. Since the results from the attention-based decoder are generated first during training, it provides prior knowledge to the CTC model, and the weight adjustment determines the extent of its influence on the CTC model. When \(\lambda\) ranges from 0.03 to 0.07, the model achieves better recognition performance. Although using only one-pass decoding yields good results, the best performance is still achieved by finding the N best CTC paths and rescoring them with the attention-based decoder. By comparing the two fusion algorithms, it is apparent that the DAL fusion of \(y_{AED}\) into \(y_{CTC}\) is more effective for the attention-based decoding approach. Furthermore, the PMP algorithm helps the CTC-based decoding approach achieve better recognition performance. We speculate that retaining only the highest probability in the probability distribution allows CTC to quickly identify high-scoring paths, effectively performing pruning operations. This is particularly beneficial for the one-pass decoding method. ### Comparison of different models While studying the parameter influences on the models, we also enhanced the models to achieve better accuracy. Both models used 12 layers of encoders and 6 layers of decoders, and we maintained a consistent testing environment. All the models we compared were attention-based. All our experiments were conducted on the open-source toolkit WeNet [1].
On the AISHELL-1 dataset, WeNet achieved a CER of 4.64% using the Conformer as the encoder and attention rescoring. Under the condition of consistent experimental parameters, our model achieved a CER of 4.49%. Our proposed model achieved highly competitive recognition results compared to other hybrid models. Even with only one-pass decoding, our model produced a CER that was nearly identical to the two-pass decoding, demonstrating the effectiveness of our proposed algorithm. \begin{table} \begin{tabular}{c l c c c c c} \hline \hline \multicolumn{2}{c}{Weight \(\lambda\)} & 0.01 & 0.03 & 0.05 & 0.07 & 0.09 \\ \hline \multirow{4}{*}{DAL} & attention decoder & 5.23 & 4.89 & **4.85** & 4.96 & 5.03 \\ & attention rescore & 4.76 & 4.52 & **4.49** & 4.55 & 4.56 \\ & ctc greedy search & 5.20 & **4.84** & **4.84** & 4.93 & 5.09 \\ & ctc prefix beam search & 5.20 & **4.83** & 4.84 & 4.93 & 5.09 \\ \hline \multirow{4}{*}{PMP} & attention decoder & 5.12 & **4.87** & 4.92 & 4.91 & 4.87 \\ & attention rescore & 4.78 & 4.54 & **4.50** & 4.56 & 4.58 \\ & ctc greedy search & 5.11 & 4.76 & **4.79** & 4.92 & 4.92 \\ & ctc prefix beam search & 5.11 & 4.76 & **4.79** & 4.91 & 4.93 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the two integration algorithms with weight \(\lambda\). (CER %) ### Comparison of Decoding Latency We built libtorch using C++ and exported the models corresponding to the two fusion algorithms, which were then quantized. The value of parameter \(\lambda\) for all models was set to 0.05 to ensure fairness and accuracy in the experiments. We presented the decoding and rescoring latency of different models and tested their RTF. The Runtime CER was measured on the test dataset.
Directly summing the posterior probability distributions \(y_{AED}\) from AED and \(y_{CTC}\) from CTC achieved a lower CER because it is equivalent to providing soft labels for \(y_{CTC}\), which contain richer information. By only saving the maximum value of each frame from \(y_{AED}\) and adding it to \(y_{CTC}\), the decoding latency was relatively reduced. From Table 3, we can also see that if rescoring is not used, the RTF of the model is improved by approximately 1/4, but the CER is increased by 0.3%. Quantizing the model reduced the RTF by approximately 0.01, while the CER increased by approximately 0.1%.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & LM & Dev & Test \\ \hline Autoregressive Transformer [31] & w/o & 4.9 & 5.4 \\ ESPNet [23] & w/o & 4.6 & 5.1 \\ Autoregressive Conformer [30] & w/o & 4.4 & 4.7 \\ ESPNet2 & w/o & 4.4 & 4.7 \\ SRU++ [32] & w & 4.4 & 4.7 \\ WeNet [1] & w/o & - & 4.6 \\ \hline integrated-CTC with add way & w/o & **4.2** & **4.5** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with other conventional hybrid models (CER%). All models in this table use SpecAugment to improve performance.

\begin{table} \begin{tabular}{l c c c c} \hline \hline exp. & decode latency & rescoring latency & RTF & CER \\ \hline Add orig. (float 32) & 386ms & 88ms & 0.0863 & 4.49\% \\ Add quant. (int 8) & 337ms & 77ms & 0.0753 & 4.56\% \\ Maximal orig. (float 32) & 371ms & 86ms & 0.0829 & 4.50\% \\ Maximal quant. (int 8) & 316ms & 71ms & 0.0706 & 4.63\% \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the decoding time and rescoring time of the two models.

## 5 Conclusion We found that during model training, the attention-based decoder can positively influence CTC. The fusion of the output from the attention-based decoder and the prediction from CTC helps the model achieve better recognition results. Both fusion methods we proposed effectively improve the model's recognition performance.
To ensure the fusion of CTC and the attention-based decoder, we proposed a simple but effective algorithm called adaptive affine. The PMP algorithm consumes less decoding time, while DAL can yield more accurate recognition results. Additionally, we compared the CER for four decoding methods, and the experimental results demonstrated that our proposed model achieves more competitive performance. In the future, we plan to expand the proposed methods to larger-scale datasets such as WenetSpeech and GigaSpeech.
2301.04778
Broad-Range Directional Detection of Light Dark Matter in Cryogenic Ice
We propose hexagonal ice (H$_2$O) as a new target for light dark matter (DM) direct detection. Ice, a polar material, is suitable for single phonon detection through DM scattering for which we consider light dark photon and light scalar mediator models. We report a rate sensitivity down to a DM mass of $\sim$keV, constituting a broader mass range than other promising candidates. We find better sensitivity for near-term experimental thresholds from the presence of high-frequency phonons. These advantages, and ice's availability, make it highly promising for single-phonon detection.
Nora Taufertshöfer, Maurice Garcia-Sciveres, Sinéad M. Griffin
2023-01-12T01:15:48Z
http://arxiv.org/abs/2301.04778v2
# Broad-Range Directional Detection of Light Dark Matter in Cryogenic Ice ###### Abstract We propose hexagonal ice (H\({}_{2}\)O) as a new target for light dark matter (DM) direct detection. Ice, a polar material, is suitable for single phonon detection through DM scattering for which we consider light dark photon and light scalar mediator models. We report a rate sensitivity down to a DM mass of \(\sim\) keV, constituting a broader mass range than other promising candidates. We find better sensitivity for near-term experimental thresholds from the presence of high-frequency phonons. These advantages, and ice's availability, make it highly promising for single-phonon detection.
\[R=\frac{1}{\rho_{\rm T}}\frac{\rho_{\chi}}{m_{\chi}}\int d^{3}vf_{\chi}({\bf v}) \Gamma({\bf v}) \tag{1}\] where \(\rho_{\chi}\) is the local DM energy density, \(m_{\chi}\) the DM mass, and \(f_{\chi}({\bf v})\) is the DM velocity distribution in the lab frame which is modelled with a boosted Maxwell-Boltzmann distribution 1. \(\Gamma({\bf v})\) is the scattering rate per non-relativistic DM particle which is defined by an integral over the momentum transfer \({\bf q}={\bf p}-{\bf p}^{\prime}\) with initial DM momentum \({\bf p}=m_{\chi}{\bf v}\): Footnote 1: Due to Earth’s rotation the orientation of the incoming DM particles with regard to the target crystal changes over a day, which leads to a directional modulation of the rate. We choose the z-axis in the crystal frame to be parallel to Earth’s velocity at daytime \(t=0\)[17]. \[\Gamma({\bf v})=\frac{\pi\bar{\sigma}}{\mu^{2}}\int\frac{d^{3}q}{(2\pi)^{3}}(q _{0}/q)^{4}S({\bf q},\omega_{\bf q}) \tag{2}\] Here, \(\bar{\sigma}\) is a model-dependent reference cross section and we define \(\bar{\sigma}:=\frac{\mu^{2}}{\pi}|\overline{\mathcal{M}(q_{0})}|^{2}\) with \(\mathcal{M}\) the target-independent \(2\to 2\) scattering matrix element (see Supplemental Material). \(\mu\) is the reduced mass of an electron or nucleon and DM particle for light-dark-photon- or light-scalar-mediated scattering, respectively. \(q_{0}\) is a reference momentum and \(S({\bf q},\omega_{\bf q})\) the target-dependent dynamic structure factor. For single phonon excitations we recall the dynamical structure factor: \[S({\bf q},\omega_{\bf q})=\] \[\frac{\pi}{\Omega}\sum_{\nu}\frac{1}{\omega_{\nu,{\bf k}}}\Big{|} \sum_{j}\frac{e^{-W_{j}({\bf q})}}{\sqrt{m_{j}}}\,e^{i{\bf G}\cdot{\bf x}_{j}^ {0}}\left({\bf Y}_{j}\cdot{\mathbf{\epsilon}}_{\nu,j,{\bf k}}^{*}\right)\Big{|}^{2 }\delta(\omega_{\bf q}-\omega_{\nu,{\bf k}}) \tag{3}\] where \(\omega_{\bf q}={\bf q}\cdot{\bf v}-\frac{q^{2}}{2m_{\chi}}\) explicitly depends on the energy deposition. 
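The energy deposition \(\omega_{\bf q}\) just defined caps the reachable phonon energies: maximizing \(\omega_{\bf q}=qv-q^{2}/(2m_{\chi})\) over \(q\) gives \(\omega_{\rm max}=m_{\chi}v^{2}/2\) at \(q=m_{\chi}v\). A quick numeric check (our illustration; the halo speed \(v\approx 10^{-3}c\) is an assumed round number) shows why meV-scale phonons are needed to reach keV-scale DM:

```python
# Natural units (c = 1): maximizing omega_q = q*v - q**2/(2*m_chi) over q
# gives q = m_chi*v and hence omega_max = m_chi * v**2 / 2.
def omega_max_eV(m_chi_eV, v=1e-3):
    return 0.5 * m_chi_eV * v**2

for m_chi in (1e3, 1e4, 1e5):  # 1 keV, 10 keV, 100 keV
    print(f"m_chi = {m_chi:.0e} eV -> omega_max = {1e3 * omega_max_eV(m_chi):.1f} meV")
```

For a 1 keV DM particle, the largest depositable energy is about 0.5 meV, which is why the lowest optical phonon frequency of the target sets the low-mass frontier.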
The sums run over all phonon branches \(\nu\) and over all ions \(j\) in the primitive cell. The ionic masses are \(m_{j}\) with equilibrium positions \({\bf x}_{j}^{0}\) and \(\Omega\) is the primitive cell volume. \(\epsilon_{\nu,j,{\bf k}}\) defines the phonon polarization vectors. \({\bf G}\) is the reciprocal lattice vector that satisfies \({\bf G}={\bf q}-{\bf k}\) with \({\bf k}\) within the first Brillouin zone. DM couplings come in with the model-specific \({\bf Y}_{j}\) terms defined below. In the continuum limit for \({\bf k}\), the Debye-Waller factor is given by: \[W_{j}({\bf q})=\frac{\Omega}{4m_{j}}\sum_{\nu}\int_{1\rm BZ}\frac{d^{3}k}{(2\pi)^{3}}\frac{|{\bf q}\cdot{\mathbf{\epsilon}}_{\nu,j,{\bf k}}|^{2}}{\omega_{\nu,{\bf k}}}. \tag{4}\] The materials-dependent quantities, namely the phonon frequencies \(\omega_{\nu,{\bf k}}\) for branch \(\nu\) and momentum \({\bf k}\), and the phonon polarization vectors \(\epsilon_{\nu,j,{\bf k}}\), can be calculated using first-principles methods such as Density Functional Theory[18]. Finally, we consider two well-motivated models of light DM, namely a kinetically mixed dark photon and a light scalar mediator[6]. In the former case, the standard model photon kinetically mixes with the dark photon owing to their same quantum numbers, resulting in 'millicharged' DM under \(U(1)\) of the standard model. In the \(q\to 0\) limit valid for light DM scattering[16], \({\bf Y}_{j}\) is given by \[{\bf Y}_{j,\rm dark\ photon}=-\frac{q^{2}}{{\bf q}\cdot\varepsilon_{\infty}\cdot{\bf q}}({\bf q}\cdot{\bf Z}_{j}^{*}) \tag{5}\] where \(\varepsilon_{\infty}\) is the high-frequency dielectric tensor and \({\bf Z}_{j}^{*}\) is the Born effective charge tensor of the \(j\)th atom in the unit cell, both of which can be calculated using first-principles methods. We also consider DM that only couples to nucleons via a light scalar mediator with identical coupling for proton and neutron.
The now scalar-valued \({\bf Y}_{j}\) is given by \[{\bf Y}_{j,\rm hadrophilic\ scalar}=A_{j}F_{N_{j}}(q) \tag{6}\] with \(A_{j}\) the atomic mass number and \(F_{N_{j}}\) an isotropic nuclear form factor of the \(j\)th atom, respectively. The derivation of (6) is valid for the \(q\to 0\) limit, which is fulfilled for the single phonon excitation regime. Further, for this type of scattering \(F_{N_{j}}\) can be set to one. The pervasiveness of liquid and solid H\({}_{2}\)O has generated extensive studies of the phase diagram of ice. So far, twenty polymorphs of crystalline H\({}_{2}\)O have been identified under various conditions of temperature and pressure, with the most recent, ice XIX reported in 2021[19]. Common to all of these is the local atomic-scale arrangement of hydrogen atoms in bonded units that fulfil the 'Bernal-Fowler ice rules': each oxygen forms a covalent bond with two hydrogens and a weaker van-der-Waals bond with two other hydrogens[20]. Since each oxygen is also shared between two hydrogen atoms, this results in an average oxygen bonding environment comprising 'two-in' (strong) and 'two-out' (weak) bonds with hydrogen. Such an atomic arrangement is geometrically frustrated on a tetrahedral lattice[21], resulting in disordered relative arrangements of the strong and weak bonding networks known as proton disorder. Common ice (I\({}_{h}\)) forms such a hydrogen-disordered network of H-O-H bonds at ambient pressure, giving rise to a residual entropy as first predicted by Pauling[22]. However, spontaneous ordering of the hydrogen networks can occur on cooling, or by the introduction of a dopant to overcome the kinetic barrier to ordering. For example KOH doping of I\({}_{h}\) results in the formation of a fully ordered phase of hexagonal ice, XI\({}_{h}\) below \(\sim 72\) K[23; 24]. In fact, the hydrogen ordering in the XI\({}_{h}\) structure gives it a net dipole moment, making it potentially ferroelectric[25]. 
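Pauling's residual-entropy prediction mentioned above can be reproduced in a few lines; this is the standard textbook counting argument, not a result of this paper's calculations:

```python
import math

R = 8.314462618  # molar gas constant, J / (mol K)

# Each H2O has 6 ways to place its two covalently bonded protons on its
# four bonds; the ice rules are satisfied by only 6/16 of the neighbour
# configurations, leaving W = 6 * (6/16) = 3/2 configurations per
# molecule and a residual entropy S0 = R * ln(3/2).
S0 = R * math.log(3 / 2)
print(f"Pauling residual entropy: {S0:.3f} J/(mol K)")  # ~3.371
```

The result, about 3.37 J mol\({}^{-1}\) K\({}^{-1}\), is close to the calorimetrically measured residual entropy of common ice, which is what motivated Pauling's proton-disorder picture.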
Since XI\({}_{h}\) is the stable form of ice at ambient pressures, and can be synthesized in its ordered form, we focus on this polymorph for the remaining discussion, noting, however, that other structures of ice should have comparable DM reach for single phonon-based detection owing to their similar bonding networks. We also considered heavy ice D\({}_{2}\)O XI\({}_{h}\) but we did not find a significant difference to H\({}_{2}\)O (results reported in the Supplemental Material). We now present our DFT-calculated structural and phonon properties of ice XI\({}_{h}\). The full calculation details can be found in the Supplemental Material. With its combination of covalent bonding, hydrogen bonding, and van-der-Waals bonding, the challenge of treating ice H\({}_{2}\)O with DFT functionals has been explored extensively[26]. We benchmark our choice of exchange correlation functional for accurate structural and vibrational properties. Consistent with previous work[27], we find that the van-der-Waals corrected nonlocal functional (optPBE-vdw) of Klimes et al. performs best[28, 29], resulting in the lattice constants \(a=4.470\,\mathrm{\AA}\) and \(c=7.212\,\mathrm{\AA}\). This corresponds to 0.6% and 1.5% deviation from experimental measurements at T = 2 K[30]. In Fig. 1(b) we plot the calculated phonon dispersion for ice XI\({}_{h}\). All polymorphs of ice exhibit a large range of phonon frequencies as a result of the varied nature of bonding in H\({}_{2}\)O crystals[31, 32, 33]. The low-energy optical phonon range is made up of 'translational' modes where individual H\({}_{2}\)O molecules behave as atom-like clusters, and vibrate with respect to one another. Such low-frequency phonon modes are common in molecular crystals such as H\({}_{2}\)O where molecular units are weakly bonded to each other to form the crystal network[34].
The next highest energy set of phonon modes corresponds to libration and bending of H-O bonds, with the highest-frequency range caused by the stretching of the O-H bond. As the lightest element in the periodic table, hydrogen sets an upper limit on the range of these high-frequency stretch modes that can be found in a material. We present our calculated reach for light-dark-photon-mediated scattering of single phonons in ice XI\({}_{h}\) in Fig. 2, including comparisons to top-performing targets previously studied[14]. All reach calculations were done using the code PhonoDark[36, 37]. The reach of ice XI\({}_{h}\) extends further into both the low-mass DM range (well into the constrained regions) and has better reach in the high-mass DM range than any other previously suggested single target material. To understand the exceptionally broadband sensitivity of ice XI\({}_{h}\), we evaluate its performance with respect to our previously suggested quality factors for dark photon mediators in the high- and low-frequency phonon ranges[14]. The lowest DM mass accessible is determined by the energy of the lowest-frequency optical mode where \(m_{\chi}\sim\frac{1}{3}\omega_{0}^{min}\times 10^{6}\), suggesting that materials with low-lying optical phonons such as CsI (\(\omega_{0}^{min}\sim 7\) meV) have the best low-mass reach. In our case, the van-der-Waals bonded molecular units also have very low lying optical modes (\(\omega_{0}^{min}\sim 6\) meV), making ice XI\({}_{h}\) optimal for the low-mass DM range. We expect all similar molecular crystals with such low-frequency translational phonon modes to have competitive reach for low-mass DM. For the high-mass range, a quality factor, \(Q\), is defined as[14]: \[Q\equiv\frac{Z^{*2}}{A_{1}A_{2}\varepsilon_{\infty}^{2}}\left(\frac{\text{meV}}{\omega_{LO}}\right) \tag{7}\] where \(Z^{*}\) are the Born effective charges, \(A_{1}\), \(A_{2}\) are the atomic masses, \(\varepsilon_{\infty}\) is the high-frequency dielectric constant, and \(\omega_{LO}\) is the longitudinal optical phonon frequency. A high \(Q\) corresponds to better reach in the high-mass DM regime. We calculate each of these values for ice XI\({}_{h}\) using DFT, with the full set of results given in the Supplemental Material. The best performing materials previously proposed include LiF with \(Q=270\times 10^{-7}\) and SiO\({}_{2}\) with \(Q=200\times 10^{-7}\). For ice XI\({}_{h}\), we find \(Q=533\times 10^{-7}\) for its highest-frequency stretch modes (\(>\)375 meV), steadily increasing for lower-frequency \(\omega_{LO}\) clusters. The substantially enhanced \(Q\) of ice XI\({}_{h}\) in the high-mass range is due to the low masses of hydrogen and oxygen. This optimized reach is dominated by hydrogen's extremely low mass, despite these small masses also resulting in higher frequency phonon modes. However, this quality factor does not take the detection threshold into account, which is especially important for near-term experiments.

Figure 1: (a) Primitive unit cell of hexagonal ice XI\({}_{h}\) and its first Brillouin zone with high-symmetry points labelled. (b) Calculated phonon band structure for hexagonal ice XI\({}_{h}\). The cartoons on the right hand side show the movement of ions for three selected phonon modes representative of the types of optical phonon modes in ice: translational modes (bottom), libration/bending (middle), and stretch (top).

Figure 2: Projected reach for light-dark-photon-mediated DM scattering via single phonon excitations with 1 kg-year exposure and no background. Solid, long-dashed and dashed lines refer to a 1 meV, 20 meV and 100 meV detector threshold. For comparison, additional reach data [14] is shown. Stellar constraints from red giants (RG) and the horizontal branch (HB) are taken from [35], the freeze-in as referenced in [14].
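Eq. (7) is straightforward to tabulate for candidate targets. The function below implements it directly; the example inputs are toy numbers chosen for illustration, not the paper's DFT-derived values:

```python
def quality_factor(z_star, a1, a2, eps_inf, omega_lo_meV):
    # Eq. (7): Q = Z*^2 / (A1 * A2 * eps_inf^2) * (meV / omega_LO)
    return z_star**2 / (a1 * a2 * eps_inf**2) / omega_lo_meV

# toy inputs: a hydrogen-oxygen-like ion pair vs. a heavier ion pair
q_light = quality_factor(z_star=1.0, a1=1, a2=16, eps_inf=1.7, omega_lo_meV=400)
q_heavy = quality_factor(z_star=1.0, a1=7, a2=19, eps_inf=1.9, omega_lo_meV=80)
print(f"light pair: {q_light:.1e}, heavy pair: {q_heavy:.1e}")
```

The scaling makes the text's conclusion explicit: small atomic masses boost \(Q\) linearly in each mass, which outweighs the penalty from the correspondingly higher \(\omega_{LO}\).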
We conclude that for optimal high-mass DM single-phonon-based detection with dark photon mediators, chemistries that include light elements such as H and He, and especially H-H bonds, will give the best reach in the high-mass range owing to their high-frequency single phonon modes. In the low-mass range, we find that both compounds containing heavy elements and molecular crystals that possess low-frequency optical phonon modes will give the best sensitivity. With ice combining both of these properties (molecular units and H-H bonds), it displays better sensitivity for dark photon mediators than previously suggested targets across a broad range of DM masses with CsI being the only exception for a very small low-mass range (Fig. 2). The reach for light-hadrophilic-scalar-mediated scattering of single phonons in ice XI\({}_{h}\) is plotted in Fig. 3 for several detector thresholds, and compared to previous top candidates from Ref. [14]. For the light scalar mediator, we also find that ice XI\({}_{h}\) has broad-band sensitivity for both low- and high-mass DM. Looking at the reach curves, we find a competition between which of diamond or ice is favorable for different mass ranges and thresholds. For all thresholds considered, we find that ice has sensitivity to lower masses than diamond, which become comparable by a threshold of 100 meV. In the low mass range, the lowest DM mass sensitivity is generally governed by \(\omega_{min}/c_{s}\) where \(\omega_{min}\) is the detector threshold and \(c_{s}\) is the material's speed of sound. Therefore, materials with higher \(c_{s}\) such as diamond will have the best sensitivity to lower-mass DM given low detector thresholds. We calculated the directional averaged speed of sound of ice XI\({}_{h}\) to be 4376 ms\({}^{-1}\) (details given in the Supplemental Material), which is a factor of three smaller than in diamond. 
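The \(\omega_{min}/c_{s}\) scaling above can be turned into a rough kinematic bound: an acoustic phonon with \(\omega=c_{s}q\) is excitable only if \(qv-q^{2}/(2m_{\chi})\geq c_{s}q\), i.e. \(q\leq 2m_{\chi}(v-c_{s})\), while the threshold demands \(q\geq\omega_{min}/c_{s}\), giving \(m_{min}\approx\omega_{min}/(2c_{s}(v-c_{s}))\). This back-of-envelope estimate is our own sketch (the paper's quoted minimum mass comes from the full rate calculation), but it reproduces the right order of magnitude:

```python
C = 299_792_458.0  # speed of light, m/s

def m_min_eV(omega_min_eV, c_s_mps, v_mps):
    # rough kinematic floor for acoustic-phonon excitation; velocities
    # are converted to fractions of c so the result is in eV
    cs, v = c_s_mps / C, v_mps / C
    return omega_min_eV / (2.0 * cs * (v - cs))

# ice XI_h speed of sound from the text (4376 m/s), a 1 meV threshold,
# and a spread of halo-like DM speeds
for v in (230e3, 550e3, 760e3):
    print(f"v = {v/1e3:.0f} km/s -> m_min ~ {m_min_eV(1e-3, 4376.0, v)/1e3:.0f} keV")
```

Across plausible halo speeds the estimate lands in the tens-of-keV range, consistent with the scale quoted in the text.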
However, for materials with smaller \(c_{s}\) the acoustic phonons can be kinematically inaccessible, especially as the threshold increases to higher energies. When this happens, the lowest DM sensitivity is then set by the lowest optical phonon available. For ice XI\({}_{h}\) we find that the minimum reachable DM mass from kinematic considerations is \(\sim 29\) keV for a 1 meV threshold, while the lowest optical phonon is 6 meV. This manifests as a change in the slope as seen in Fig. 3 where optical phonons, rather than acoustic phonons, dominate the low mass response. In the high-mass, high-threshold range (detector threshold of 400 meV), we only include ice as it is the only material with single phonons above \(\sim\)160 meV. We note that the definition of \(\bar{\sigma}_{n}\propto m_{\chi}^{-4}\) leads to the generally better reach at high masses[38]. We finally calculate the daily modulation rate assuming that the z-axis of the target crystal is aligned with the DM wind at \(t=0\). Each rate is normalised to the average rate \(\bar{R}\) over one day. For a dark photon mediator we find a strong modulation for a mass range of \(m_{\chi}=10^{4}-10^{5}\) eV (Fig. 4 (a)). We also find a significant daily modulation for the scalar mediator case (Fig. 4 (b)), especially in the low 10 keV range.

Figure 3: Projected reach for light-scalar-mediated DM scattering via single phonon excitations with 1 kg-year exposure and no background. Solid, long-dashed, dashed and dotted lines refer to a 1 meV, 20 meV, 100 meV and 400 meV detector threshold. Additional reach data is taken from [14], light-colored lines show the expected curve progression for scale ranges not shown in the reference data.

Finally, we extract the phonon lifetimes \(\tau\) from measured phonon linewidths \(\Gamma_{\text{ph}}\) of ice XI[31, 39] from \(\tau=(\pi\Gamma_{\text{ph}})^{-1}\) giving a lifetime range of 0.5-2.1 ps (see Supplemental Material), consistent with other systems with high-frequency optical phonons[40]. Our results suggest that ice H\({}_{2}\)O is a competitive candidate target for single-phonon based light DM detection both in the low-mass and high-mass range (here we considered ice XI\({}_{h}\), but our conclusions are generally applicable to other ice polymorphs). Molecular units provide ultra-low-mass optical phonons (down to 6 meV) giving excellent reach in the low-mass DM range for dark photon mediators. We expect similar reach to low masses for polar semiconductors containing heavy ions (as e.g. CsI) or molecular crystals where the relative effective masses of the ions/molecules result in low-frequency optical phonons. For near-term experiments, we find that the broad range of single phonons in ice up to 400 meV gives it enhanced reach for higher experimental thresholds compared to alternative proposals such as multiphonon excitations. However, the lower density of ice due to its van-der-Waals bonding results in a reduced reach compared to other solid-state targets; this can be addressed by the growth of large single crystals of ice which is already well explored[41]. Finally, while ice H\({}_{2}\)O is cheap, earth-abundant, and has been extensively characterized, it remains an engineering challenge as to the best way to incorporate a material that is a liquid at room temperature into a (cryogenic) detector architecture. We thank Dan Carney, Tanner Trickle, Kathryn Zurek, Katherine Inzani and Thomas Harrelson for helpful discussions. This work was supported by the US Department of Energy under contract DEAC02-05CH11231 and Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics grant KA2401032.
Computational resources were provided by the National Energy Research Scientific Computing Center and the Molecular Foundry, DOE Office of Science User Facilities supported by the Office of Science, U.S. Department of Energy under Contract No. DEAC02-05CH11231. The work performed at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under the same contract.
2302.02659
PAseos Simulates the Environment for Operating multiple Spacecraft
The next generation of spacecraft is anticipated to enable various new applications involving onboard processing, machine learning and decentralised operational scenarios. Even though many of these have been previously proposed and evaluated, the operational constraints of real mission scenarios are often either not considered or only rudimentary. Here, we present an open-source Python module called PASEOS that is capable of modelling operational scenarios involving one or multiple spacecraft. It considers several physical phenomena including thermal, power, bandwidth and communications constraints as well as the impact of radiation on spacecraft. PASEOS can be run both as a high-performance-oriented numerical simulation and/or in a real-time mode directly on edge hardware. We demonstrate these capabilities in three scenarios, one in real-time simulation on a Unibap iX-10 100 satellite processor, another in a simulation modelling an entire constellation performing tasks over several hours and one training a machine learning model in a decentralised setting. While we demonstrate tasks in Earth orbit, PASEOS is conceptually designed to allow deep space scenarios too. Our results show that PASEOS can model the described scenarios efficiently and thus provide insight into operational considerations. We show this in terms of runtime and overhead as well as by investigating the modelled temperature, battery status and communication windows of a constellation. By running PASEOS on an actual satellite processor, we showcase how PASEOS can be directly included in hardware demonstrators for future missions. Overall, we provide the first solution to holistically model the physical constraints spacecraft encounter in Earth orbit and beyond. The PASEOS module is available open-source online together with an extensive documentation to enable researchers to quickly incorporate it in their studies.
Pablo Gómez, Johan Östman, Vinutha Magal Shreenath, Gabriele Meoni
2023-02-06T09:57:09Z
http://arxiv.org/abs/2302.02659v1
# PAseos Simulates the Environment for Operating multiple Spacecraft ###### Abstract The next generation of spacecraft is anticipated to enable various new applications involving onboard processing, machine learning and decentralised operational scenarios. Even though many of these have been previously proposed and evaluated, the operational constraints of real mission scenarios are often either not considered or only rudimentary. Here, we present an open-source Python module called PASEOS that is capable of modelling operational scenarios involving one or multiple spacecraft. It considers several physical phenomena including thermal, power, bandwidth and communications constraints as well as the impact of radiation on spacecraft. PASEOS can be run both as a high-performance-oriented numerical simulation and/or in a real-time mode directly on edge hardware. We demonstrate these capabilities in three scenarios, one in real-time simulation on a Unibap iX-10 100 satellite processor, another in a simulation modelling an entire constellation performing tasks over several hours and one training a machine learning model in a decentralised setting. While we demonstrate tasks in Earth orbit, PASEOS is conceptually designed to allow deep space scenarios as well. Our results show that PASEOS can model the described scenarios efficiently and thus provide insight into operational considerations. We show this in terms of runtime and overhead as well as by investigating the modelled temperature, battery status and communication windows of a constellation. By running PASEOS on an actual satellite processor, we showcase how PASEOS can be directly included in hardware demonstrators for future missions. Overall, we provide the first solution to holistically model the physical constraints spacecraft encounter in Earth orbit and beyond. 
The PASEOS module is available open-source online together with an extensive documentation to enable researchers to quickly incorporate it in their studies.

vations such as inter-satellite links (ISL) [7]. To this end, a growing corpus of research has been exploring novel applications suitable for this new paradigm, ranging from machine learning applications in remote sensing [8], [9] to federated learning in low-Earth orbit (LEO) constellations [10], [11]. However, there are fundamental constraints in the operations of LEO satellites related to, e.g., short communication windows with ground stations (resulting in large latencies and poor availability) [12], temperature, radiation, and power budgets [13]. In addition to a hostile environment, other challenges include increased hardware complexity and the absence of a communication network [14]. Similar constraints exist beyond LEO and, thus, for robust and realistic scenario planning, it is necessary to properly consider them from the outset, both in algorithmic design and in the demonstration of space applications. Specialized simulation tools exist for many aspects of onboard operation, such as communications [15], [16] or orbital dynamics [17], but there is a lack of holistic simulation of the environment. Here, we present an open-source software tool called PASEOS that is dedicated to realistic modelling of various constraints induced by the harsh environment of space. PASEOS is based on a modular design and is compatible with existing code in both discrete-event and time-based simulations. It supports a variety of scenarios ranging from modelling individual satellites to large-scale constellations. It can be run like a classical numerical simulation minimizing time-to-solution or in a real-time mode based on a clock, e.g., the system clock of the hardware it runs on.
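The two run modes might be sketched as follows; this is our own illustration of the idea, not PASEOS's actual interface. A simulation clock either free-runs as fast as the host allows, or paces itself against wall-clock time with a rate multiplier:

```python
import time

class SimClock:
    """Toy simulation clock: free-running or wall-clock-paced."""

    def __init__(self, real_time=False, rate=1.0):
        self.t = 0.0             # simulated time, seconds
        self.real_time = real_time
        self.rate = rate         # simulated seconds per wall-clock second
        self._wall_start = time.monotonic()

    def advance(self, dt):
        if self.real_time:
            # wait until the wall clock catches up with simulated time
            target = self._wall_start + (self.t + dt) / self.rate
            time.sleep(max(0.0, target - time.monotonic()))
        self.t += dt

clock = SimClock()               # high-performance mode: no waiting
for _ in range(1000):
    clock.advance(60.0)          # one simulated minute per step
print(clock.t)                   # 60000.0 simulated seconds
```

In the paced mode, the same loop would take roughly `60000 / rate` wall-clock seconds, which is what a hardware-in-the-loop demonstration on an onboard processor needs.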
PASEOS is implemented in Python and is completely agnostic to any machine learning frameworks, such as PyTorch [18] or TensorFlow [19], or even the specific application. In this work, we describe the system design and mathematical modelling inside of PASEOS. Further, we demonstrate its capabilities on three applications. First, by using actual satellite hardware (Unibap iX-10 100 satellite processor), we demonstrate real-time modelling of onboard detection of volcanic eruptions using a single satellite. Second, we model the operational behavior and constraints of a large LEO constellation limited by thermal, power and communication constraints. In the final example, we demonstrate a fully decentralized machine learning application on two satellites modelled with PASEOS using heterogeneous data. PASEOS source code is open-source available online.1 We hope to inspire and enable a broad variety of follow-up research by making PASEOS modular, user-friendly and efficient. Thus, the contributions of this work are: Footnote 1: [https://github.com/aidotse/PASEOS](https://github.com/aidotse/PASEOS) * Creation of the open-source Python framework PASEOS to model the impact of constraints imposed by the space environment * Demonstration that the framework can be used in a wide range of scenarios including operations on actual satellite hardware for real-time implementations, for modelling constellations, and to study distributed machine learning approaches * Detailed analysis of the runtime overhead of PASEOS models in a scenario studying the detection of volcano eruptions on Unibap iX-10 100 processors * Demonstration of the direct impact operational constraints can have on machine learning methods due to power budgets and communication windows ## II Methods This section describes the physical models implemented in PASEOS as well as some of the design considerations. In the following, any type of device, vehicle or similar, modelled in PASEOS, will be referred to as an actor. 
### _Modelling of spacecraft constraints_

Operating spacecraft poses vastly different challenges than operating any type of vehicle or asset on Earth [12], [20], [21]. Here, we briefly outline the constraints considered by PASEOS and how they are modelled. Overall, PASEOS aims to strike a compromise between computational complexity and physical fidelity that enables it to run in parallel to an operation, e.g., the training of a neural network, and yet still account for various constraints on different hardware devices, including embedded systems. Both physical constraints, such as thermal management or the availability of communication links, and operational constraints, such as per-user allocated time slots in missions, can be modelled with PASEOS. An overview of the modelled constraints can be found in Fig. 1. In this section, most constraints are explained in relation to orbits around Earth. They translate, however, to other scenarios, such as orbits around other celestial bodies or deep space missions.

#### II-A1 Astrodynamics Modelling

Many of the following constraints depend on the exact ephemerides, i.e., positions and velocities, of the spacecraft and relevant celestial bodies, such as the Sun or the body the spacecraft may be orbiting. In PASEOS, we assume each actor to have a central body that it is gravitationally bound to, e.g., the Earth or the Sun. In its current version 0.1.2, PASEOS uses pykep [22] to model Keplerian orbits around the central body. The JPL low-precision ephemerides2 are used to model the position of the Earth and Sun. Even though we focus on simple Keplerian orbits in this work, it is conceivable to employ more complex dynamics models ranging from propagators such as SGP4 [23] to high-fidelity modelling through orekit [17]. All positions and velocities in PASEOS are modelled in the inertial frame of the central body.
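For the simple Keplerian case, the propagation a library like pykep performs reduces to solving Kepler's equation. A minimal sketch (our own, with an assumed LEO-like orbit, not PASEOS code):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def propagate(a, e, t, tol=1e-12):
    """In-plane position (x, y) at time t after periapsis passage,
    by Newton iteration on Kepler's equation M = E - e*sin(E)."""
    n = math.sqrt(MU_EARTH / a**3)      # mean motion
    M = n * t
    E = M                                # initial guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e**2) * math.sin(E)
    return x, y

a, e = 7_000_000.0, 0.01                 # ~7000 km semi-major axis
period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
x0, y0 = propagate(a, e, 0.0)
x1, y1 = propagate(a, e, period)         # one full revolution
print(period, (x0 - x1, y0 - y1))        # returns to the starting point
```

Such an orbit has a period of roughly 97 minutes, which is what drives the short, frequent eclipse and ground-contact cycles modelled in the following subsections.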
#### II-A2 Communication Windows

One of the central constraints in spacecraft operations, especially in LEO, is communication windows during which a spacecraft is able to communicate with ground stations on Earth or with other spacecraft. In practice, these windows can depend on a large variety of factors such as the type of communication channel (e.g. optical, radio), environmental factors, such as atmospheric conditions, and others [24]. In PASEOS, we consider communications to be limited by visibility. For simplicity and computational efficiency, version 0.1.2 considers only this constraint for communication between spacecraft. Thus, it is computed whether the sphere bounding the central body obstructs the view between actors. For the availability of a ground link between a spacecraft and a ground station, we utilize the Skyfield Python module [25] to compute the angle of a spacecraft above the horizon. Further, Skyfield is also used to account for ground stations' movement due to the Earth's rotation. Ground stations on other celestial bodies are not yet supported. Thus, based on these assumptions, PASEOS can be used to compute the availability and duration of communication windows between actors.

#### II-A3 Data Transmission Rate and the Communication Channel

Aside from the availability of a communication link, a second factor for data transmission is the effect of the communication channel, which affects the effective data transmission rate available to the user and impacts the quality of the transmitted data. Especially optical links from satellites in orbit around Earth to the ground are highly dependent on atmospheric conditions [24], [26]. Similarly, space weather events like coronal mass ejections or solar flares can impact transmissions to Earth [27].
These effects can introduce errors in the communication, which could lead to re-transmissions of data packets or require additional data redundancy, both of which impact the effective downlink data rate available to the user. Additionally, other factors such as the transmission modulation used, the channel encoding strategy, and the need for pilot synchronization play a significant role in the effective user transmission data rate [28]. However, high-fidelity modelling of these factors can be tremendously complex, requiring dedicated simulation software such as ns-3 or OMNeT++ [15], [16]. In the long term, PASEOS may provide interfaces for these packages. For the moment, however, users are required to manually specify an average data transmission rate to be used for computing required transmission times; this rate is considered constant and, within PASEOS, independent of the distance between the actors and of the communication link.

Figure 1: Overview of the constraints modelled in PASEOS

#### II-A4 Power Budgets

Another constraint relevant for all kinds of space missions is the available power budget and the power systems [12], [29]. Near the Sun and the inner planets, spacecraft typically rely on solar power and batteries. Further out, spacecraft rely on radioisotope thermoelectric generators [30]. Land-based assets such as rovers typically also rely on either of these methods. We assume spacecraft to have a battery with a fixed capacity. For simplicity, we model solar power generation in PASEOS as an increase in the battery's state of charge (SoC) at a constant rate while a spacecraft is not in the eclipse of the central body relative to the Sun. For ground-based actors, the power budget is currently not considered as we are focusing mostly on Earth-based ground segments. However, modelling of ground-based assets' power budgets can follow analogously to spacecraft if, e.g., rover operations are modelled.
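The constant-rate charging and discharging behaviour described above can be sketched as follows (hypothetical helper, not part of PASEOS' API):

```python
def update_soc(soc, dt, capacity_j, in_eclipse,
               charge_rate_w=10.0, consumption_w=0.0):
    """Advance the battery state of charge over a timestep dt [s].
    Charging at a constant rate happens only outside eclipse; activity
    consumption is drawn at a constant rate. SoC is clamped to [0, 1]."""
    delta_j = -consumption_w * dt
    if not in_eclipse:
        delta_j += charge_rate_w * dt
    return min(1.0, max(0.0, soc + delta_j / capacity_j))
```

For example, a 100 kJ battery at SoC 0.5 charging at 10 W for 1000 s outside eclipse gains 10 kJ, i.e., 0.1 SoC.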
The SoC is reduced through the spacecraft activities, which, in the context of PASEOS, are user-defined operations that consume charge at a constant rate while running.

#### II-A5 Thermal Management

Another critical constraint on the activities a spacecraft can perform is posed by managing the spacecraft's temperature, both in terms of low and high temperatures. Spacecraft hardware will typically have a limited operational and survival range in terms of temperatures [13]. Hence, it is important that such constraints are respected during any operations the spacecraft performs. Due to the surrounding near-vacuum, heat dissipation is typically a major concern. However, low temperatures can also pose problems, especially when the spacecraft is in the central body's eclipse or in deep space. Therefore, it is necessary to rely on a thermal model to anticipate and prevent violation of these constraints. In PASEOS, we rely on a formulation similar to the one described by Martinez [31]. We model the change in the spacecraft's temperature T as \[\mathrm{mc}\frac{\Delta\mathrm{T}}{\Delta\mathrm{t}}=\delta_{\mathrm{a}} \dot{\mathrm{Q}}_{\mathrm{solar}}+\delta_{\mathrm{a}}\dot{\mathrm{Q}}_{ \mathrm{albedo}}+\dot{\mathrm{Q}}_{\mathrm{IR}}+\dot{\mathrm{Q}}_{\mathrm{ activity}}-\dot{\mathrm{Q}}_{\mathrm{diss}} \tag{1}\] where m is the spacecraft mass, c its thermal capacity, \(\delta_{\mathrm{a}}=0\) if the spacecraft is in eclipse and 1 otherwise, and the \(\dot{\mathrm{Q}}\) are heat fluxes. In particular, the individual heat fluxes are given as \[\dot{\mathrm{Q}}_{\mathrm{solar}}=\alpha_{\mathrm{a}}\,\mathrm{E}_{\mathrm{s }}\,\mathrm{A}_{\mathrm{a,Sun}} \tag{2}\] where \(\dot{\mathrm{Q}}_{\mathrm{solar}}\) is the radiative flux produced by the Sun, \(\alpha_{\mathrm{a}}\in[0,1]\) is the spacecraft's solar absorptance, \(\mathrm{A}_{\mathrm{a,Sun}}\) is the area facing the Sun and \(\mathrm{E}_{\mathrm{s}}\) the solar irradiance.
\[\dot{\mathrm{Q}}_{\mathrm{albedo}}=0.5\,\alpha_{\mathrm{a}}\,\rho_{\mathrm{b}}\,\mathrm{E}_{\mathrm{s}}\,\mathrm{A}_{\mathrm{a,albedo}} \tag{3}\] where \(\dot{\mathrm{Q}}_{\mathrm{albedo}}\) is the solar heat flux back-scattered by the central body, \(\mathrm{A}_{\mathrm{a,albedo}}\) is the area facing the albedo flux and \(\rho_{\mathrm{b}}\) is the central body's reflectance. The factor 0.5 stems from a simplification: we currently do not compute the angles between the Sun, the central body and the spacecraft in order to reduce computational costs. \[\dot{\mathrm{Q}}_{\mathrm{IR}}=\frac{\mathrm{r}_{\mathrm{b}}^{2}\,\epsilon_{\mathrm{a}}\,\epsilon_{\mathrm{b}}\,\sigma\,\mathrm{T}_{\mathrm{b}}^{4}\,\mathrm{A}_{\mathrm{a,b}}}{\mathrm{r}_{\mathrm{a}}^{2}} \tag{4}\] where \(\dot{\mathrm{Q}}_{\mathrm{IR}}\) is the radiative flux due to the central body's black-body emission, \(\mathrm{A}_{\mathrm{a,b}}\) is the spacecraft area facing the central body, \(\mathrm{r}_{\mathrm{a}}\) is the spacecraft's distance to the central body's center, \(\epsilon_{\mathrm{a}}\in[0,1]\) the spacecraft's infrared emissivity (i.e., absorptance), \(\sigma\) the Stefan-Boltzmann constant, and \(\mathrm{r}_{\mathrm{b}}\), \(\epsilon_{\mathrm{b}}\) and \(\mathrm{T}_{\mathrm{b}}\) are the central body's radius, infrared emissivity and temperature. Furthermore, \[\dot{\mathrm{Q}}_{\mathrm{activity}}=\kappa\mathrm{P}_{\mathrm{A}} \tag{5}\] where \(\dot{\mathrm{Q}}_{\mathrm{activity}}\) is the heat flux from the spacecraft hardware, \(\mathrm{P}_{\mathrm{A}}\) is the power consumption rate of an activity A and \(\kappa\) is a user-defined parameter describing the conversion rate of consumed power into heat. And finally, \[\dot{\mathrm{Q}}_{\mathrm{diss}}=\epsilon_{\mathrm{a}}\,\mathrm{A}_{\mathrm{a}}\,\sigma\,\mathrm{T}^{4} \tag{6}\] where \(\dot{\mathrm{Q}}_{\mathrm{diss}}\) is the heat emitted by the spacecraft over its emissive area \(\mathrm{A}_{\mathrm{a}}\).
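Integrating Eqs. (1)-(6) with an explicit Euler step could look as follows. This is an illustrative re-implementation under the stated simplifications, not PASEOS' code; all parameter names are hypothetical, and the sunlight-dependent fluxes are switched off in eclipse:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def thermal_step(T, dt, *, mass, heat_capacity, in_eclipse,
                 alpha_a, eps_a, A_sun, A_albedo, A_body, A_emit,
                 E_s, rho_b, r_b, eps_b, T_b, r_a, P_activity, kappa):
    """One explicit-Euler step of the lumped thermal model of
    Eqs. (1)-(6); returns the spacecraft temperature after dt [s]."""
    sunlit = 0.0 if in_eclipse else 1.0
    q_solar = sunlit * alpha_a * E_s * A_sun                             # Eq. (2)
    q_albedo = sunlit * 0.5 * alpha_a * rho_b * E_s * A_albedo           # Eq. (3)
    q_ir = (r_b**2 * eps_a * eps_b * SIGMA * T_b**4 * A_body) / r_a**2   # Eq. (4)
    q_act = kappa * P_activity                                           # Eq. (5)
    q_diss = eps_a * A_emit * SIGMA * T**4                               # Eq. (6)
    dT = (q_solar + q_albedo + q_ir + q_act - q_diss) / (mass * heat_capacity)
    return T + dt * dT
```

With plausible LEO parameters, a 300 K spacecraft cools in eclipse (only the weak infrared flux counters its own emission) and heats up in sunlight.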
While PASEOS does not enforce consideration of the spacecraft temperature (except for disallowing temperatures below 0 K), it enables the user to query the current temperature and formulate abort conditions based on it.

#### II-A6 Radiation Effects

Another constraint and physical factor to consider are the effects of radiation, especially beyond LEO; in LEO, spacecraft are still fairly protected by the Earth's magnetic field [32], [33]. In practice, radiation events can lead to data corruption, software faults, or permanent hardware damage [6]. In PASEOS, we model three different types of effects on operations due to single event effects (SEEs) [12]:

* Data corruption due to single event upsets: PASEOS models bit flips that occur according to a Poisson distribution with rate \(\mathrm{r}_{\mathrm{d}}\)
* Unexpected software faults leading to a random interruption of activities, following a Poisson distribution with rate \(\mathrm{r}_{\mathrm{i}}\)
* Device failures following a Poisson distribution with rate \(\mathrm{r}_{\mathrm{f}}\), which can be imputed mostly to single event latch-ups

Given the dependence on spacecraft-specific hardware and orbit, the definitions of these rates are left to the user.

#### II-A7 Operational Constraints

Finally, in addition to the constraints imposed by physics and the space environment, missions typically have many objectives and there are various stakeholders for any spacecraft [34]. Therefore, there are often operational constraints to be considered that are imposed through the mission profile. PASEOS enables these by allowing user-defined constraints based on arbitrary parameters, which are evaluated during activities and lead to interruption of the activity if the constraints are not met.
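The Poisson-distributed single event effects introduced in the radiation model above can be sampled per timestep, e.g., by simple inversion sampling. This is an illustrative sketch of the statistics, not PASEOS' implementation:

```python
import math
import random

def sample_events(rate_per_s, dt, rng=random):
    """Sample the number of single-event effects (e.g., bit flips) in a
    window of dt seconds, assuming a Poisson process with the given rate
    (inversion sampling; fine for small rate_per_s * dt)."""
    lam = rate_per_s * dt
    u, k = rng.random(), 0
    p = math.exp(-lam)   # P(k = 0)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k     # P(k) = e^-lam * lam^k / k!
        cdf += p
    return k

def interrupted(rate_per_s, dt, rng=random):
    """True if at least one interruption event occurs within dt seconds:
    P(at least one) = 1 - exp(-rate * dt)."""
    return rng.random() < 1.0 - math.exp(-rate_per_s * dt)
```

A rate of zero never produces events, and the sample mean of the event count converges to rate times duration.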
This gives users a broad range of possibilities, ranging from imposing strict hardware limits, such as having to respect a minimum SoC or a certain temperature range, to requiring time limits or factors outside the PASEOS simulation.

### II-B Software Design

In the following, we briefly elaborate on the design philosophy of PASEOS with regard to user interaction and validation of the models. Even though we already provide concrete application results in this work, the aim of PASEOS is to enable a variety of future applications; consequently, a flexible and generic design is paramount.

#### II-B1 Actors

As an abstraction of the variety of different assets available on the ground and in space, we simulate them in PASEOS as so-called actors. Fundamentally, we distinguish between ground-based actors and spacecraft actors. In the current version, the main feature associated with the former is modelling the change in position due to the rotation of the central body. For spacecraft actors, a variety of models can be enabled describing all the physical and operational aspects described in Sec. II-A. Initially, one only needs to define the type of actor, a name for it, and the current local time of the actor. Depending on the desired models, additional parameters, e.g., position and velocity for orbits, need to be specified.

#### II-B2 Activities

Activities are the second central abstraction in PASEOS. They serve to describe any kind of operation the user wants to model. For example, we may want our spacecraft actor to capture data with one of its sensors and process that data. PASEOS allows the specification of an (asynchronous) function that is executed in the background while PASEOS models the physical constraints. Further, a constraint function, which is evaluated repeatedly at a fixed timestep, can be used to interrupt an activity. For each activity, one has to specify the power consumption to allow PASEOS to model excess heat and the change in the battery's SoC.
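The interplay of an asynchronous activity and a periodically evaluated constraint function can be sketched with asyncio as follows. This is a generic illustration of the pattern, not PASEOS' actual API; all names are hypothetical:

```python
import asyncio

async def run_activity(action, constraint, check_interval_s=0.01):
    """Run an asynchronous activity while re-evaluating a constraint
    function at a fixed interval; cancel the activity as soon as the
    constraint is violated."""
    task = asyncio.ensure_future(action())
    try:
        while not task.done():
            if not constraint():
                task.cancel()
                return "interrupted"
            await asyncio.sleep(check_interval_s)
        return "completed"
    finally:
        if not task.done():
            task.cancel()

# Example: an activity that is cut short once a (mock) SoC drops too low.
state = {"soc": 0.6}

async def drain():
    for _ in range(100):
        state["soc"] -= 0.05   # the activity consumes charge
        await asyncio.sleep(0.001)

result = asyncio.run(run_activity(drain, lambda: state["soc"] > 0.3))
```

Here the constraint is checked every 10 ms, so the mock activity is interrupted well before its 100 steps complete.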
Users can either let PASEOS run operations asynchronously to model updates, or run operations and then advance PASEOS' simulation time.

#### II-B3 Discrete-Event vs. Time-Based

One additional challenge for PASEOS is that network-oriented simulators, such as ns-3 [16], typically provide a discrete-event simulation allowing users to skip to the next event. Physical simulations modelling ordinary differential equations, such as the one in Equation 1, however, are usually solved in a time-based fashion with discrete time-stepping schemes [35]. PASEOS operates at the intersection of these two paradigms: it performs numerical simulations of physical processes while modelling discrete events such as communication windows becoming available. One can choose a fully real-time operation of PASEOS, where it runs the physical models asynchronously while performing a user-defined activity. Alternatively, one can manually ask PASEOS to advance its simulation state to a specific time, interrupted if events of interest to the user occur.

#### II-B4 Using PASEOS

With the relevant terminology introduced, we can describe the main workflow of PASEOS. Figure 2 gives an overview of the high-level use of PASEOS. It fundamentally requires two definition steps: first, the actors are modelled and, second, the operations are defined. Operations can then be performed repeatedly in either the discrete-event or the time-based fashion. At any point of the simulation, the user may benefit from detailed output logs on actor status and activities (written to a *.csv file) and/or from visualizations.

#### II-B5 Validation

Given the multitude of physical models in PASEOS as well as the complex interactions due to asynchronously running activities, a thorough validation of the software is both critical and challenging.
We rely on a comprehensive suite of automated tests to validate the continued correctness of the implemented models. In many cases, the tests are based on realistic scenarios, such as Sentinel-2 communication windows. Test-driven development has been employed for the design of many of the individual components, and no contributions are merged without code review, appropriate tests, and passing automated tests. A modular design approach enables testing of individual components, such as the thermal model or communication links. Further, PASEOS is available online as open-source software, enabling anyone to provide feedback and report bugs.

### II-C Modelling Asynchronous Operations Between Multiple Spacecraft

The PASEOS framework may be readily used to model operations involving multiple actors, where each physically modelled actor contains a PASEOS instance. Note that this setting requires a change of perspective from Fig. 2, where there is a single actor of interest, to a system-level view where each actor is to be modelled. PASEOS does not aim to facilitate coordination, scheduling of activities, or other tasks specific to the modelled application. Instead, it aims to provide the user with the capability to model different scenarios, such as intermittent knowledge of other actors and both centralized (e.g., federated learning) and decentralized (e.g., decentralized learning) operations. Each actor is assumed to operate independently, and any information about other actors should be acquired during the operation. Operations can either be performed independently, where the user takes the responsibility to advance the time of the PASEOS simulation manually, or they can operate in an asynchronous fashion with event-based activities. Flexibility is an important design goal of PASEOS, and the user is able to equip a PASEOS instance with arbitrary capabilities by registering activities, see Sec.
II-B2, consisting of the actions to be performed, the power consumption, and constraints that must be satisfied for the activity to be performed. The action and constraint of an activity are coroutines, i.e., asynchronous functions, and rely on the asyncio Python module [36]. When an activity is performed, its action and constraint function are submitted to an event loop for execution. Meanwhile, PASEOS monitors and updates the internals of the actor, e.g., temperature and SoC, and advances the time. Note that PASEOS allows only one activity to be performed at a time. Given the arbitrary code execution inside an activity, a user can, however, perform different tasks within an activity, e.g., based on the actor's state. To allow for the simulation of multiple spacecraft, the ability for actors to interact is imperative. In PASEOS, communications are achieved by encapsulating transmission and reception as activities. Such activities should be designed to comply with the communication device of the given actor, see Sec. II-A3, in order for the communication windows to be properly calculated. For simulation on a single machine, communications may be emulated by simply imposing a delay (and power consumption) in the corresponding activity. To use PASEOS directly on edge devices, e.g., spacecraft hardware, communication activities may be based on packages such as gRPC and ZeroMQ to allow interaction over a network. When working with edge devices, time synchronization between the actors is paramount for correct operation. As the actors in PASEOS operate independently, they must make themselves known to others by, e.g., sending out heartbeats at given intervals containing the id of the actor and, optionally, metadata, e.g., position and velocity. An actor may also be unavailable due to a low SoC, no line-of-sight connection, or a device failure. The absence of heartbeats then signals unavailability, and other actors may respond accordingly.
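The heartbeat bookkeeping a user might implement on top of this mechanism can be sketched as follows (hypothetical helper class, not part of PASEOS):

```python
class PeerTable:
    """Track known peers via heartbeats; a peer whose last heartbeat is
    older than `timeout_s` is considered unavailable."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = {}  # actor id -> time of last heartbeat

    def heartbeat(self, actor_id, t):
        """Record a heartbeat from actor_id received at time t [s]."""
        self.last_seen[actor_id] = t

    def available(self, t):
        """List of peers whose heartbeat is recent enough at time t."""
        return [a for a, seen in self.last_seen.items()
                if t - seen <= self.timeout_s]
```

An actor that stops sending heartbeats, e.g., due to low SoC or a device failure, simply drops out of the available list once its last heartbeat is stale.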
PASEOS does not automatically send heartbeats or check these factors, but provides an API to serialize actors for network transmission and for the user to tell each actor about its known and available peers. Thus, arbitrarily simple or complex connectivity constraints can be imposed. If one wants to perform specific operations in a synchronized manner, it is necessary to synchronize actors in time by, e.g., providing a start time for an activity. Alternatively, time synchronization can be achieved by means of communication [37]. For example, one may utilize the Network Time Protocol [38], which relies on a master clock, or decentralized approaches that rely on, e.g., the heartbeats [39].

## III Results

In this section, the usage of PASEOS is demonstrated for three distinct modelling scenarios: Earth observation, constellation design, and decentralized machine learning. The purpose is to illustrate the broad range of applications that are readily modelled with the aid of PASEOS.

Figure 2: Overview of the workflow using PASEOS

### III-A Single Actor: Volcano Detection Onboard Sentinel-2

This experiment showcases how one can use PASEOS in real-time simulations to emulate satellite onboard-processing scenarios on actual space hardware.

#### III-A1 Setup

We design an experiment in which we register an activity to process satellite payload data and utilize a constraint function to check the availability of a link to transmit data to the ground. During the simulation, we profiled our code on a Unibap iX-10 100 satellite processor [6] to evaluate the overhead of PASEOS, i.e., the time spent updating the physical models compared to the time required for checking constraints and running user activities. In particular, we consider onboard detection of volcanic eruptions applied to Sentinel-2 L1-C data, where the acquired data are processed directly onboard a satellite to spot possible volcanic eruptions and deliver early alerts to the ground [40], [41].
To set up the orbit of the spacecraft actor, we used the ephemerides of the Sentinel-2B satellite at 2022-10-27T12:30:00Z, which were calculated from two-line elements. Hence, the orbit of our actor is sun-synchronous. To set up the position of the ground station actor, we used the European Space Agency ground station located at Maspalomas, Gran Canaria (\(27.7629^{\circ}\) latitude, \(-15.6338^{\circ}\) longitude, at an elevation of 205.1 m). The ground station link is available if the satellite is \(5^{\circ}\) above the horizon. To check for the possibility to communicate, we used a constraint function that interrupts the user activity (i.e., the volcanic eruption detection) when the satellite is in line of sight (LOS) with the ground station. To have sufficient energy to process data onboard, the actor is equipped with a 0.162 MJ battery with a SoC of 1.0 at the beginning of the simulation. We assume a charging rate of 10 W. Furthermore, to increase the computational cost of updating the physical models in PASEOS and test the system in the worst case, we also equipped the spacecraft actor with a thermal and a radiation model to measure their run-time impact. In particular, to avoid effects of radiation while still tracking the computational cost, we set the rates of data corruption, restarts, and device failures to 0; this does not affect the computational cost. The simulation data consist of three Sentinel-2 L1C post-processed products showing volcanic eruptions of Etna (2021-08-30), La Palma (2021-09-30), and Mayon (2018-02-09), provided by Meoni et al. [42]. Each image is produced by cutting and mosaicking the 20 m bands B8A (near-infrared) and the B11 and B12 (short-wave infrared) bands of multiple tiles over the area of band B8A of the corresponding Sentinel-2 L0 granule [43]. We artificially extended the simulation time by repeatedly evaluating the images until 100 images in total were processed in the simulation.
In addition, we assumed all data to be already acquired and available for processing, disregarding hardware-related delays that would occur in reality, as we focus here on profiling PASEOS. Code profiling was performed using the Python module yappi [44]. In particular, we measured the Central Processing Unit (CPU) time on the iX-10 100 device for three different values (0.25 s, 0.5 s, 1 s) of the PASEOS timestep, i.e., the interval at which PASEOS updates its physical models and checks user constraints. We performed three runs for each choice of PASEOS timestep. During each run, we measured the CPU time for the user activity, for checking user constraints (i.e., the check of LOS with the ground station), the time to update PASEOS overall, and the individual times spent updating the radiation, thermal, and battery charge models. Results were averaged over the three runs for each test case. All tests were performed sequentially, with two warm-up runs at a PASEOS timestep of 1 s.

#### III-A2 Onboard Volcanic Eruption Detection

The detection is based on a simplified implementation of the algorithm [45], presented in [42], which produces a bounding box surrounding the detected volcanic eruption and the associated geographical coordinates. Since the aim of the activity is to provide an early alert to promptly notify and locate a detected volcanic eruption, a possible alert message will provide the coordinates of the bounding box center or its top-left/bottom-right points. One example of a detected volcanic eruption on the island of La Palma is shown in Fig. 3.

#### III-A3 Results of Profiling

Tab. 1 shows the simulation results for the runs on the iX-10 100 device. In each test, the activity was interrupted after roughly 29 s of CPU time because the satellite was found in LOS with the ground station.
Indeed, the time to perform the user activity is similar for each value of the PASEOS timestep, whilst the times to update the PASEOS state and to check the user constraints grow almost linearly with the update rate. However, even for the smallest PASEOS timestep, the latter amount to 0.43 s and 0.29 s, respectively, compared to 28.64 s for the user activity. Overall, PASEOS requires around 1.45% of the total run time. This demonstrates that PASEOS is a lightweight solution suitable for onboard processing use cases on embedded hardware. As can be seen in Tab. 1, the time for modelling the battery charge is equal to that for updating the thermal model. This is because updating both the power and the thermal models requires checking whether the spacecraft actor is in eclipse. This operation is carried out only once for both models, and its run time is attributed equally to the thermal and power models. All other operations to update the battery SoC and thermal models are negligible.

### III-B Multiple Actors: Communications Modelling of a Constellation

The capabilities of PASEOS to model the operational constraints of managing a constellation are demonstrated in this test case. Detailed results on the constellation's status over time are given, and a simple scaling study is performed to showcase the potential of PASEOS for modelling large constellations.

#### III-B1 Setup

The scenario investigated here is a LEO constellation consisting of sixteen spacecraft in a Walker pattern [46] in four planes at 550 km altitude with an inclination of \(10^{\circ}\). Operations of the constellation are modelled for eight hours, which equals slightly more than five orbital revolutions. The satellites are presumed to be equally equipped with a 1 MJ battery with an initial SoC drawn uniformly at random between 0.1 and 1.0. Each satellite is equipped with solar panels charging at 50 W.
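The orbital angles of the Walker pattern described above can be generated, e.g., as follows (illustrative helper, not part of PASEOS; it uses the common Walker-delta convention in which the phasing parameter shifts the anomaly between adjacent planes):

```python
def walker_elements(total, planes, phasing, inclination_deg):
    """Angles for a Walker-delta pattern total/planes/phasing: returns a
    (RAAN, true anomaly, inclination) tuple in degrees per satellite."""
    per_plane = total // planes
    sats = []
    for p in range(planes):
        raan = 360.0 * p / planes  # planes evenly spaced in RAAN
        for s in range(per_plane):
            # satellites evenly spaced in-plane, plus inter-plane phasing
            anomaly = (360.0 * s / per_plane
                       + 360.0 * phasing * p / total) % 360.0
            sats.append((raan, anomaly, inclination_deg))
    return sats
```

For the 16-satellite, 4-plane case this yields planes at RAAN 0°, 90°, 180° and 270°, each holding four evenly spaced satellites.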
Satellites are assumed to have a mass of 50 kg, to be at 273.15 K initially, and to have an absorptance of solar and infrared light of 1.0. The areas facing the Sun and the Earth are each assumed to be 2 m\({}^{2}\). The emissive (radiating) area is presumed to be 4 m\({}^{2}\). The thermal capacity is assumed to be 1000 Jkg\({}^{-1}\)K\({}^{-1}\). We assume half of the wattage used for satellite operations to be converted to heat. The Earth's temperature is assumed to be 288 K, its infrared emissivity 0.6, and its solar reflectance 0.3. The solar irradiance is estimated as 1360 Wm\({}^{-2}\). Satellites have two operational modes: a standby mode called Standby, in which they consume only 2 W, and a mode called Processing, in which they consume 100 W. The satellites automatically switch to Standby if their battery's SoC falls below 0.2 or their temperature rises above 330 K. In addition to the constellation, we monitor the availability of communication links to a satellite in geosynchronous orbit called GeoSat and to the European Space Agency ground station at Maspalomas, Gran Canaria. As for the single-satellite scenario, the ground station link is available if satellites are \(5^{\circ}\) above the horizon. The geosynchronous satellite is reachable from the Maspalomas station. In PASEOS, we use a timestep of 1 s for the thermal and power models. Satellites decide every 600 s whether they are ready for Processing. If the constraints for Processing are violated during the 600 s interval, they switch to Standby for the remainder of the interval.

#### III-B2 Constellation Analysis

We analyze the constellation with regard to several operational factors. First, in terms of time spent processing: as can be seen in Fig. 4, roughly half of the satellites are processing at any time, and both battery SoC and temperature limit the periods of operation.
Given the circular LEO orbits of the constellation, the satellites spend a large amount of time in eclipse, with 25 to 50% of the constellation being in eclipse at any moment. This also influences the operational temperature of the satellites, as seen in Fig. 6, which rises quickly at the beginning, when a large share begins processing, and falls especially during eclipse.

| CPU time in s (%) | 0.25 s timestep | 0.5 s timestep | 1.0 s timestep |
|---|---|---|---|
| User activity | 28.64 (97.47) | 28.59 (98.66) | 28.52 (99.27) |
| LOS constraint check | 0.29 | | |
| SoC model | 0.11 (0.39) | 0.062 (0.21) | 0.034 (0.12) |
| Radiation model | | | |
| Thermal model | 0.11 (0.39) | | |
| Other | 0.19 (0.65) | | |
| PASEOS subtotal | 0.43 (1.45) | | |
| Total | 29.39 | 28.98 | 28.74 |

Table 1: Results of profiling on a Unibap iX-10 100 device for different PASEOS timesteps. Each column reports the CPU time in seconds spent for the user-defined activity, for updating the physical models of PASEOS, and their individual contributions; the time spent checking the LOS constraint is reported as well. Percentages with respect to the total time are given in brackets.

Figure 3: Detected volcanic eruption on La Palma (Spain) on 2021-09-30. The displayed coordinates (longitude, latitude) correspond to the center of the detected bounding box.

Even though PASEOS' SoC model is, for now, simplistic, complex dynamics of the power budgets can be observed in Figure 5.
Satellites in the constellation fluctuate between a SoC of 0.2 and 1.0. The low consumption of the Standby activity means they never run the risk of reaching critical SoCs. Overall, the constellation's behavior in terms of satellites performing Processing, as well as its temperature and SoC, becomes stable and cyclic after roughly one orbital revolution. In terms of communication status, 50 to 75% of the constellation are not within LOS of either the ground station or the satellite in geosynchronous orbit, as can be seen in Figure 7. As the geosynchronous satellite is reachable from Maspalomas, it is also in LOS of a constellation satellite whenever the ground station is.

#### III-B3 Performance and Scaling

Running this scenario on an AMD Ryzen 5 consumer-grade CPU on a single thread requires about 200 s. Time is spent almost exclusively on the physical models and LOS checks. With 16 satellites, running the simulation for a simulation time of 600 s requires 8.3 s, i.e., 0.52 s per satellite. We investigated the scaling by increasing the number of satellites per plane. With a constellation of size 32, 16.4 s were necessary for 600 s of simulation time, or 0.51 s per satellite. The better-than-linear scaling is likely due to a constant initialization overhead when starting the simulation. With 512 satellites, the 600 s of simulation time required 255.9 s of computation time, or 0.50 s per satellite. Thus, PASEOS scales well even to a large number of satellites. Parallelization of these simulations is also trivial, given the fully asynchronous nature of PASEOS, which permits parallelization without any concern for shared memory or similar.

### III-C Onboard Machine Learning in Orbit

This test demonstrates how PASEOS can be employed to model and monitor constraints when solving a machine learning task in a decentralized setting. Limited communication windows, heterogeneous data, and power constraints are imposed while successfully solving a classification task.
#### III-C1 Setup

The operational scenario in this example consists of two satellites in circular LEO at an altitude of 550 km with an inclination of \(98.62^{\circ}\). They move in opposing directions and thus have only brief communication windows amongst them, twice per orbital revolution. Both are equipped with a 0.1 MJ battery with an initial SoC randomly chosen between 0.6 and 0.8. They have solar panels which charge the battery at 50 W when not in eclipse. They are also equipped with inter-satellite links enabling them to transmit 1 Mbit per second amongst themselves when in LOS. The satellites are assumed to not communicate during the first ten orbital revolutions.

Figure 4: Overview of the satellites able to process and in eclipse over time.

Figure 5: Battery SoC of the constellation. Different colors indicate percentage intervals around the median.

Figure 6: Temperature of the constellation. Different colors indicate percentage intervals around the median.

Figure 7: Overview of the satellites' communication status over time.

The satellites are tasked with jointly learning a binary classification task, identifying an inner and an outer circle by leveraging data uniquely available to each satellite. The dataset used and its distribution among the satellites are depicted in Fig. 8. Notably, the distributions are heterogeneous, with the satellites only having access to data points with values in the first dimension above 0.5 and below -0.5, respectively. The test dataset is identical on both satellites and covers the complete feature space. A total of 4166 training samples are used, and the test dataset consists of 3300 samples. A two-layer dense neural network with ten neurons is trained using stochastic gradient descent with a learning rate of 0.1 and a binary cross-entropy loss function. Training is modelled for a total of 30 orbital revolutions.
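When the satellites exchange models, the parameters are combined by element-wise averaging (federated averaging). For flat parameter vectors the aggregation step reduces to a mean per weight; a minimal sketch (not the experiment's actual code):

```python
def federated_average(weight_sets):
    """Element-wise average of model parameters from several actors
    (federated averaging with equal weighting; parameters are given as
    flat lists of floats for simplicity)."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]
```

In practice each satellite would flatten its network weights, exchange them during a communication window, and continue training from the averaged model.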
In each revolution, if allowed, the satellites communicate twice and train an aggregated model in the window between communications. On average, Satellite 1 is able to train 42.3 epochs and 11.6 epochs in the two different windows, whereas Satellite 2 is able to train 4.1 epochs and 50.5 epochs. During operations, the satellites have three distinct operational modes, performed in descending priority:

* When the SoC is below 0.5, the satellites stand by to conserve and charge their batteries. This mode drains 2 W.
* When the satellites are in LOS of each other, they exchange their models and aggregate them via federated averaging at a power consumption of 100 W.
* If neither of the above takes place, the satellites perform a training epoch at a power consumption of 100 W.

#### III-C2 Learned Model and Communications

Overall, training the models on the two satellites is successful, reaching an accuracy of over 91% on both satellites, compared to a random-guess accuracy of 50% and independent training that results in an accuracy below 70%. Figure 9 displays the test set accuracy over time and shows how both satellites struggle to obtain good results individually. However, when they exchange models (marked with gray vertical lines), they improve their accuracy noticeably. Notice that model transmission is virtually instant given the small model size (only 1312 bits). It can also be seen that each satellite benefits from each communication round, as the test accuracy rapidly increases afterwards. However, once a satellite starts training on local data again, the performance deteriorates due to catastrophic forgetting [47]. Furthermore, there is a large discrepancy between the test accuracies of the two satellites. This happens because Satellite 1 is able to train after each model exchange: charging is initiated after one exchange, and the satellite remains in LOS with the Sun after the next exchange.
Satellite 2, on the other hand, is charging before one exchange and has stopped charging during the next. Hence, Satellite 2 may not have sufficient battery charge to train after one of the model exchanges and, therefore, its convergence is slower, as seen in Fig. 9. In terms of power consumption, which is displayed in Fig. 10, training is visible in the intervals of rapid SoC oscillation, where the battery is charged and then drained by the training runs. During eclipse the satellites go into standby to conserve SoC and drain only 2 W. Finally, the rapid increase in SoC occurs when a satellite is facing the sun and training is not allowed because the SoC is below 0.5.

Figure 8: Training and test data for the binary classification task as well as their distribution among satellites. Note the heterogeneity of the data distribution in the training set. \(y=0\) and \(y=1\) are the classes of the problem.

Figure 9: Test accuracy over 50 orbital revolutions. Vertical gray lines indicate communication between the satellites. Constant accuracy stems from the battery SoC being below 0.5.

## IV Discussion

In general, the examples in Sec. III clearly demonstrate the viability of PASEOS for the three considered test cases. By running on an actual satellite processor in real time, we have also shown the feasibility of using PASEOS to model scenarios while utilizing real, prospective mission hardware. In the case of the LEO constellation, we can see that PASEOS is also capable of modelling large constellations on a longer time frame. Finally, we also showcased how PASEOS can be used to model constraints that impact the training of machine learning models directly in space. There are, however, some aspects that mandate further discussion and consideration.

### Impact of onboard constraints on machine learning

One of the objectives of PASEOS is to study the impact of operational constraints when utilizing machine learning methods in orbit.
Especially in the context of distributed and onboard learning paradigms, it is an essential question what this impact will be [10, 11]. As demonstrated in our onboard learning results in Sec. III-C, PASEOS can provide concrete insights into the operational impact of factors such as orbital dynamics and power consumption. As can be seen in Fig. 9, the timing of communication windows and eclipse directly influenced the accuracy obtained during the distributed training. While the importance of factors such as temperature, battery SoC, or the timing of communication windows in relation to eclipse is well established among spacecraft operators, these factors have not been studied in the context of applying machine learning methods in orbit. Even in-orbit demonstrations, such as \(\Phi\)-Sat-1 [48], did not explore the topic of continuous operations of these systems over a longer operational time frame, where many of the constraints captured by PASEOS play an important role. For both inference and training in constellations, to the authors' knowledge, there is currently a lack of holistic modelling of these factors as offered by PASEOS.

### Scalability & performance

As shown in the single-device example using real space-rated embedded hardware in Sec. III-A, PASEOS is a lightweight tool that allows modelling the space environment and operational constraints with minimal overhead for the user activity. Results shown in Sec. III-A3 showcase that the computational complexity inside PASEOS models scales linearly with the PASEOS timestep while requiring less than 1.5% of the total CPU time compared to the user activity with the highest investigated update rate of PASEOS. This is due to the particularly efficient physical models that offer suitable trade-offs to be used on edge devices. This partially comes at the cost of model fidelity, for which the main limitations and possible future improvements are discussed in Sec. IV-C and IV-D.
On the other end of the simulation spectrum, the LEO constellation example in Sec. III-B shows that even though PASEOS is able to handle large constellations of up to hundreds of satellites, performance does become a concern. If one wants to model the long-term viability of a constellation, including aspects like station keeping, degradation of photovoltaic cells and batteries, and similar effects (which are yet to be implemented in PASEOS), computational time may become a concern. In the single-threaded, single-core run used for the example, the runtime would, at the moment, already be prohibitive for a constellation with hundreds of satellites on a timescale of months or years. Thus, a careful study of the parallelization possibilities of PASEOS will be required in the future. As PASEOS is implemented in a fully asynchronous manner, it is straightforward to employ tools such as the popular message passing interface (MPI) [49] to use compute clusters for PASEOS; this is shown in one of the online examples. A more detailed investigation of this is warranted.

### Fidelity considerations

One key point in terms of fidelity that has to be taken into account is the user-provided specifications. They are not studied here in detail as they are beyond the scope of this initial release. However, given the holistic nature and the complex emergent behavior of PASEOS (see, e.g., the complex SoC curve stemming from just a simple, linear charge model in Figure 5), the accuracy and quality of the input parameters is likely to become a critical factor in the fidelity of results produced by PASEOS. Notably, if all physical models are activated, the number of input parameters becomes fairly large. Indeed, just the thermal model requires eight parameters, the power model currently requires three, the radiation model another three, and the orbital model seven, and these parameters need to be defined for every single actor in PASEOS.
Thus, even with the simplified models present in PASEOS, it is evident that constellation scenarios modelled with PASEOS are already highly complex systems. In the future, this may require a more detailed analysis of the system sensitivity to specific parameters to guide users as to which parameters are particularly critical. It is also conceivable to add noise terms in a variety of PASEOS's models to account for this sensitivity. Even higher robustness could be achieved by describing some of the parameters by distributions instead of scalar values, thus including a probabilistic, Bayesian modelling component that allows consideration of prior assumptions about the accuracy of passed parameters.

Figure 10: SoC between orbital revolutions 15 and 20.

### Limitations & prospective additions

At the moment, there are some natural limitations to the fidelity of the models in PASEOS and supported scenarios. The two objectives of holistically modelling the operational environment in space and being able to execute PASEOS in the background in real time on edge devices require a careful balancing of computational cost and physical fidelity. In the future, this may be remedied by adding optional components to PASEOS that model the various physical aspects at a higher fidelity when enough computational resources are available. Initially, however, we have focused on breadth of considered physical aspects instead of high fidelity in individual aspects. In terms of astrodynamics modelling, Keplerian dynamics are a sufficient start for low-precision Earth orbits but insufficient to, e.g., model orbits around irregular bodies such as asteroids or comets. Similarly, phenomena such as station keeping and occasional losses of tracking cannot be modelled with them. Natural additions in the future would be a polyhedral gravity model [50] or the wrapping of software like orekit [17].
The availability of communication windows in PASEOS is currently determined solely by whether the LOS is being blocked by an assumed sphere with a specific radius. More complex geometry could be integrated via meshes. Other factors, such as the success of tracking, the distance between actors, and atmospheric conditions for optical communications, are also currently not modelled but would be sensible additions. Similarly, the communication bandwidth that is currently available is in reality a more complex and variable quantity, dependent on many of the just-mentioned factors such as distance and link conditions, and on other parameters linked to the transmission chain (i.e., channel encoding, modulation, additional use of synchronization pilots, etc.). A more thorough channel modelling is required to account for this. A potential way forward is to wrap ns-3 or OMNeT++ into PASEOS [15, 16]. In regard to power budgets, there are also several conceivable improvements, such as more rigorous modelling of the state of charge of the battery [51], a more thorough model for the charging via solar panels [52], and consideration of factors such as the age and temperature of the battery, devices, and solar panels. The thermal model in PASEOS would, in a similar vein, benefit from a more complex model that accounts for the thermal properties of various components such as solar panels, radiators, and others. Concerning radiation, total ionization dose effects are not currently implemented in PASEOS, but they will become relevant for simulations on the time scale of years [12], especially for commercial off-the-shelf devices [12, 20, 53]. Another direction of improvement lies in expanding the capabilities of actors. Ground-based actors are currently stationary and only supported for scenarios on Earth. Further, space-based actors cannot perform manoeuvres (although one can manually change the orbit).
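The sphere-based LOS check described above boils down to a standard segment-sphere intersection test. The sketch below is a generic geometric illustration of that idea, with invented names and a spherical-Earth radius for the example values; it is not PASEOS's actual implementation.

```python
import numpy as np

def has_line_of_sight(p1, p2, radius, center=(0.0, 0.0, 0.0)):
    """True if the straight segment p1 -> p2 does not pass through the
    blocking sphere of the given radius (e.g. a spherical Earth)."""
    p1 = np.asarray(p1, dtype=float) - center
    p2 = np.asarray(p2, dtype=float) - center
    d = p2 - p1
    # Parameter of the segment point closest to the sphere centre,
    # clamped to [0, 1] so the test stays on the segment itself.
    t = np.clip(-(p1 @ d) / (d @ d), 0.0, 1.0)
    closest = p1 + t * d
    return float(np.linalg.norm(closest)) >= radius

R_EARTH = 6371e3  # m
# Satellites at ~550 km altitude on opposite sides of Earth: occluded.
opposite_sides = has_line_of_sight((6921e3, 0, 0), (-6921e3, 0, 0), R_EARTH)
# Two nearby satellites on the same side: visible.
same_side = has_line_of_sight((6921e3, 0, 0), (6921e3, 500e3, 0), R_EARTH)
```

Clamping `t` matters: without it, two nearby satellites whose extended chord intersects the Earth, but whose actual segment does not, would be misclassified as occluded.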
In the future, these capabilities could support more complex operational scenarios and activities. Overall, there is a virtually endless range of potential improvements in fidelity and modelled aspects. It will require careful analysis of which aspects are critical to enable realistic constraint consideration and which ones can be simplified to the degree currently the case in PASEOS. But to even enable these comparisons, one needs to start with a baseline first, which is what we are providing here.

## V Conclusion

PASEOS is a software package that enables holistic modelling of the onboard environment, accounting for, e.g., power, thermal, radiation, and dynamics. The generality of PASEOS makes it a tool to study a plethora of operational scenarios and hardware configurations, or to be used in conjunction with other simulation tools. In particular, PASEOS is well suited to study constellations in space for emerging operational scenarios, such as edge computing, edge and decentralized learning, and artificial intelligence in space [54]. Overall, we have demonstrated that PASEOS provides the means to model a variety of constraints that spacecraft and their operators experience in orbit. Thus, we can explore the feasibility of onboard activities with greater rigor before launch and/or form an understanding of how already operational assets may be repurposed or perform in the future. Given PASEOS's asynchronous and versatile setup, a broad range of scenarios ranging from one to multiple spacecraft (including ground-based actors) is possible. Both real-time at-the-edge execution and long-term simulation on a computing cluster or similar infrastructure are supported. The specifics of the modelled quantities can be adjusted to fit the particular scenario. However, there are of course intrinsic limitations to this process. At the moment, the models inside PASEOS exhibit comparatively lower fidelity than simulators for dedicated topics (e.g.
ns-3 or OMNeT++) to enable rapid background execution. PASEOS's modular nature does provide a natural surface for extension and wrapping of more complex physical models. In the future, we will conduct application-specific studies, explore more complex models, and demonstrate an operational scenario using PASEOS on multiple edge devices in parallel to solve a real-world task.

## Acknowledgment

The authors would like to thank Unibap AB for providing the iX-10 100 device that was used for our experiments.
2304.09482
Standard quantum field theory from entangled relativity
Despite its non-linear form, entangled relativity possesses both general relativity and standard quantum field theory in a specific (but generic) limit. On one side it means that the theory is consistent with our current understanding of elementary physics. But on the other side it means that our current understanding might actually just be approximately valid: and this, surprisingly, goes for both \textit{general relativity} and standard quantum field theory together.
Olivier Minazzoli
2023-04-19T08:07:04Z
http://arxiv.org/abs/2304.09482v1
# Standard quantum field theory from entangled relativity ###### Abstract Despite its non-linear form, _entangled relativity_ possesses both _general relativity_ and standard quantum field theory in a specific (but generic) limit. On one side it means that the theory is consistent with our current understanding of elementary physics. But on the other side it means that our current understanding might actually just be approximately valid: and this, surprisingly, goes for both _general relativity_ and standard quantum field theory together. As we shall see in this communication, _entangled relativity_ is a general theory of relativity that is more economical than _general relativity_ coupled to matter fields when embedded in a quantum field theory framework, because it requires only two universal dimensionful parameters to be defined, whereas it has all the same ingredients otherwise. Indeed, let us start from its path integral: \[Z=\int{\cal D}g\prod_{i}{\cal D}f_{i}\exp\left[-\frac{i}{2\epsilon^{2}}\int d _{g}^{4}x\frac{{\cal L}_{m}^{2}(f,g)}{R(g)}\right], \tag{1}\] where \(\int{\cal D}\) relates to the sum over all possible field configurations, \(R\) is the usual Ricci scalar that is constructed upon the metric tensor \(g\), \({\rm d}_{g}^{4}x:=\sqrt{-|g|}{\rm d}^{4}x\) is the spacetime volume element, with \(|g|\) the metric \(g\) determinant, and \({\cal L}_{m}\) is the Lagrangian density of matter fields \(f\)--which could be the current _standard model of particle physics_ Lagrangian density, but most likely a completion of it. One can check that the dimension of what is historically called "the action" turns out to be an energy squared. Thus, the only parameter of the theory is a quantum of energy \(\epsilon\). This parameter and the causal structure constant \(c\)--hidden in the spacetime volume element--are the only two universal constants of the theory. 
At this stage, one can already deduce an important fact about this theory: it does not have a quantum of action. As a consequence, one can already deduce that the quantum of action \(\hbar\) can only be effective, rather than elementary. Obviously, this changes quite a bit from the XXth century picture of elementary physics. Another important fact that one can draw at this stage is that one cannot construct a length scale or a time scale from a quantum of energy (\(\epsilon\)) and a speed (\(c\)). Hence, right from the beginning, one can deduce that there is no notion of elementary units of space, or of time, in this theory. Given that all the real troubles of quantum _general relativity_ are linked, one way or another, to the existence of the Planck length and time [2], it is a rather unexpected and interesting fact about this theory. In particular, it means that there is no reason a priori to expect anything special happening to the smooth structure of spacetime at any scale--unlike what has been realized very early on in quantum _general relativity_ [3]. This would be good news, as one would likely not have to (fundamentally) come up with a discretization recipe for the computation of the path integral Eq. (1), such as in _causal dynamical triangulation_ or in _Regge calculus_ for (non-perturbative) quantum gravity--although such types of discretization procedures may still be necessary approximations in order to be able to actually evaluate the path integral numerically [2]. Therefore, quantum _entangled relativity_ should be a radically new direction to explore and to evaluate in the field of quantum gravity. A common naive comment that some may have when confronted for the first time with Eq. (1), among many, is simply that it cannot make sense, given the infamous \({\cal L}_{m}^{2}\) in the numerator.
Indeed, if one assumes that gravity can be neglected at the scale of particle physics experiments, then \(R\) must be constant--if not much worse, \(R=0\)--and one ends up with a theory that does not make any sense from a quantum field theory point of view--nor from any point of view for that matter. But the comment is naive, because it assumes that neglecting gravity only implies that the variation of the metric tensor can be neglected at the scale of particle physics. But the classical theory that derives from the extremization of the quantum phase in Eq. (1) is not _general relativity_. Hence, one does not know, a priori, what neglecting gravity even means in this context. In order to figure it out, one has no choice but to study the gravitational (classical) phenomenology of the theory. Fortunately enough, the classical gravitational phenomenology of the theory turns out to correspond to a special case of a class of theories studied some ten years ago [1]. Indeed, let us start from the following action \[S\propto\int d_{g}^{4}x\frac{\Phi}{2\alpha}\left[R(g)-\frac{\omega(\Phi)}{\Phi^{2}}(\partial\Phi)_{g}^{2}+\frac{2\alpha}{\sqrt{\Phi}}{\cal L}_{m}(f,g)\right], \tag{2}\] where \(\omega(\Phi)\) is an arbitrary function, and \(\alpha\) a normalization constant such that \(\alpha/\Phi_{0}=8\pi G/c^{4}\), where \(\Phi_{0}\) is the background value of the scalar field in the solar system for instance.
The field equations that derive from this action read \[R^{\mu\nu}=\alpha\frac{1}{\sqrt{\Phi}}\left[T^{\mu\nu}-\frac{1}{2}g^{\mu\nu}T\right]+\frac{1}{\Phi}\left[\nabla^{\mu}\partial^{\nu}\Phi+\frac{1}{2}g^{\mu\nu}\Box\Phi\right]+\frac{\omega(\Phi)}{\Phi^{2}}\partial^{\mu}\Phi\partial^{\nu}\Phi, \tag{3}\] with \[\frac{2\omega(\Phi)+3}{\Phi}\Box\Phi+\frac{\omega_{,\Phi}(\Phi)}{\Phi}\left(\partial_{\sigma}\Phi\right)^{2}=\alpha\frac{1}{\sqrt{\Phi}}\left[T-{\cal L}_{m}^{o}\right], \tag{4}\] where \({\cal L}_{m}^{o}\) is the on-shell value of the matter Lagrangian, and \[\nabla_{\sigma}T^{\mu\sigma}=\frac{1}{2}\left({\cal L}_{m}^{o}g^{\mu\sigma}-T^{\mu\sigma}\right)\frac{\partial_{\sigma}\Phi}{\Phi}. \tag{5}\] This theory is well-defined for all \(\omega\neq-3/2\), and notably implies the same post-Newtonian parameters as _general relativity_--namely \(\gamma=\beta=1\)--for all \(\omega>-3/2\). It takes literally two lines of calculation to show that the case \(\omega=0\) is equivalent, at the classical level, to the theory that derives from the extremization of the quantum phase in Eq. (1), provided that \({\cal L}_{m}\neq\emptyset\) in the action.1 In the solar system for instance [1], the field equations from Eq. (2) are such that the gravitational field \(\Phi\) is constant--or, at least, varies much less than the spacetime metric--for all \(\omega>-3/2\). This has been named an _intrinsic decoupling_ [1]. It turns out that this remains true in general for a universe mainly made of dust and null electromagnetic radiation such as ours. **But what does it mean for the theory in Eq. (1) that \(\Phi\) varies much less than the spacetime metric?** Well, it notably means that the ratio between \({\cal L}_{m}^{o}\) and \(R\) is constant whenever gravity can be neglected.2 Indeed, at the classical level, one has \(\sqrt{\Phi}=-\alpha\;{\cal L}_{m}^{o}/R\) for \(\omega=0\), as one can check from the derivation of the field equations.
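Spelling out the two-line calculation just mentioned: for \(\omega=0\), inserting the on-shell relation \(\sqrt{\Phi}=-\alpha\,{\cal L}_{m}^{o}/R\) into the integrand of Eq. (2), and identifying \({\cal L}_{m}\) with its on-shell value, gives \[\frac{\Phi}{2\alpha}\left[R+\frac{2\alpha}{\sqrt{\Phi}}{\cal L}_{m}\right]=\frac{\alpha}{2}\frac{({\cal L}_{m}^{o})^{2}}{R}-\alpha\frac{({\cal L}_{m}^{o})^{2}}{R}=-\frac{\alpha}{2}\frac{({\cal L}_{m}^{o})^{2}}{R},\] so that the on-shell action is proportional to \(\int d_{g}^{4}x\,{\cal L}_{m}^{2}/R\), i.e., precisely the non-linear phase of Eq. (1) up to an overall constant.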
Let us specialize to the \(\omega=0\) case for readability, and rewrite our action as follows \[S\propto\int d_{g}^{4}x\frac{1}{\kappa}\left(\frac{R(g)}{2\kappa}+{\cal L}_{m}(f,g)\right), \tag{6}\] with \(\kappa=\alpha/\sqrt{\Phi}\), whose solution is such that \(\kappa=-R/{\cal L}_{m}^{o}\). It turns out that this field definition is slightly more general than the previous one, as \(\kappa\) can in principle also be negative with this definition, whereas \(\sqrt{\Phi}\) could not, by construction. \(\kappa<0\) should not happen in the observable universe though. Indeed, \(\kappa\) for any local solution (e.g. neutron star, black hole, solar system, etc.) has boundary conditions that are such that \(\kappa=8\pi G/c^{4}\) at the boundary, usually corresponding to the constant background of the scalar field at the local scale. From there, one can check that, even for the ultra-relativistic density of neutron stars, \(\kappa\) varies only by a few percent at most [5]. Therefore, it has no chance of "crossing the line" for the densities of the celestial bodies that exist in the observable universe, and which are not hidden behind an horizon. Nevertheless, one cannot exclude that \(\kappa\) can in principle become negative in even denser situations than neutron stars. Rather than a bug of the theory, I think this might end up being a potentially interesting feature of the theory, perhaps for instance for describing the primordial universe and/or the behavior of matter inside black holes. **Fine, but what can we say about quantum field theory when gravity can be neglected in Eq. (1)?** We can say that the metric field does not vary at the scale of particle physics, as usual, but we can also say that the ratio between \({\cal L}_{m}\) and \(R\)--which is a gravitational degree of freedom (\(\kappa\) or \(\Phi\) in Eq. (4) with \(\omega=0\))--does not vary either. Hence, when one neglects gravity, Eq.
(1) reduces to \[Z\approx\int\prod_{i}{\cal D}f_{i}\exp\left[\frac{i}{\kappa\epsilon^{2}}\int d^{4}x\,{\cal L}_{m}(f)\right]. \tag{7}\] From this limit, one can now identify the quantum of energy \(\epsilon\) that was the only free parameter of the theory in Eq. (1). Indeed, in order to match with standard quantum field theory on "flat spacetime"3, one must have \[\kappa\epsilon^{2}=c\hbar, \tag{8}\] such that \(\epsilon\) turns out to be the (reduced) Planck energy \(\epsilon=\sqrt{c\hbar/\kappa}\).

Footnote 3: Let us stress the obvious: as far as standard physics goes, the concept of a flat spacetime is not something that exists in the universe (anywhere), since it is not a solution of _general relativity_ with a cosmological constant. (Obviously, a flat spacetime is also prohibited in _entangled relativity_.) Hence, in standard physics, a "flat spacetime" is just a useful approximation on scales at which the gradient of the spacetime metric can be neglected. Nothing more than that. So, quantum field theory on "flat spacetime" is just quantum field theory when gravity is neglected.

This shows that, when gravity can be neglected, the weird-looking non-linear phase in Eq. (1) recovers standard quantum field theory on "flat spacetime". This is far from being trivial, and it boils down to the gravitational (classical) phenomenology of the theory, which implies that \(\kappa\) varies much less than the spacetime metric for a universe like ours. There is exactly one constant on each side of Eq. (8): \(\epsilon\) on the left-hand side and \(c\) on the right-hand side. Hence, as expected from the start of this communication, \(\hbar\) is not an elementary constant of nature in this framework, but rather emerges as such in some limit of the theory. Indeed, although Eq. (8) has been obtained in the \(\kappa=\)constant limit, the value of \(\kappa\) can change by a few percent inside a neutron star for instance [5].
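For reference, with the background value \(\kappa=8\pi G/c^{4}\), the quantum of energy identified in Eq. (8) evaluates numerically to \[\epsilon=\sqrt{\frac{c\hbar}{\kappa}}=\sqrt{\frac{\hbar c^{5}}{8\pi G}}\approx 3.9\times 10^{8}\ \mathrm{J}\approx 2.4\times 10^{18}\ \mathrm{GeV},\] which is indeed the reduced Planck energy.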
So, even if \(\kappa\) can be approximated as constant at the scale of a given particle physics phenomenon, it may change depending on the location of the phenomenon inside, or close to, a neutron star, by a few percent over several km. Hence, it means that, effectively, \(\hbar\) has a value that varies depending on the location. This is a novel prediction, which does not depend on any free theoretical parameter, and that might eventually be probed at the observational or experimental level--although the level of variation of \(\hbar\) should be minute in the observable universe due to the _intrinsic decoupling_ mentioned above. One can even conjecture that the increase of the apparent fine structure constant (potentially) observed in strong gravitational fields [7] might be related to that. Indeed, what is interpreted as a variation of the fine structure constant (\(\alpha_{e}\)) could likely be interpreted as a variation of Planck's quantum of action \(\hbar\) instead--keeping \(\alpha_{e}\) constant. (I thank my friend Aurelien Hees for pointing that out.) **What does that mean for standard quantum field theory?** In standard quantum field theory, the path integral and canonical quantization are two sides of the same thing. Indeed, as Feynman famously first showed for quantum mechanics, a path integral ought to lead to the same results as canonical quantization. Hence, from Eq. (7), one can expect the usual commutation relation between conjugate quantities, but rewritten as follows \[[\hat{A},\hat{B}]\approx i\frac{\kappa\epsilon^{2}}{c}\mathbb{I}, \tag{9}\] where \(\hat{A}\) and \(\hat{B}\) are two arbitrary canonically conjugate quantities--as long as the spatio-temporal scales of the quantum phenomena considered are small with respect to the spatio-temporal variations of the background value of \(\kappa\) in the _semiclassical_ limit of the theory.
However, this should no longer be correct beyond the \(\kappa=\)constant limit--that is, when \(\kappa\) varies significantly even at the scale of particle physics. But one does not even know how one could write Eq. (9) in that situation, given that the very notion of a constant quantum of action disappears. Therefore, one may expect that one loses the equivalence between canonical quantization and the path integral when \(\kappa\) can no longer be approximated as constant at the scale of particle physics--that is, way before the quantum gravity regime. One may even conjecture that the procedure of canonical quantization might therefore only be approximately valid in the \(\kappa=\)constant limit, but not actually valid at an elementary level. This would be quite a blow for the various programs of canonical quantization of gravity, such as _Loop Quantum Gravity_. Studies of quantum _entangled relativity_ should therefore probably concentrate on the path integral formulation. In any case, because of the intertwined nature of the theory, one can expect that quantum _entangled relativity_ will also depend strongly on the definition of \(\mathcal{L}_{m}\) that is considered to be valid up to the Planck energy scale. This should drastically complicate any investigation of it, as it no longer makes sense to consider the general theory of relativity and the theory of matter fields individually, and one can expect that only the complete theory--that is, Eq. (1) with an appropriate definition of \(\mathcal{L}_{m}\)--will make sense at the Planck energy scale.
2304.03811
Effect of Pt vacancies on magnetotransport of Weyl semimetal candidate GdPtSb epitaxial films
We examine the effects of Pt vacancies on the magnetotransport properties of Weyl semimetal candidate GdPtSb films, grown by molecular beam epitaxy on c-plane sapphire. Rutherford backscattering spectrometry (RBS) and x-ray diffraction measurements suggest that phase pure GdPt$_{x}$Sb films can accommodate up to $15\%$ Pt vacancies ($x=0.85$), which act as acceptors as measured by Hall effect. Two classes of electrical transport behavior are observed. Pt-deficient films display a metallic temperature dependent resistivity (d$\rho$/dT$>$0). The longitudinal magnetoresistance (LMR, magnetic field $\mathbf{B}$ parallel to electric field $\mathbf{E}$) is more negative than transverse magnetoresistance (TMR, $\mathbf{B} \perp \mathbf{E}$), consistent with the expected chiral anomaly for a Weyl semimetal. The combination of Pt-vacancy disorder and doping away from the expected Weyl nodes; however, suggests conductivity fluctuations may explain the negative LMR rather than chiral anomaly. Samples closer to stoichiometry display the opposite behavior: semiconductor-like resistivity (d$\rho$/dT$<$0) and more negative transverse magnetoresistance than longitudinal magnetoresistance. Hysteresis and other nonlinearities in the low field Hall effect and magnetoresistance suggest that spin disorder scattering, and possible topological Hall effect, may dominate the near stoichiometric samples. Our findings highlight the complications of transport-based identification of Weyl nodes, but point to possible topological spin textures in GdPtSb.
Dongxue Du, Laxman Raju Thoutam, Konrad T. Genser, Chenyu Zhang, Karin M. Rabe, Bharat Jalan, Paul M. Voyles, Jason K. Kawasaki
2023-04-07T18:36:26Z
http://arxiv.org/abs/2304.03811v1
# Effect of Pt vacancies on magnetotransport of Weyl semimetal candidate GdPtSb epitaxial films ###### Abstract We examine the effects of Pt vacancies on the magnetotransport properties of Weyl semimetal candidate GdPtSb films, grown by molecular beam epitaxy on c-plane sapphire. Rutherford backscattering spectrometry (RBS) and x-ray diffraction measurements suggest that phase pure GdPt\({}_{x}\)Sb films can accommodate up to 15% Pt vacancies (\(x=0.85\)), which act as acceptors as measured by Hall effect. Two classes of electrical transport behavior are observed. Pt-deficient films display a metallic temperature dependent resistivity (d\(\rho\)/dT\(>\)0). The longitudinal magnetoresistance (LMR, magnetic field \(\mathbf{B}\) parallel to electric field \(\mathbf{E}\)) is more negative than transverse magnetoresistance (TMR, \(\mathbf{B}\perp\mathbf{E}\)), consistent with the expected chiral anomaly for a Weyl semimetal. The combination of Pt-vacancy disorder and doping away from the expected Weyl nodes, however, suggests that conductivity fluctuations, rather than the chiral anomaly, may explain the negative LMR. Samples closer to stoichiometry display the opposite behavior: semiconductor-like resistivity (d\(\rho\)/dT\(<\)0) and more negative transverse magnetoresistance than longitudinal magnetoresistance. Hysteresis and other nonlinearities in the low field Hall effect and magnetoresistance suggest that spin disorder scattering, and possible topological Hall effect, may dominate the near stoichiometric samples. Our findings highlight the complications of transport-based identification of Weyl nodes, but point to possible topological spin textures in GdPtSb.

## I Introduction

The lanthanide half Heusler compounds \(Ln\)PtBi and \(Ln\)PtSb are attractive due to their tunable topological and magnetic properties as functions of lanthanide substitution [1; 2; 3], strain [2], and strain gradients [4].
Compounds in this family of materials were among the first identified as zero bandgap topological semimetals via density functional theory (DFT) [2] with confirmation for LuPtSb [5], LuPtBi [6] and YPtBi [6] by angle-resolved photoemission spectroscopy (ARPES) measurements [5; 6]. More recently, bandstructure calculations and magnetotransport measurements suggest that GdPtBi [7; 8], TbPtBi [9], HoPtBi [9], and ErPtBi [9] compounds are magnetic-field-induced Weyl semimetals. In these materials, magnetic field, either directly through Zeeman splitting or indirectly through exchange splitting from field-induced magnetization, is expected to lift the degeneracy of quadratic bands that touch at \(\Gamma\), to create pairs of Weyl nodes [7; 8]. One experimental signature of these Weyl nodes is the chiral anomaly: charge pumping between Weyl nodes of opposite chirality when an applied magnetic field \(\mathbf{B}\) is parallel to the electric field \(\mathbf{E}\)[10; 11]. This appears as a negative longitudinal magnetoresistance (LMR) with a characteristic angle dependence \(\mathbf{E}\cdot\mathbf{B}\), and has been observed in several \(Ln\)PtBi compounds [7; 8; 9; 11]. A fundamental challenge, however, is that negative LMR is not unique to the chiral anomaly. Other mechanisms for negative LMR include current jetting [12; 11], conductivity fluctuations [12; 13; 14; 15], and spin-disorder scattering in magnetically ordered materials [16]. In half Heusler compounds, conductivity fluctuations may contribute because these materials are highly susceptible to natural nonstoichiometry [17] and variations in atomic site ordering [18], due to the low formation energies for point defects. Moreover, since \(Ln\)PtBi and \(Ln\)PtSb are typically antiferromagnetic below \(T_{N}\sim 10\) K, spin disorder scattering is likely to play a role. 
Here we explore the magnetotransport properties of GdPtSb, which, like GdPtBi, is seen in bandstructure calculations to have quadratic bands that touch at \(\Gamma\), taking into account the effects of naturally occurring nonstoichiometry. Specifically, we examine the effects of Pt vacancies on the magnetotransport of GdPtSb films, grown by molecular beam epitaxy on c-plane sapphire substrates. We find that the angle-dependent magnetoresistance depends strongly on Pt stoichiometry, and comment on the relative roles of chiral anomaly, conductivity fluctuations, spin-disorder scattering, and topological Hall effect in GdPt\({}_{x}\)Sb. In particular, our observation of a plateau in the Hall resistivity in near stoichiometric GdPtSb suggests a topological Hall effect, which may arise from topological spin textures that have not previously been observed in \(Ln\)PtSb or \(Ln\)PtBi systems.

## II Results

We first establish the possibility of Weyl nodes for GdPtSb using density functional theory (DFT) calculations. In the well studied material GdPtBi, the essential feature is a quadratic band touching of four Bi \(6p\)\(j=3/2\) states near the Fermi energy (Fig. 1(b)). The Zeeman energy or exchange energy splits the \(m_{j}=\pm 3/2\) and \(m_{j}=\pm 1/2\) states to create Weyl nodes near the Fermi energy (Fig. 1(c)) [7; 8]. Our nonmagnetic DFT calculations suggest that GdPtSb replicates this essential feature, with a quadratic touching of Sb \(5p\)\(j=3/2\) states at the Fermi energy at \(\Gamma\) (Fig. 1(a)). However, we note that for GdPtSb, there is an additional hole band approximately 60 meV below the charge neutrality point (0 eV) at \(\Gamma\) that may complicate the transport by providing an additional conduction channel. For GdPtBi, this hole band is pushed further down to \(\sim 700\) meV, presumably due to the larger spin-orbit coupling for Bi compared with Sb. 
Figure 1: DFT calculations. (a) DFT-GGA band structures of GdPtSb and (b) GdPtBi. The bands near the Fermi energy at \(\Gamma\) have strong Sb \(5p_{3/2}\) or Bi \(6p_{3/2}\) character. The red color denotes the \(m_{j}=\pm 1/2\) states and the blue color denotes the \(m_{j}=\pm 3/2\) states. (c) Cartoon showing how the quadratic band touching at \(\Gamma\) splits due to exchange or Zeeman splitting to form pairs of Weyl nodes with opposite chirality.

We synthesize GdPtSb films by molecular beam epitaxy on (0001)-oriented Al\({}_{2}\)O\({}_{3}\) substrates, using conditions similar to Ref. [19]. The growth temperature is 600 \({}^{\circ}\)C. The Gd flux was supplied by a thermal effusion cell. A Sb\({}_{2}\)/Sb\({}_{1}\) mixture was supplied by a thermal cracker cell with a cracker zone operated at 1200 \({}^{\circ}\)C. The Pt flux was supplied by an electron beam evaporator. Fluxes were measured in-situ using a quartz crystal microbalance (QCM) immediately prior to growth. Absolute compositions were measured by Rutherford Backscattering Spectrometry (RBS). Due to the high relative volatility of Sb, GdPtSb films were grown in an Sb adsorption-controlled regime with a 30% excess Sb flux, such that the Sb stoichiometry is self regulated [20].

Figure 2: Structures and basic transport properties of stoichiometric and Pt-deficient GdPtSb epitaxial films. (a) X-ray diffraction (Cu \(K\alpha\)) of (111)-oriented GdPt\({}_{x}\)Sb thin films grown on Al\({}_{2}\)O\({}_{3}\) (0001). The "+" sign for \(x=0.80\) denotes a GdSb impurity phase. (b) Corresponding reflection high energy electron diffraction (RHEED) patterns. (c) Out-of-plane (\(d_{111}\), red) and in-plane (\(d_{110}\), blue) lattice spacings extracted from on axis 222 and off axis 202 reflections (Supplemental). (d) High angle annular dark field scanning transmission electron microscopy (STEM) image of GdPtSb, measured along a \(\langle 110\rangle\) zone axis.

X-ray diffraction (XRD) and reflection high energy electron diffraction (RHEED) measurements reveal that epitaxial (111)-oriented GdPt\({}_{x}\)Sb films with half Heusler structure are readily stabilized under Pt-deficient conditions (Fig. 2(a)). For \(x=0.85\) to 1, only the anticipated 111-type reflections are observed by XRD, indicating phase pure half Heusler growth. The corresponding streaky RHEED patterns indicate smooth epitaxial films. Samples closer to stoichiometry show an enhanced intensity of the higher order 333 and 444 XRD reflections and a well ordered 3\(\times\) surface reconstruction by RHEED, compared to the 1\(\times\) periodicity for Pt deficient films (\(x=0.85\)). For highly Pt deficient conditions, \(x\leq 0.8\), we observe precipitation of a secondary GdSb phase by XRD and rough three-dimensional growth by RHEED. For Pt-rich conditions (\(x>1.01\)) the higher order 222, 333, and 444 half Heusler XRD reflections disappear and we observe faint polycrystalline rings in the RHEED pattern. Focusing on the phase pure GdPt\({}_{x}\)Sb samples with \(x=0.85-1\), we observe a systematic increase in the out-of-plane \(d_{111}\) and in-plane \(d_{110}\) lattice spacings (Fig. 2(c)). The in-plane \(d_{110}\) spacings are calculated from measurements of an off-axis 202 reflection (Appendix Fig. 8). The observed Pt deficiency is consistent with DFT calculations that predict Pt vacancies to be the lowest energy defects for the related compound LuPtSb [21]. High angle annular dark field (HAADF) scanning transmission electron microscopy (STEM) measurements of the \(x=0.85\) sample shown in Fig. 2(d) are in agreement with the expected site ordering for GdPtSb in the cubic half Heusler structure (space group \(F\bar{4}3m\)). Here, the brightest atomic columns correspond to columns of Pt atoms (which have the largest atomic mass), followed by columns of Gd and columns of Sb. 
Zero-field resistivity measurements of near stoichiometric GdPtSb films (\(x>0.9\)) display an insulator-like temperature dependence (\(d\rho/dT<0\), Fig. 3(a), purple). We attribute the insulator-like resistivity to the Fermi energy being near the quadratic band touching (Fig. 1). In contrast, for heavily Pt-deficient samples (\(x<0.9\)) we observe metallic transport (\(d\rho/dT>0\), Fig. 3(a), green). We attribute the more metallic transport to doping induced by Pt vacancies. We observe kinks in the temperature dependent resistivity that correspond to kinks at the same temperature in the magnetic susceptibility, as measured by superconducting quantum interference device (SQUID) magnetometry (Fig. 3(a) inset). We attribute these kinks to the antiferromagnetic Néel transition \(T_{N}\). We find that \(T_{N}\) varies with Pt concentration \(x\): the insulator-like samples (\(x>0.9\)) have \(T_{N}\approx 9\) K, whereas samples with a more metallic resistivity (\(x<0.9\)) have \(T_{N}\approx 14\) K (Fig. 4(a)). We speculate this jump in \(T_{N}\) may arise from an enhanced Ruderman-Kittel-Kasuya-Yosida (RKKY) coupling between Gd moments and the Fermi sea with increasing carrier density. Hall effect measurements reveal that Pt vacancies in GdPtSb are acceptors. The heavily Pt-deficient samples (\(x<0.9\)) show a positive and near linear dependence of the Hall resistivity \(\rho_{xy}\) on magnetic field (Fig. 3(c)), indicating dominant hole carriers. Closer to stoichiometry (\(x>0.9\)) the samples show stronger nonlinearities that are well fit by a two band model with one hole and one electron for \(|B|>2\) T (Fig. 3(b)). Fig. 4(b) summarizes the effective electron and hole densities versus Pt concentration \(x\), extracted from fitting to the two band model (Methods). 
Note that in reality there are 2-3 hole bands near the Fermi energy; however, since we are not able to distinguish these bands from a simple Hall effect fit, we emphasize that these are "effective" carrier densities. We find that charge neutrality \(n=p\) appears as \(x\) approaches 1, and the effect of Pt vacancies is to increase the hole density and decrease the electron density. Based on a three-dimensional parabolic band model (Methods), we estimate that the Fermi energy for the \(x=0.85\) sample, which has effective hole density \(p=2.72\times 10^{20}\) cm\({}^{-3}\), lies 170 meV below the charge neutrality point. For the \(x=0.96\) sample (\(p=1.75\times 10^{19}\) cm\({}^{-3}\), \(n=2.5\times 10^{17}\) cm\({}^{-3}\)), we estimate the Fermi energy is approximately 40 meV below the charge neutrality point. We caution that while we call this \(x=0.96\) sample "near stoichiometric," the hole density is still nearly two orders of magnitude larger than the electron density (\(p\gg n\)). Interestingly, the \(x=0.96\) sample displays low field (\(|B|<1\) T) hysteresis and nonlinearities consistent with the topological Hall effect (Fig. 3(b), inset), suggesting a nontrivial Berry phase. This topological Hall effect often indicates topological spin textures like skyrmions, e.g., in chiral \(B20\) compounds [22] and tetragonal Heusler compounds [23], but to our knowledge has not yet been reported in the \(Ln\)PtBi or \(Ln\)PtSb family.

Figure 3: (a) Zero-field resistivity for GdPt\({}_{x}\)Sb with different Pt concentrations. Inset shows the magnetic susceptibility versus temperature for the \(x=0.84\) (green) and \(x=0.96\) (purple) samples measured by SQUID, showing the Néel transition. Kinks in the susceptibility coincide with kinks in the resistivity. (b) Transverse (Hall) resistivity for the \(x=0.96\) sample, measured at 1.8 K. At low field (\(|B|<1\) T) we observe nonlinearities suggestive of the topological Hall effect. Higher field (\(|B|>2\) T) nonlinearities are associated with the ordinary Hall effect with multiple carriers. The red line is an ordinary Hall effect two band model fit. (c) Hall effect at 1.8 K for the \(x=0.85\) sample showing near linear behavior for \(|B|<6\) T.

We now analyze the magnetoresistance as a function of the angle \(\theta\) between \(\mathbf{B}\) and \(\mathbf{E}\). The chiral anomaly, i.e., charge pumping between Weyl nodes, is expected to produce an additional current that is proportional to \(\mathbf{B}\cdot\mathbf{E}\). Therefore, for Weyl semimetals the magnetoresistance \(\Delta\rho/\rho_{0}\) should be negative for \(\theta=0\) and become more positive with increasing \(\theta\) [24; 25; 26; 7]. Fig. 5(a) shows the angle dependent magnetoresistance of the Pt deficient \(x=0.85\) sample, measured at 1.8 K using voltage contacts along the edge of the sample and current sourced along the center of the sample (Fig. 5(c)). We find that the longitudinal magnetoresistance (LMR, \(\theta=0^{\circ}\)) is negative and the transverse magnetoresistance (TMR, \(\theta=90^{\circ}\)) is positive, as expected for the chiral anomaly. A continuously varying angular dependence, in a van der Pauw geometry, is shown in Appendix Fig. 9. The general angular dependence is qualitatively similar to previous studies of GdPtBi single crystals [7; 11]. However, the magnitude of the change for our epitaxial GdPtSb films is only a few percent, whereas the magnetoresistance change for GdPtBi crystals is \(\sim 80\%\). The \(x=0.85\) sample passes the "squeeze test" [11], suggesting that current jetting is not the primary origin of the negative LMR. Here, we find that measurements with contacts along the center of the sample (Fig. 5(b)) produce the same qualitative behavior as edge contacts (Fig. 5(a)), namely, negative LMR and positive TMR at modest magnetic field (\(|B|<6\) T). 
Two-point resistance measurements also produce the negative magnetoresistance for \(\theta=0\) that increases with \(\theta\) (Appendix Fig. 10), confirming that current jetting is not a dominant factor. Furthermore, current jetting effects are expected to be strongest for materials with high carrier mobility and anisotropic conduction [11]. Our samples have a more modest Hall effect mobility (\(\mu\sim 50\) cm\({}^{2}\)/Vs) and are expected to be isotropic in the (111) plane, compared to reports of single crystals with a higher mobility of 1500 cm\({}^{2}\)/Vs at 6 K [7]. Near stoichiometric samples display the opposite angular dependence. Fig. 5(d) shows the angle-dependent magnetoresistance for a sample with composition \(x=0.96\), measured at 1.8 K. Besides a weak positive magnetoresistance at low field (\(B<4\) T) and low \(\theta\), the magnetoresistance is generally negative and becomes more negative with increasing \(\theta\). This is opposite the expected dependence for the chiral anomaly, which should produce the most negative magnetoresistance for \(\theta=0\).

Figure 5: Chiral anomaly tests. (a,b) Magnetoresistance for a Pt deficient \(x=0.85\) sample, as a function of the angle \(\theta\) between \(\mathbf{B}\) and \(\mathbf{E}\). Comparison of voltage contacts along the edge (a) with voltage contacts along the center (b) constitutes the "squeeze test" for analyzing the contribution of extrinsic current jetting. (c) Measurement geometry. (d) Angle dependent magnetoresistance for a near stoichiometric \(x=0.96\) sample. Inset shows hysteresis for \(\theta=90^{\circ}\).

Figure 4: (a) Néel temperature (\(T_{N}\)) extracted from kinks in the temperature dependent magnetic susceptibility and resistivity. (b) Hall carrier density versus Pt concentration at 1.8 K, extracted from fitting to a model with one electron and one hole. The inset shows the approximate position of the Fermi energy for \(x=0.85\) in a two band model, which is clearly an oversimplification. (c) Longitudinal (\(\mathbf{B}\parallel\mathbf{E}\)) and transverse (\(\mathbf{B}\perp\mathbf{E}\)) magnetoresistance at 1 T and 1.8 K for samples with varying Pt concentration.

Additionally, the transverse magnetoresistance shows a weak hysteresis within the range \(|B|<1\) T (Fig. 5(d) inset), the same range as observed for the Hall resistance (Fig. 3(b) inset). Minimal hysteresis is observed in the magnetoresistance of the \(x=0.85\) sample. We summarize the TMR and LMR at field 1 T and temperature 1.8 K for several samples with varying Pt concentration in Fig. 4(c). For Pt deficient samples (\(x<0.9\)) the LMR (\(\mathbf{B}\parallel\mathbf{E}\)) is generally more negative than the TMR (\(\mathbf{B}\perp\mathbf{E}\)), as expected for the chiral anomaly. For samples closer to stoichiometry (\(x>0.9\)) the LMR is generally more positive than the TMR. This dependence on Pt concentration is opposite to the expectation for the chiral anomaly, which we expect to be strongest for stoichiometric samples in which the Fermi energy is closer to the Weyl nodes (\(x\approx 1\)). Previous experiments on doped GdPtBi show that the negative LMR is maximized near the charge neutrality point [7].

## III Discussion

What explains the Pt stoichiometry dependence? For Pt deficient samples (\(x<0.9\)), the Pt vacancy disorder and the position of the Fermi energy away from the Weyl nodes suggest that conductivity fluctuations rather than the chiral anomaly may dominate. For inhomogeneous or disordered materials, spatial fluctuations in the conductivity can lead to a component of the carrier velocity that is perpendicular to \(\mathbf{B}\), even when the global \(\mathbf{E}\) is parallel to \(\mathbf{B}\) [12; 13; 15; 27; 28]. The resulting Lorentz force leads to a decrease in the LMR with increasing \(\mathbf{B}\). We expect Pt-deficient samples to show these fluctuations more strongly than stoichiometric samples. 
Moreover, the very Pt deficient samples have the Fermi energy furthest away from the expected Weyl nodes, and thus are not anticipated to show strong effects from the chiral anomaly compared to samples closer to stoichiometry (\(x\approx 1\)). For samples closer to stoichiometry, trivial bands near the Fermi energy and spin disorder scattering may explain the negative LMR. We first note that, unlike the well studied Weyl semimetal GdPtBi, GdPtSb has an additional hole band approximately 60 meV below the charge neutrality point (Fig. 1(a)) that may contribute to the transport and obscure the chiral anomaly. Additionally, in magnetically ordered materials like GdPtSb (which is antiferromagnetic), field alignment of spins or other forms of spin-disorder scattering can also cause a negative LMR [16; 29; 30], and the angular dependence can arise from magnetocrystalline anisotropy or shape anisotropy. For reasons that are still unclear, the effects of magnetic ordering are more prominent for our near stoichiometric GdPtSb samples than for heavily Pt-deficient samples. First, the TMR of the \(x=0.96\) sample shows magnetic hysteresis that does not appear for the \(x=0.85\) sample (Fig. 5(b,d) insets). Second, the LMR and TMR of the \(x=0.96\) sample at low field diverge from each other below \(T_{N}\sim 9\) K (Fig. 6(b)): the LMR increases and the TMR decreases with decreasing temperature. In contrast, the LMR and TMR for the \(x=0.85\) sample do not diverge below \(T_{N}\) (Fig. 6(a)). Third, \(T_{N}\) abruptly jumps from \(\sim 14\) K for Pt deficient samples to \(T_{N}\sim 9\) K for near stoichiometric samples (Fig. 4(a)). This suggests an abrupt change in the exchange, and possibly the magnetic ordering, as a function of \(x\). Further magnetization studies, as a function of magnetic field orientation, are required to understand the mechanisms for the angle-dependent magnetoresistance in near stoichiometric samples. 
Figure 6: Temperature dependence of the LMR and TMR. (a) \(x=0.85\) sample at 1 T. (b) \(x=0.95\) sample at 1 T. (c) \(x=0.85\) sample at 9 T. (d) \(x=0.95\) sample at 9 T. These measurements were performed in a van der Pauw geometry. Note that the magnitudes differ from those in Fig. 5 due to the different contact geometry.

Figure 7: (a) Low field Hall effect, showing a possible topological Hall effect. No background has been subtracted. (b) Low field magnetoresistance showing hysteresis over the same range as the possible topological component of the Hall effect (\(|B|<1\) T).

More intriguingly, the near stoichiometric samples display indications of a topological Hall effect (Fig. 3(b) inset) that do not appear for the very Pt deficient samples (Fig. 3(c)). We reproduce the low field transverse (Hall) and longitudinal magnetoresistance of the \(x=0.96\) sample in Fig. 7. In the transverse resistivity \(\rho_{xy}\) versus magnetic field, we observe low field nonlinearities and hysteresis that are reminiscent of the topological Hall effect. The topological Hall effect indicates a nontrivial Berry phase and is often an indication of chiral spin textures like skyrmions; however, a more detailed analysis is required to clearly identify this nonlinearity as a topological Hall effect [31]. The low field magnetoresistance (Fig. 5(d) inset and Fig. 7(b)) shows the same width of hysteresis (\(\pm 1\) T). Systems that display the topological Hall effect also display negative magnetoresistance [32; 33].

## IV Conclusions

We demonstrate that half Heusler GdPt\({}_{x}\)Sb epitaxial films grown on sapphire accommodate a large concentration of Pt vacancies, which act as acceptors. Samples with high Pt vacancy concentration have a metallic resistivity versus temperature and display a negative longitudinal magnetoresistance that becomes more positive as the magnetic field tilts away from the electric field, consistent with the chiral anomaly. 
However, the large concentration of Pt vacancies and the position of the Fermi energy away from the Weyl nodes suggest that conductivity fluctuations, rather than the chiral anomaly, dominate. Samples closer to stoichiometry show an insulator-like resistivity versus temperature and a TMR that is more negative than the LMR, opposite the expected behavior of the chiral anomaly. The low field Hall effect shows nonlinearities similar to a topological Hall effect. Further detailed magnetotransport studies are required to understand the possible balance of Weyl nodes and topological spin textures in GdPtSb.

## V Methods

**First-principles calculations.** Calculations were done with ABINIT using the PBE GGA exchange correlation potential and norm-conserving pseudopotentials from ONCVPSP-3.3.0, except for the Gd pseudopotential, which was constructed to have the f-orbitals in the core. The k point mesh used was \(18\times 18\times 18\) and the energy cutoff was 50 Hartree (approximately 1360 eV). Computed lattice parameters for the half Heusler structure were 6.647 Å for GdPtSb and 6.777 Å for GdPtBi in the conventional fcc unit cell.

**Transport measurements and fitting.** Magnetotransport measurements for GdPt\({}_{x}\)Sb samples were performed using a Quantum Design Dynacool Physical Property Measurement System. Hall effect measurements were generally performed in a Hall bar geometry with typical dimensions 5 mm by 1 mm. For the \(x=0.85\) sample the Hall effect was measured in a van der Pauw geometry. Angle-dependent magnetoresistance measurements were performed using a horizontal rotator probe and contacts in a "squeeze test" geometry, as shown in Fig. 5(c). The "center" channel is a standard linear four point geometry with contacts in a line down the center of the sample, where the outer contacts are current and the inner contacts are voltage. 
The "edge" channel uses the same current contacts, but places the voltage contacts at the edge of the sample to test the effects of current jetting down the center of the sample. Nonlinear Hall data are fit with a two-band model of the following form \[\rho_{xy}(B)=\frac{B}{e}\frac{(n_{h}\mu_{h}^{2}-n_{e}\mu_{e}^{2})+(n_{h}-n_{e})\mu_{h}^{2}\mu_{e}^{2}B^{2}}{(n_{h}\mu_{h}+n_{e}\mu_{e})^{2}+(n_{h}-n_{e})^{2}\mu_{h}^{2}\mu_{e}^{2}B^{2}}\] where \(n_{h}\) (\(n_{e}\)) and \(\mu_{h}\) (\(\mu_{e}\)) are the concentration and mobility of holes (electrons). We constrain the two-band Hall fit by also fitting to the zero magnetic field longitudinal resistivity \[\rho_{xx}(0)=\frac{1}{e(n_{h}\mu_{h}+n_{e}\mu_{e})}.\] We use the carrier concentrations to estimate the Fermi level positions. For a rough estimate, we used a 3D parabolic band density of states \[D(E)=\frac{\sqrt{2}}{\pi^{2}}\frac{(m^{*})^{\frac{3}{2}}}{\hbar^{3}}\sqrt{E}\] where \(m^{*}\) is the effective mass, \[m^{*}=\frac{\hbar^{2}}{d^{2}E/dk^{2}}.\] We calculated the effective mass for the 3 valence bands near \(E=0\) eV at the \(\Gamma\) point, based on our band structure calculation in Fig. 1. The Fermi level is extracted from the following integral, \[\sum_{n=1}^{2\text{ or }3}\int_{0}^{E_{F}}D(E)\,dE=\frac{N}{V}.\] From the above calculation, the \(E_{F}\) for the \(x=0.84\) sample with hole concentration \(2.72\times 10^{20}\) cm\({}^{-3}\) is about \(-170\) meV and that for the \(x=0.96\) sample with hole concentration \(1.75\times 10^{19}\) cm\({}^{-3}\) is about \(-40\) meV.

**Magnetization measurements.** Magnetic properties were measured using a Quantum Design MPMS SQUID (Superconducting Quantum Interference Device) Magnetometer. The magnetic field is applied perpendicular to the sample surface. The net magnetization data for the GdPt\({}_{x}\)Sb thin films is extracted by subtracting a background measurement of the Al\({}_{2}\)O\({}_{3}\) substrate from the total magnetization signal. 
**Transmission electron microscopy.** GdPtSb cross-section samples were prepared with a Zeiss Ga focused ion beam, followed by final thinning in a Fischione Model 1040 NanoMill using Ar ions at 900 V. Samples were stored in vacuum and cleaned in a GV10x DS Asher cleaner at 20 W for 10 min to remove contamination and minimize oxidation of the sample before being transferred into the STEM column. A probe-corrected Thermo Fisher Titan STEM equipped with CEOS aberration correction, operated at 200 kV, was used to collect the atomic resolution STEM images. A 24.5 mrad probe semi-angle and an 18.9 pA probe current were used to collect HAADF image series with a Fischione 3000 annular detector covering a collection angle range of 53.9 to 269.5 mrad. Each frame in the image series took about 0.6 s to acquire, with 10 \(\mu\)s on each STEM scan position and a 256-by-256 scan grid. Non-rigid registration [34] was used to compensate for drift and distortions during image series acquisition, before the series was averaged to obtain a single frame with a high signal-to-noise ratio, as shown in Fig. 2(d).

**X-ray diffraction.** X-ray diffraction measurements were performed using a Malvern Empyrean diffractometer with Cu \(K\alpha\) radiation.

**Rutherford Backscattering Spectrometry (RBS).** RBS measurements were performed at the University of Minnesota Characterization Facility.

## VI Acknowledgment

We thank Max Hirschberger for helpful discussions on chiral anomaly tests. We thank Greg Haugstad for performing RBS measurements. Special thanks to D. R. Hamann for providing the Gd pseudopotential. Heusler epitaxial film growth and magnetotransport at the University of Wisconsin were supported by the Air Force Office of Scientific Research (FA9550-21-0127). Preliminary synthesis was supported by the Army Research Office (ARO Award number W911NF-17-1-0254) and the National Science Foundation (DMR-1752797). 
Calculations by KR and KG were supported by the Office of Naval Research N00014-21-1-2107. Transport measurements at the University of Minnesota by L.R.T. and B.J. were supported by the National Science Foundation through the University of Minnesota MRSEC under Award No. DMR-2011401. TEM experiments by CZ and PMV were supported by the US Department of Energy, Basic Energy Sciences (DE-FG02-08ER46547) and used facilities supported by the Wisconsin MRSEC (DMR-1720415). We gratefully acknowledge the use of x-ray diffraction facilities supported by the NSF through the University of Wisconsin Materials Research Science and Engineering Center under Grant No. DMR-1720415. We acknowledge PARADIM for annealed sapphire substrates.
2302.01350
How to determine the branch points of correlation functions in Euclidean space II: Three-point functions
The analytic structure of elementary correlation functions of a quantum field is relevant for the calculation of masses of bound states and their time-like properties in general. In quantum chromodynamics, the calculation of correlation functions for purely space-like momenta has reached a high level of sophistication, but the calculation at time-like momenta requires refined methods. One of them is the contour deformation method. Here we describe how to employ it for three-point functions. The basic mechanisms are discussed for a scalar theory, but they are the same for more complicated theories and are thus relevant, e.g., for the three-gluon or quark-gluon vertices of quantum chromodynamics. Their inclusion in existing truncation schemes is a crucial step for investigating the analytic structure of elementary correlation functions of quantum chromodynamics and the calculation of its spectrum from them.
Markus Q. Huber, Wolfgang J. Kern, Reinhard Alkofer
2023-02-02T19:00:01Z
http://arxiv.org/abs/2302.01350v1
# How to determine the branch points of correlation functions in Euclidean space II: Three-point functions

###### Abstract

The analytic structure of elementary correlation functions of a quantum field is relevant for the calculation of masses of bound states and their time-like properties in general. In quantum chromodynamics, the calculation of correlation functions for purely space-like momenta has reached a high level of sophistication, but the calculation at time-like momenta requires refined methods. One of them is the contour deformation method. Here we describe how to employ it for three-point functions. The basic mechanisms are discussed for a scalar theory, but they are the same for more complicated theories and are thus relevant, e.g., for the three-gluon or quark-gluon vertices of quantum chromodynamics. Their inclusion in existing truncation schemes is a crucial step for investigating the analytic structure of elementary correlation functions of quantum chromodynamics and the calculation of its spectrum from them.

## 1 Introduction

Quantum chromodynamics (QCD) has a rich spectrum, and there are still many open questions about it. Functional methods are one of several nonperturbative methods that can be used to unravel its mysteries, see, e.g., [1; 2; 3; 4] for results on baryons, mesons, tetraquarks and glueballs. In recent years, much progress has been made in the calculation of elementary correlation functions using functional methods, see, e.g., [5; 6; 7; 8; 9; 10; 11; 12] and references therein. However, as far as top-down calculations, which start directly from the Lagrangian of QCD, are concerned, the most advanced calculational schemes for functional equations have been applied to space-like momenta only. For time-like momenta, calculations are more challenging due to the necessary adaptation of the numerical methods. 
Complementary lattice methods provide direct access to correlation functions only at space-like momenta; see [13; 14; 15; 16; 17; 18] for some exemplary results. For perturbative integrals, one can use the Landau conditions [19] to determine the branch points of a diagram. They are typically derived using the Feynman parametrization for the propagators. For dressed propagators, however, this is not a viable approach, and an analysis more along the lines of numerical calculations is required. Such an approach to access the analytic properties of correlation functions is provided by the contour deformation method (CDM). It deals with the intricacies introduced by time-like momenta by modifying the integration path in the integral appropriately. This enables numerical calculations but also leads to insights into the analytic structure of correlation functions. Originally, it was devised for a special case in QED [20] and then subsequently generalized [21; 22; 23; 24; 25; 26; 27; 28; 29]. Other direct methods include the shell method [30], the use of the Cauchy-Riemann equations [31], the covariant spectator theory framework [32], the Cauchy method [33; 34], or spectral representations including the Nakanishi integral representation [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. Here we summarize the case of three-point functions systematically. In particular, we aim at an accessible description of the method in the spirit of [26], of which this article can be considered a follow-up; hence the title. Calculational details, while mathematically straightforward, can be found elsewhere [29]. First, we introduce the basic idea with the example of a two-point integral in Sec. 2.1. As a new feature, we pay particular attention to the possibility of deforming not only the integration contour in the radial variable but also in an angle (Sec. 2.2), thereby deforming the branch cuts that require the deformation of the radial variable in the first place. 
For the three-point function in Sec. 2.3, we start with simplified kinematics before we discuss the general case.

## 2 Contour deformation method

In the following we work with propagators with a generic mass \(m\). The analysis is valid both perturbatively, where \(m\) is the bare mass, and nonperturbatively if the propagator features a single pole and \(m\) is the corresponding mass. Cuts can also be considered [29].

### Basic example: The two-point integral

For illustration purposes we consider the Euclidean one-loop two-point integral \[I_{2}(p^{2})=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{1}{q^{2}+m^{2}}\frac{1}{(q-p)^{2}+m^{2}}, \tag{1}\] see Figure 1. For conciseness, we fix the number of dimensions to four in the following, but the generalization is straightforward. Using hyperspherical coordinates, two angles can be integrated out, and the radial variable \(r=\sqrt{q^{2}}\) as well as one angle \(\theta_{1}\) remain. The integrand has poles at \(q^{2}=-m^{2}\) and at \((q-p)^{2}=r^{2}+p^{2}-2\sqrt{p^{2}}\,r\cos\theta_{1}=-m^{2}\), which must not be crossed during the integration. Performing the angular integration first, the second propagator leads to branch cuts corresponding to the integration of \(\theta_{1}\) from 0 to \(\pi\). They can be parameterized as (\(z_{1}=\cos\theta_{1}\)) \[\gamma_{\pm}(z_{1};p^{2},m^{2})=\sqrt{p^{2}}\,z_{1}\pm i\sqrt{m^{2}+p^{2}(1-z_{1}^{2})}=\sqrt{p^{2}}\cos\theta_{1}\pm i\sqrt{m^{2}+p^{2}\sin^{2}\theta_{1}}, \tag{2}\] which is obtained by solving the quadratic equation \((q-p)^{2}=-m^{2}\) for \(r\). The analytic structure of the remaining integrand in \(r\) thus consists of the poles at \(\pm i\,m\) and these two cuts. The dependence on the external momentum \(p\) enters via the latter. We stress that we use the radial variable \(r=\sqrt{q^{2}}\) instead of \(q^{2}\) to avoid ambiguities later for the three-point function [29]. 
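To make the geometry of Eq. (2) concrete, the following short numerical sketch (ours, not from the paper; names are illustrative) evaluates the cut parametrization \(\gamma_{\pm}\) and checks that a cut endpoint (\(z_{1}=\pm 1\)) reaches the propagator pole at \(r=\pm i\,m\) exactly at the two-particle threshold \(p^{2}=-4m^{2}\):

```python
import numpy as np

def gamma(z1, p2, m2, sign=+1):
    # Branch cut of the angular integration in the complex r plane, Eq. (2).
    # p2 may be negative or complex; principal branches of the square roots.
    sp = np.sqrt(np.asarray(p2, dtype=complex))
    return sp * z1 + sign * 1j * np.sqrt(m2 + p2 * (1.0 - z1**2) + 0j)

m2 = 1.0
m = np.sqrt(m2)

# At p^2 = -4 m^2 the endpoint of the lower cut (z1 = 1, sign = -1)
# sits exactly on the pole at r = +i m:
endpoint = gamma(1.0, -4.0 * m2, m2, sign=-1)

# Away from this value the endpoint misses the pole:
off_threshold = gamma(1.0, -3.0 * m2, m2, sign=-1)

# Sampling z1 traces the full cut in the complex r plane for a
# complex external momentum squared:
z1 = np.linspace(-1.0, 1.0, 201)
cut = gamma(z1, 2.0 + 1.0j, m2, sign=+1)
```

Only the endpoints matter for the branch point condition, since the path between them can be deformed freely; this is exactly the statement made around Eq. (3).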
Figure 1: Momentum routing for the propagator’s one-loop selfenergy (left), the triangle diagram (center) and the swordfish diagram (right). The internal momenta \(k_{i}\) are combinations of external and loop momenta, see Eqs. (1) and (8). The integration of \(r\) starts at 0 and ends at the chosen UV cutoff. When the external momentum \(p^{2}\) is positive, neither the poles nor the cuts interfere, and the integration can be performed along the real axis. This changes when \(p^{2}\) is complex or negative. An example is shown in Figure 2. For the chosen value of \(\sqrt{p^{2}}\), the cut crosses the real axis. To avoid crossing the cut, the contour of the \(r\) integration needs to be deformed. A simple choice is to integrate along a straight line from the origin to \(\sqrt{p^{2}}\) and further out until a chosen stopping point. From there, the integration can be closed by continuing in an arc to the UV cutoff, see [28] for details on a systematic implementation called the ray method. Analyticity of the integral means that it can be calculated in a neighborhood of a point by continuously deforming the involved integration contours. If this is not the case, a nonanalyticity is found. In this example, this happens precisely when an endpoint of a cut from one propagator touches the pole of the other propagator. The deformation of the \(r\) integration contour would then need to jump over the pole, thereby picking up its residue. The condition that the endpoint of the cut touches the pole is [26] \[\gamma_{\pm}(\pm 1;p^{2},m^{2})=\pm i\,m. \tag{3}\] The only nontrivial solution leads to \(p^{2}=-4m^{2}\), which is the result expected from the Landau conditions [19]. ### Deformations of the angle integration contour Based on the left plot in Figure 2, one might wonder what happens when \(p\) moves onto the imaginary axis, or, equivalently, when \(p^{2}\) is negative. Does the branch cut then go over the pole?
If the angular integration is performed in a straight line from 0 to \(\pi\), it indeed does, but we can also deform the integration contour for the angular integral. An example of this is shown in the right plot of Figure 2. Two observations should be made. First, this explains the relevance of the endpoints for Eq. (3): they cannot be moved around, in contrast to the integration path between the endpoints. Second, if the contour is deformed to avoid a pole in one place, it introduces deformations elsewhere as well. When \(p^{2}\) is real and negative, the following values of \(\theta_{1}\) need extra care. First, it is possible that the pole lies on the branch cut. This happens at \[\theta_{1}^{*p\pm}=\arccos\left(\pm i\frac{\sqrt{p^{2}}}{2m}\right). \tag{4}\] Second, the two cuts touch each other on the imaginary axis at \[\theta_{1}^{*c\pm}=\arcsin\left(\pm i\frac{m}{\sqrt{p^{2}}}\right). \tag{5}\] A concrete example of how to avoid a point \(\theta_{1}^{*}\) in the integration of \(\theta_{1}\) from 0 to \(\pi\) is in the form of a semicircle with radius \(s\): \[\theta_{1}\rightarrow\theta_{1}^{\pm}=\begin{cases}\theta_{1}^{*}+s\,e^{\pm i \frac{\theta_{1}-\theta_{1}^{*}-s}{s}\frac{\pi}{2}}&|\theta_{1}-\theta_{1}^ {*}|<s\\ \theta_{1}&\text{otherwise}\end{cases} \tag{6}\] Note the free choice of the sign of the phase, which corresponds to the two directions in which the induced deformations in the \(r\) plane can go. For this reparametrization the pattern of evasive bulges in the complex \(r\) plane depicted in Figure 3 emerges. ### The triangle integral We turn now to three-point functions. They can have two different diagrams, a swordfish diagram and a triangle diagram, see Figure 1. By choosing the momentum routing appropriately, the former can be described on the same footing as the two-point integral. Hence, we only discuss the triangle diagram in the following.
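Before turning to the triangle, we note that the semicircle reparametrization can be sketched in a few lines (our own illustration). We read the phase factor in Eq. (6) as \(\pi/2\); with that reading the path is continuous at the endpoints \(\theta_{1}^{*}\pm s\) and the midpoint is pushed off the real axis by \(\mp i\,s\).

```python
import cmath

def theta1_deformed(theta1, theta_star, s, sign=+1):
    # Semicircle of radius s around theta_star (our reading of Eq. (6),
    # with the phase factor taken as pi/2 so the endpoints match).
    if abs(theta1 - theta_star) < s:
        phase = sign * 1j * (theta1 - theta_star - s) / s * cmath.pi / 2
        return theta_star + s * cmath.exp(phase)
    return complex(theta1)

ts, s = 1.0, 0.2
# Continuous at both endpoints of the semicircle ...
assert abs(theta1_deformed(ts + s, ts, s) - (ts + s)) < 1e-12
assert abs(theta1_deformed(ts - s + 1e-9, ts, s) - (ts - s)) < 1e-6
# ... and the midpoint is lifted off the real axis by -i s (for sign = +1):
assert abs(theta1_deformed(ts, ts, s) - (ts - 1j * s)) < 1e-12
```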
The triangle diagram with external momenta \(p_{a}\), \(p_{b}\) and \(p_{c}=-p_{a}-p_{b}\) has the following form: \[I_{3}(p_{a},p_{b},p_{c})=\int dr\,r^{3}f(q,p_{a},p_{b},p_{c}) \tag{7}\] with \[f(q,p_{a},p_{b},p_{c})=\int(\sin\theta_{1})^{2}d\theta_{1}\int \sin\theta_{2}d\theta_{2}\frac{1}{q^{2}+m^{2}}\frac{1}{(q-p_{a})^{2}+m^{2}} \frac{1}{(q+p_{b})^{2}+m^{2}}. \tag{8}\] With the chosen routing, one propagator creates poles at \(r=\pm i\,m\) and the other two create cuts of the form \[\gamma_{a\pm}(z_{1};p_{a}^{2},m^{2}) = \gamma_{\pm}(z_{1};p_{a}^{2},m^{2}), \tag{9a}\] \[\gamma_{b\pm}(\tilde{z};p_{b}^{2},m^{2}) = \gamma_{\pm}(-\tilde{z};p_{b}^{2},m^{2}) \tag{9b}\] with \[\tilde{z}=\cos\tilde{\theta}=\cos\theta\,\cos\theta_{1}+\sin\theta\,\sin\theta_{1} \,\cos\theta_{2}. \tag{10}\] Figure 2: Examples for the singularity structure \(\gamma_{\pm}(z_{1};p^{2},m^{2})\) of the two-point integrand in the radial variable \(r=\sqrt{q^{2}}\) for \(p^{2}=(-3+0.2i)m^{2}\) (left) and \(p^{2}=-3m^{2}\) (right). The lines denote branch cuts stemming from the angular integral with the value of the angle \(\theta_{1}\) indicated by the color. In the left plot, the angle integral is performed in a straight line from \(0\) to \(\pi\). In the right plot, the angle integral is modified such as to avoid the poles at \(\pm i\,m\) and gaps are opened at \(\pm i\sqrt{2}\,m\) using the path of Eq. (6). The two bulges at \(\pm 2\,i\,m\) are a consequence of deforming \(\theta_{1}\) around \(\pm i\,m\). The green dots are the poles from the second propagator. The red dots indicate the external \(p^{2}\) and are only plotted for reference. The analysis of the analytic structure of the \(r\) plane is more complicated than that of the two-point integral due to the existence of twice as many cuts and the appearance of a second angle integral. As it is instructive and already shows the basic features, we will in the following discuss a simplified case first.
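The angle combination in Eq. (10) is simply the cosine of the angle between the loop momentum direction and \(p_{b}\). The short sketch below (our own check, with an explicit choice of four-dimensional embedding) confirms this by comparing Eq. (10) to a plain dot product of unit vectors.

```python
import math

def unit_q(theta1, theta2, theta3):
    # 4D hyperspherical unit vector; theta1 is the polar angle w.r.t. p_a
    s1, s2 = math.sin(theta1), math.sin(theta2)
    return (s1 * s2 * math.sin(theta3), s1 * s2 * math.cos(theta3),
            s1 * math.cos(theta2), math.cos(theta1))

def unit_pb(theta):
    # p_b direction, at angle theta to the p_a axis
    return (0.0, 0.0, math.sin(theta), math.cos(theta))

def z_tilde(theta, theta1, theta2):
    # Eq. (10)
    return (math.cos(theta) * math.cos(theta1)
            + math.sin(theta) * math.sin(theta1) * math.cos(theta2))

theta, theta1, theta2 = 0.7, 1.1, 2.0
dot = sum(a * b for a, b in zip(unit_q(theta1, theta2, 0.3), unit_pb(theta)))
assert abs(dot - z_tilde(theta, theta1, theta2)) < 1e-12
```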
#### 2.3.1 Restricted kinematics We restrict the kinematics by setting \(p_{b}^{2}=p_{a}^{2}=p^{2}\) and consequently \(p_{c}^{2}=2p^{2}(1+\cos\theta)\). As a consequence, the cuts \(\gamma_{b\pm}\) are a subset of \(\gamma_{a\pm}\). From the two-point integral we know that a branch point in the external momentum arises when a cut in \(r=\sqrt{q^{2}}\) cannot be deformed such as to avoid the pole. This happened for the end points of the cuts. Here, new possibilities arise because of the three present propagators. Deformations of integration contours now need to respect constraints from all three of them. Only one propagator depends on the angle \(\theta_{2}\). In that case, the end points \(\theta_{2}=0\) and \(\theta_{2}=\pi\) are relevant as they are fixed whereas we could perform an additional deformation of the \(\theta_{2}\) integration in between. For conciseness, we work with \(\theta_{2}=0\) in the following discussion. As already mentioned above, the cuts created by the propagators lie on top of each other. However, the important observation is that for a given value of the angle \(\theta_{1}\) they do not necessarily agree. To illustrate this, we add a third axis for \(\theta_{1}\) in the plots of the branch cuts. Two examples are depicted in Figure 4. There, one can see the four branch cuts (two from each propagator) and the points where cuts from different propagators cross. This happens when \[\theta_{1,c\pm}=\frac{\theta}{2}+\frac{\pi}{2}, \tag{11}\] which is the solution of the condition that the two propagators agree: \[-\cos\theta_{1}=\cos\theta\cos\theta_{1}+\sin\theta\sin\theta_{1}\,. \tag{12}\] Figure 3: Bulges from deforming the angle integration in \(\theta_{1}\) via \(\theta_{1}^{+}\) as given in Eq. (6) for the indicated values of \(\theta_{1}^{*}\). The left plot in Figure 4 corresponds to the case when the two cuts meet at a pole of the third propagator. Plugging Eq.
(11) into the corresponding condition that the first cut agrees with the pole \(i\,m\), \[\gamma_{a,+}(z_{1};p^{2},m^{2})=i\,m, \tag{13}\] leads to the following branch point in \(p^{2}\): \[p_{B,1}^{2}=-4m^{2}\cos^{2}\!\left(\frac{\theta}{2}+\frac{\pi}{2}\right)=-4m^{ 2}\sin^{2}\frac{\theta}{2}. \tag{14}\] One can convince oneself with the help of Figure 3 that it is not possible to deform the angle integration in \(\theta_{1}\) such as to open a gap around the pole because each branch cut requires a different sign of the phase in Eq. (6). When \(p^{2}>-2m^{2}\), the pole lies outside of the semicircle parts of the branch cuts and does not interfere. However, for certain values of the _external_ angle \(\theta\), the four cuts meet on the imaginary axis, see the right plot in Figure 4. Again, no deformation is possible and a branch point is created at this \(p^{2}\). Figure 4: Examples for the singularity structure of the triangle \(r=\sqrt{q^{2}}\) integrand. The lines denote branch cuts stemming from the \(\theta_{1}\) angle integral. The red line is \(\gamma_{a+}\), the orange line \(\gamma_{a-}\), the blue line \(\gamma_{b+}\), and the cyan line \(\gamma_{b-}\). The green dots are the poles from a propagator, the magenta ones indicate where the relevant crossings of cuts/poles are. Left: \(p^{2}=-3m^{2}\), \(\theta=2\pi/3\), \(\theta_{2}=\pi\), two cuts cross at \(i\,m\) so the magenta dot is at the same point as a green one. Right: \(p^{2}=-4m^{2}/3\), \(\theta=\pi/3\), \(\theta_{2}=\pi\), four cuts touch at \(q^{2}=-m^{2}/3\). The black lines are projections of the cuts into one plane. The four cuts meet at the same value of \(\theta_{1}\) when the second term in Eq. (2) vanishes and \(\theta_{1}\) is given by Eq. (11). This leads to the branch point \[p_{B,2}^{2}=-\frac{m^{2}}{\sin^{2}\left(\frac{\theta}{2}+\frac{\pi}{2}\right)}=-\frac{m^{2}}{\cos^{2}\frac{\theta}{2}}. \tag{15}\] The corresponding point in the \(r=\sqrt{q^{2}}\) plane is \[q_{c,2}^{2}=\gamma_{a\pm}^{2}(\theta_{1,c};p_{B,2}^{2},m^{2})=-m^{2}\tan^{2} \frac{\theta}{2}. \tag{16}\] The two potential branch points \(p_{B,1}^{2}\) and \(p_{B,2}^{2}\) are shown in the left plot of Figure 5. Since \(p_{B,1}^{2}>p_{B,2}^{2}\), one might think that \(p_{B,1}^{2}\) is the relevant branch point, but what is decisive is which singular point lies closer to the origin in the \(r\) plane. By singular point we refer to the value of \(r\) which forbids the contour deformation. In the first case, this is \(q_{c,1}^{2}=-m^{2}\), and in the second one \(q_{c,2}^{2}\) from Eq. (16). They are plotted as a function of \(\theta\) in Figure 5. For \(\theta\leq\pi/2\), we have \(q_{c,2}^{2}>q_{c,1}^{2}\). When starting with \(p^{2}\) at zero and then decreasing it (see left plot in Figure 5), the branch points in the \(r\) plane do not pose a problem as long as \(-m^{2}<p^{2}\) because no circular parts crossing the real line are created. When \(p^{2}\leq-m^{2}\), the singular point \(q_{c,2}^{2}\) forbids deforming the contour if \(\theta\leq\pi/2\) and the singular point \(q_{c,1}^{2}\) if \(\theta\geq\pi/2\) (see right plot in Figure 5). There is a simple way of finding the branch point \(p_{B,2}^{2}\). Since only two propagators are involved, we can change the routing such that these two propagators have the momentum arguments \(q\) and \(q-p_{i}\) with \(i=a,b,c\). The analysis is then equivalent to the two-point integral and we obtain the branch point \(p_{i}^{2}=-4m^{2}\). If we do this for the chosen kinematic situation, we obtain \(p_{c}^{2}=-4m^{2}\), which, in turn, leads to \(p^{2}=-2m^{2}/(1+\cos\theta)\). This is equivalent to the solution found above. While this is a direct and simple way to find the branch point, it is inconvenient for numeric calculations where one does not want to work with different momentum routings.
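The closed-form expressions of this subsection are easy to verify numerically. The sketch below (our own cross-check) confirms that the crossing angle of Eq. (11) solves Eq. (12), that the two branch-point candidates of Eqs. (14) and (15) match continuously at \(\theta=\pi/2\), and that the singular point of Eq. (16) is closer to the origin than the pole precisely for \(\theta<\pi/2\).

```python
import math

m2 = 1.0

def theta1_crossing(theta):
    return theta / 2 + math.pi / 2              # Eq. (11)

def pB1(theta):
    return -4 * m2 * math.sin(theta / 2) ** 2   # Eq. (14)

def pB2(theta):
    return -m2 / math.cos(theta / 2) ** 2       # Eq. (15)

for theta in (0.4, 1.0, 2.0, 2.8):
    t1 = theta1_crossing(theta)
    # Eq. (12): -cos(theta1) = cos(theta)cos(theta1) + sin(theta)sin(theta1)
    assert abs(-math.cos(t1) - math.cos(theta - t1)) < 1e-12

# The two branches match continuously at theta = pi/2 (both give -2 m^2):
assert abs(pB1(math.pi / 2) + 2 * m2) < 1e-12
assert abs(pB2(math.pi / 2) + 2 * m2) < 1e-12

# Singular-point selection: q_c1^2 = -m^2 vs. q_c2^2 = -m^2 tan^2(theta/2), Eq. (16)
for theta in (0.5, 1.0):    # theta < pi/2: q_c2^2 lies closer to the origin
    assert -m2 * math.tan(theta / 2) ** 2 > -m2
for theta in (2.0, 2.5):    # theta > pi/2: the pole at -m^2 is closer
    assert -m2 * math.tan(theta / 2) ** 2 < -m2
```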
Figure 5: Left: The positions of the two potential branch points from Eqs. (14) and (15). Right: The critical points as functions of \(\theta\). The dashed lines correspond to the irrelevant cases and the continuous ones to the physical solutions. To summarize, we have the following solution for the branch point of the triangle diagram as a function of \(\theta\) when \(p_{a}^{2}=p_{b}^{2}=p^{2}\): \[p_{B}^{2}=\left\{\begin{array}{cc}-4m^{2}\sin^{2}\frac{\theta}{2}&\frac{\pi}{2}\leq\theta\leq\pi\\ -\frac{m^{2}}{\cos^{2}\frac{\theta}{2}}&0\leq\theta\leq\frac{\pi }{2}\end{array}\right.. \tag{17}\] #### 2.3.2 General kinematics We now remove the restriction on the kinematics and discuss the case \(p_{a}^{2}\neq p_{b}^{2}\). Again we have to distinguish between the case when both branch cuts meet at the pole and the case without the pole. For the first case, we know that \(r\) must be equal to \(\pm\,i\,m\). We plug this into the other two propagators and equate their denominators. This leads to the condition \[p_{a}^{2}\,p_{b}^{2}\,p_{c}^{2}=m^{2}(p_{a}^{4}+p_{b}^{4}+p_{c}^{4}-2(p_{a}^{2}\,p_{b}^{2}+p_{a}^{2} \,p_{c}^{2}+p_{b}^{2}\,p_{c}^{2})), \tag{18}\] where \(p_{c}^{2}=p_{a}^{2}+p_{b}^{2}+2\sqrt{p_{a}^{2}}\sqrt{p_{b}^{2}}\cos\theta\) was used. This equation has two solutions for \(\theta\). It remains to check if the contour deformations are possible or not. As it turns out, they are possible only for one solution, see Figure 6 for examples. Thus, we have found one surface in the space spanned by \(p_{a}^{2}\), \(p_{b}^{2}\) and \(p_{c}^{2}\) corresponding to a threshold: \[p_{c+}^{2}= \frac{1}{2m^{2}}\Big{(}2(p_{a}^{2}+p_{b}^{2})m^{2}+p_{a}^{2}\,p_{ b}^{2}+\sqrt{p_{a}^{2}}\sqrt{4m^{2}+p_{a}^{2}}\sqrt{p_{b}^{2}}\sqrt{4m^{2}+p_{ b}^{2}}\Big{)}. \tag{19}\] Figure 6: The cuts for \(p_{a}^{2}=-1.8m^{2}\), \(p_{b}^{2}=-2.4m^{2}\) and \(\theta_{1}\in[0,\pi/2]\). In the left/right plot, \(\theta_{+}/\theta_{-}\) is used.
The contours are deformed around the point where the cuts touch. This opens a path for \(\theta_{-}\) but not for \(\theta_{+}\) as can be seen in the projections (black). Colors as in Figure 4. For the second situation one can directly obtain the thresholds by considering all possible pairs of propagators and adapting the routing to have the arguments \(q\) and \(q-p_{i}\), \(i=a,b,c\). This leads to the thresholds: \[p_{a}^{2} = -4m^{2}, \tag{20a}\] \[p_{b}^{2} = -4m^{2},\] (20b) \[p_{c}^{2} = -4m^{2}, \tag{20c}\] which correspond to walls in the space spanned by \(p_{a}^{2}\), \(p_{b}^{2}\) and \(p_{c}^{2}\). To find this result without changing the routing, we need to find the singular point creating the branch point. It turns out that this point is where the inner straight parts of the two cuts touch. We can determine that by choosing two values for \(p_{a}^{2}\) and \(p_{b}^{2}\) and plugging in the value for \(\theta\) determined from \(p_{c}^{2}=-4m^{2}\). If the cuts touch before the pole at \(-m^{2}\), this creates a branch point. If \(\theta\) is changed, the two cuts either do not touch or cross at two points. Exemplary situations are depicted in Figure 7 where it is also shown that a contour deformation can be found in the latter case. In the case where they only touch, no deformation is possible because there is only one critical point which would require opposite directions of the deformations for each cut. It remains to determine which of the two possibilities leads to the critical point closer to the origin and thus to the relevant threshold. For the case with two propagators, one can determine the touching point to be at \(\sqrt{(p_{a}^{2}+p_{b}^{2})/2+m^{2}}\). The case with three propagators, on the other hand, has the critical points at \(\pm i\,m\). Thus, they create the highest threshold if \(p_{a}^{2}+p_{b}^{2}<-4m^{2}\), and the two propagators create them otherwise. Figure 7: Cuts of the triangle diagram for \(p_{a}^{2}=-1.2m^{2}\), \(p_{b}^{2}=-2.2m^{2}\) and \(\theta_{1}\in[0,\pi/2]\). In the left plot, \(p_{c}^{2}=-4m^{2}\), and the cuts touch at \(\sqrt{(p_{a}^{2}+p_{b}^{2})/2+m^{2}}\). In the right plot, the value for \(\theta\) is slightly shifted compared to the left plot and the cuts cross at two points. However, a contour deformation can be found that allows one to lead the integration out of the two circles, as can be seen by the projected cuts in black. Colors as in Figure 4. The final threshold surface is thus parameterized by the walls at \(-4m^{2}\) and the surface created by \(p_{c+}^{2}\): \[\left\{\begin{array}{ll}p_{c}^{2}=\frac{2m^{2}(p_{a}^{2}+p_{b}^{2})+p_{a}^{2}p_{b}^{2}+\sqrt{p_{a}^{2}(4m^{2}+p_{a}^{2})}\sqrt{p_{b}^{2}(4m^{2}+p_{b}^{2})}}{2m^{2}}&\text{for}\quad-4m^{2}\leq p_{a}^{2},p_{b}^{2}\leq 0\quad\text{and}\quad p_{a}^{2}+p_{b}^{2}\leq-4m^{2}\\ p_{a}^{2}=p_{b}^{2}=p_{c}^{2}=-4m^{2}&\text{else}.\end{array}\right. \tag{21}\] The resulting surfaces are shown in Figure 8. We close this section with the remark that the case of three different masses can also be analyzed in the same way [29]. ## 3 Conclusions Contour deformations are a powerful tool to access the time-like region of correlation functions with functional methods. The method encompasses the perturbative case, for which the original results of the Landau analysis are recovered, but can also be applied nonperturbatively. We have applied this method to three-point functions to extract their thresholds. In particular, we also find distinct cases depending on how many propagators are involved in the creation of the branch point, reflecting the contracted diagrams of the Landau analysis. The method was applied in Ref. [29] to the system of propagator and vertex Dyson-Schwinger equations of \(\phi^{3}\) theory. For the propagator, we could extract both the pole mass shifted by the interactions as well as the branch point.
The latter fulfills the Landau condition when the perturbative mass is replaced by the dynamical one. For the vertex, we also confirmed the validity of the Landau conditions in a nonperturbative calculation. Future applications are the three-point functions of quantum chromodynamics which are well-studied with functional equations but only for space-like momenta [5-53]. Figure 8: Full solution for thresholds of the triangle diagram including contracted diagrams. **Author Contributions:** Conceptualization, R.A. and M.Q.H.; software, M.Q.H. and W.J.K.; investigation, R.A., M.Q.H. and W.J.K.; writing--original draft preparation, M.Q.H.; writing--review and editing, R.A. and W.J.K.; visualization, M.Q.H. All authors have read and agreed to the published version of the manuscript. **Funding:** This work was supported by the DFG (German Research Foundation) grant FI 970/11-2 and by the BMBF under contract No. 05P21RGFP3. **Conflicts of Interest:** The authors declare no conflict of interest.
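As an appendix-style numerical cross-check of Sec. 2.3.2 (our own addition, not part of the original article), one can verify that the surface \(p_{c+}^{2}\) of Eq. (19) indeed solves the pinch condition of Eq. (18), and that it joins the wall \(p_{c}^{2}=-4m^{2}\) continuously on the boundary \(p_{a}^{2}+p_{b}^{2}=-4m^{2}\):

```python
import cmath

m2 = 1.0

def pc_plus(pa2, pb2):
    # Threshold surface, Eq. (19)
    root = cmath.sqrt(pa2 * (4 * m2 + pa2)) * cmath.sqrt(pb2 * (4 * m2 + pb2))
    return (2 * (pa2 + pb2) * m2 + pa2 * pb2 + root) / (2 * m2)

def landau_mismatch(pa2, pb2, pc2):
    # Eq. (18), written as lhs - rhs
    lhs = pa2 * pb2 * pc2
    rhs = m2 * (pa2 ** 2 + pb2 ** 2 + pc2 ** 2
                - 2 * (pa2 * pb2 + pa2 * pc2 + pb2 * pc2))
    return lhs - rhs

for pa2, pb2 in ((-3.0, -2.0), (-2.0, -2.5), (-3.5, -1.0)):
    pc2 = pc_plus(pa2, pb2)
    assert abs(pc2.imag) < 1e-9          # real for -4m^2 <= pa2, pb2 <= 0
    assert abs(landau_mismatch(pa2, pb2, pc2)) < 1e-9

# On the boundary pa2 + pb2 = -4 m^2 the surface meets the wall pc2 = -4 m^2:
assert abs(pc_plus(-1.0, -3.0) - (-4 * m2)) < 1e-9
```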
2301.07271
Chip Guard ECC: An Efficient, Low Latency Method
Chip Guard is a new approach to symbol-correcting error correction codes. It can be scaled to various data burst sizes and reliability levels. A specific version for DDR5 is described. It uses the usual DDR5 configuration of 8 data chips, plus 2 chips for ECC and metadata, with 64-bit bursts per chip, to support whole-chip correction reliably and with high probity (reporting of uncorrectable faults). Various numbers of metadata bits may be supported with defined tradeoffs for reliability and probity. The method should correct all bounded faults of a single chip, with less than 1 in 10^12 chance of failing to correct unbounded faults in one chip, or less than 1 in 10^12 chance of failure to detect an uncorrected fault which affects multiple chips.
Tanj Bennett
2023-01-18T02:27:25Z
http://arxiv.org/abs/2301.07271v1
# Chip Guard ECC ###### Abstract Chip Guard is a new approach to symbol-correcting error correction codes. It can be scaled to various data burst sizes and reliability levels. A specific version for DDR5 is described. It uses the usual DDR5 configuration of 8 data chips, plus 2 chips for ECC and metadata, with 64-bit bursts per chip, to support whole-chip correction reliably and with high probity (reporting of uncorrectable faults). Various numbers of metadata bits may be supported with defined tradeoffs for reliability and probity. The method should correct all bounded faults [1] of a single chip, with less than 1 in \(10^{12}\) chance of failing to correct unbounded faults in one chip, or less than 1 in \(10^{12}\) chance of failure to detect an uncorrected fault which affects multiple chips. ## 2 Introduction DDR5 memory is intended to support high-capacity memory systems which require high reliability. Manufacturing tests screen out initial flaws and either map spare resources to replace the flaw or reject the chip. However, new flaws will develop over the years that a chip is in use. DDR5 chips include a single-bit error correction mechanism [2] built into each chip which likely corrects 95% of errors over the useful life of the chip. The remaining uncorrected errors are almost entirely multi-bit faults. These are caused by flaws in structures like the word lines which are drawn at the finest resolution of any feature and interact with multiple memory cells. A modern server may be expected to operate continuously for 5 years or more in a production environment. Each CPU socket may be supported by hundreds of DRAM chips. Modern servers use an ECC mechanism which can correct any or all errors found in a single DRAM chip. This correction of multiple bits in one DRAM, combined with the in-DRAM correction of any single bit errors on the other chips, provides a high assurance of correct operation even in large memories.
The operating system or hypervisor should eventually replace faulty locations with spare resources, but the ECC is essential to allow applications to continue operation until then, and to allow the data to be safely read and copied into the new spare. There are additional fault modes which may flip bits coming from multiple chips. These are most likely transients such as glitches in the power supply or electromagnetic interference from nearby sources acting upon circuits and wires. These rare uncorrectables should not be allowed to pass silently. This requires systems with large memory and a high reliability requirement to use an error correction code which can correct the entire 64-bit symbol delivered by one DDR5 chip, and which can report with high probity if there is a more extreme error which is uncorrected. ## 3 Background and Motivation A modern server may have up to 12 DDR5 channels, and each DDR5 channel includes two DDR5 sub-channels which operate independently. ECC needs to work with every store and every load on every subchannel, so there is a desire for ECC logic to be small and to use minimum power per operation because many copies of this function will be used.
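The on-die single-bit correction mentioned above is conceptually a small Hamming code. The sketch below is a textbook Hamming single-error-correcting construction for illustration only; it is not Chip Guard itself (which corrects whole 64-bit symbols), and all names in it are ours. It shows the basic SEC mechanics: parity groups indexed by powers of two, and a syndrome that directly addresses the flipped bit.

```python
# Illustrative textbook Hamming single-error-correcting (SEC) code.
# NOT the Chip Guard code (which is symbol-based); names are ours.

def _parity_positions(n):
    # Parity bits sit at power-of-two positions 1, 2, 4, ... <= n
    pos, out = 1, []
    while pos <= n:
        out.append(pos)
        pos <<= 1
    return out

def hamming_encode(data_bits):
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)                  # 1-indexed; index 0 unused
    it = iter(data_bits)
    for i in range(1, n + 1):
        if i & (i - 1):                   # data bits go to non-power-of-two slots
            code[i] = next(it)
    for p in _parity_positions(n):        # even parity over each group
        code[p] = sum(code[i] for i in range(1, n + 1) if (i & p) and i != p) % 2
    return code

def hamming_correct(code):
    n = len(code) - 1
    syndrome = 0
    for p in _parity_positions(n):
        if sum(code[i] for i in range(1, n + 1) if i & p) % 2:
            syndrome += p                 # failing groups sum to the error position
    if syndrome:
        code[syndrome] ^= 1
    return code, syndrome

def extract_data(code):
    return [code[i] for i in range(1, len(code)) if i & (i - 1)]

data = [1, 0, 1, 1, 0, 0, 1, 0]           # one byte of payload
code = hamming_encode(data)
corrupted = list(code)
corrupted[6] ^= 1                         # a single-bit fault, as on-die SEC sees it
fixed, syndrome = hamming_correct(corrupted)
assert syndrome == 6 and extract_data(fixed) == data
```

Chip Guard replaces the bit-addressed syndrome with symbol arithmetic over whole 64-bit chip bursts, but the encode/syndrome/correct pipeline has the same shape.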
2310.11352
Minimal L^p-Solutions to Singular Sublinear Elliptic Problems
We solve the existence problem for the minimal positive solutions $u\in L^{p}(\Omega, dx)$ to the Dirichlet problems for sublinear elliptic equations of the form \[ \begin{cases} Lu=\sigma u^q+\mu\qquad \quad \text{in} \quad \Omega, \\ \liminf\limits_{x \rightarrow y}u(x) = 0 \qquad y \in \partial_{\infty}\Omega, \end{cases} \] where $0<q<1$ and $Lu:=-\text{div} (\mathcal{A}(x)\nabla u)$ is a linear uniformly elliptic operator with bounded measurable coefficients. The coefficient $\sigma$ and data $\mu$ are nonnegative Radon measures on an arbitrary domain $\Omega \subset \mathbb{R}^n$ with a positive Green function associated with $L$. Our techniques are based on the use of sharp Green potential pointwise estimates, weighted norm inequalities, and norm estimates in terms of generalized energy.
Aye Chan May, Adisak Seesanea
2023-10-17T15:40:13Z
http://arxiv.org/abs/2310.11352v1
# Minimal \(L^{p}\)-solutions to singular sublinear elliptic problems ###### Abstract. We solve the existence problem for the minimal positive solutions \(u\in L^{p}(\Omega,dx)\) to the Dirichlet problems for sublinear elliptic equations of the form \[\begin{cases}\mathcal{L}u=\sigma u^{q}+\mu&\text{in}\quad\Omega,\\ \liminf\limits_{x\to y}u(x)=0&\text{$y\in\partial_{\infty}\Omega$},\end{cases}\] where \(0<q<1\) and \(\mathcal{L}u:=-\text{div}(\mathcal{A}(x)\nabla u)\) is a linear uniformly elliptic operator with bounded measurable coefficients. The coefficient \(\sigma\) and data \(\mu\) are nonnegative Radon measures on an arbitrary domain \(\Omega\subset\mathbb{R}^{n}\) with a positive Green function associated with \(\mathcal{L}\). Our techniques are based on the use of sharp Green potential pointwise estimates, weighted norm inequalities, and norm estimates in terms of generalized energy. Key words and phrases: sublinear elliptic equation, measure data, divergence form operator, Green function 2020 Mathematics Subject Classification: Primary 35J61; Secondary 31B10, 42B37 ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Function Spaces * 2.2 Potentials * 3 Construction of minimal \(L^{p}\)-solutions ## 1. Introduction Let \(\Omega\) be a nonempty open connected set in \(\mathbb{R}^{n}\) (\(n\geq 3\)) which possesses a positive Green function \(G\), and let \(\mathcal{M}^{+}(\Omega)\) denote the class of all nonnegative Radon measures in \(\Omega\). We consider the Dirichlet problem \[\begin{cases}\mathcal{L}u=\sigma u^{q}+\mu,\quad u\geq 0\quad\text{in}\quad \Omega,\\ \liminf_{x\to y}u(x)=0,\quad y\in\partial_{\infty}\Omega\end{cases} \tag{1.1}\] in the sublinear case \(0<q<1\) where \(\sigma,\mu\in\mathcal{M}^{+}(\Omega)\).
Here \(\mathcal{L}u:=-\text{div}(\mathcal{A}(x)\nabla u)\) with bounded measurable coefficients is assumed to be uniformly elliptic, i.e., \(\mathcal{A}:\Omega\to\mathbb{R}^{n\times n}\) is a real symmetric matrix-valued function and there exist positive constants \(m\leq M\) so that \[m|\xi|^{2}\leq\mathcal{A}(x)\xi\cdot\xi\leq M|\xi|^{2}\] for almost every \(x\in\Omega\) and for every \(\xi\in\mathbb{R}^{n}\). In this paper, a solution \(u\) to the problem (1.1) will be understood in the sense that \(u\) is an \(\mathcal{A}\)-superharmonic function on \(\Omega\) such that \(u\in L^{q}_{loc}(\Omega,d\sigma)\) with \(u\geq 0\) \(d\sigma\)-a.e., and satisfies the corresponding integral equation \[u=\mathbf{G}(u^{q}d\sigma)+\mathbf{G}\mu\quad\text{in}\;\;\Omega. \tag{1.2}\] Here, the Green potential of a measure \(\sigma\in\mathcal{M}^{+}(\Omega)\) is defined by \[\mathbf{G}\sigma=\int_{\Omega}G(x,y)d\sigma(y),\quad x\in\Omega,\] where \(G:\Omega\times\Omega\to(0,\infty]\) is a positive Green function associated with \(\mathcal{L}\) in \(\Omega\). In the classical case \(\mathcal{L}:=-\Delta\), these sublinear equations are closely related to the study of porous medium equations, and were studied by Brezis and Kamin [4] on a bounded domain. Such sublinear problems have also been studied under various assumptions; see, for instance, [1, 2, 5, 8, 9, 10, 11, 13] and the literature cited there. There are many existing solution theories for elliptic equations (1.1) involving measures. For instance, Véron [14] considered problem (1.1) with different boundary conditions: homogeneous Dirichlet boundary conditions (\(u=0\) on \(\partial\Omega\)) and measure boundary conditions (\(u=\mu\) on \(\partial\Omega\), where \(\mu\) is a Radon measure) on a smooth bounded domain \(\Omega\). The homogeneous case (\(\mu=0\)) of problem (1.1) was investigated by Seesanea and Verbitsky [11].
Nevertheless, when it comes to the case \(\mu\geq 0\), the relation between \(\sigma\) and \(\mu\) seems to be nontrivial in the scale of Lebesgue spaces. Furthermore, in [13], the author introduced bilateral pointwise estimates in terms of intrinsic nonlinear potentials, which can be utilized to obtain the existence of a positive solution \(u\in L^{p}(\Omega,dx)\) to (1.1). Unfortunately, the intrinsic nonlinear potential is defined in terms of the best localized constant of a related sublinear weighted norm inequality, which makes it difficult to verify. In the present paper, we aim to provide a simple approach to overcome the difficulties in [11] and deduce useful sufficient conditions on the measures \(\sigma\) and \(\mu\) for the existence of a positive minimal \(\mathcal{A}\)-superharmonic solution \(u\in L^{p}(\Omega,dx)\) to (1.1). Our main results read as follows. **Theorem 1.1**.: _Let \(\sigma,\mu\in\mathcal{M}^{+}(\Omega)\) such that \((\sigma,\mu)\neq(0,0)\), \(0<q<1\) and \(G\) be a positive Green function associated with \(\mathcal{L}\) in \(\Omega\subset\mathbb{R}^{n},n\geq 3\). Suppose also that \(\frac{n}{n-2}<p<\infty\),_ \[\mathbf{G}\sigma\in L^{\frac{\gamma+q}{1-q}}(\Omega,d\sigma) \tag{1.3}\] _and_ \[\mathbf{G}\mu\in L^{\gamma}(\Omega,d\mu), \tag{1.4}\] _with \(\gamma=\frac{p(n-2)-n}{n}.\) Then there exists a positive minimal \(\mathcal{A}\)-superharmonic solution \(u\in L^{p}(\Omega,dx)\) to (1.1)._ A sufficient condition for (1.3) and (1.4) with \(\gamma=\frac{p(n-2)-n}{n}\) is given by \[\sigma\in L^{s_{1}}(\Omega,dx),\quad s_{1}=\frac{np}{n(1-q)+2p} \tag{1.5}\] and \[\mu\in L^{s_{2}}(\Omega,dx),\quad s_{2}=\frac{np}{n+2p} \tag{1.6}\] where \(\frac{n}{n-2}<p<\infty.\) Therefore, the following corollary can be simply deduced from Theorem 1.1.
**Corollary 1.2**.: _Under the assumptions of Theorem 1.1, if conditions (1.5) and (1.6) are fulfilled, then there exists a positive minimal \(\mathcal{A}\)-superharmonic solution \(u\in L^{p}(\Omega,dx)\) to (1.1)._ Observe that Corollary 1.2 was obtained, with a different proof, by Boccardo and Orsina [3] when \(\Omega\) is a bounded domain in \(\mathbb{R}^{n}\). #### Organization of the paper In Section 2, we collect some definitions and well-known results relevant to our problem. In Section 3, we prove an estimate for the \(p\)-th integrability of potentials in terms of the generalized energy and establish the existence result by applying this estimate. Moreover, we provide a sufficient condition for the existence of a positive solution to (1.1). ### Notation We use the following notation in this paper. Let \(\Omega\) be a connected open subset in \(\mathbb{R}^{n}\). * \(D\):= a relatively compact open subset of \(\Omega\). * \(\mathcal{H}_{\mathcal{A}}(D)\):= the set of all continuous \(\mathcal{A}\)-harmonic functions in \(D\). * \(\mathcal{C}_{0}^{\infty}(\Omega)\):= the set of all smooth compactly supported functions on \(\Omega\). * \(\mathcal{M}^{+}(\Omega)\):= the set of all nonnegative Radon measures on \(\Omega\). * \(L^{p}(\Omega,d\mu)\):= the \(L^{p}\) space with respect to a Radon measure \(\mu\in\mathcal{M}^{+}(\Omega)\). * \(L^{p}(\Omega,dx)\):= the \(L^{p}\) space with respect to Lebesgue measure. ## 2. Preliminaries Throughout, let \(\Omega\) be a domain (connected open set) in \(\mathbb{R}^{n}\).
### Function Spaces **Definition 2.1**.: For \(1\leq p<\infty\) and \(\mu\in\mathcal{M}^{+}(\Omega)\), we denote by \(L^{p}(\Omega,d\mu)\) the space of all real-valued measurable functions \(f\) on \(\Omega\) such that \[\|f\|_{L^{p}(\Omega,d\mu)}=\big{(}\int_{\Omega}|f(x)|^{p}d\mu(x)\big{)}^{ \frac{1}{p}}<\infty.\] **Definition 2.2**.: A function \(u\in W^{1,2}_{loc}(\Omega)\) is said to be \(\mathcal{A}\)**-harmonic** if \(u\) satisfies the equation \[\mathcal{L}u=0\quad\text{in}\quad\Omega\] in the distributional sense, i.e., \[\int_{\Omega}\mathcal{A}(x)\nabla u(x)\cdot\nabla\phi\,dx=0,\quad\forall\phi \in\mathcal{C}_{0}^{\infty}(\Omega).\] The set of \(\mathcal{A}\)-harmonic functions on \(\Omega\) is denoted by \(\mathcal{H}_{\mathcal{A}}(\Omega)\). Every \(\mathcal{A}\)-harmonic function \(u\) has a continuous representative which coincides with \(u\) a.e.; see [7, Theorem 3.70]. **Definition 2.3**.: A function \(u:\Omega\to(-\infty,+\infty]\) is \(\mathcal{A}\)**-superharmonic** if \(u\) is lower semicontinuous in \(\Omega\), \(u\not\equiv+\infty\) in each component of \(\Omega\), and for each \(D\Subset\Omega\) and \(h\in\mathcal{C}(\bar{D})\cap\mathcal{H}_{\mathcal{A}}(D)\), the inequality \(u\geq h\) on \(\partial D\) implies \(u\geq h\) in \(D\). Let \(u\) be an \(\mathcal{A}\)-superharmonic function in \(\Omega.\) Then there exists a unique measure \(\omega\in\mathcal{M}^{+}(\Omega)\) such that \[\mathcal{L}u=\omega\quad\text{in}\quad\Omega\] in the distributional sense, i.e., \[\int_{\Omega}\mathcal{A}(x)\nabla u(x)\cdot\nabla\phi\;dx=\int_{\Omega}\phi\;d \omega,\quad\forall\phi\in\mathcal{C}_{0}^{\infty}(\Omega).\] The measure \(\omega\) is called the Riesz measure associated with \(u,\) see [7, Theorem 21.2].
### Potentials

Let \(\gamma>0\) and \(\omega\in\mathcal{M}^{+}(\Omega)\), and let \(G\) be a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). The generalized Green energy, introduced in [9], of a measure \(\omega\in\mathcal{M}^{+}(\Omega)\) is given by \[\mathcal{E}_{\gamma}[\omega]:=\int_{\Omega}(\mathbf{G}\omega)^{\gamma}d\omega.\]

The first theorem gives an auxiliary fact that will be used in the proof of the main lemma. The complete proof can be found in [9, Lemma 3.3].

**Theorem 2.4** (See [9]).: _Let \(0<\gamma<1\) and \(\mu\in\mathcal{M}^{+}(\Omega)\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). Suppose \(u:=\mathbf{G}\mu\not\equiv\infty\). Then \(w:=u^{\gamma}\) is a positive \(\mathcal{A}\)-superharmonic function on \(\Omega\), and \(w=\mathbf{G}\omega\), where \(\omega\in\mathcal{M}^{+}(\Omega)\) is the Riesz measure of \(w\). Moreover,_ \[\mathcal{E}_{\gamma}[\mu]<+\infty\quad\text{if and only if}\quad\mathcal{E}_{1}[\omega]<+\infty.\]

The next theorem provides sharp lower pointwise estimates for supersolutions to sublinear elliptic equations, due to Grigor'yan and Verbitsky [6, Theorem 1.3].

**Theorem 2.5** (See [6]).: _Let \(0<q<1\) and \(\sigma\in\mathcal{M}^{+}(\Omega)\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). If \(u\in L^{q}_{loc}(\Omega,d\sigma)\) is a positive supersolution to the sublinear integral equation_ \[u\geq\mathbf{G}(u^{q}d\sigma),\quad x\in\Omega, \tag{2.1}\] _then_ \[u(x)\geq(1-q)^{\frac{1}{1-q}}\big{[}\mathbf{G}\sigma(x)\big{]}^{\frac{1}{1-q}},\quad x\in\Omega. \tag{2.2}\]

We use the following pointwise iterated inequalities to derive the Green potential estimate; see [6, Lemma 2.5].

**Theorem 2.6** (See [6]).: _Let \(\sigma\in\mathcal{M}^{+}(\Omega)\) with \(\sigma\not\equiv 0\), and let \(G\) be the positive Green function associated with \(\mathcal{L}\) on \(\Omega\). Then the following estimates hold._
_(i) If \(t\geq 1\), then_ \[(\mathbf{G}\sigma)^{t}(x)\leq t\mathbf{G}((\mathbf{G}\sigma)^{t-1}d\sigma)(x),\quad x\in\Omega. \tag{2.3}\]

_(ii) If \(0<t\leq 1\), then_ \[(\mathbf{G}\sigma)^{t}(x)\geq t\mathbf{G}((\mathbf{G}\sigma)^{t-1}d\sigma)(x),\quad x\in\Omega. \tag{2.4}\]

The argument for finding a solution relies on the following weighted norm inequalities of \((s,r)\)-type, in the case where \(0<r<s\) and \(1<s<\infty\), for the operator \(\mathbf{G}\): \[\|\mathbf{G}(fd\sigma)\|_{L^{r}(\Omega,d\sigma)}\leq c\|f\|_{L^{s}(\Omega,d\sigma)},\quad f\in L^{s}(\Omega,d\sigma), \tag{2.5}\] where \(c\) is a positive constant independent of \(f\), for an arbitrary measure \(\sigma\in\mathcal{M}^{+}(\Omega)\), under certain assumptions on \(G\); see [12, Theorem 1.1].

**Theorem 2.7** (See [12]).: _Let \(\sigma\in\mathcal{M}^{+}(\Omega)\) with \(\sigma\not\equiv 0\), and let \(G\) be the positive Green function associated with \(\mathcal{L}\) on \(\Omega\)._

_(i) If \(1<s<\infty\) and \(0<r<s\), then the weighted norm inequality (2.5) is fulfilled if and only if_ \[\mathbf{G}\sigma\in L^{\frac{sr}{s-r}}(\Omega,d\sigma). \tag{2.6}\]

_(ii) If \(0<q<1\) and \(0<\gamma<\infty\), then there exists a positive (super)solution \(u\in L^{\gamma+q}(\Omega,d\sigma)\) to the sublinear integral equation (2.1) if and only if the weighted norm inequality (2.5) is fulfilled with \(r=\gamma+q\) and \(s=\frac{\gamma+q}{q}\), i.e.,_ \[\|\mathbf{G}(fd\sigma)\|_{L^{\gamma+q}(\Omega,d\sigma)}\leq c\|f\|_{L^{\frac{\gamma+q}{q}}(\Omega,d\sigma)},\quad f\in L^{\frac{\gamma+q}{q}}(\Omega,d\sigma), \tag{2.7}\] _or equivalently,_ \[\mathbf{G}\sigma\in L^{\frac{\gamma+q}{1-q}}(\Omega,d\sigma). \tag{2.8}\]

The next theorem gives the existence of a positive solution \(u\in L^{\gamma+q}(\Omega,d\sigma)\) (\(\gamma>0\)) to the integral equation (1.2) in the sublinear case, which was established by Seesanea and Verbitsky; see [9, Theorem 4.2].
**Theorem 2.8** (See [9]).: _Let \(0<q<1\), \(0<\gamma<\infty\), and \(\sigma,\mu\in\mathcal{M}^{+}(\Omega)\) with \(\sigma,\mu\not\equiv 0\). Suppose \(G\) is a positive quasi-symmetric lower semicontinuous kernel on \(\Omega\times\Omega\) which satisfies the WMP. If (1.3) and_ \[\mathbf{G}\mu\in L^{\gamma+q}(\Omega,d\sigma) \tag{2.9}\] _hold, then there exists a positive (minimal) solution \(u\in L^{\gamma+q}(\Omega,d\sigma)\) to (1.2). The converse statement is valid without the quasi-symmetry assumption on \(G\)._

The following lemma was stated in [9, Lemma 4.3]. It allows us to control the interaction between the measure coefficient and the measure data.

**Lemma 2.9** (See [9]).: _Let \(0<q<1\), \(0<\gamma<\infty\), and \(\sigma,\mu\in\mathcal{M}^{+}(\Omega)\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). Then conditions (1.3) and (1.4) imply (2.9)._

The next two lemmas are essential for proving the existence of positive solutions to (1.1) when \(\mu=0\). Their complete proofs can be found in [11, Lemma 4.1 and Lemma 4.2].

**Lemma 2.10** (See [11]).: _Let \(0<q<1\), and let \(\sigma\in\mathcal{M}^{+}(\Omega)\) with \(\sigma\not\equiv 0\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). Suppose that \(\frac{n}{n-2}<p<\infty\) and the condition_ \[\mathbf{G}\sigma\in L^{\frac{p}{1-q}}(\Omega,dx) \tag{2.10}\] _is valid._
_Then_ \[\|\mathbf{G}(fd\sigma)\|_{L^{p}(\Omega,dx)}\leq c\|\mathbf{G}\sigma\|_{L^{\frac{p}{1-q}}(\Omega,dx)}^{\frac{1}{s^{\prime}}}\|f\|_{L^{s}(\Omega,d\sigma)},\quad f\in L^{s}(\Omega,d\sigma),\] _where \(c\) is a positive constant independent of \(f\), and \(s=\frac{p(n-2)-n(1-q)}{nq}\)._

**Lemma 2.11** (See [11]).: _Let \(0<q<1\), and let \(\sigma\in\mathcal{M}^{+}(\Omega)\) with \(\sigma\not\equiv 0\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) on \(\Omega\). If \(\frac{n}{n-2}<p<\infty\), then (1.3) with \(\gamma=\frac{p(n-2)-n}{n}\) implies (2.10). In fact,_ \[\|\mathbf{G}\sigma\|_{L^{\frac{p}{1-q}}(\Omega,dx)}\leq\tilde{C}\big{\|}\mathbf{G}\sigma\big{\|}_{L^{\frac{\gamma+q}{1-q}}(\Omega,d\sigma)}^{\frac{\gamma+q}{\gamma+1}},\] _where \(\tilde{C}\) is a positive constant depending on \(\gamma\) and \(q\)._

## 3. Construction of minimal \(L^{p}\)-solutions

In this section, we prove our main result, stated in Theorem 1.1, and its consequence, Corollary 1.2. The following lemma is one of the key ingredients in our approach.

**Lemma 3.1**.: _Let \(0<\gamma<\infty\) and \(0<q<1\). Suppose \(G\) is a positive Green function associated with \(\mathcal{L}\) in \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 3\). Let \(\mu\in\mathcal{M}^{+}(\Omega)\) be such that \(\mathbf{G}\mu\not\equiv\infty\). If \(\omega\in\mathcal{M}^{+}(\Omega)\) is the Riesz measure of the \(\mathcal{A}\)-superharmonic function \((\mathbf{G}\mu)^{1-q}\), then_ \[\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]\leq C\mathcal{E}_{\gamma}[\mu],\] _where \(C\) is a positive constant depending on \(\gamma\) and \(q\)._

Proof.: By Theorem 2.4, \(w:=(\mathbf{G}\mu)^{1-q}\) is a positive \(\mathcal{A}\)-superharmonic function in \(\Omega\), and \(w=\mathbf{G}\omega\), where \(\omega\in\mathcal{M}^{+}(\Omega)\) is the Riesz measure of \(w\). We consider the following two cases.

* Case: \(\gamma+q>1\).
Applying the iterated inequality (2.3) with \(t=\gamma+q\), together with Fubini's theorem and Hölder's inequality with the exponents \(\frac{\gamma}{\gamma+q-1}\) and \(\frac{\gamma}{1-q}\), and using \((\mathbf{G}\omega)^{\frac{\gamma}{1-q}}=(\mathbf{G}\mu)^{\gamma}\), we obtain \[\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]=\int_{\Omega}(\mathbf{G}\omega)^{\frac{\gamma+q}{1-q}}d\omega=\int_{\Omega}(\mathbf{G}\mu)^{\gamma+q}d\omega\leq C\int_{\Omega}\mathbf{G}((\mathbf{G}\mu)^{\gamma+q-1}d\mu)d\omega=C\int_{\Omega}(\mathbf{G}\omega)(\mathbf{G}\mu)^{\gamma+q-1}d\mu\] \[\leq C\Big{(}\int_{\Omega}(\mathbf{G}\mu)^{\gamma}d\mu\Big{)}^{\frac{\gamma+q-1}{\gamma}}\Big{(}\int_{\Omega}(\mathbf{G}\omega)^{\frac{\gamma}{1-q}}d\mu\Big{)}^{\frac{1-q}{\gamma}}=C\int_{\Omega}(\mathbf{G}\mu)^{\gamma}d\mu=C\mathcal{E}_{\gamma}[\mu].\]

* Case: \(\gamma+q\leq 1\).

Write \[\int_{\Omega}(\mathbf{G}\mu)^{\gamma+q}d\omega=\int_{\Omega}(\mathbf{G}\mu)^{\gamma+q}F^{a-1}F^{1-a}d\omega,\] where \(a=\gamma+q\) and \(F\) is a positive \(\omega\)-measurable function to be determined later. Applying Hölder's inequality with the exponents \(\frac{1}{a}\) and \(\frac{1}{1-a}\), we get \[\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]=\int_{\Omega}(\mathbf{G}\omega)^{\frac{\gamma+q}{1-q}}d\omega=\int_{\Omega}(\mathbf{G}\mu)^{\gamma+q}F^{a-1}F^{1-a}d\omega\leq\Big{(}\int_{\Omega}(\mathbf{G}\mu)^{\frac{\gamma+q}{a}}F^{\frac{a-1}{a}}d\omega\Big{)}^{a}\Big{(}\int_{\Omega}Fd\omega\Big{)}^{1-a}.\] Setting \(F=(\mathbf{G}\omega)^{\frac{\gamma+q}{1-q}}\), so that \(\int_{\Omega}Fd\omega=\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]\), we obtain
\[\Big{(}\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]\Big{)}^{\gamma+q}\leq\Big{(}\int_{\Omega}(\mathbf{G}\mu)\big{(}\mathbf{G}\omega\big{)}^{(\frac{\gamma+q}{1-q})(\frac{\gamma+q-1}{\gamma+q})}d\omega\Big{)}^{\gamma+q}. \tag{3.1}\]

The right-hand side of (3.1) is estimated by using Fubini's theorem, followed by inequality (2.4) with \(t=\frac{\gamma}{1-q}\): \[\mathcal{E}_{\frac{\gamma+q}{1-q}}[\omega]\leq\int_{\Omega}\mathbf{G}\big{(}\big{(}\mathbf{G}\omega\big{)}^{\frac{\gamma+q-1}{1-q}}d\omega\big{)}d\mu\leq C\int_{\Omega}(\mathbf{G}\omega)^{\frac{\gamma}{1-q}}d\mu=C\int_{\Omega}(\mathbf{G}\mu)^{\gamma}d\mu=C\mathcal{E}_{\gamma}[\mu].\]

This completes the proof of the lemma.

The following lemma gives a Green potential norm estimate in terms of the generalized energy.

**Lemma 3.2**.: _Let \(G\) be a positive Green function associated with \(\mathcal{L}\) in \(\Omega\subset\mathbb{R}^{n}\). Let \(\mu\in\mathcal{M}^{+}(\Omega)\) be such that \(\mathbf{G}\mu\not\equiv\infty\). Then, for \(0<\gamma<\infty\), \(\mathbf{G}\mu\in L^{\gamma}(\Omega,d\mu)\) implies \(\mathbf{G}\mu\in L^{p}(\Omega,dx)\), i.e.,_ \[\|\mathbf{G}\mu\|_{L^{p}(\Omega,dx)}\leq c\Big{(}\mathcal{E}_{\gamma}[\mu]\Big{)}^{\frac{1}{\gamma+1}},\] _where \(p=\frac{n(1+\gamma)}{n-2}\) and \(c\) is a positive constant depending on \(\gamma\)._

Proof.: Notice that \(w:=(\mathbf{G}\mu)^{1-q}\) with \(0<q<1\) is a positive \(\mathcal{A}\)-superharmonic function on \(\Omega\), since \(\mathbf{G}\mu\not\equiv+\infty\), and \(w=\mathbf{G}\omega\), where \(\omega\in\mathcal{M}^{+}(\Omega)\) is the Riesz measure of \(w\); see [9].
Applying Lemma 3.1 together with Lemma 2.11 (the latter with \(\sigma\) replaced by \(\omega\)), we get the desired estimate: \[\|\mathbf{G}\mu\|_{L^{p}(\Omega,dx)}=\big{\|}\mathbf{G}\omega\big{\|}_{L^{\frac{p}{1-q}}(\Omega,dx)}^{\frac{1}{1-q}}\leq\tilde{C}^{\frac{1}{1-q}}\big{\|}\mathbf{G}\omega\big{\|}_{L^{\frac{\gamma+q}{1-q}}(\Omega,d\omega)}^{\frac{\gamma+q}{(\gamma+1)(1-q)}}\leq\tilde{C}^{\frac{1}{1-q}}C^{\frac{1}{\gamma+1}}\big{\|}\mathbf{G}\mu\big{\|}_{L^{\gamma}(\Omega,d\mu)}^{\frac{\gamma}{\gamma+1}},\] where \(\tilde{C}\) and \(C\) are the constants in Lemma 2.11 and Lemma 3.1, respectively.

We are now ready to prove the main theorem of this work.

Proof of Theorem 1.1.: Suppose that (1.3) and (1.4) hold for \(\gamma=\frac{p(n-2)-n}{n}\). The condition (2.9) is then satisfied by Lemma 2.9. As a result, according to Theorem 2.8, the integral equation \[u=\mathbf{G}(u^{q}d\sigma)+\mathbf{G}\mu\quad\text{in}\quad\Omega\] has a positive solution \(u\in L^{\gamma+q}(\Omega,d\sigma)\). In order to get a solution \(u\in L^{p}(\Omega,dx)\), we combine Lemma 2.10, Lemma 2.11, and Lemma 3.2. Then we find that \[\|u\|_{L^{p}(\Omega,dx)} \leq\|\mathbf{G}(u^{q}d\sigma)\|_{L^{p}(\Omega,dx)}+\|\mathbf{G}\mu\|_{L^{p}(\Omega,dx)}\] \[\leq C\|\mathbf{G}\sigma\|_{L^{\frac{\gamma+q}{1-q}}(\Omega,d\sigma)}\,\|u^{q}\|_{L^{\frac{\gamma+q}{q}}(\Omega,\,d\sigma)}+c\Big{(}\int_{\Omega}(\mathbf{G}\mu)^{\gamma}\,d\mu\Big{)}^{\frac{1}{\gamma+1}}\] \[\leq C^{\prime}\|u\|_{L^{\gamma+q}(\Omega,\,d\sigma)}^{q}+c\Big{(}\mathcal{E}_{\gamma}[\mu]\Big{)}^{\frac{1}{\gamma+1}}<+\infty.\] This shows that there exists a positive solution \(u\in L^{p}(\Omega,dx)\) to (1.1).

We finish this paper by providing a proof of Corollary 1.2. The following proof is mainly influenced by Seesanea and Verbitsky [9] and by Boccardo and Orsina [3].

Proof of Corollary 1.2.: Set \(\gamma=\frac{p(n-2)-n}{n}\). Then \(s_{1}=\frac{np}{n(1-q)+2p}>1\).
By Hölder's inequality, \[\int_{\Omega}\big{(}\mathbf{G}\sigma\big{)}^{\frac{\gamma+q}{1-q}}d\sigma\leq\big{\|}\mathbf{G}\sigma\big{\|}_{L^{(\frac{\gamma+q}{1-q})s_{1}^{\prime}}(\Omega,dx)}^{\frac{\gamma+q}{1-q}}\|\sigma\|_{L^{s_{1}}(\Omega,dx)}, \tag{3.2}\] where \(s_{1}^{\prime}=\frac{np}{p(n-2)-n(1-q)}\) is the conjugate of \(s_{1}\). We see that \[\frac{1}{s_{1}}-\frac{1}{\big{(}\frac{\gamma+q}{1-q}\big{)}s_{1}^{\prime}}=\frac{2}{n}.\] Appealing to the Hardy-Littlewood-Sobolev inequality, \[\big{\|}\mathbf{G}\sigma\big{\|}_{L^{(\frac{\gamma+q}{1-q})s_{1}^{\prime}}(\Omega,dx)}\lesssim\big{\|}\mathbf{G}\tilde{\sigma}\big{\|}_{L^{(\frac{\gamma+q}{1-q})s_{1}^{\prime}}(\mathbb{R}^{n},dx)}\lesssim\|\tilde{\sigma}\|_{L^{s_{1}}(\mathbb{R}^{n},dx)}=\|\sigma\|_{L^{s_{1}}(\Omega,dx)}. \tag{3.3}\] Here, \(\tilde{\sigma}\) is the zero extension of \(\sigma\) to \(\mathbb{R}^{n}\). Thus, by (3.2) and (3.3), \[\big{\|}\mathbf{G}\sigma\big{\|}_{L^{\frac{\gamma+q}{1-q}}(\Omega,d\sigma)}\lesssim\Big{(}\|\sigma\|_{L^{s_{1}}(\Omega,dx)}^{\frac{\gamma+q}{1-q}+1}\Big{)}^{\frac{1-q}{\gamma+q}}=\big{\|}\sigma\big{\|}_{L^{s_{1}}(\Omega,dx)}^{\frac{\gamma+1}{\gamma+q}}<+\infty.\] Hence, (1.3) is valid.

Similarly, we note that \(s_{2}=\frac{np}{n+2p}>1\). By Hölder's inequality, \[\int_{\Omega}(\mathbf{G}\mu)^{\gamma}d\mu\leq\|\mathbf{G}\mu\|_{L^{\gamma s_{2}^{\prime}}(\Omega,dx)}^{\gamma}\|\mu\|_{L^{s_{2}}(\Omega,dx)}, \tag{3.4}\] where \(s_{2}^{\prime}=\frac{s_{2}}{s_{2}-1}\). We see that \[\frac{1}{s_{2}}-\frac{1}{\gamma s_{2}^{\prime}}=\frac{2}{n}.\] Applying the Hardy-Littlewood-Sobolev inequality again, \[\|\mathbf{G}\mu\|_{L^{\gamma s_{2}^{\prime}}(\Omega,dx)}\lesssim\|\mathbf{G}\tilde{\mu}\|_{L^{\gamma s_{2}^{\prime}}(\mathbb{R}^{n},dx)}\lesssim\|\tilde{\mu}\|_{L^{s_{2}}(\mathbb{R}^{n},dx)}=\|\mu\|_{L^{s_{2}}(\Omega,dx)}, \tag{3.5}\] where \(\tilde{\mu}\) is the zero extension of \(\mu\) to \(\mathbb{R}^{n}\).
Thus, by (3.4) and (3.5), \[\|\mathbf{G}\mu\|_{L^{\gamma}(\Omega,d\mu)}\lesssim\Big{(}\|\mu\|_{L^{s_{2}}(\Omega,dx)}^{\gamma+1}\Big{)}^{\frac{1}{\gamma}}=\|\mu\|_{L^{s_{2}}(\Omega,dx)}^{\frac{\gamma+1}{\gamma}}<+\infty.\] Therefore, (1.4) is fulfilled. Consequently, Theorem 1.1 yields the existence of the minimal positive solution \(u\in L^{p}(\Omega,dx)\) to (1.1).

## Acknowledgments

This study was supported by Thammasat University Research Fund, Contract No. TUFT 52/2566. A.C.M. gratefully acknowledges financial support from the Excellent Foreign Student (EFS) scholarship, Sirindhorn International Institute of Technology (SIIT), Thammasat University.
2303.09425
Velocity measurement in the extensive [OIII] emission region 1.2° south-east of M31
The discovery of a broad, $\sim$1.5$^{\circ}$ long filamentary [OIII] 5007 emission $\sim$1.2$^{\circ}$ south-east of the M31 nucleus has recently been reported. More than 100 hours of exposures of a wide field (3.48$^{\circ} \times 2.32^{\circ}$) have allowed this pioneering detection based on 30 \AA\ narrow-band filters and several small refractors equipped with large cameras. We report a first velocity measurement in this extensive [OIII] emission line region. We used the low-resolution spectrograph MISTRAL (R $\sim$ 750), a facility of the Haute-Provence Observatory 193 cm telescope. The velocity measurement is based on the H$\alpha$, [NII], [SII] and [OIII] lines. The best solution to fit the spectrum indicates that the H$\alpha$ and [OIII] emissions are at the same heliocentric line-of-sight velocity of -96$\pm$4 km s$^{-1}$. This was measured within an area of $\sim$250 arcsec$^2$ selected on a bright knot along the long filament of $\sim$1.5$^{\circ}$, together with a [OIII]5007 surface brightness of 4.2$\pm$2.1 10$^{-17}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. This agrees moderately well with the previous measurement. We also estimated the H$\alpha$/[NII] line ratio as $\sim$1.1. The radial velocities at which the H$\alpha$ and [OIII] lines were detected seem to show that these hydrogen and oxygen atoms belong to the same layer, but we cannot exclude that another weaker [OIII] line, belonging to another structure, that is, at another velocity, is below our detection threshold. Different scenarios have been considered to explain this filamentary structure...
P. Amram, C. Adami, B. Epinat, L. Chemin
2023-03-16T15:59:44Z
http://arxiv.org/abs/2303.09425v1
# Velocity measurement in the extensive [OIII] emission region 1.2\({}^{\circ}\) south-east of M31

###### Abstract

Context: The discovery of a broad, \(\sim\)1.5\({}^{\circ}\) long filamentary [OIII] 5007 emission \(\sim\)1.2\({}^{\circ}\) south-east of the M31 nucleus has recently been reported. More than 100 hours of exposures of a wide field (3.48\({}^{\circ}\)\(\times\) 2.32\({}^{\circ}\)) have allowed this pioneering detection, based on 30 A narrow-band filters and several small refractors equipped with large cameras.

Aims: We report a first velocity measurement in this extensive [OIII] emission line region.

Methods: We used the low-resolution spectrograph MISTRAL (R \(\sim\) 750), a facility of the Haute-Provence Observatory 193 cm telescope. The velocity measurement is based on the H\(\alpha\), [NII], [SII] and [OIII] lines.

Results: The best solution to fit the spectrum indicates that the H\(\alpha\) and [OIII] emissions are at the same heliocentric line-of-sight velocity of -96\(\pm\)4 km s\({}^{-1}\). This was measured within an area of \(\sim\)250 arcsec\({}^{2}\) selected on a bright knot along the long filament of \(\sim\)1.5\({}^{\circ}\), together with a [OIII]5007 surface brightness of 4.2\(\pm\)2.1 10\({}^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\). This agrees moderately well with the previous measurement. We also estimated the H\(\alpha\)/[NII] line ratio as \(\sim\)1.1.

Conclusions: The radial velocities at which the H\(\alpha\) and [OIII] lines were detected seem to show that these hydrogen and oxygen atoms belong to the same layer, but we cannot exclude that another weaker [OIII] line, belonging to another structure, that is, at another velocity, is below our detection threshold. Different scenarios have been considered to explain this filamentary structure.
We tentatively assume that this filament is a piece of a supernova remnant located at a distance of \(\sim\)0.7 kpc from the Sun, of which we only see a small fraction of the shells with a radius of \(\sim\)35 pc. The progenitor may be along the line of sight of the galaxy M31, but this observation might also just be part of a large-scale filamentary structure that should be investigated further. Conclusions: ## 1 Introduction Filter widths at half-maximum ranging from \(\sim\)940 A (B/b-band) to \(\sim\)1480 A (R/\(\pi\)-band) are used in most optical wide-field sky surveys. This allows detecting stellar continuum emission in stars or in galaxies, but not the intrinsic narrow emission line, as was first done by the Sloan Digital Sky Survey (SDSS; York et al. 2000) and by many others since. Medium-band filters are used in multiple narrow-band cosmological surveys that are carried out using many filters. They are typically ten times narrower than broad-band surveys, such as the Calar Alto Deep Imaging Survey (CADIS; Wolf et al. 2001a,b), and since, many others as well like the Javalambre-Physics of the Accelerated Universe Astrophysical Survey (J-PAS; Benitez et al. 2014). J-PAS uses 54 filters with a width of 145 A, placed 100 A apart over a multi-degree field of view (FoV). Medium-band filter surveys are more efficient in measuring the photometric redshift of distant galaxies than extended emission-line regions. Indeed, even a medium-band filter is not yet narrow enough to detect the Balmer lines emitted by the hydrogen atom or auroral lines emitted by several atomic species (O, O\({}^{+}\), O\({}^{++}\), N\({}^{+}\), and S\({}^{++}\)), which are drowned in the diffuse sky background. Narrow-band surveys (FWHM = 10-20 A) that allow detecting emission lines are much more time-consuming because almost each emission line needs its proper narrow band at each redshift. 
To follow the same line as it is redshifted, a wide spectral range typically needs to be covered in steps of 10-15 A. For example, to detect a Balmer line or an auroral line from z=0 to z=0.1 with a constant velocity step (i.e. a constant resolution), we need \(\sim\)40 narrow-band filters per emission line, with an average step of 15 A increasing linearly with the wavelength. In addition, narrow-band filters should be coupled to broad-band imagery to remove the continuum emission. For this reason, wide-field emission-line surveys are rare and are therefore often conducted at low spatial resolution to reduce the observing time. Two narrow-band filter surveys almost cover the whole sky: (1) the Virginia Tech Spectral line Survey (VTSS; Finkbeiner 2003a) covered the northern hemisphere with a narrow bandpass (17 A) H\(\alpha\) filter at a resolution of 6' and a usable radius of 5\({}^{\circ}\) for each pointing, and (2) the Southern H-Alpha Sky Survey (SHASSA; Gaustad et al. 2001) mapped the sky south of 20\({}^{\circ}\), with 13\({}^{\circ}\)\(\times\)13\({}^{\circ}\) FoVs and a resolution of 1.6' and 4.0', depending on the sensitivity. Alternatively, the Wisconsin H-Alpha Mapper Northern Sky Survey (WHAM; Reynolds et al. 2002, 1998; Haffner et al. 1998a; Haffner 1999; Haffner et al. 2003) covers the sky north of declination -30\({}^{\circ}\) with an angular resolution of \(\sim\)1\({}^{\circ}\) and a sensitivity of 0.15 Rayleigh. It consists of 37,565 spectra obtained with a dual-etalon Fabry-Perot filter instead of narrow-band filters. The velocity resolution of \(\sim\)12 km s\({}^{-1}\) over a velocity range of \(\sim\)-90, +90 km s\({}^{-1}\) enables the removal of the geocoronal H\(\alpha\) emission. The WHAM survey has shown that interstellar H\(\alpha\) emission is detected in the whole sky, with intensities that range from thousands of Rayleigh near the Orion nebula and hundreds of Rayleigh in a large HII region (e.g.
Barnard's loop) to 0.5 Rayleigh in faint high-latitude regions. Drechsler et al. (2023a, hereafter referred to as D2023) used a [OIII] 5007 A narrow-band filter (FWHM = 30 A) on a 106 mm refractor and accumulated wide-field exposures of M31 during a total of 24.6 hours at various dark observing sites in Lorraine, France. Confirming observations were obtained using a 106 mm and a 135 mm telescope in California and in New Mexico, accumulating an additional 85.5 hours and 24.9 hours in [OIII], respectively. This team of astronomers reported the discovery of an unknown broad, 1.5\({}^{\circ}\) long filamentary emission nebulosity 1.2\({}^{\circ}\) south-east of the M31 nucleus based on the large FoV of 3.48\({}^{\circ}\times 2.32^{\circ}\) that is allowed by small telescopes and large cameras. On their website1, these authors show [OIII] and H\(\alpha\) images around M31 that show that the H\(\alpha\) and [OIII] shapes of the flux distribution are very different. Along the same line of sight (LoS) lies H\(\alpha\) emission that follows the large-scale patchy distribution, and also a filamentary and linear [OIII] emission that resembles cirrus fibratus radiatus in meteorology, that is, displays a very narrow band of fibrous filaments. In addition, this [OIII] emission has no obvious emission counterparts from radio to X-ray wavelengths. These authors estimated an [OIII] 5007 surface brightness of 4\(\pm\)2 10\({}^{-18}\) erg cm\({}^{-2}\) s\({}^{-1}\) arcsec\({}^{-2}\). Footnote 1: [https://www.astrobin.com/1d8ivk/](https://www.astrobin.com/1d8ivk/) In this paper, we report a velocity measurement in a region belonging to the filament, which we compare to a region located outside the filament. Sect. 2 and Appendix A describe the observations and the data reduction. Sect. 3 provides the results, which are discussed in Sect. 4 before we conclude in Sect. 5.
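The filter-count estimate quoted in the introduction (\(\sim\)40 narrow bands per emission line to follow \(0\leq z\leq 0.1\) in \(\sim\)15 A steps) is easy to reproduce. The sketch below uses standard rest wavelengths, which are not taken from this paper:

```python
# Back-of-the-envelope check of the narrow-band filter count quoted in the
# introduction: one emission line followed from z = 0 to z = 0.1 in ~15 A steps.
REST_WAVELENGTHS_A = {"Halpha": 6562.8, "[OIII]5007": 5006.8}  # standard values

def n_filters(rest_a, z_max=0.1, step_a=15.0):
    """Number of ~step_a-wide bands needed to follow the line up to z_max."""
    return rest_a * z_max / step_a  # total wavelength shift / filter step

for name, lam in REST_WAVELENGTHS_A.items():
    print(f"{name}: total shift {lam * 0.1:.0f} A, ~{n_filters(lam):.0f} filters")
```

For H\(\alpha\) this gives \(\sim\)44 bands and for [OIII]5007 \(\sim\)33, consistent with the "\(\sim\)40 filters per emission line" quoted in the text.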
## 2 Observations and data reductions

Under excellent conditions of transparency and with a seeing of \(\sim\)2 arcsec, we obtained long-slit spectra on January 25, 2023, using the low spectral resolutions R = 700 at H\(\beta\) and R = 750 at H\(\alpha\), which give a line spread function (LSF) velocity dispersion of 2.95 A (182 km s\({}^{-1}\)) and of 3.72 A (170 km s\({}^{-1}\)) at the H\(\beta\) and H\(\alpha\) wavelengths, respectively. The MISTRAL2 spectrograph3 installed on the 193 cm OHP telescope is equipped with a blue grism that allows covering the wavelength domain ranging from 4250 to 8000 A (Adami et al. 2018). We spent one hour on the [OIII] filament detected by D2023, hereafter referred to as the onset spectrum, and one hour offset from it, referred to as the offset spectrum. The 1D science spectra around the lines of interest are shown in Figs. 2 and A.2. Details about the data reduction are given in Appendix A.

Footnote 3: [http://www.obs-hp.fr/guide/mistral/MISTRAL_spectrograph_camera.shtml](http://www.obs-hp.fr/guide/mistral/MISTRAL_spectrograph_camera.shtml)

## 3 Results

When all the identifiable lines are fitted together to optimise the measurement, the best solution gives a heliocentric LoS velocity of -96\(\pm\)4 km s\({}^{-1}\), meaning that all the chemical elements belong to the same layer. The fluxes in H\(\alpha\) and [OIII] 5007 are 7.5\(\pm\)2.5 and 2.7\(\pm\)1.4 10\({}^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\) (or 13.2 and 3.7 Rayleigh), respectively. Our [OIII] flux measurement is about seven times higher than that of 4\(\pm\)2 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\) from D2023. Considering that we chose to observe a bright knot both in the [OIII] and H\(\alpha\) maps, our [OIII] flux measurement agrees acceptably well with that of D2023.
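The Rayleigh equivalents quoted above can be cross-checked with the standard definition 1 R \(=10^{6}/(4\pi)\) photons s\({}^{-1}\) cm\({}^{-2}\) sr\({}^{-1}\); the physical constants below are standard values and are not taken from the paper:

```python
import math

PLANCK_ERG_S = 6.62607e-27   # Planck constant (erg s)
C_CM_S = 2.99792e10          # speed of light (cm/s)
ARCSEC2_PER_SR = (180.0 / math.pi * 3600.0) ** 2   # ~4.25e10 arcsec^2 per sr
RAYLEIGH = 1e6 / (4.0 * math.pi)   # photons s^-1 cm^-2 sr^-1 per Rayleigh

def cgs_to_rayleigh(sb_cgs, lam_a):
    """erg s^-1 cm^-2 arcsec^-2 at wavelength lam_a (Angstrom) -> Rayleigh."""
    photon_energy = PLANCK_ERG_S * C_CM_S / (lam_a * 1e-8)   # erg per photon
    photon_rate = sb_cgs / photon_energy   # photons s^-1 cm^-2 arcsec^-2
    return photon_rate * ARCSEC2_PER_SR / RAYLEIGH

print(cgs_to_rayleigh(7.5e-17, 6563))   # Halpha: ~13.2 R
print(cgs_to_rayleigh(2.7e-17, 5007))   # [OIII]5007: ~3.6 R
```

This recovers the 13.2 R quoted for H\(\alpha\) and \(\approx\)3.6 R for [OIII], matching the quoted 3.7 R to rounding.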
The [OIII] surface brightness of the offset spectrum is 5.1\(\pm\)2.6 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), which is less than twice the D2023 mean measurement. This suggests that the [OIII] filament is probably somewhat more extended than detected in narrow-band imaging. The H\(\alpha\)/[NII]6548, H\(\alpha\)/[NII]6583, and H\(\alpha\)/[NII]6548+6583 line ratios are equal to 3.1, 1.7, and 1.1, respectively. The LSF correctly fits all the spectral lines, which means that the lines are not resolved. The intrinsic dispersion of the emission lines is therefore lower than 137-147 km s\({}^{-1}\). The H\(\alpha\)/H\(\beta\) ratio is \(\geq\) 4.6.

## 4 Discussion

The first question that should be discussed is why this extremely extended structure was discovered only now. The answer was given by the discoverers and is described in the introduction: it is very weak, and thus requires very long exposures in narrow-band imagery, and goes unnoticed when wider filters are used. This emission is unresolved and diffuse, thus the use of a telescope with a larger mirror would not change the detection limit. This discovery demonstrates the importance of using a modest collecting surface, which makes it possible to cover a very large FoV with large modern detectors. The second question is why this structure has been detected in [OIII] and not in H\(\alpha\). The answer is probably linked to the presence of night skylines (geocoronal H\(\alpha\) plus OH lines: 12 lines ranging between -150 and +120 km s\({}^{-1}\) around H\(\alpha\) at rest; Haffner et al. 2003), which are more intense around H\(\alpha\) than around [OIII] 5007, as well as the sodium lines due to possible urban light pollution. All the spectral lines of interest (hydrogen, oxygen, nitrogen, and sulfur) are highly contaminated by the night skylines. Despite this, low-resolution spectroscopy makes it possible to measure some of them better than what could be done in imaging.
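As a sanity check on the resolution figures of Sect. 2 and the R \(\sim\) 219 filter equivalent discussed in this section, the Angstrom-to-km s\({}^{-1}\) conversions follow from \(\Delta v=c\,\Delta\lambda/\lambda\) (rest wavelengths are standard values, not from the paper):

```python
C_KMS = 299_792.458  # speed of light (km/s)

def width_to_kms(dlam_a, lam_a):
    """Convert a line width in Angstrom at wavelength lam_a to km/s."""
    return C_KMS * dlam_a / lam_a

print(width_to_kms(2.95, 4861))    # LSF at Hbeta: ~182 km/s
print(width_to_kms(3.72, 6563))    # LSF at Halpha: ~170 km/s
print(6563 / 30)                   # 30 A filter at Halpha: R ~ 219
print(6563 * (-96.0) / C_KMS)      # Halpha shift at -96 km/s: ~ -2.1 A
```

The last line shows why the measured velocity requires spectroscopy: a \(-96\) km s\({}^{-1}\) shift moves H\(\alpha\) by only \(\sim\)2 A, far below the width of a 30 A imaging filter.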
When we consider a H\(\alpha\) flux of \(\sim\)13 Rayleigh, which is rather high, we estimate that the night skylines passing through an FWHM \(\sim\)30 A filter centred around H\(\alpha\) at rest (which provides an equivalent resolving power of R \(\sim\)219) are more than 12 times more intense than the H\(\alpha\) flux of interest. The third question is the nature of the object. It might be an extragalactic or a Galactic structure.

Figure 1: Extended [OIII] 5007 emission region south-east of M31 from D2023. H\(\alpha\) isocontours from Finkbeiner (VTSS, 2003a) are overplotted in log scale (in red). These contours are described in Fig. 3. The two yellow rectangles indicate the MISTRAL long-slit positions, the one called “onset” is located to the south, and the comparison rectangle, called the “offset” slit, lies to the north, where both H\(\alpha\) and [OIII] emissions are fainter. Their widths are enlarged by a factor 10 for visibility. North is up and east left. The displayed FoV is \(\sim\)5.1\({}^{\circ}\)\(\times\)1.6\({}^{\circ}\).

We discuss the first hypothesis, which seems to us the most improbable, in Appendix B.

### Nearby supernova remnant?

The strong [NII] lines observed here as in other SNRs might be due to nitrogen-enriched gas thrown off by the pre-supernova star. The [NII]6583/[NII]6548 line ratio is \(\sim\)1.8, which is lower than the canonical value of 2.8, but [NII]6548 is blended with H\(\alpha\). The uncertain [SII]6716/[SII]6731 ratio is equal to \(\sim\)1.3. The H\(\alpha\)/[OIII]5007 ratio is \(\sim\)1.5. van den Bergh et al. (1973) published a photographic atlas of 24 galactic supernova remnants (SNRs), 9 of which extend over more than 1\({}^{\circ}\). Most of them display somewhat complex spherical shapes that are embedded in each other (the most famous are S147, the Crab, Vela, and the Cygnus loop), but some others exhibit linear filaments of various lengths (e.g. HB 9 or VRO 42.05.01).
The faint HB 3 SNR displays only one linear filament that is so short that the size of the SNR was not evaluated. Other SNRs are very shallow and diffuse (e.g. KES 45). To distinguish SNRs from HII regions and to measure their physical properties, spectra are needed. Daltabuit et al. (1976) and D'Odorico & Sabbadin (1977) conducted spectroscopic studies of SNRs that were extracted from the atlas of van den Bergh et al. (1973), and we can compare them to our target. The low values of the H\(\alpha\)/[NII] ratio of 1.1 and of the [SII]6716/[SII]6731 ratio of 1.3 indicate that it might belong to an SNR and not to HII regions. The H\(\alpha\)/[NII] line ratios of the reference SNR sample indeed range between 0.40 and 2.18, with a median value of 1.29, and the [SII]6716/[SII]6731 line ratio ranges between 0.50 and 1.44, with a median value of 1.23. Both median values are very similar to our measurements. Daltabuit et al. (1976) showed different correlations between physical quantities (line ratios, diameters, and expansion velocities) that should be understood as the result of shock waves propagating in the interstellar medium. Firstly, the sulphur intensity ratio [SII]6716/[SII]6731 and H\(\alpha\)/[NII] are correlated. Our measurements follow the trend given by this correlation. Secondly, two correlations exist between the intensity line ratios and the diameter of the SNR, which allows us to infer a diameter measurement and thus a distance estimate for our target. The first correlation is between [SII]6716/[SII]6731 and the diameter, the second correlation is between H\(\alpha\)/[NII] and the diameter. When we use these correlations as template values, the sulphur and H\(\alpha\)/[NII] line ratios would give a diameter of 45\(\pm\)5 pc and 35\(\pm\)5 pc, respectively. We would favour the latter value, taking into account that the line ratio determination of H\(\alpha\)/[NII] is more robust than that of sulphur. The two measurements are nevertheless compatible.
When we tentatively extrapolate the shape of the filaments, which are slightly curved, to guess a possible location for the SN progenitor (see Fig. 3), the angular radius of the SNR would be \(\sim\)1.4\(\pm\)0.1\({}^{\circ}\). This leads to a distance of the target of 0.7\(\pm\)0.1 kpc from the Sun. Thirdly, a correlation exists between the H\(\alpha\)/[NII] line ratio and the expansion velocity of the SNR. The expected H\(\alpha\)/[NII] line ratio for an expansion velocity of the SNR of 96 km s\({}^{-1}\) is \(\sim\)1.2\(\pm\)0.1, which is fully compatible with our measurement. Finally, we may wonder where the progenitor of the SNR is. It is probably a low-magnitude evolved star, and if the drawing that sketches the shape of the SNR shell is to be believed, it might be undetectable if it is along the LoS of M31.

### Galactic filament?

Fig. 3 displays the large-scale VTSS H\(\alpha\) filaments in a larger FoV of \(\sim\)8.0\({}^{\circ}\)\(\times\)8.0\({}^{\circ}\), surrounding the \(\sim\)3.5\({}^{\circ}\)\(\times\)5.1\({}^{\circ}\) D2023 [OIII] image. Fig. 1 shows that H\(\alpha\) emission is also present south-east of M31, where [OIII] emission has been detected. This is confirmed by Fig. 3, which shows that the H\(\alpha\) filament that matches the [OIII] emission is extended south-west of M31, to trace almost half a ring, similar to Barnard's loop (Sh 2-276). This figure also exhibits other filaments at larger distances from M31, which expand over much larger FoVs than displayed in Fig. 3.

Figure 2: Wavelength and flux calibrated science spectra (blue histograms) in the blue (left panel) and red (right panel) bands. The expected locations of the blueshifted H\(\beta\), [OIII], H\(\alpha\), [NII], and [SII] lines have been fitted by Gaussian functions (pink lines). The red line is the sum of the Gaussian functions.
The small vertical bars below the spectra indicate the night skylines from UVES (in green) and sodium emission lines due to urban light pollution of human origin (in purple); their widths illustrate their relative intensities. The sum of all those lines is over-plotted in light green. Because the night skylines extend beyond the frame, they have been downscaled by a factor 5 and 50 for the blue and the red spectrum, respectively, and are mirrored symmetrically to the x-axis to fit in the frame (in green). The bold vertical black error bar on the right side of the plots indicates the standard deviation of the continuum between the emission lines. Even though the [OIII] emission has been detected on scales of several degrees, H\(\alpha\) emission has previously been detected on much larger scales. If the [OIII] and H\(\alpha\) emissions are in some way correlated, it is difficult to interpret the [OIII] emission without knowing whether [OIII] continues to follow the H\(\alpha\) emission outside the D2023 FoV. The WHAM and VTSS surveys we described in the introduction have shown that interstellar H\(\alpha\) emission is detected in the whole sky, with intensities that range from thousands of Rayleigh near the Orion nebula and hundreds of Rayleigh in large HII regions (e.g. Barnard's loop) to 0.5 Rayleigh in faint high-latitude regions. Interstellar H\(\alpha\) emission fills the sky with loops, filaments of all sizes, and unresolved sources (\(<\)1\({}^{\circ}\)) or large emission enhancements, including large-scale filaments that are superposed on a diffuse H\(\alpha\) background that is essentially due to the Galactic free-free emission (e.g. Marcelin et al. 1998) in addition to the complex geocoronal emissions (Nossal et al. 2001). WHAM is a spectrometric survey that provides a flux and wavelength profile for each square degree, from which a night skyline-free barycentric velocity is computed. 
The H\(\alpha\) velocity distribution of the northern sky is presented in lambda maps in steps of 20 km s\({}^{-1}\). The maximum intensity of the square-degree region that includes the D2023 [OIII] image is found in the lambda map [-90,-70] km s\({}^{-1}\) (shown in the small inset in the bottom left corner of Fig. 3, where the location of M31 is indicated by a blue circle). We know that WHAM does not extend to velocities lower than -90 km s\({}^{-1}\), and therefore, our measurement of -96\(\pm\)4 km s\({}^{-1}\) agrees very well with it. At this scale, the H\(\alpha\) flux encircled in this blue circle looks homogeneous. Using the VTSS data within \(\sim\)1\({}^{\circ}\)2 around the target, we estimated the mean H\(\alpha\) flux and its standard deviation to be 3.8\(\pm\)0.3 Rayleigh. The H\(\alpha\) flux measured from Fig. 2, which is the difference between the ON and the OFF regions, is \(\sim\)14.5 Rayleigh. When we hypothesise that the flux of the OFF region is given by the mean flux measured from VTSS and calibrated using WHAM (i.e. 3.5 Rayleigh), our flux measurement is \(\sim\)3.8 times higher than the diffuse WHAM H\(\alpha\) emission. We chose to observe a knot in the [OIII] image, however, which may also correspond to a knot in H\(\alpha\). WHAM and VTSS, with their resolutions of 1\({}^{\circ}\)2 and 6\({}^{\circ}\)2, respectively, cannot resolve the tiny FoV covered by our slit, which is \(\lesssim\) 2\(\times\)10\({}^{-5}\) and 300 times smaller, respectively. Many faint filaments have no obvious correspondence to any previously known structures or to the other phases of the interstellar medium revealed by 21 cm radio continuum surveys, IR, or X-ray observations, and they have no readily identifiable origin or source of ionisation. Significant kinematical variations are also observed among various features. 
D2023 argued that the surface brightness distributions of H\(\alpha\) and [OIII] are very different (see 4.2), and this is what we can observe from their images4. An inspection of this H\(\alpha\) image also shows linear features, however, that are much less extended and spectacular than the fine and long [OIII] filaments. Some have the same orientation as the [OIII] filaments. A diffuse ionisation source is probably required to maintain constant intensities along the filaments. Many mechanisms may produce large and faint filaments. The efficiency of leaking Lyman-continuum radiation from the disc is a long-standing problem for ionising a thick layer of the Galaxy and even the Galaxy halo (Bland-Hawthorn et al. 1998), but large-scale, kinematic Galactic structures such as Galactic rotation, superbubbles, chimneys, or worms (Koo et al. 1992) may be at their origin as well. The gas temperature (ranging from 6,000 to 10,000 K) and ionisation states, which are required to select the correct mechanisms, are rarely known because they require time-consuming additional line observations ([NII], [SII], and [OIII] lines). Footnote 4: [https://www.astrobbin.com/1d8ivk/](https://www.astrobbin.com/1d8ivk/) ## 5 Conclusions The faint and extended almost linear filamentary region discovered by D2023 seems very unusual. More precisely, extended [OIII] emission line regions over square degrees like this have rarely been observed, even at redshift zero. This does not mean that they do not exist. They have to be searched for in almost blind survey mode over a large-scale area to explore different physical mechanisms and to lower the cosmic variance. The low [NII]/H\(\alpha\) ratio of our data is most intriguing. It allows us to speculate that this filament may be a piece of an SNR. 
However, the arguments to support this speculation are not strong enough to confirm without ambiguity that this explains the origin of this emission, in particular regarding the curvature of the filament, which made it possible to speculate on the size and distance of the SNR. Further investigations are required to study different Galactic large-scale scenarios. According to our velocity measurements, the structure detected in H\(\alpha\) and in [OIII] belongs to the same layer. It is unclear whether the structure observed by D2023 is different from the detected structure. On the one hand, it is indeed possible that the [OIII] line that we have measured is detectable everywhere where H\(\alpha\) has been seen in VTSS and WHAM, precisely because we have chosen to position the OFF slit in a region of weak emission in H\(\alpha\) and in [OIII]. On the other hand, because the flux we measured is \(\sim\)6.8 times more intense than that detected by D2023, it is possible that we were limited by the sensitivity of the instrument as well as by an insufficient exposure time. Figure 3: Same as Fig. 1, but on a wider FoV of \(\sim\)8.0\({}^{\circ}\)\(\times\)8.0\({}^{\circ}\), showing the large-scale VTSS H\(\alpha\) filaments calibrated with WHAM. The isocontours are given in log scale. They have the following increasing intensities in Rayleigh: 2.60, 2.65, 2.74, 2.92, 3.25, 3.87, 5.03, 7.20, 11.27, 18.89, 33.18, and 59.95. The green circle extrapolates the shape of the [OIII] filament. The centre of the circle indicates the possible location of a foreground central stellar progenitor along the LoS of M31. The bottom left inset shows the WHAM-integrated H\(\alpha\) intensity, which ranges between -90 and -70 km s\({}^{-1}\). The location of M31 at a Galactic latitude of \(\sim\)-22\({}^{\circ}\) is indicated by the blue circle, which is in turn zoomed in the upper left corner. 
Detecting and recognising a weaker structure in [OIII] at another LoS velocity is extremely challenging, even when the two lines of [OIII] 4959 and 5007 are detected simultaneously. In any case, if this structure at another velocity exists, it is not detectable with our instrumental setup. We should aim to multiply the S/N by a factor of about five and to choose a resolving power about twice as high to overcome the very constraining lines of the night sky. A seeing-limited, multiple narrow-band filter survey, including the two most important Balmer lines H\(\alpha\) and H\(\beta\) and auroral lines (oxygen, nitrogen, and sulphur), over a very wide field of \(\sim\)10\({}^{\circ}\)\(\times\)10\({}^{\circ}\) centred around M31 would allow us to make progress in understanding the origin of this object and the physical parameters (temperature and ionisation states) of the filaments, which seem even more extended than what was observed by D2023, when possible correlations between [OIII] and H\(\alpha\) emission are searched for. The H\(\alpha\) observations from VTSS and WHAM around M31 (see Fig. 3) show a very extended structure that is more or less filamentary at the low spatial resolution of WHAM. Unfortunately, in this FoV, neither VTSS nor WHAM explores other emission lines such as [OIII], and even if other lines were available, WHAM lacks a sufficient spatial resolution to resolve the filaments. There are magnificent and promising new opportunities for amateur astronomers armed with time and powerful wide-field observation facilities through small-size telescopes and their intrinsic large FoV, which are today equipped with large detectors at relatively affordable prices and a series of narrow filters whose prices have also been greatly reduced. 
Professional 3D surveys could also be conducted using fast Fourier transform interferometers or Fabry-Perot tunable filters, which would have the advantage of being able to cover the Balmer and auroral lines alone for a wide range of velocities and not the unneeded spectral ranges separating them. Their versatility in modulating the spectral resolution would allow, for example, the [NII] doublet to be separated from the H\(\alpha\) line and the [SII] doublet to be resolved, which is fundamental for all diagnostics; by further increasing the spectral resolution, radial velocities and velocity dispersions can be measured, which are also essential for understanding the observations. The future instrumental revolution will come from the use of multi-spectral detectors such as the Microwave Kinetic Inductance Detectors (MKIDs)5, whose increasing power of resolution already today spans from R\(\sim\)35 at 2540 Å to \(\sim\)14 at 13100 Å. The growing size of their matrix, which will be mosaicable, will enable using all the orders of the tunable filter to cover the spectral range 2540 to 13100 Å during a single scanning cycle of the interferometer. This will vastly increase the merit factor of the instrument. Footnote 5: [https://web.physics.ucsb.edu/~bmazin/mkids.html](https://web.physics.ucsb.edu/~bmazin/mkids.html) ###### Acknowledgements. The authors warmly thank Delphine Russeil for interesting discussions on the origin of filaments. Jerome Schmitt is also kindly thanked for his assistance during the observations and for the setup of the MISTRAL instrument. We are also grateful to the night operators for their assistance. We also thank Isabelle Boisse and Flavien Kiefer for letting us use 2 hours of their SOPHIE time, as well as the directors of the OSU (Jean-Luc Beuzit) and the OIP (Marc Ferrari) for the DDT allocation. 
This research is based on observations made with the T193/MISTRAL spectrograph and imager at Observatoire de Haute Provence (OHP, CNRS), France and has made use of the MISTRAL database, based at OHP, and operated at CeSAM (LAM), Marseille, France.
2307.05930
Chemical freeze-out parametrization with mean field repulsive hadron resonance gas model
We have examined the chemical freeze-out surface of the heavy-ion collision experiments within an interacting hadron resonance gas model. By considering repulsive interaction among hadrons at the mean-field level, we have suitably parameterized the freeze-out surface by fitting the mid-rapidity yield data for the most central collisions, for the collision energies available in the AGS, RHIC (BES), and LHC programs. To suitably account for the repulsive interaction among mesons and (anti-)baryons, we have introduced phenomenological parameters $K_M$ and $K_B$ in the freeze-out parametrization. Although a finite value of these two parameters seems to be necessary to obtain an improved normalized \emph{chi-square}, the effect on the rest of the parameters, like the temperature and the relevant chemical potentials, seems to be within the standard variance.
Sunny Kumar Singh, Nachiketa Sarkar, Deeptak Biswas
2023-07-12T05:52:41Z
http://arxiv.org/abs/2307.05930v1
# Chemical freeze-out parametrization with mean field repulsive hadron resonance gas model ###### Abstract We have examined the chemical freeze-out surface of the heavy-ion collision experiments within an interacting hadron resonance gas model. By considering repulsive interaction among hadrons at the mean-field level, we have suitably parameterized the freeze-out surface by fitting the mid-rapidity yield data for the most central collisions, for the collision energies available in the AGS, RHIC (BES), and LHC programs. To suitably account for the repulsive interaction among mesons and (anti-)baryons, we have introduced phenomenological parameters \(K_{M}\) and \(K_{B}\) in the freeze-out parametrization. Although a finite value of these two parameters seems to be necessary to obtain an improved normalized _chi-square_, the effect on the rest of the parameters, like the temperature and the relevant chemical potentials, seems to be within the standard variance. ## I Introduction The investigation of the phase structure of strongly-interacting matter stands as a pivotal and fundamental inquiry within the realm of ultra-relativistic heavy-ion physics. To comprehend the particle spectra observed in these experiments, statistical thermal models inspired by quantum chromodynamics (QCD) are employed. In particular, the transverse momentum (\(p_{T}\)) integrated rapidity spectra (namely \(dN/dY\)) are fixed at the chemical freeze-out (CFO) boundary, which helps to map the freeze-out surface onto the phase diagram via the CFO parametrization with the temperature (\(T\)) and the baryon chemical potential (\(\mu_{B}\)) [1]. For the past few decades, the Hadron Resonance Gas (HRG) model has been successfully describing the abundance of hadrons in collisions across a wide range of energies, from the Schwerionensynchrotron (SIS) to the Large Hadron Collider (LHC) [2; 3; 4; 5; 6; 7]. 
The success of the HRG model, coupled with the lack of reliable first-principle theories that can provide such parameterization for both high and low baryon density regions of the phase diagram, has firmly established HRG as one of the most widely utilized models in this field. The simplest version of the HRG model is the ideal HRG model (IHRG) [4; 5; 7], where attractive interactions among hadrons in a dilute hadron gas can be approximated by treating higher mass resonances as stable particles. Initially proposed within the relativistic virial expansion framework, using the \(S\)-matrix approach [8], this model allows for the calculation of various thermodynamic quantities [9]. However, the IHRG model encountered discrepancies in different thermodynamic quantities when compared to lattice QCD results [10; 11], particularly in the temperature range above the pseudo-critical value. Additionally, an excess in the pion number density at chemical freeze-out was observed [12], indicating the need to incorporate short-range repulsive interactions between hadrons to achieve more accurate Equations of State (EoS) and realistic estimations of the chemical freeze-out boundary. One of the frequently employed methods to model the short-range repulsion is the Excluded Volume Hadron Resonance Gas (EVHRG) model [12; 13; 14; 15; 16; 17]. In this model, repulsive interactions are taken into account by incorporating an impenetrable volume surrounding the individual hadrons. Several versions of the EVHRG model have been proposed in the literature to determine the strength of short-range repulsive interactions through comparisons with lattice QCD calculations or experimental data. These include the diagonal EVHRG model [12], the cross-terms EVHRG [18; 19], the mass-dependent EVHRG [20; 21], and the flavor-dependent EVHRG model [22]. 
Another phenomenological approach to include the interaction is the Van der Waals Hadron Resonance Gas (vdWHRG) model, which explicitly incorporates both repulsive and attractive interactions between baryons and anti-baryons [23; 24; 25; 26; 27]. The repulsive interactions between the various baryon-baryon and meson-meson pairs can also be incorporated at the mean-field level [28; 29; 30; 31]. The interacting part of the pressure is added along with the ideal one, and the modification is introduced into the statistical model by shifting the energy of each particle by an amount equal to \(U(n)=Kn\), where \(n\) is the total hadron number density. One can incorporate the mean-field coefficients \(K_{M}\) and \(K_{B}\) to scale the repulsive interaction strength among the mesons and baryons, respectively. Recent works [31] have constrained the mean-field coefficient \(K_{B}\) from lattice data of \(\chi_{2}^{B}-\chi_{4}^{B}\) and \(\chi_{2}^{B}-\chi_{6}^{B}\). In another investigation, suitable values of \(K_{B}\) and \(K_{M}\) were estimated by fitting lattice QCD data of bulk observables, cumulants, and the speed of sound [32]. In this study, we have focused on constraining the mean-field model at the chemical freeze-out boundary by comparing it with experimental yields through a \(\chi^{2}\) minimization procedure. Previous applications of this mean-field model at freeze-out involved fixing the repulsive strength parameter \(K\) to explain the data of 200 \(A\)GeV S + Au collisions at CERN-SPS [33]. While earlier studies consistently suggested a value of \(K_{B}=450\) MeV fm\({}^{-3}\), we aim to investigate the collision energy dependence of these phenomenological parameters by analyzing the rapidity spectra. Within this approach, for the first time, we have obtained the collision energy dependence of the mean-field coefficients by parameterizing the chemical freeze-out surface for RHIC and LHC energies. We have organized the paper as follows. In Sec. 
II we give a short description of the ideal HRG and the MFHRG model. In Sec. III we discuss the method we have employed to extract the various parameters in the model. In Sec. IV our results and the discussion of our results are provided in the context of heavy-ion collision experiments. We conclude by giving a summary of the present work in Sec. V. ## II Formalism In the ideal hadron resonance gas model, the thermodynamic potential for each species is [15; 34]: \[\ln Z_{i}^{id}(T,\mu,V)=\pm\frac{Vg}{(2\pi)^{3}}\int d^{3}p\,\ln\left[1\pm e^{-(E_{i}-\mu_{i})/T}\right] \tag{1}\] where the upper (lower) sign corresponds to fermions (bosons). Here \(g\) is the degeneracy factor and \(V\) is the volume. Considering the baryon number (\(B\)), electric charge (\(Q\)), and strangeness (\(S\)), the chemical potential (\(\mu_{i}\)) of the \(i\)th hadron is determined by \(\mu_{i}=Q_{i}\mu_{Q}+S_{i}\mu_{S}+B_{i}\mu_{B}\). The grand thermodynamic potential for the total ensemble is given by: \[\ln Z^{ideal}=\sum_{i}\ln Z_{i}^{ideal} \tag{2}\] The number density of each species can be determined by: \[n_{i}=\frac{T}{V}\left(\frac{\partial\ln Z_{i}}{\partial\mu_{i}}\right)_{V,T}=\frac{g_{i}}{(2\pi)^{3}}\int\frac{d^{3}p}{\exp\left[\left(E_{i}-\mu_{i}\right)/T\right]\pm 1}. \tag{3}\] One can relate the thermal abundance of the detected particles at the chemical freeze-out surface with the corresponding rapidity densities as follows: \[\left.\frac{dN_{i}}{dy}\right|_{\text{Det}}\simeq\left.\frac{dV}{dy}n_{i}^{\text{Tot}}\right|_{\text{Det}} \tag{4}\] The total number density of each species, considering decays from higher resonances, can be computed as follows: \[n_{i}^{Tot}=n_{i}(T,\mu_{B},\mu_{Q},\mu_{S})+\sum_{j}n_{j}(T,\mu_{B},\mu_{Q},\mu_{S})\times\text{Branching Ratio}(j\to i) \tag{5}\] ### Mean-Field HRG (MFHRG) With the inclusion of short-range repulsive interactions between hadrons via a mean-field approach, the effective chemical potential of each particle species 
gets modified by \(\mu_{\text{eff},i}=\mu_{i}-Kn\), where \(K\) is a phenomenological parameter that signifies the strength of the repulsive interaction and \(n\) is the number density of the interacting species of particles [31; 32]. The pressure of the mean-field repulsive model is given by: \[P_{\text{MF}}\left(T,\mu,V\right)=\pm T\sum_{i}\frac{g_{i}}{(2\pi)^{3}}\int d^{3}p\ln\left[1\pm e^{-(E_{i}-\mu_{\text{eff},i})/T}\right]+\mathcal{P}_{M,B,\bar{B}}\left(n_{M,B,\bar{B}}\right) \tag{6}\] Here, \(\mathcal{P}\) is the factor arising from the interacting part, which is necessary to maintain the thermodynamic consistency [31]. \[\mathcal{P}_{B\{\bar{B}\}}\left(n_{B\{\bar{B}\}}\right)=\frac{1}{2}K_{B}n_{B\{\bar{B}\}}^{2},\quad\text{(Baryons)}\qquad\mathcal{P}_{M}\left(n_{M}\right)=\frac{1}{2}K_{M}n_{M}^{2},\quad\text{(Mesons)} \tag{7}\] The above form of the interacting pressure is written considering repulsive interactions among meson-meson and baryon(anti-baryon)-baryon(anti-baryon) pairs. Here the total meson number density \(n_{M}\) is calculated as: \[n_{M}=\sum_{i\in M}\frac{g_{i}}{(2\pi)^{3}}\int\frac{d^{3}p}{\exp\left[\left(E_{i}-\mu_{\text{eff},i}\right)/T\right]-1}. \tag{8}\] For mesons, \(\mu_{\text{eff},i}=\mu_{i}-K_{M}n_{M}\), where \(K_{M}\) signifies the strength of the repulsive interactions among the meson-meson pairs. We have a similar equation for the baryon and anti-baryon number densities: \[n_{B\{\bar{B}\}}=\sum_{i\in B\{\bar{B}\}}\frac{g_{i}}{(2\pi)^{3}}\int\frac{d^{3}p}{\exp\left[\left(E_{i}-\mu_{\text{eff},i}\right)/T\right]+1}. \tag{9}\] \(B\) and \(\bar{B}\) imply baryons and anti-baryons, respectively. Here, the effective chemical potential of the \(i\)th (anti-)baryon is \(\mu_{\text{eff},i}=\mu_{i}-K_{B}n_{B\{\bar{B}\}}\). The repulsive interactions among the baryon-baryon and antibaryon-antibaryon pairs are given by the same strength parameter \(K_{B}\). These equations are transcendental in nature, so Eqs. (8) and (9) must be solved self-consistently. 
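As a minimal numerical sketch of this self-consistency, the snippet below solves \(n=n^{id}(\mu-Kn)\) by fixed-point iteration for a single pion-like boson; the mass, temperature, degeneracy, and \(K_{M}\) value are illustrative choices, not the fitted parameters of this work.

```python
import math

HBARC = 197.327  # MeV*fm, to convert MeV^3 densities to fm^-3

def boson_density(T, mu, m, g, pmax=3000.0, steps=3000):
    """Ideal Bose number density (fm^-3) by simple quadrature of Eq. (8),
    n = g/(2 pi^2 (hbar c)^3) * integral p^2 dp / (exp((E - mu)/T) - 1)."""
    dp = pmax / steps
    total = 0.0
    for i in range(1, steps + 1):
        p = i * dp
        E = math.sqrt(p * p + m * m)
        total += p * p / (math.exp((E - mu) / T) - 1.0) * dp
    return g / (2.0 * math.pi ** 2 * HBARC ** 3) * total

def mean_field_density(T, mu, m, g, K, tol=1e-12, itmax=200):
    """Solve n = n_ideal(mu - K*n) by fixed-point iteration:
    the mean-field shift mu_eff = mu - K*n makes the equation transcendental."""
    n = boson_density(T, mu, m, g)  # start from the ideal value
    for _ in range(itmax):
        n_new = boson_density(T, mu - K * n, m, g)
        if abs(n_new - n) < tol:
            return n_new
        n = n_new
    return n

# Pion triplet (m = 138 MeV, g = 3) at T = 150 MeV, mu = 0,
# with an illustrative K_M = 100 MeV fm^3
n_id = boson_density(150.0, 0.0, 138.0, 3.0)
n_mf = mean_field_density(150.0, 0.0, 138.0, 3.0, K=100.0)
```

Because the repulsive shift lowers the effective chemical potential, the self-consistent density always comes out below the ideal one; the iteration converges quickly here since the contraction factor \(K\,\partial n/\partial\mu\) is small.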
## III Method and data analysis The mid-rapidity data of hadron yields \(dN/dY\) were taken from various experiments at 0-5% centrality (most central) and at different energies. These consist of Pb-Pb collisions at the LHC at a collision energy of 2760 GeV [35; 36; 37; 38]. We have also included Au-Au collisions at RHIC at 200, 130, and 62.4 GeV [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54], at RHIC BES at 39, 27, 19.6, 11.5, and 7.7 GeV [55; 56], and at AGS at 4.85 GeV [57; 58; 59; 60; 61; 62; 63; 64; 65]. To extract the chemical freeze-out parameters, i.e., \(T\), \(\mu_{B}\), \(\mu_{S}\), and \(\mu_{Q}\), along with the parameters scaling the strength of the hadron-hadron interaction (\(K_{B}\) and \(K_{M}\)), we have fitted the detected hadron yields with the thermal model estimations. Considering the initial condition of the heavy-ion collision, it is customary practice to fix \(\mu_{Q}\) and \(\mu_{S}\) via two constraint equations. The first constraint is the ratio of net baryon to net charge, which remains fixed throughout the collision process considering the isentropic evolution [66]. \[\frac{\sum_{i}n_{i}(T,\mu_{B},\mu_{S},\mu_{Q},K_{M},K_{B})B_{i}}{\sum_{i}n_{i} (T,\mu_{B},\mu_{S},\mu_{Q},K_{M},K_{B})Q_{i}}=r \tag{10}\] One can evaluate this ratio \(r\) considering the number of neutrons and protons in the incident nuclei. For heavy nuclei like Au-Au and Pb-Pb, this ratio \(r\) is approximately 2.5 [67]. The conservation of strangeness along with the strangeness neutrality imposes another constraint: \[\sum_{i}n_{i}(T,\mu_{B},\mu_{S},\mu_{Q},K_{M},K_{B})S_{i}=0 \tag{11}\] The rest of the parameters are determined by the \(\chi^{2}\) minimization procedure. The \(\chi^{2}\) is defined as: \[\chi^{2}=\sum_{i}\frac{\left(\left.\frac{dN_{i}}{dy}\right|_{\text{Expt}}-\left. 
\frac{dN_{i}}{dy}\right|_{\text{Model}}\right)^{2}}{\sigma_{i}^{2}} \tag{12}\] Here, we would like to emphasize that our present analysis focuses exclusively on data from the most central events of the collisions; thus we have chosen not to incorporate the strangeness suppression factor \(\gamma_{s}\), assuming a state of complete chemical equilibrium. For the present study, we have used data of \(\pi^{\pm},K^{\pm},p,\bar{p},\Lambda,\bar{\Lambda},\Xi^{\pm}\), as these are widely available for most of the collision energies. To optimize numerical efficiency and reduce the number of free parameters, we have fixed \(K_{M}\) at three different values, i.e. 0, 50, and 100 \(\text{MeV fm}^{-3}\). In the considered HRG spectrum, all confirmed hadronic states up to mass 3 GeV have been included, with masses and branching ratios following the Particle Data Group [68]. The statistical and systematic uncertainties of a given data set have been added in quadrature. The variance of the evaluated parameter set for a particular minimization procedure has been calculated from the \(\pm 1\) deviation of the minimized \(\chi^{2}\) per degree of freedom [6]. ## IV Result and discussion ### Variation of freeze-out parameters: We have tabulated the fitted parameter set in Table 1. For convenience, let us first discuss the variation of the mean-field coefficients, as these are the most novel output from our present study. Changes in the other freeze-out parameters are commensurate with the variation in these mean-field coefficients. Extraction of both \(K_{M}\) and \(K_{B}\) becomes numerically challenging due to the slow convergence rate. We have fixed the values of \(K_{M}\) to be 0, 50, and 100 \(\text{MeV fm}^{-3}\) and examined the corresponding values of \(K_{B}\). For a fixed value of \(K_{M}\), \(K_{B}\) increases with the collision energy and remains similar at higher RHIC and LHC energies, as shown in Fig. 1. 
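The fitting objective of Eq. (12), with statistical and systematic uncertainties added in quadrature as described in Sec. III, reduces to a few lines; the yield numbers below are toy values, not the experimental data used in the fit.

```python
def chi_square(data, model, sigma_stat, sigma_sys):
    """Eq. (12): sum of squared residuals, with statistical and
    systematic uncertainties added in quadrature."""
    chi2 = 0.0
    for d, m, s_stat, s_sys in zip(data, model, sigma_stat, sigma_sys):
        sigma2 = s_stat ** 2 + s_sys ** 2
        chi2 += (d - m) ** 2 / sigma2
    return chi2

# Toy example: three species whose model yields agree with the "data"
# within uncertainties (all numbers illustrative)
yields = [733.0, 109.0, 34.0]   # dN/dy "data"
model = [720.0, 112.0, 33.0]    # thermal-model estimate
chi2 = chi_square(yields, model, [10.0, 4.0, 2.0], [20.0, 5.0, 2.0])
```

In the actual fit, `model` would be recomputed from Eqs. (4)-(5) at each trial point \((T,\mu_{B},dV/dy,K_{B})\), with \(\mu_{Q}\) and \(\mu_{S}\) fixed by the constraints of Eqs. (10)-(11) before \(\chi^{2}\) is evaluated.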
A similar saturation of thermal parameters at higher collision energies has been noticed earlier for temperature and chemical potentials [69; 70]. We have found that even for \(K_{M}=0.0\)\(\text{MeV fm}^{-3}\), a non-zero value of \(K_{B}\) helps achieve a better \(\chi^{2}\) per degree of freedom while fitting the yield data. When \(K_{M}\) is increased to 50 and 100 \(\text{MeV fm}^{-3}\), the values of \(K_{B}\) increase. In the context of heavy-ion collisions, the total ensemble of baryons and mesons is connected via constraints like net strangeness neutrality and a fixed net baryon-to-charge ratio. Along with these constraints, the final yield of mesons is predominantly influenced by the decay of various higher-mass baryon resonances [71]. Consequently, imposing a mean-field repulsion in mesons necessitates a higher value of \(K_{B}\) to restrict the baryon abundances, which eventually affects the final yield of mesons and validates the required constraints. Towards the lower collision energy, the medium is mainly baryon-dominated [72], and the effect from the variation of \(K_{M}\) is minimal. However, the system becomes meson dominated with increasing \(\sqrt{s_{NN}}\), and the effect of \(K_{M}\) is much more pronounced. We would like to emphasize that for the range of \(K_{M}\) considered, which spans from 0 to 100 MeV fm\({}^{-3}\), the corresponding \(K_{B}\) values vary between 100 and 800 (considering variances) MeV fm\({}^{-3}\). These specific values were previously explored in a hydrodynamic simulation that incorporated a hadronic equation of state, as mentioned in Ref. [33]. Figure 1: Results from the fitted set of parameters. Blue, green, and red points are results for \(K_{M}\) values of 0, 50, and 100 \(\text{MeV fm}^{-3}\), respectively. 
Furthermore, recent studies conducted using the MFHRG model have also confirmed this range of \(K_{B}\) values as they successfully account for the lattice data of various charge susceptibilities [31, 32, 73]. In the top left panel of Fig. 2, we have shown the variation of the freeze-out temperature with collision energy for the three considered values of the mesonic mean-field coefficient \(K_{M}\) mentioned earlier. For the freeze-out temperature (\(T\)), the difference from considering the three different values of \(K_{M}\) seems to be similar to that for \(K_{B}\). The differences increase towards high collision energies, following the variation of \(K_{B}\). For all three values of \(K_{M}\), the temperature increases with the collision energy and becomes constant around \(160\) MeV near higher BES energies. We want to reiterate here that, although the qualitative behavior is similar to the usual understanding of freeze-out parametrization within the ideal HRG formalism, the temperature value is slightly higher (\(\sim\) 5 MeV) than the ideal HRG result. A finite value of the mean-field repulsion parameter restricts the number density, which in turn produces a higher \(T\) to fit the yields. The collision energy dependence of the baryon chemical potential is shown in the top right panel. The effect of repulsion is almost negligible on the freeze-out values of \(\mu_{B}\). However, at very low collision energy, the effect of \(K_{B}\) seems to induce a higher \(\mu_{B}\), as the medium is dominated by baryons. The chemical potential is shifted by \(K_{B}\)\(n_{B}\), so a higher value of \(K_{B}\) should be accompanied by a higher \(\mu_{B}\) to produce a similar estimation of yields. The general behavior is similar to that of the ideal HRG parametrization. Figure 2: Fitted set of parameters. Blue, green, and red points are results for \(K_{M}\) values of 0, 50, and 100 MeV fm\({}^{-3}\) respectively. 
With higher collision energies the baryon stopping diminishes, so the medium tends to form with lower net charges (\(B\), \(Q\), and \(S\)), which results in lower values of the chemical potentials in the freeze-out parametrization. At lower collision energy this behavior induces a high value of \(\mu_{B}\), which tends to zero at higher RHIC and LHC energies. The strange chemical potential follows the trend of \(\mu_{B}\). A finite \(\mu_{B}\) results in the dominance of the hyperons over the anti-hyperons; on the other hand, the strangeness-neutrality constraint demands the cancellation of the net strangeness arising from the baryon sector with that from the meson sector, which requires \(\mu_{S}\) to be proportional to \(\mu_{B}\). The variance of \(\mu_{S}\) is within the uncertainties for the three values of \(K_{M}\), which indicates that the higher mass strange mesons and baryons have negligible influence from the mean-field repulsion while performing the freeze-out parametrization with yields. The resulting values of the freeze-out volume (presented here as the freeze-out radius) are shown in the bottom right panel. A similar non-monotonic behavior with the collision energy was earlier observed from the chemical freeze-out parametrization with the ideal HRG in Refs. [6; 74]. The interesting observation here is the higher value of the freeze-out radius when we impose a higher value of \(K_{M}\). Higher values of \(K_{M}\) and \(K_{B}\) suppress the number density, which in turn results in a higher value of the freeze-out radius to fit the yields. One can see that among the above-discussed parameters, the variation of the volume with \(K\) is much more prominent. It seems that considering the repulsive interaction affects the value of the freeze-out volume the most, as the yield is directly proportional to the volume. 
Figure 3: \(\sqrt{s_{NN}}\) variation of the yields of (top left) \(\pi^{+}\), (top right) \(K^{+}\), (bottom left) \(p\) and (bottom right) \(\Lambda\). Blue, green, and red points are the results for \(K_{M}\) values of 0, 50, and 100 MeV fm\({}^{-3}\) respectively. Black points are the experimental data. Figure 4: \(\sqrt{s_{NN}}\) variation of the particle to particle ratios \(\pi^{-}/\pi^{+}\) and \(K^{-}/K^{+}\) (top), strange to non-strange meson, \(K^{+}/\pi^{+}\) and \(K^{-}/\pi^{-}\) (middle), non-strange baryon to meson, proton to pion (bottom). Blue, green, and red points are the results for \(K_{M}\) values of 0, 50, and 100 MeV fm\({}^{-3}\) respectively. Black points are the experimental data from Ref. [75]. ### Particle yields from thermal parametrization: To examine the differences in thermal abundances resulting from different values of \(K_{M}\), it would be informative to analyze the variations in yields. Fig. 3 displays the number density of pions, kaons, protons, and lambdas, calculated with the resulting parametrization. The impact of varying \(K_{M}\) is more pronounced for lighter mass pions, while the effect diminishes as the particle mass increases. Baryons with higher masses show negligible variations across the three cases, whereas pions demonstrate more significant alterations when different \(K_{M}\) values are considered. The effective chemical potential \(\mu_{\text{eff},i}=\mu_{i}-Kn_{M,B,\bar{B}}\) is expected to have a significant impact on pions since they carry only electric charge, and the magnitude of \(\mu_{Q}\) is much smaller compared to other chemical potentials. Conversely, the effect of this shift diminishes for strange and non-strange baryons, as their respective chemical potentials have larger magnitudes. It is worth noting that the chemical potentials themselves are modified for different \(K_{M}\) values, contributing to the observed variations. 
In this context, it is important to consider the decay feed-down effect as well. The total pion density receives a significant contribution from the decay of higher-mass meson and baryon resonances. The suppression of these states is also reflected in the final pion abundance, resulting in substantial variations. A similar effect is observed for the lowest-mass strange hadron, the kaon. Baryons, on the other hand, receive contributions from higher-mass baryons that are already thermally suppressed, leading to insignificant variations among the different \(K_{M}\) values. At this juncture, we reiterate that the yield \(dN/dY\) is the product of this thermal density and the freeze-out volume \(dV/dY\). A reverse trend was observed for the freeze-out volume in Fig. 2: a higher value of \(K_{M}\) resulted in a higher freeze-out volume. The cumulative effect of the two ensures the agreement between the yield data and our thermal-model estimation. This indicates that the resulting parameters (especially the freeze-out volume and \(K_{B}\)) depend on each other and on the value of \(K_{M}\); in the present study it is challenging to decouple this systemic dependency.

### Particle ratios:

It is interesting to estimate various particle ratios and compare them with the experimental data. Along with checking the efficacy of our parametrization, this also examines the effect of the various choices of \(K_{M}\) on the thermal yields. Here we discuss some of the important particle ratios from the various sectors. The ratios \(\pi^{-}/\pi^{+}\) and \(K^{-}/K^{+}\) as a function of \(\sqrt{s_{NN}}\) are depicted in the upper panel of Fig. 4. Our parametrization successfully reproduces the observed variation of the experimental data. The pion ratio is greater than unity at lower \(\sqrt{s_{NN}}\) due to the higher abundance of neutrons in the colliding nuclei, which induces an isospin asymmetry favoring \(\pi^{-}\).
However, this asymmetry diminishes at higher RHIC and LHC energies, resulting in similar yields of \(\pi^{-}\) and \(\pi^{+}\). In the case of kaons, the variation follows the trend of \(\mu_{S}\). At lower collision energies, the positively charged kaon (\(K^{+}\)) becomes more abundant than the negatively charged kaon (\(K^{-}\)) to maintain strangeness neutrality. As the collision energy increases, this effect disappears, and the yields of particles and antiparticles become equal at the LHC. The qualitative behavior is the same for all three values of \(K_{M}\); the effect of \(K_{M}\) does not produce a large variation in these ratios. In the context of heavy-ion collisions, the strange to non-strange ratios signify the relative abundance of strangeness and portray the degree of equilibration of the strange sector [76]. Deviations from the equilibrium values have been observed earlier for non-central collisions, which necessitates the use of a strangeness saturation factor \(\gamma_{S}\)[70]. As the kaon and pion are the lightest strange and non-strange mesons, the ratios \(K^{+}/\pi^{+}\) and \(K^{-}/\pi^{-}\) are widely studied within the thermal model. The non-monotonic behavior of \(K^{+}/\pi^{+}\) has been discussed as a signature of thermalization in the strange sector and of a possible initial partonic state [77; 78; 79; 80; 81]. Although these details are beyond the scope of the present thermal model, our parametrization suitably explains the data for all three values of \(K_{M}\) in the middle panel of Fig. 4. There is not much variation among the estimations from the three cases, indicating that these ratios depend only weakly on \(K_{M}\) and \(K_{B}\). We show the proton-to-pion ratio in the bottom panel of Fig. 4.
Since we use two different mean-field coefficients, \(K_{M}\) for the meson sector and \(K_{B}\) for the baryon sector, this ratio portrays their respective effects. We plot \(p/\pi^{+}\) and \(\bar{p}/\pi^{-}\) to nullify the effect of the charge chemical potential. In heavy-ion collisions, the abundance of pions is mainly governed by the temperature, as they are the lowest-mass hadrons, whereas the protons follow the variation of \(\exp(\mu_{B}/T)\). At lower collision energies the medium is dominated by baryons due to baryon stopping, whereas at higher collision energies the system is dominated by mesons, and a change from a baryon-dominated to a meson-dominated freeze-out occurs [72]. This phenomenon explains the variation observed in the \(p/\pi^{+}\) ratio. On the other hand, anti-proton production increases with collision energy, and at high RHIC and LHC energies the two ratios become similar. The value of the \(p/\pi^{+}\) ratio increases with \(K_{M}\) at a given collision energy: a higher value of \(K_{M}\) suppresses the abundance of pions and thus produces a higher ratio. To quantify the impact of the various choices of the repulsive parameters \(K_{M}\) and \(K_{B}\) on the particle ratios, we plot the total proton (\(p+\bar{p}\)) abundance normalized to the total pion (\(\pi^{+}+\pi^{-}\)) abundance in Fig. 5. This ratio is affected more strongly by the choice of \(K_{M}\) than the individual ratios, as it signifies the relative abundance of the lowest-mass baryons to that of the lowest-mass mesons.

Figure 5: \(\sqrt{s_{NN}}\) variation of the total proton to total pion ratio. Blue, green, and red points are the results for \(K_{M}\) values of 0, 50, and 100 MeV fm\({}^{-3}\), respectively. Black points are the experimental data from Ref. [75].
The parametrization for \(K_{M}=0\) appears to agree with the data better than the other choices. In the freeze-out parametrization one should not expect much variation in the baryon yields from the variation of \(K_{B}\), owing to their heavier masses; on the contrary, the pion yields are significantly suppressed for a higher value of \(K_{M}\) due to their lower mass. This produces the variation seen in the ratio \((p+\bar{p})/(\pi^{+}+\pi^{-})\), which increases with \(K_{M}\) at a given collision energy. The difference is much more pronounced at higher collision energies: there the thermal medium is meson dominated, so the different choices of \(K_{M}\) produce a larger effect. Motivated by the fact that the total proton-to-pion ratio varies significantly with the mean-field parameter \(K_{M}\), we have investigated its impact on the ratios of susceptibilities calculated with the freeze-out parametrization. The \(n\)th-order susceptibility is defined as: \[\chi_{x}^{n}=\frac{1}{VT^{3}}\frac{\partial^{n}(\ln Z)}{\partial\left(\frac{ \mu_{x}}{T}\right)^{n}} \tag{13}\] where \(\mu_{x}\) is the chemical potential of the conserved charge \(x\). The susceptibilities are related to the cumulants measured in heavy-ion collisions as: \[VT^{3}\chi_{x}^{n}=C_{n}. \tag{14}\] Since we have fixed \(K_{M}\) and fitted the mean-field coefficient \(K_{B}\), it is interesting to check the variation of the baryon cumulant ratios. We calculate these cumulants within the Boltzmann approximation, which provides a reasonable baseline for the massive hadrons and resonances (except the \(\pi\)) along the chemical freeze-out boundary [71], since \(m_{i}-\mu_{i}\gg T\) at the respective freeze-out parametrizations. Within this approximation we can evaluate the interacting partition function in the Boltzmann limit and calculate \(\chi_{B}^{n}\)[82; 31].
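For orientation, the ideal-HRG Boltzmann baseline can be written in closed form: if \(\mu_{S}\) and \(\mu_{Q}\) are neglected, every \(|B|=1\) state contributes a term proportional to \(\cosh(\mu_{B}/T)\) to the pressure, so the derivatives in Eq. (13) alternate between \(\sinh\) and \(\cosh\) and the species-dependent factors cancel in ratios. A minimal sketch of this baseline only (not the interacting mean-field result):

```python
import math

def ideal_baryon_cumulant_ratios(muB, T):
    """Boltzmann ideal-HRG baryon cumulant ratios with mu_S = mu_Q = 0:
    C1 ~ sinh(muB/T), C2 ~ cosh(muB/T), C3 = C1, C4 = C2, so
    C2/C1 = coth, C3/C2 = tanh, and C4/C2 = 1 at every collision energy."""
    x = muB / T
    return (1.0 / math.tanh(x),  # C2/C1
            math.tanh(x),        # C3/C2
            1.0)                 # C4/C2

for muB, T in ((420.0, 140.0), (25.0, 156.0)):  # rough low/high-energy values (MeV)
    print(muB, T, ideal_baryon_cumulant_ratios(muB, T))
```

This makes explicit the statement below that \(C_{4}/C_{2}=1\) for the ideal HRG at all energies; the deviations in Fig. 6 therefore come entirely from the repulsive interaction.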
The differences arising from the various values of \(K_{M}\) increase as we move to ratios of higher-order cumulants. The effect is negligible for \(C_{2}/C_{1}\), while \(C_{3}/C_{2}\) and \(C_{4}/C_{2}\) decrease as \(K_{M}\) is fixed to higher values. As discussed earlier in Sec. IV.1, a higher value of \(K_{M}\) produces a higher value of \(K_{B}\), which translates into these differences. We note that the ratio \(C_{4}/C_{2}\) equals \(1\) at all collision energies in the ideal HRG case, whereas the impact of the interaction gives rise to the observed variation. As a baseline, we have also plotted the net-proton cumulant estimations from the STAR collaboration [75; 83]. For simplicity, we have not mimicked experimental specifics such as the \(p_{T}\) cut or decay feed-down in the cumulant calculations, although the effects of decay feed-down and the \(p_{T}\) cut-off have been found to be minimal earlier [84; 85; 82]. We reiterate that we calculate the baryon cumulant ratios, which differ from the net-proton ratios; although the qualitative behavior is similar, the quantitative difference between the two increases for higher-order cumulant ratios [75]. The non-monotonic variation of \(C_{4}/C_{2}\) is not well captured in the thermal-model estimation, although the results deviate from the ideal baseline of \(1\). While the \(C_{2}/C_{1}\) and \(C_{3}/C_{2}\) estimations agree with the data for \(K_{M}=50\) MeV fm\({}^{-3}\), there are larger deviations for \(C_{4}/C_{2}\), which seems to match for higher values of \(K_{M}\).

Figure 6: \(\sqrt{s_{NN}}\) variation of the cumulant ratios \(C_{2}/C_{1}\) (top), \(C_{3}/C_{2}\) (middle), and \(C_{4}/C_{2}\) (bottom). Blue, green, and red points are the results for \(K_{M}\) values of 0, 50, and 100 MeV fm\({}^{-3}\), respectively. Black points are the experimental data from Ref. [75].
This behavior suggests that a complete study of the net-proton cumulants with experimental constraints might restrict the variation of both \(K_{M}\) and \(K_{B}\).

## V Summary

Recent advancements in incorporating repulsive interactions between baryons and mesons in the hadron resonance gas (HRG) model have established it as a suitable candidate for providing a bulk description of the QCD medium below the transition temperature. Phenomenological descriptions such as the excluded-volume HRG and van der Waals HRG models introduce parameters such as a hard-core impenetrable radius of the hadrons. The mean-field repulsive HRG model (MFHRG), on the other hand, provides a robust representation of the medium by accounting for a density-dependent interaction strength. However, this model requires parameters \(K_{B}\) and \(K_{M}\) to scale the interaction strength among baryons and mesons, which can be estimated from bulk observables obtained in lattice QCD [31; 32]. It is crucial to apply this mean-field repulsive model to data from heavy-ion collision experiments and assess its effectiveness in comparison with counterparts such as the ideal HRG, evHRG, and vdWHRG. Exploring the chemical freeze-out surface provides a foundation for investigating the collision-energy dependence of the repulsive interaction strength by estimating \(K_{M}\) and \(K_{B}\). To parametrize the chemical freeze-out surface, we utilized the \(p_{T}\)-integrated mid-rapidity yield \(dN/dY\) data for pions, kaons, protons, \(\Lambda\), and \(\Xi\) in the most central collisions, over the collision-energy range available at AGS (4.85 GeV), RHIC-BES, and the LHC (2.76 TeV). Given that the parameters \(K_{B}\) and \(K_{M}\) are interdependent through the relevant constraints and decay feed-down effects, evaluating them independently can lead to large numerical variances.
To address this issue, we fixed \(K_{M}\) at three representative values (0, 50, and 100 MeV fm\({}^{-3}\)) and performed a \(\chi^{2}\) fit to determine the remaining parameters: \(T\), \(\mu_{B}\), \(\mu_{Q}\), \(\mu_{S}\), \(K_{B}\), and the freeze-out radius \(R\). While the values of \(K_{B}\) were found to be finite and influenced the goodness of fit, the other parameters were consistent with those obtained from the ideal HRG model. Notably, \(K_{B}\) increases with collision energy and becomes significantly higher at higher \(\sqrt{s_{NN}}\). It is intriguing that the values of \(K_{B}\) obtained from this freeze-out analysis are similar to those from earlier studies using the mean-field approach. The agreement between the estimation of \(K_{B}\) from lattice-QCD-motivated studies [31; 32; 73; 86] and our analysis underscores the effectiveness of this model in describing the bulk properties of the medium created in heavy-ion collisions. It was important to study the influence of the repulsive interactions on the thermal abundances of the different states. While the effect of finite \(K_{M}\) and \(K_{B}\) values on the number density of massive strange hadrons and baryons was not significant, it played a more prominent role for pions. Particle ratios within the same sector, such as meson-to-meson and baryon-to-baryon ratios, were less affected by variations in \(K_{M}\) and \(K_{B}\); the proton-to-pion ratios, however, exhibited significant variations. Consequently, the total proton to total pion ratio became a subject of investigation, as it appeared to depend strongly on the value of \(K_{M}\). Additionally, we explored the ratios of baryon susceptibilities using this freeze-out parametrization, as these susceptibilities are linked to the net-proton cumulants measured in heavy-ion collisions.
While our freeze-out parametrization was based on yields, there was general agreement between our estimations of the baryon cumulant ratios and the net-proton measurements for the lower orders; discrepancies arose for the fourth-order cumulants. A proper treatment of the cumulant ratios requires accounting for decay feed-down effects and implementing \(p_{T}\) cuts within the framework of this mean-field repulsive HRG model. This will be essential for future studies, particularly in the context of the energies available at the BES-II, HADES, and CBM experiments.

## Acknowledgements

D.B. expresses gratitude to Sayantan Sharma, Aman Kanojia, Somenath Pal, and Hiranmaya Mishra for engaging and fruitful discussions. D.B. would also like to express sincere gratitude for the support received from NISER, Bhubaneswar, with special thanks to A. Jaiswal for the kind assistance and hospitality during the visit, where the majority of this work was performed.
arXiv:2301.08142 (v7)
Real analysis without uncountable sets
Martin Klazar
2023-01-17T16:06:38Z
http://arxiv.org/abs/2301.08142v7

Abstract: HMC sets are hereditarily at most countable sets. We rework a part of analysis of univariate real functions so that it (substantially) uses only HMC sets and present some applications.

1. By integrating functions $f\colon[u,v]_{\mathbb{Q}}=\{a\in\mathbb{Q}\;|\;u\le a\le v\}\to\mathbb{R}$ we carry out, with only HMC sets, Hilbert's proof of transcendence of $\mathrm{e}$.
2. We give a version of Hilbert's proof based on quasi-formal use of power series.
3. We prove, using only HMC sets, Liouville's theorem that transcendental numbers exist.
4. We construct a uniformly continuous function $f\colon[0,1]_{\mathbb{Q}}\to\mathbb{R}$ satisfying for every $a\in[0,1]_{\mathbb{Q}}$ that $f(\frac{1}{\sqrt{2}})=\frac{1}{\sqrt{2}}>f(a)$ and $f'(a)=1$ (the value $f(\frac{1}{\sqrt{2}})$ is of the continuous extension of $f$).
5. In conclusion we ask if FLT can be proven by means of only HMC sets.
# A chapter in Countable Number Theory: the transcendence of Euler's number

###### Abstract

We rework Hilbert's proof of the transcendence of Euler's number \(\mathrm{e}=2.71828\dots\) so that it uses only hereditarily at most countable sets. We achieve this by using only such real functions that are defined on sets of fractions, like for example \([a,b]_{\mathbb{Q}}:=\{c\in\mathbb{Q}\mid a\leq c\leq b\}\). The key tool is Riemann integration of real functions defined on rational intervals \([a,b]_{\mathbb{Q}}\).

## 1 Introduction

Hilbert's proof [5] of the transcendence of Euler's number \(\mathrm{e}=2.71828\dots\) is a gem of Number Theory (and Mathematical Analysis), and in this article we make it also a gem of _Countable Number Theory_, abbreviated CNT. The proof rests on the integral identity, which we prove in Theorem 3.2, that for every \(n=0,1,\dots\), \[\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=n!\;.\] The number \(\mathrm{e}\) is given as the Cauchy sequence of fractions \[\mathrm{e}=\big{(}\sum_{i=0}^{n}\tfrac{1}{i!}\mid n=1,\,2,\,\dots\big{)}=(2,\, 2+\tfrac{1}{2!},\,2+\tfrac{1}{2!}+\tfrac{1}{3!},\,\dots)\] and is a countable, even a hereditarily at most countable, set. One may want to add that \(\mathrm{e}\) is actually the whole equivalence class \(E\) of Cauchy sequences of rational numbers equivalent to the displayed sequence. But \(E\) is uncountable, and in this article we do not want to use uncountable sets. Thus we have a problem with the proof: the integrand in the identity is the uncountable function \[\{(x,\,x^{n}\mathrm{e}^{-x})\mid x\in[0,\,+\infty)\}\;.\] We demonstrate that \(99.99\dots\%\) of the ordered pairs in the integrand are superfluous and are not needed for Hilbert's argument. It suffices to work with the restriction of the integrand to \([0,+\infty)_{\mathbb{Q}}=\{a\in\mathbb{Q}\mid a\geq 0\}\).
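The key identity can be sanity-checked numerically using only rational sample points \(ih\) with rational \(h\), in line with the restriction to \([0,+\infty)_{\mathbb{Q}}\). The following sketch truncates the integral at a point where the tail is negligible for small \(n\):

```python
import math

def gamma_integral(n, upper=60, steps=100000):
    """Trapezoidal approximation of int_0^inf x^n e^{-x} dx over the rational
    nodes i*h, truncated at `upper`; should return roughly n! for small n."""
    h = upper / steps
    f = lambda x: x ** n * math.exp(-x)
    total = 0.5 * (f(0.0) + f(float(upper)))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

for n in range(6):
    print(n, gamma_integral(n), math.factorial(n))
```

Only countably many (indeed rational) arguments are ever evaluated, which is exactly the point of restricting the integrand to \([0,+\infty)_{\mathbb{Q}}\).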
A set \(X\) is _hereditarily at most countable_, abbreviated HMC, if for every \(n=0,1,\dots\) in every chain of memberships \[X=X_{0}\ni X_{1}\ni\dots\ni X_{n}\] the last set \(X_{n}\) is at most countable. In this article we start the project of turning (some parts of) Number Theory into CNT by simplifying proofs of various results so that only HMC sets are used. Next, in [9], we plan to rework in this style Baker's complex-analytic proof [1; 2] of effective bounds on sizes of solutions of Thue (Diophantine) equations. In Section 2 we develop in full detail, but only to the extent necessary for the key integral identity, a version of Riemann integration theory for real functions defined on rational intervals \([a,b]_{\mathbb{Q}}\) and \([a,+\infty)_{\mathbb{Q}}\). With the help of this theory we can present in Section 3 Hilbert's proof in CNT. Section 4 contains concluding and motivational comments.

## 2 Integration of countable real functions

Let \(X\) and \(Y\) be sets. We use the notation \(f\colon X\to Y\) to say that \(f\) is a function from \(X\) to \(Y\colon f\subset X\times Y\) and for every \(x\in X\) there is a unique \(y\in Y\) such that \((x,y)\in f\), written commonly as \(f(x)=y\). We call \(X\) the _definition domain of \(f\)_. The _restriction_ of \(f\colon X\to Y\) to a subset \(Z\subset X\) is the function \(f\,|\,Z\colon Z\to Y\) with the same values \[(f\,|\,Z)(x)=f(x),\ \ x\in Z\;.\] Usually we write just \(f\) instead of \(f\,|\,Z\). The _image_ of a set \(Z\subset X\) by \(f\colon X\to Y\) is the set \[f[Z]:=\{f(x)\;|\;x\in Z\}\subset Y\;.\] We denote by \(\mathbb{N}=\{1,2,\dots\}\) the set of natural numbers, by \(\mathbb{N}_{0}=\{0,1,\dots\}\) the set of nonnegative integers and by \(\mathbb{Z}=\{\dots,-1,0,1,\dots\}\) the ordered integral domain of integers.
For \(n\in\mathbb{N}_{0}\) we set \[[n]:=\{1,\,2,\,\dots,\,n\},\ [0]:=\emptyset\;.\] If it is not said else, letters \(k\), \(l\), \(m\) and \(n\), possibly enriched with indices and/or primes, denote elements of \(\mathbb{N}\). Variables \(m\) and \(n\) may run also in \(\mathbb{N}_{0}\) and \(\mathbb{Z}\). By \(\mathbb{Q}=\{\frac{m}{n}\ |\ m\in\mathbb{Z},\,n\in\mathbb{N}\}\) we denote the ordered field of rational numbers (fractions). We take the arithmetic of this field \[\mathbb{Q}=(\mathbb{Q},\,0,\,1,\,+,\,\cdot,\,<)\] for granted. Letters \(a\), \(b\), \(c\) and \(d\), possibly enriched with indices and/or primes, denote fractions. For \(a,b\in\mathbb{Q}\) we define the interval \[[a,\,b]_{\mathbb{Q}}:=\{c\in\mathbb{Q}\ |\ a\leq c\leq b\}\;,\] and we define similarly other _rational intervals_. More precisely, if \(I\subset\mathbb{R}\) is a real interval with rational endpoints, we set \(I_{\mathbb{Q}}:=I\cap\mathbb{Q}\). In this section we develop a fragment of _Countable Mathematical Analysis_, abbreviated CMA, for functions of the type \[f\colon M\to\mathbb{R}\text{ with }M\subset\mathbb{Q}\;,\] where \(M\) is often a rational interval and \(\mathbb{R}\) is the set of real numbers (we introduce real numbers in Definition 2.1). Each such function is clearly a HMC set. Individual real numbers and countable sets of real numbers are fine, but we want to avoid use of uncountable sets, like for example the (standard) exponential function \(\mathrm{e}^{x}\colon\mathbb{R}\to\mathbb{R}\). We in fact want to avoid use of non-HMC sets, like for example \(\{1,\{\mathrm{e}^{x}\}\}\). 
The interested reader may ask: OK, you want to use only HMC sets, but should not the class of considered countable real functions be broader, should it not consist of functions of the type \[f\colon M\to\mathbb{R}\text{ with }M\subset\mathbb{R}\text{ at most countable?}\] When you allow only functions with rational arguments, you cannot work naturally with composite functions like \(f(x)=\mathrm{e}^{\mathrm{e}^{x}}\). Our reply is that, at this stage, we do not know. On the one hand, the latter functions are certainly more general and simple enough. On the other hand, functions of the former type (with rational arguments) suffice for carrying out Hilbert's proof, as we demonstrate in the following 40 or so pages: only simple functions of the form \(f(x)=yx^{n}\mathrm{e}^{-x}\) are needed in the proof. The argument with composite functions does not have much force because we show in Definition 2.51 that it is easy to define composition of two uniformly continuous real functions with rational arguments. We do it according to our approach that fractions are basic building blocks and real numbers are ideal elements that are allowed but should be used sparingly, only when they are necessary, for example as values, rather than arguments, of functions. The question really is how much CMA needs non-uniformly-continuous functions. Most of the properties of real numbers, sequences, infinite series, functions, derivatives and integrals presented below are well known. At the same time they are not known at all and are completely new. Their standard versions are our inspiration and are classical. The versions presented here as results in CMA work only with HMC sets and appear -- as far as we know -- for the first time. We say in more detail what "working only with HMC sets" means in the criterion put forth at the end of Subsection 2.3; there we also reiterate in a meta-theorem that our version of Hilbert's proof complies with this criterion.
We have to prove them all anew, with all details. We only skip some routine proofs concerning the ordered field \(\mathbb{R}\). All definitions, lemmas, propositions, theorems and corollaries presented in this section, from Definition 2.1 (real numbers) to Theorem 2.67 (integration by parts in CMA), with the exception of Definition 2.51 (composing UC functions), are needed for the proof of the key integral identity in Theorem 3.2 and for the CNT version of Hilbert's proof in Section 3. Here are the titles of the forthcoming subsections.

1. Real numbers and real sequences
2. Infinite series
3. Countable real functions
4. Derivatives
5. The exponential function
6. Integrals
7. Improper integrals
8. Integration by parts

Sequences and series of real numbers are HMC sets and the first two subsections therefore do not differ much from the standard treatment; the only substantial differences are in Definition 2.1 and Theorem 2.3. Standard real functions are, however, typically uncountable, and in the third subsection we take a sharp turn away from them and begin to build CMA for countable real functions. Two important properties of these functions emerge, uniform continuity and uniformity of derivatives. They ensure that these functions behave in ways similar to the classical case. In the last three subsections we develop Riemann integration of functions of the type \[f\colon[a,b]_{\mathbb{Q}}\to\mathbb{R}\ \ \text{and}\ \ f\colon[a,+\infty)_{ \mathbb{Q}}\to\mathbb{R}\.\] Most of the 66 or so results in this section are involved, in their standard form, also in the classical form of Hilbert's proof [5] but, as is usual, they are not explicitly mentioned there, which will not do here. The interested reader may wonder if one can identify in this set some crucial result on which everything directly or indirectly hinges.
We think it is Theorem 2.36, the CMA version of the popular necessary condition -- vanishing of the derivative at a point -- that a function attains at the point (locally or globally) its extremal value. Unlike in classical Mathematical Analysis, in CMA the formulation and proof of this theorem are nontrivial. ### Real numbers and real sequences A rational sequence \((a_{n})=(a_{1},a_{2},\dots)\subset\mathbb{Q}\) is _Cauchy_ if \[\forall\,k\;\exists\,n_{0}\left(m,\,n\geq n_{0}\Rightarrow|a_{m}-a_{n}|\leq 1 /k\right)\,.\] We try to use systematically non-strict inequalities \(\leq\frac{1}{k}\), \(\leq\frac{1}{n}\), etc. instead of the strict forms \(<\frac{1}{k}\), \(<\frac{1}{n}\), etc. The reason for it is that \(\leq\) is preserved in the sense in which \(<\) is not: \(\forall\,x\geq 0\left(y\leq z\Rightarrow xy\leq xz\right)\) holds, but \(\forall\,x\geq 0\left(y<z\Rightarrow xy<xz\right)\) is not true. **Definition 2.1** (real numbers): _Any Cauchy sequence_ \[(a_{n})\subset\mathbb{Q}\] _of fractions is called a real number. We denote the set of real numbers by \(\mathbb{R}\)._ \(\mathbb{R}\) is an uncountable set but every element in it is a HMC set. Every subset of \(\mathbb{R}\) used in our constructions is at most countable. Letters \(x\), \(y\), \(z\) and \(w\), possibly enriched with indices and/or primes, denote real numbers. We use \(x\) and \(y\) to denote also generic arguments of functions. By \(C\geq 0\) we denote a nonnegative real constant, possibly different at different occasions. We regard two real numbers \((a_{n})\) and \((b_{n})\) as _equal_, written \((a_{n})\sim(b_{n})\), if \[\forall\,k\;\exists\,n_{0}\,\big{(}n\geq n_{0}\Rightarrow|a_{n}-b_{n}|\leq 1 /k\big{)}\;.\] Equivalently, \(n,m\geq n_{0}\Rightarrow|a_{n}-b_{m}|\leq\frac{1}{k}\) because both sequences are Cauchy. The relation \(\sim\) is reflexive, symmetric and transitive. We think of real numbers as of sequences of arbitrarily precise rational approximations. 
Their arithmetic is as follows. **Definition 2.2** (arithmetic of real numbers): _We set_ \[0_{\mathbb{R}}=0:=(0,\,0,\,\dots)\;\mbox{ and }\;1_{\mathbb{R}}=1:=(1,\,1,\, \dots)\;.\] _For real numbers \((a_{n})\) and \((b_{n})\) we define_ \[(a_{n})+(b_{n}):=(a_{n}+b_{n})\;\mbox{ and }\;(a_{n})\cdot(b_{n}):=(a_{n} \cdot b_{n})\;.\] _If \((b_{n})\not\sim(0,0,\dots)\), we set_ \[-(a_{n}):=(-a_{n})\;\mbox{ and }\;(b_{n})^{-1}:=(b_{n}^{-1})=(1/b_{n})\] _(here \(0^{-1}:=0\)). Finally we define_ \[(a_{n})<(b_{n})\iff\exists\,k\;\exists\,n_{0}\,\big{(}n\geq n_{0}\Rightarrow a _{n}\leq b_{n}-1/k\big{)}\;.\] Since \((a_{n})\) and \((b_{n})\) are Cauchy, in the definition of \(<\) we can write equivalently \(n,m\geq n_{0}\Rightarrow a_{n}\leq b_{m}-\frac{1}{k}\). The notation \(x\leq y\) means that \(x<y\) or \(x\sim y\), and similarly for \(x\geq y\). By \(x-y\) we mean \(x+(-y)\) and \(xy\) means \(x\cdot y\). By \(x/y=\frac{x}{y}\) we mean \(xy^{-1}\). We easily check that the neutral elements \(0\) and \(1\) are real numbers, that so are the results of the operations \(+\) and \(\cdot\) and of the inversions \(-(\dots)\) and \((\dots)^{-1}\), and that the relation \(\sim\) is congruent with respect to \(0\), \(1\), \(+\), \(\cdot\), \(-(\dots)\), \((\dots)^{-1}\) and \(<\). It is also not very hard to establish that with respect to \(0\), \(1\), \(+\), \(\cdot\), \(-(\dots)\), \((\dots)^{-1}\), \(<\) and the equality relation \(\sim\), real numbers form an ordered field. We set \(|x|:=\max(\{x,-x\})\), with the maximum taken in the linear order \(<\) on \(\mathbb{R}\). We use many times the _triangle inequality_ which says that for every \(n\)-tuple of real numbers \(x_{1}\), \(\dots\), \(x_{n}\), \[|x_{1}+x_{2}+\dots+x_{n}|\leq|x_{1}|+|x_{2}|+\dots+|x_{n}|\;.\] We omit proofs of all these facts.
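The arithmetic of Definition 2.2 is directly programmable over HMC data, with fractions as the basic building blocks. The following toy sketch (the helper names `e_seq`, `add`, and `less_than` are ours, not the paper's) represents a real number by a generator of rational approximations:

```python
from fractions import Fraction
from itertools import islice, repeat

def e_seq():
    """The Cauchy sequence (sum_{i=0}^n 1/i!) for n = 1, 2, ... defining e."""
    s, term, i = Fraction(2), Fraction(1), 1
    while True:
        yield s
        i += 1
        term /= i
        s += term

def add(x, y):
    """(a_n) + (b_n) := (a_n + b_n), as in Definition 2.2."""
    for a, b in zip(x, y):
        yield a + b

def less_than(x, y, k=1000, n0=200, window=50):
    """Finite-precision check of Definition 2.2's order: does a_n <= b_n - 1/k
    hold for n0 <= n < n0 + window?  (A semi-decision, not a proof.)"""
    xs = list(islice(x, n0 + window))
    ys = list(islice(y, n0 + window))
    return all(xs[n] <= ys[n] - Fraction(1, k) for n in range(n0, n0 + window))

print(list(islice(e_seq(), 5)))                      # 2, 5/2, 8/3, 65/24, 163/60
print(list(islice(add(e_seq(), e_seq()), 3)))        # 4, 5, 16/3
print(less_than(repeat(Fraction(27, 10)), e_seq()))  # 2.7 < e
```

Every object manipulated here (a fraction, a finite prefix of a sequence) is an HMC set, in keeping with the stated criterion.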
We view the ordered field \(\mathbb{Q}\) as embedded in the ordered field \(\mathbb{R}\) via constant sequences, any \(a\in\mathbb{Q}\) is sent to \[(a,\,a,\,\dots)\in\mathbb{R}\;.\] \(\mathbb{R}\) is _Archimedean_, for every \(x\in\mathbb{R}\) there is an \(n\in\mathbb{N}\) such that \(x\leq n\). Similarly, for every \(x\in\mathbb{R}\) there is an \(n\in\mathbb{N}\) such that \(x\geq-n\). In fact, both claims hold with any infinite rational arithmetic progression \[(a+nd),\ a,\,d\in\mathbb{Q}\ \ \mbox{and}\ \ d>0\;,\] in place of \(\mathbb{N}\). Every real sequence \((x_{n})\) is of course a HMC set. We say that \((x_{n})\) is _bounded from above_ if there is a \(y\) such that \(x_{n}\leq y\) for every \(n\); then \(y\) is an _upper bound of \((x_{n})\)_. We define similarly _boundedness from below_ and _lower bounds_. A real sequence \((x_{n})\) is _bounded_ if it is bounded both from above and from below. **Theorem 2.3** (suprema and infima): _Every sequence \((x_{n})\) of real numbers that is bounded from above has the least upper bound, a necessarily unique upper bound \(y\) such that no \(z<y\) is an upper bound of \((x_{n})\). We call \(y\) the supremum of \((x_{n})\) and write_ \[y=\sup x_{n}\;.\] _Similarly, every sequence \((x_{n})\) that is bounded from below has the largest lower bound, a unique lower bound \(y\) such that no \(z>y\) is a lower bound of \((x_{n})\). We call \(y\) the infimum of \((x_{n})\) and write_ \[y=\inf x_{n}\;.\] _Proof._ We only prove the existence of suprema. Infima can be obtained by a similar construction (of sequences \((a_{n})\) below) or can be reduced to suprema by the identity \[\inf(-x_{n})=-\sup x_{n}\;,\] which holds whenever one side of it is defined. Let \((x_{n})\) be a real sequence that is bounded from above. We define rational sequences \((a_{n})\) and \((b_{n})\) as follows. We start with any upper bound \(a_{1}\in\mathbb{N}\) of \((x_{n})\) and set \(b_{1}:=1\). 
Suppose that \(a_{1}\),..., \(a_{n}\) and \(b_{1}\),..., \(b_{n}\) have been already defined. If \(a_{n}-b_{n}\) is still an upper bound of \((x_{n})\), we set \(a_{n+1}:=a_{n}-b_{n}\) and \(b_{n+1}:=b_{n}\). Else we set \(a_{n+1}:=a_{n}\) and \(b_{n+1}:=\frac{1}{1+1/b_{n}}\). Thus every \(a_{n}=(a_{n},a_{n},\dots)\in\mathbb{R}\) is an upper bound of \((x_{n})\) and the values of \(b_{n}\) are \(1\), \(\frac{1}{2}\), \(\frac{1}{3}\) and so on. We claim that \((a_{n})\) is a real number and that \((a_{n})\) is the supremum of \((x_{n})\). To prove it, we first observe that for any \((x_{n})\) and any \(a_{1}\) the "Else..." step when \(b_{n}\) decreases is performed infinitely often. This follows from the above remark on infinite rational arithmetic progressions. So let \(1\leq m_{1}<m_{2}<\dots\) be the indices \(n=m_{k}\) when this step is performed. Every \(a_{n}\) is an upper bound of \((x_{n})\), the sequence \((a_{n})\) is non-increasing and for every \(n\) the fraction \(a_{m_{n}}-\frac{1}{n}\) is not an upper bound of \((x_{n})\). Thus \(m\geq m_{n}\Rightarrow a_{m_{n}}\geq a_{m}>a_{m_{n}}-\frac{1}{n}\) and \((a_{n})\) is Cauchy. So \((a_{n})\) is a real number. We set \(y:=(a_{n})\) and show that \(y\) is the supremum of \((x_{n})\). First we show that \(y\) is an upper bound of \((x_{n})\). If not, \(y<x_{l}=(c_{n})\) for some \(l\). Then there are \(k\) and \(n_{0}\) such that \(n,m\geq n_{0}\Rightarrow a_{n}\leq c_{m}-\frac{1}{k}\). Hence \(n\geq n_{0}\Rightarrow\)
\(a_{n_{0}}\leq c_{n}-\frac{1}{k}\) and \(a_{n_{0}}=(a_{n_{0}},a_{n_{0}},\ldots)<x_{l}\), contradicting that \(a_{n_{0}}\) is an upper bound of \((x_{n})\). Thus \(y\) is an upper bound of \((x_{n})\). Let \(z=(c_{n})<y\) be any real number smaller than \(y\). We finally show that \(z\) is not an upper bound of \((x_{n})\). There are \(k\) and \(n_{0}\) such that \(n,m\geq n_{0}\Rightarrow c_{n}\leq a_{m}-\frac{1}{k}\). The definition of \((a_{n})\) and the infinitude of the indices \(m_{n}\) imply that there are \(l\) and \(m\), \(m\geq n_{0}\), such that \(a_{m}-\frac{1}{k}=(a_{m}-\frac{1}{k},a_{m}-\frac{1}{k},\ldots)<x_{l}=(d_{n})\). Then there are \(k^{\prime}\) and \(n_{1}\geq n_{0}\) such that \(n\geq n_{1}\Rightarrow a_{m}-\frac{1}{k}\leq d_{n}-\frac{1}{k^{\prime}}\). Hence for every \(n\geq n_{1}\), \[c_{n}\leq a_{m}-1/k\leq d_{n}-1/k^{\prime}\;.\] Thus \(z=(c_{n})<(d_{n})=x_{l}\) and \(z\) is not an upper bound of \((x_{n})\). \(\Box\) **Definition 2.4** (limits of real sequences): _We say that a sequence \((x_{n})\) has the limit \(x\), and write that \(\lim x_{n}=x\) or \(\lim_{n\to\infty}x_{n}=x\), if_ \[\forall\,k\;\exists\,n_{0}\left(n\geq n_{0}\Rightarrow|x_{n}-x|\leq 1/k \right)\;.\] If \((x_{n})\) has a limit, we say that the sequence \((x_{n})\)_converges_. Limits are unique. We prove four results on limits of general real sequences. **Lemma 2.5**: _If \((x_{n})\) and \((y_{n})\) are real sequences such that \(\lim y_{n}=0\) and, for some \(n_{0}\),_ \[n\geq n_{0}\Rightarrow|x_{n}|\leq y_{n}\] _then \(\lim x_{n}=0\) as well._ _Proof._ Let a \(k\) be given. We take an \(n_{1}\), \(n_{1}\geq n_{0}\), such that \(n\geq n_{1}\Rightarrow|y_{n}|\leq\frac{1}{k}\). Then \(n\geq n_{1}\Rightarrow|x_{n}|\leq y_{n}\leq\frac{1}{k}\), thus \(|x_{n}-0|\leq\frac{1}{k}\). 
\(\Box\) **Proposition 2.6** (limits of linear combinations): _Let \((x_{n})\) and \((y_{n})\) be convergent real sequences with the respective limits \(z\) and \(w\), and let \(x\) and \(y\) be real numbers. Then the sequence \((xx_{n}+yy_{n})\) converges and has the limit_ \[\lim(xx_{n}+yy_{n})=x\lim x_{n}+y\lim y_{n}=xz+yw\;.\] _Proof._ Let a \(k\) be given. We take an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow|z-x_{n}|\leq\frac{1}{k}\) and \(|w-y_{n}|\leq\frac{1}{k}\). Then for every \(n\geq n_{0}\) the triangle inequality shows that \[|xx_{n}+yy_{n}-(xz+yw)|\leq|x|\cdot|x_{n}-z|+|y|\cdot|y_{n}-w|\leq\frac{|x|+|y |}{k}\] and we see that \(\lim(xx_{n}+yy_{n})=xz+yw\). \(\Box\) **Proposition 2.7** (limits of products): _Let \((x_{n})\) and \((y_{n})\) be convergent real sequences with the respective limits \(z\) and \(w\). Then the sequence \((x_{n}y_{n})\) converges and has the limit_ \[\lim(x_{n}\cdot y_{n})=\lim x_{n}\cdot\lim y_{n}=zw\;.\] Proof.: It is easy to see that every convergent real sequence is bounded and so we can take a constant \(C\) such that \(|y_{n}|\leq C\) for every \(n\). For the given \(k\) we take an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow|z-x_{n}|\leq\frac{1}{k}\) and \(|w-y_{n}|\leq\frac{1}{k}\). Then for every \(n\geq n_{0}\) the triangle inequality shows that \[|x_{n}y_{n}-zw|\leq|x_{n}-z|\cdot|y_{n}|+|z|\cdot|y_{n}-w|\leq\frac{1}{k}\cdot C +|z|\cdot\frac{1}{k}=\frac{C+|z|}{k}\] and we see that \(\lim x_{n}y_{n}=zw\). \(\Box\) **Proposition 2.8** (limits and order): _Let \((x_{n})\) and \((y_{n})\) be convergent real sequences with the respective limits \(z\) and \(w\). Then the implication_ \[\forall\,n\left(x_{n}\leq y_{n}\right)\Rightarrow\lim x_{n}=z\leq\lim y_{n}=w\] _holds._ Proof.: We show equivalently that \(z>w\) implies that \(x_{n}>y_{n}\) for some \(n\), in fact that it even implies that \(x_{n}>y_{m}\) for every \(n,m\geq n_{0}\) for some \(n_{0}\). 
So if \(z>w\), (since \(\mathbb{R}\) is Archimedean) we can take a \(k\) such that \(z-\frac{1}{k}>w+\frac{1}{k}\) and an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow|x_{n}-z|\leq\frac{1}{k}\) and \(|y_{n}-w|\leq\frac{1}{k}\). Then for every \(n,m\geq n_{0}\) we have that \[x_{n}\geq z-1/k>w+1/k\geq y_{m}\] and indeed \(x_{n}>y_{m}\). \(\Box\) The interested reader may want to check that the implication still holds with the weaker assumption that \[\forall\,n_{0}\;\exists\,n\;\exists\,m\left(n,\,m\geq n_{0}\wedge x_{n}\leq y_ {m}\right)\,.\] Recall that a sequence \((u_{n})\subset X\) in a linear order \((X,<)\) is _non-increasing_ if \(u_{1}\geq u_{2}\geq\dots\), that it is _non-decreasing_ if \(u_{1}\leq u_{2}\leq\dots\) and that it is _monotone_ if it is non-increasing or non-decreasing. With strict inequalities we have _decreasing_, respectively _increasing_, sequences. **Proposition 2.9** (on monotone sequences): _Every non-decreasing sequence \((x_{n})\) that is bounded from above is convergent and has the limit_ \[\lim x_{n}=\sup x_{n}.\] _Every non-increasing sequence \((x_{n})\) that is bounded from below is convergent and has the limit_ \[\lim x_{n}=\inf x_{n}.\] Proof.: We suppose that \((x_{n})\) is non-decreasing and bounded from above, the other case is treated similarly. By Theorem 2.3, \(y:=\sup x_{n}\) exists. We show that \[\lim x_{n}=y\;.\] Let a \(k\) be given. Since \(y-\frac{1}{k}\) is not an upper bound of \((x_{n})\), we have that \(x_{l}>y-\frac{1}{k}\) for some \(l\). Then for every \(n\geq l\), \[y-1/k<x_{l}\leq x_{n}\leq y<y+1/k\;.\] Thus \(n\geq l\Rightarrow|x_{n}-y|\leq\frac{1}{k}\) and \(\lim x_{n}=y\). **Corollary 2.10** (limits of geometric sequences): _It is true that_ \[\lim x^{n}\left\{\begin{array}{lcl}=0&\ldots&|x|<1\,,\\ =1&\ldots&x=1\mbox{ and}\\ \mbox{does not exist}&\ldots&\mbox{ else}\;.\end{array}\right.\] Proof.: Let \(|x|<1\). Since \(|x^{n}|=|x|^{n}\) and since \(\lim 0^{n}=0\), we may assume that \(0<x<1\). 
Then the sequence \((x^{n})\) decreases and is bounded from below by \(0\). By the previous proposition, \[\lim x^{n}=\inf x^{n}=:y\geq 0\;.\] Suppose for contradiction that \(y>0\). By the definition of \(y\) there is an \(n\) such that \(x^{n}<y/x\). But then \(x^{n+1}<y\), contradicting that \(y\) is a lower bound of \((x^{n})\). Thus \(y=0\). For \(x=1\) the result holds trivially. If \(x>1\), we easily show by a similar argument using the previous proposition that \((x^{n})\) is not bounded from above and thus \(\lim x^{n}\) does not exist. If \(x\leq-1\) then \(x^{n}\leq-1\) for odd \(n\), \(x^{n}\geq 1\) for even \(n\) and again \(\lim x^{n}\) does not exist. **Corollary 2.11** (exponential vs. factorial growth): _For all positive real numbers \(y\) and \(z\) we have the limit_ \[\lim_{n\to\infty}\frac{yz^{n}}{n!}=0\;.\] _Proof._ Let \(m\) be such that \(m>z\). We consider the sequence \[(x_{n}):=(yz^{m}/m!,\,yz^{m+1}/(m+1)!,\,\ldots\,)\;.\] It has positive terms and decreases. By Proposition 2.9, \[\lim_{n\to\infty}\frac{yz^{n}}{n!}=\lim x_{n}=\inf x_{n}=:y^{\prime}\geq 0\;.\] Suppose for contradiction that \(y^{\prime}>0\). By the definition of \(y^{\prime}\), there is an \(n\) such that \(0<x_{n}<y^{\prime}m/z\). But then \(x_{n+1}=x_{n}z/(m+n)<y^{\prime}\), contradicting that \(y^{\prime}\) is a lower bound of \((x_{n})\). Thus \(y^{\prime}=0\). **Lemma 2.12**: _Every sequence \((u_{n})\subset X\) in any HMC linear order \((X,<)\) has a monotone subsequence._ _Proof._ Let \((X,<)\) be a HMC linear order and \((u_{n})\subset X\) be a sequence in it. We consider the set \[H:=\{n\mid n<m\Rightarrow u_{n}<u_{m}\}\;.\] If \(H\) is infinite, \(H=\{m_{1}<m_{2}<\dots\}\), then \((u_{m_{n}})\) is an increasing subsequence of \((u_{n})\). If \(H\) is finite, we begin with any \(m_{1}>\max(H)\) (or with any \(m_{1}\) if \(H=\emptyset\)). Then, since \(m_{1}\not\in H\), there is an \(m_{2}\) such that \(m_{1}<m_{2}\) and \(u_{m_{1}}\geq u_{m_{2}}\).
Since \(m_{2}\not\in H\), there is an \(m_{3}\) such that \(m_{2}<m_{3}\) and \(u_{m_{2}}\geq u_{m_{3}}\), and so on. This way we obtain a non-increasing subsequence \((u_{m_{n}})\) of \((u_{n})\). \(\Box\) A sequence of real numbers \((x_{n})\) is _Cauchy_ if \[\forall\,k\;\exists\,n_{0}\left(m,\,n\geq n_{0}\Rightarrow|x_{m}-x_{n}|\leq 1/k\right)\,.\] **Theorem 2.13** (metric completeness of \(\mathbb{R}\)): _A sequence \((x_{n})\) is Cauchy if and only if it is convergent._ _Proof._ The implication \(\Leftarrow\). Let \(\lim x_{n}=x\). For the given \(k\) we take an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow|x_{n}-x|\leq\frac{1}{k}\). Then for every \(n,m\geq n_{0}\) the triangle inequality shows that \[|x_{n}-x_{m}|\leq|x_{n}-x|+|x-x_{m}|\leq 2/k\] and we see that \((x_{n})\) is Cauchy. The implication \(\Rightarrow\). Since \((x_{n})\) is Cauchy, it is bounded. By the previous lemma \((x_{n})\) has a monotone subsequence \((x_{m_{n}})\). The subsequence \((x_{m_{n}})\) is bounded as well. By Proposition 2.9, \(\lim x_{m_{n}}=y\) for some \(y\). We show that \(\lim x_{n}=y\). Let a \(k\) be given. We take an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow|x_{m_{n}}-y|\leq\frac{1}{k}\) and that \(n,m\geq n_{0}\Rightarrow|x_{n}-x_{m}|\leq\frac{1}{k}\). Then, since always \(m_{n}\geq n\), both \(|x_{n}-x_{m_{n}}|\leq\frac{1}{k}\) and \(|x_{m_{n}}-y|\leq\frac{1}{k}\) hold for \(n\geq n_{0}\), so \[n\geq n_{0}\Rightarrow|x_{n}-y|\leq|x_{n}-x_{m_{n}}|+|x_{m_{n}}-y|\leq 1/k+1/k=2/k\] and we see that \(\lim x_{n}=y\). \(\Box\) **Theorem 2.14** (Bolzano-Weierstrass): _Any bounded sequence of real numbers has a convergent subsequence._ _Proof._ By Lemma 2.12 any bounded real sequence has a bounded monotone subsequence. This subsequence converges by Proposition 2.9. \(\Box\) ### Infinite series In this subsection we develop a theory of absolutely convergent real series. We need it for the proof of Theorem 2.44 and for bounding \({\rm e}^{x}\).
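The subsequence extraction in Lemma 2.12 is concrete enough to run on a finite prefix of a sequence. The following Python sketch is an informal illustration outside the formal development; as the finite analogue of the set \(H\) we take indices with at least one later index:

```python
def monotone_subsequence(u):
    """Mirror of Lemma 2.12 on a finite list u: H collects indices n
    (with at least one later index) such that u[n] < u[m] for every m > n."""
    N = len(u)
    H = [n for n in range(N - 1) if all(u[n] < u[m] for m in range(n + 1, N))]
    if H:
        # analogue of "H infinite": the entries indexed by H increase
        return [u[n] for n in H]
    # analogue of "H finite" (here empty): repeatedly pick a later,
    # not larger entry, giving a non-increasing subsequence
    sub = [u[0]]
    for x in u[1:]:
        if x <= sub[-1]:
            sub.append(x)
    return sub

print(monotone_subsequence([1, 3, 2, 5, 4]))    # [1, 2] (increasing)
print(monotone_subsequence([9, 7, 8, 3, 5, 1]))  # [9, 7, 3, 1] (non-increasing)
```

The two branches correspond exactly to the two cases for \(H\) in the proof of the lemma.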
**Definition 2.15** (sums of series, absolute convergence): _An (infinite) series \(S\) is any sequence_ \[S=(x_{0},\,x_{1},\,\dots)=\sum_{n=0}^{\infty}x_{n}\] _of real numbers, indexed by \(\mathbb{N}_{0}\). We say that \(S\) converges if the real limit_ \[x:=\lim\sum_{i=0}^{n}x_{i}=\lim(x_{0}+x_{1}+\dots+x_{n})\] _exists. If this is the case, we call \(x\) the sum of the series \(S\) and write that_ \[x=\sum_{n=0}^{\infty}x_{n}\;.\] _We say that the series \(S\) absolutely converges if the series \(\sum_{n=0}^{\infty}|x_{n}|\) converges._ We call the terms \(x_{n}\) in \(S\) _summands_ and index them starting from zero because an important series we use later is \(\sum_{n=0}^{\infty}\frac{x^{n}}{n!}\). The (finite) sums \(x_{0}+x_{1}+\dots+x_{n}\) are called _partial sums_ (of the series). The symbol \(\sum_{n=0}^{\infty}x_{n}\) denotes both the series as a real sequence, and its sum as a real number. It is an ambiguous but standard notation. If we write \(\sum_{n=m}^{\infty}x_{n}\) for some \(m\in\mathbb{N}_{0}\), we mean by this the series \(\sum_{n=0}^{\infty}y_{n}\) such that \(y_{0}=y_{1}=\dots=y_{m-1}=0\) and \(y_{n}=x_{n}\) for \(n\geq m\). **Proposition 2.16** (linearity of sums): _Suppose that \(z\) and \(w\) are real numbers and that \(\sum_{n=0}^{\infty}x_{n}\) and \(\sum_{n=0}^{\infty}y_{n}\) are convergent series with the respective sums \(x\) and \(y\). Then the next series converges and has the sum_ \[\sum_{n=0}^{\infty}(zx_{n}+wy_{n})=z\cdot\sum_{n=0}^{\infty}x_{n}+w\cdot\sum_{n=0}^{\infty}y_{n}\ \ (=zx+wy)\;.\] _Proof._ This follows from the definition of sums of series and from Proposition 2.6. \(\Box\) In view of the ambiguous notation for series, the above identity holds for sums, but in \(\dots=:\dots+\dots\) it also defines the formal operation of linear combination of series as real sequences.
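Definition 2.15 and Proposition 2.16 can be checked on finite prefixes in exact rational arithmetic. In the following Python sketch (an informal illustration; the geometric summands \(1/2^{n}\), \(1/3^{n}\) and the coefficients are our choices), the partial sums of a linear combination agree term by term with the linear combination of the partial sums:

```python
from fractions import Fraction

def partial_sums(terms):
    """Partial sums x_0 + x_1 + ... + x_n of a finite list of summands."""
    s, out = Fraction(0), []
    for x in terms:
        s += x
        out.append(s)
    return out

N = 30
x = [Fraction(1, 2 ** n) for n in range(N)]   # summands of sum of 1/2^n
y = [Fraction(1, 3 ** n) for n in range(N)]   # summands of sum of 1/3^n

# linearity (Proposition 2.16), for each fixed n
z, w = Fraction(2), Fraction(-1)
combined = partial_sums([z * a + w * b for a, b in zip(x, y)])
assert combined == [z * s + w * t for s, t in zip(partial_sums(x), partial_sums(y))]

# the partial sums of sum of 1/2^n approach the sum 2 (here 2 - 1/2^29)
assert 2 - partial_sums(x)[-1] == Fraction(1, 2 ** 29)
```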
**Proposition 2.17** (AC \(\Rightarrow\) C): _If a series \(\sum_{n=0}^{\infty}x_{n}\) absolutely converges then it converges and the inequality_ \[\bigg{|}\sum_{n=0}^{\infty}x_{n}\bigg{|}\leq\sum_{n=0}^{\infty}|x_{n}|\] _for sums holds._ Proof.: Let \(\sum_{n=0}^{\infty}x_{n}\) be an absolutely convergent series. By Theorem 2.13 the sequence \((t_{n})\) of partial sums of \(\sum_{n=0}^{\infty}|x_{n}|\) is Cauchy and for the given \(k\) we can take an \(n_{0}\) such that \[n\geq m\geq n_{0}\Rightarrow\left|\left|x_{m}\right|+\left|x_{m+1}\right|+ \cdots+\left|x_{n}\right|\right|=\left|x_{m}\right|+\left|x_{m+1}\right|+ \cdots+\left|x_{n}\right|\leq 1/k\;.\] The triangle inequality shows that for every \(n\geq m\geq n_{0}\), \[\left|x_{m}+x_{m+1}+\cdots+x_{n}\right|\leq\left|x_{m}\right|+\left|x_{m+1} \right|+\cdots+\left|x_{n}\right|\leq 1/k\] and we see that the sequence \((s_{n})\) of partial sums of \(\sum_{n=0}^{\infty}x_{n}\) is Cauchy. By Theorem 2.13 this series converges. By the triangle inequality, \[\left|s_{n}\right|\leq t_{n}\text{, i.e., }-t_{n}\leq s_{n}\leq t_{n}\;,\] for every \(n\in\mathbb{N}_{0}\). The second claim follows from Propositions 2.6 and 2.8. \(\Box\) _Geometric series_ is the family of series \(\sum_{n=0}^{\infty}x^{n}\). In bounds on \(\mathrm{e}^{x}\) we need slightly more general series. **Proposition 2.18** (on geometric series): _Let \(m\in\mathbb{N}_{0}\) and \(x\) be a real number. The next series converges if and only if \(|x|<1\), and then it has the sum_ \[\sum_{n=m}^{\infty}x^{n}=\frac{x^{m}}{1-x}\;.\] Proof.: For \(x\neq 1\) this series has the partial sums (\(n\geq m\) and we omit the initial dummy zeros) \[x^{m}+x^{m+1}+\cdots+x^{n}=x^{m}\cdot\frac{1-x^{n-m+1}}{1-x}\;.\] By Proposition 2.6 and Corollary 2.10, they have the limit \(\frac{x^{m}}{1-x}\) for \(|x|<1\) and no limit for \(|x|\geq 1\). For \(x=1\) the partial sums are \(x^{m}+x^{m+1}+\cdots+x^{n}=n-m+1\) and have no limit. 
\(\Box\) **Proposition 2.19** (majorants): _Let \(n_{0}\in\mathbb{N}\), \(x>0\) and_ \[S=\sum_{n=0}^{\infty}x_{n}\;\text{ and }\;T=\sum_{n=0}^{\infty}y_{n}\] _be series such that \(T\) converges and \(|x_{n}|\leq xy_{n}\) holds for every \(n\geq n_{0}\). Then \(S\) converges._ Proof.: We consider the new series \(S^{\prime}=\sum_{n=0}^{\infty}x_{n}^{\prime}\) and \(T^{\prime}=\sum_{n=0}^{\infty}y_{n}^{\prime}\), defined by \(x_{n}^{\prime}:=0\) for \(n<n_{0}\), \(x_{n}^{\prime}:=x_{n}\) for \(n\geq n_{0}\) and similarly for \(y_{n}^{\prime}\). Thus \(|x_{n}^{\prime}|\leq xy_{n}^{\prime}\) holds for every \(n\in\mathbb{N}_{0}\). It follows from Definition 2.15 and Proposition 2.6 that \(S\) converges if and only if \(S^{\prime}\) converges, and that the same holds for \(T\) and \(T^{\prime}\). Let \((s_{n})\) and \((t_{n})\) be the partial sums of \(\sum_{n=0}^{\infty}|x_{n}^{\prime}|\) and \(T^{\prime}\), respectively. They are both non-decreasing and \(s_{n}\leq xt_{n}\) holds for every \(n\in\mathbb{N}_{0}\). Since the series \(T^{\prime}\) converges, the sequence \((t_{n})\) converges and has an upper bound \(C\). Thus for every \(n\in\mathbb{N}_{0}\) the upper bound \[s_{n}\leq xt_{n}\leq xC\] holds. By Proposition 2.9, \((s_{n})\) converges. By Proposition 2.17 so does \(S^{\prime}\). Hence the series \(S\) converges. \(\Box\) We prove Theorem 2.44 with the help of the next result. **Theorem 2.20** (Cauchy product of series): _Let_ \[S=\sum_{n=0}^{\infty}x_{n}\ \mbox{ and }\ T=\sum_{n=0}^{\infty}y_{n}\] _be absolutely convergent series with the respective sums \(x\) and \(y\). Then the series_ \[U=\sum_{n=0}^{\infty}z_{n}:=\sum_{n=0}^{\infty}(x_{0}y_{n}+x_{1}y_{n-1}+ \cdots+x_{n}y_{0})\] _converges and has sum \(xy\)._ Proof.: Let \((s_{n})\), \((t_{n})\) and \((u_{n})\) be partial sums of the series \(S\), \(T\) and \(U\), respectively. 
By the triangle inequality (\(n\in\mathbb{N}_{0}\)), \[|u_{n}-xy|\leq\left|u_{n}-\sum_{i=0}^{n}x_{i}\cdot\sum_{j=0}^{n}y_{j}\right|+\left|\sum_{i=0}^{n}x_{i}\cdot\sum_{j=0}^{n}y_{j}-xy\right|=:|A_{n}|+|B_{n}|\;.\] By Proposition 2.7, \(\lim|B_{n}|=0\). We show that also \(\lim|A_{n}|=0\). We denote the sums \(\sum_{n=0}^{\infty}|x_{n}|\) and \(\sum_{n=0}^{\infty}|y_{n}|\) by \(x^{\prime}\) and \(y^{\prime}\), respectively. Since \(u_{n}=z_{0}+z_{1}+\cdots+z_{n}=\sum_{i+j\leq n}x_{i}\cdot y_{j}\) (here \(i,j,n\in\mathbb{N}_{0}\)), we have the bound \[|A_{n}|=\left|-\sum_{\begin{subarray}{c}i,j\leq n\\ i+j>n\end{subarray}}x_{i}\cdot y_{j}\right|\leq\bigg{(}\sum_{n/2<i\leq n}|x_{i}|\bigg{)}\cdot y^{\prime}+x^{\prime}\cdot\bigg{(}\sum_{n/2<j\leq n}|y_{j}|\bigg{)}=:C_{n}\cdot y^{\prime}+x^{\prime}\cdot D_{n}\;.\] By the absolute convergence of \(S\) and \(T\), \[(|x_{0}|+|x_{1}|+\cdots+|x_{n}|)\ \mbox{ and }\ (|y_{0}|+|y_{1}|+\cdots+|y_{n}|)\] are Cauchy sequences and \(\lim C_{n}=\lim D_{n}=0\). Thus (by Proposition 2.6) \(\lim|A_{n}|=0\) and \(\lim u_{n}=xy\). \(\Box\) The series \(U\) is called the _Cauchy product_ of the series \(S\) and \(T\). ### Countable real functions Now we have to leave the standard path of Mathematical Analysis because we want all our functions to be HMC sets. Let \(M\subset\mathbb{Q}\). A function \(f\colon M\to\mathbb{R}\) is _continuous (on \(M\))_ if \[\forall\,a\in M\;\forall\,k\;\exists\,n\,\big{(}b\in M\wedge|a-b|\leq 1/n\Rightarrow|f(a)-f(b)|\leq 1/k\big{)}\;.\] In CMA continuity tames functions insufficiently and we upgrade it to _uniform continuity_, abbreviated UC. This property of functions takes over the role of compactness of definition domains in standard Mathematical Analysis. **Definition 2.21** (uniform continuity): _Let \(M\subset\mathbb{Q}\).
We say that a function \(f\colon M\to\mathbb{R}\) is uniformly continuous (on \(M\)), abbreviated UC, if_ \[\forall\,k\;\exists\,n\,\big{(}a,\,b\in M\wedge|a-b|\leq 1/n\Rightarrow|f(a)-f(b)|\leq 1/k\big{)}\;.\] Note the example of the function \(f\colon[0,1]_{\mathbb{Q}}\to\mathbb{R}\), given by \(f(a)=0\) for \(0\leq a\leq\frac{1}{\sqrt{2}}\) and \(f(a)=1\) for \(\frac{1}{\sqrt{2}}<a\leq 1\), which is continuous (on \([0,1]_{\mathbb{Q}}\)) but not uniformly continuous. **Proposition 2.22** (UC and absolute value): _If \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) is a UC function then its absolute value_ \[|f|\colon M\to\mathbb{R}\] _is also UC._ _Proof._ Let \(M\) and \(f\) be as stated and let a \(k\) be given. We take an \(n\) such that \[a,\,b\in M\wedge|a-b|\leq 1/n\Rightarrow|f(a)-f(b)|\leq 1/k\;.\] Let \(a,b\in M\) with \(|a-b|\leq 1/n\) be arbitrary. If \(f(a)\) and \(f(b)\) have equal signs (or one of them is \(0\)) then \[\big{|}|f|(a)-|f|(b)\big{|}=|(\pm 1)(f(a)-f(b))|\leq 1/k\;.\] If they have different signs then \(|f(a)|,|f(b)|\leq 1/k\) and again \[\big{|}|f|(a)-|f|(b)\big{|}\leq\max(\{|f(a)|,\,|f(b)|\})\leq 1/k\;.\] We see that \(|f|\) is UC. We say that a (HMC) set \(X\) of real numbers is _bounded_ if \[\exists\,C\,\forall\,x\in X\big{(}|x|\leq C\big{)}\;;\] we then say that \(C\) _bounds_ \(X\). For \(M\subset\mathbb{Q}\), we say that a function \(f\colon M\to\mathbb{R}\) is _bounded_ if its image \(f[M]\) is bounded. **Lemma 2.23**: _Suppose that \(M\subset\mathbb{Q}\) is a bounded set and that \(f\colon M\to\mathbb{R}\) is a \(\mathrm{UC}\) function. Then \(f\) is bounded._ _Proof._ By the assumption on \(f\) there is an \(n\) such that \[a,\,b\in M\wedge|a-b|\leq 1/n\Rightarrow|f(a)-f(b)|\leq 1\;.\] Since \(M\) is bounded, there exists a finite subset \(N\subset M\) such that \[\forall\,a\in M\;\exists\,b=b_{a}\in N\left(|a-b|\leq 1/n\right)\,.\] Let \(C\) bound \(|f|[N]\).
Then for every \(a\in M\), \[|f(a)|\leq|f(a)-f(b_{a})|+|f(b_{a})|\leq 1+C\] and \(f\) is bounded. \(\Box\) We show that \(\mathrm{UC}\) is preserved by products. **Proposition 2.24** (UC and products): _Suppose that \(M\subset\mathbb{Q}\) is a bounded set and that \(f,g\colon M\to\mathbb{R}\) are \(\mathrm{UC}\) functions. Their product function_ \[fg\colon M\to\mathbb{R}\] _is \(\mathrm{UC}\) as well._ _Proof._ Let \(M\), \(f\) and \(g\) be as stated and let a \(k\) be given. By the previous lemma we can take a constant \(C\) bounding \(f[M]\) and \(g[M]\). We take an \(n\) such that \(a,b\in M\wedge|a-b|\leq\frac{1}{n}\Rightarrow|f(a)-f(b)|,|g(a)-g(b)|\leq\frac{1}{k}\). Then for every \(a,b\in M\) with \(|a-b|\leq\frac{1}{n}\) we have that \[|(fg)(a)-(fg)(b)|\leq|f(a)|\cdot|g(a)-g(b)|+|g(b)|\cdot|f(a)-f(b)|\leq 2C/k\] and see that \(fg\) is \(\mathrm{UC}\). \(\Box\) For linear combinations the proof is simpler. **Proposition 2.25** (UC and lin. combinations): _Suppose that \(x\) and \(y\) are real numbers, \(M\subset\mathbb{Q}\) and that \(f,g\colon M\to\mathbb{R}\) are \(\mathrm{UC}\) functions. The linear combination_ \[xf+yg\colon M\to\mathbb{R}\] _is \(\mathrm{UC}\) as well._ _Proof._ Let \(x\), \(y\), \(M\), \(f\) and \(g\) be as stated. For a given \(k\) we take an \(n\) such that \(a,b\in M\wedge|a-b|\leq\frac{1}{n}\Rightarrow|f(a)-f(b)|,|g(a)-g(b)|\leq\frac{1}{k}\). Then for every \(a,b\in M\) with \(|a-b|\leq\frac{1}{n}\) we have that \[|(xf+yg)(a)-(xf+yg)(b)|\leq|x|\cdot|f(a)-f(b)|+|y|\cdot|g(a)-g(b)|\leq\frac{|x|+|y|}{k}\] and see that \(xf+yg\) is \(\mathrm{UC}\). \(\Box\) **Corollary 2.26** (\(yx^{n}\) is UC): _For every \(y\), every \(n\in\mathbb{N}_{0}\) and every bounded set \(M\subset\mathbb{Q}\), the function_ \[f(x)=yx^{n}\colon M\to\mathbb{R}\] _is_ UC_._ _Proof._ Let \(M\subset\mathbb{Q}\) be arbitrary. Every constant function \(f(x)=y\) defined on \(M\) is UC and so is the identity function \(f(x)=x\).
The stated result follows from this by repeated applications of Proposition 2.24. More generally, any polynomial function \(p\colon M\to\mathbb{R}\) is UC for any bounded \(M\subset\mathbb{Q}\). We say that a real number \(x\) is a _limit point of a set_\(M\subset\mathbb{Q}\) if \[\forall\,k\;\exists\,a\in M\left(0<|x-a|\leq 1/k\right)\,.\] Equivalently, there is a sequence \((a_{n})\subset M\setminus\{x\}\) such that \(\lim a_{n}=x\). The next simple theorem is crucial for our approach. **Theorem 2.27** (extending UC functions): _Let \(x\) be a real number, \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) be_ UC_. Then for every sequence \((a_{n})\subset M\) with \(\lim a_{n}=x\), the sequence_ \[(f(a_{n}))=(f(a_{1}),\,f(a_{2}),\,\dots)\] _converges. Moreover, if such sequence \((a_{n})\) exists then \(\lim f(a_{n})\) does not depend on \((a_{n})\)._ Proof.: Since \(f\) is UC, for any sequence \((a_{n})\subset M\) converging to \(x\) the sequence of values \((f(a_{n}))\) is Cauchy. In detail, let a \(k\) be given. We take an \(m^{\prime}\) such that for any \(a,b\in M\) with \(|a-b|\leq\frac{1}{m^{\prime}}\) one has that \(|f(a)-f(b)|\leq\frac{1}{k}\). Then, using Theorem 2.13, we take an \(n_{0}\) such that \(m,n\geq n_{0}\Rightarrow|a_{m}-a_{n}|\leq\frac{1}{m^{\prime}}\). Then for every \(m,n\geq n_{0}\) we have that \(|f(a_{m})-f(a_{n})|\leq\frac{1}{k}\). By Theorem 2.13, \(\lim f(a_{n})=y\) for some \(y\). In the same vein, \(\lim(f(a_{n})-f(b_{n}))=0\) for any two sequences \((a_{n}),(b_{n})\subset M\) such that \(\lim a_{n}=\lim b_{n}=x\). Thus if \(y\) is defined, it is unique and does not depend on the choice of \((a_{n})\). In situations described in the theorem, if \(x\not\in M\) but is a limit point of \(M\), the function \(f\colon M\to\mathbb{R}\) extends uniquely to the function \(f\colon M\cup\{x\}\to\mathbb{R}\) by the value \[f(x):=\lim_{n\to\infty}f(a_{n})\in\mathbb{R}\,\] for any sequence \((a_{n})\subset M\) converging to \(x\). 
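Theorem 2.27 is the mechanism by which, for instance, the square function on bounded rationals acquires a value at \(\sqrt{2}\). A small Python sketch in exact rational arithmetic (an informal illustration outside the formal development; the Babylonian iteration is just one choice of a rational sequence converging to \(\sqrt{2}\)):

```python
from fractions import Fraction

def f(a):
    """A UC function on bounded rational arguments (here a |-> a^2)."""
    return a * a

# a rational sequence (a_n) with limit sqrt(2), via x -> (x + 2/x)/2
a, values = Fraction(3, 2), []
for _ in range(6):
    a = (a + 2 / a) / 2
    values.append(f(a))

# the values f(a_n) form a Cauchy sequence; its limit defines the
# extended value f(sqrt(2)) = 2, independently of the chosen sequence
assert abs(values[-1] - 2) < Fraction(1, 10 ** 10)
assert abs(values[-1] - values[-2]) < Fraction(1, 10 ** 10)
```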
**Definition 2.28** (close numbers): _Let \(M\subset\mathbb{Q}\). We say that a real number \(x\) is close to \(M\) if \(x=\lim a_{n}\) for a sequence \((a_{n})\subset M\)._ Thus \(x\) is close to \(M\) if and only if \(x\in M\) or \(x\) is a limit point of \(M\). We see that any UC function \(f\colon M\to\mathbb{R}\) naturally extends to any real number \(x\) close to \(M\). The next theorem shows that in CMA bounded subsets of \(\mathbb{Q}\) play the role of compact subsets of \(\mathbb{R}\) in the classical case. **Theorem 2.29** ("attaining" extrema): _Let \(M\subset\mathbb{Q}\) be a nonempty bounded set and \(f\colon M\to\mathbb{R}\) be a UC function. Then there exist real numbers \(y\) and \(y^{\prime}\) that are close to \(M\) and are such that, with the extended function \(f\), for every real number \(y^{\prime\prime}\) close to \(M\) it is true that_ \[\forall\,x\in M\cup\{y^{\prime\prime}\}\left(f(y)\leq f(x)\leq f(y^{\prime})\right)\,.\] _Hence the extended function \(f\) attains at \(y\) its global minimum value \(f(y)\), and at \(y^{\prime}\) its global maximum value \(f(y^{\prime})\)._ _Proof._ We show that the extended function \(f\) attains a global minimum value; the global maximum is treated similarly. There is a sequence \((a_{n})\) such that \(M=\{a_{n}\ |\ n\in\mathbb{N}\}\). The sequence \((f(a_{n}))\) is bounded by Lemma 2.23. Using Theorem 2.3 we define \[z:=\inf f(a_{n})\;.\] By the definition of infima there is a sequence \((b_{n})\subset M\) such that \(z=\lim f(b_{n})\). By Theorem 2.14 there is a convergent subsequence \((b_{m_{n}})\) of \((b_{n})\) with \(\lim b_{m_{n}}=:y\). Hence \(y\) is close to \(M\) and, with the extended function \(f\), \[f(y)=\lim f(b_{m_{n}})=z\;.\] Since \(z\) is a lower bound of \((f(a_{n}))\), we see by Proposition 2.8 and Theorem 2.27 that \(f(y)\leq f(y^{\prime\prime})\) for every \(y^{\prime\prime}\) close to \(M\).
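The point of Theorem 2.29 is that the extremum may be "attained" only at a point close to \(M\), not in \(M\) itself. In the following Python sketch (an informal illustration; the rational grid and the function are our choices), the infimum over \(M\) is approached at grid points near the irrational \(\sqrt{2}\):

```python
from fractions import Fraction

# M: a bounded set of rationals, enumerated here as a finite grid in [0, 2]
M = [Fraction(k, 1000) for k in range(2001)]

def f(a):
    """UC on the bounded set M; its infimum over M is 0, but the minimum
    is attained only by the extended function, at sqrt(2) close to M."""
    return abs(a * a - 2)

best = min(M, key=f)
assert best == Fraction(1414, 1000)   # the grid point nearest sqrt(2)
assert f(best) < Fraction(1, 100)     # inf of f over M is close to 0
```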
Note that the closure \(\overline{M}\) of \(M\) in \(\mathbb{R}\), consisting of all real numbers close to \(M\), in general is not a HMC set. This theorem plays a key role in the next subsection in proving mean value theorems. At this point the interested reader may feel a bit uncertain about the precise meaning of our claim that we use only HMC sets. Intuitively it looks clear, but is it not true, especially in this subsection, that we often mention the uncountable set \(\mathbb{R}\), for example in every definition of a function \(f\) in the form \(f\colon M\to\mathbb{R}\)? We state the following criterion. In CNT and CMA, the usage of only HMC sets means that every relevant claim about a set can be formalized by a set formula that involves only sets that are provably HMC. For example, the claim that \(x\) is a real number, \(x\in\mathbb{R}\), superficially involves the set \(\mathbb{R}\) but by Definition 2.1 we easily formalize it by the formula asserting that (i) \(x\) is a set of ordered pairs \((y,z)\) and (ii) if \((y,z)\in x\wedge(y,z^{\prime})\in x\) then \(z=z^{\prime}\) and (iii) the set of first coordinates \(y\) equals \(\mathbb{N}\) and (iv) every second coordinate \(z\) is a fraction \(\frac{m}{n}\) with \(m\in\mathbb{Z}\) and \(n\in\mathbb{N}\) and (v) \(x\) as a sequence of fractions is Cauchy. Clearly, this formula, when written out formally, involves only (set variables for) provably HMC sets. Similarly one can formalize any definition of a function \(f\colon M\to\mathbb{R}\) with \(M\subset\mathbb{Q}\). On the other hand, for example the definition of a classical function \(f\colon[0,1]\to\mathbb{R}\) cannot be formalized in this way; no formalization, natural or less natural, can avoid the fact that the first coordinates in pairs in \(f\) form the provably uncountable set \([0,1]\) (not necessarily mentioning it) and hence \(f\) is provably an uncountable set.
**Meta-theorem**.: _The version of Hilbert's proof of transcendence of \(\mathrm{e}\) in \(\mathrm{CNT}\), presented here in this and the next section, complies with the above criterion._ ### Derivatives We proceed to derivatives of countable real functions. This subsection is central for deducing the key integral identity. Uniformity of derivatives emerges as an important notion. **Definition 2.30** (derivative): _Let \(a\in M\subset\mathbb{Q}\), where \(a\) is a limit point of \(M\), and let \(f\colon M\to\mathbb{R}\). If a real number \(y\) satisfies that_ \[\forall\,k\;\exists\,n\left(b\in M\wedge 0<|a-b|\leq 1/n\Rightarrow\left|\frac{f(b)-f(a)}{b-a}-y\right|\leq 1/k\right)\,,\] _we say that \(y\) is the derivative of \(f\) at \(a\) and write that \(f^{\prime}(a)=y\). If \(f^{\prime}(a)\) exists we also say that \(f\) is differentiable at \(a\)._ We allow only finite values of derivatives. For \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) we define the set \[D(f):=\{a\in M\mid a\text{ is a limit point of }M\text{ and }f^{\prime}(a)\text{ exists}\}\;.\] It is the definition domain of the function \[f^{\prime}\colon D(f)\to\mathbb{R}\] that sends any \(a\in D(f)\) to \(f^{\prime}(a)\) and is called the _derivative of \(f\)_. If \(N\subset D(f)\), we say that \(f\) is _differentiable on \(N\)_. Classically, differentiability at \(a\) implies continuity at \(a\), and therefore any classical function that is differentiable on a set is automatically continuous on it. For a CMA counterpart of this result in Proposition 2.32 below we introduce the notion of uniform derivative.
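Before the uniform refinement, Definition 2.30 itself can be illustrated in exact arithmetic: for the cube function the difference quotient expands in closed form, so the defining estimate can be verified directly. A Python sketch (an informal illustration; the point \(a\) and the step sizes are our choices):

```python
from fractions import Fraction

def f(a):
    return a ** 3

a = Fraction(1, 3)
deriv = 3 * a * a          # the expected derivative f'(a) = 3a^2 = 1/3

for j in range(1, 6):
    b = a + Fraction(1, 10 ** j)
    quotient = (f(b) - f(a)) / (b - a)
    # (b^3 - a^3)/(b - a) = a^2 + ab + b^2 = 3a^2 + 3a(b-a) + (b-a)^2,
    # so the quotient approaches 3a^2 as b -> a, as in Definition 2.30
    assert abs(quotient - deriv) == 3 * a * (b - a) + (b - a) ** 2
```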
For \(M\subset\mathbb{Q}\), \(f\colon M\to\mathbb{R}\) and \(a\in D(f)\) we restate the differentiability of \(f\) at \(a\) as the approximation \[f(b)=f(a)+\left(f^{\prime}(a)+\delta_{a,\,f}(b)\right)\cdot(b-a),\ \ b\in M\;,\] where the function \(\delta_{a,f}\colon M\to\mathbb{R}\) satisfies that \(\forall\,k\;\exists\,n\left(b\in M\wedge|b-a|\leq\frac{1}{n}\Rightarrow| \delta_{a,\,f}(b)|\leq\frac{1}{k}\right)\). **Definition 2.31** (uniform derivative): _Let \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\). We say that the derivative_ \[f^{\prime}\colon D(f)\to\mathbb{R}\] _is uniform (on \(D(f)\)) if_ \[\forall\,k\;\exists\,n\;\forall\,a\in D(f)\left(b\in M\wedge|b-a|\leq 1/n \Rightarrow|\delta_{a,\,f}(b)|\leq 1/k\right)\,.\] In words, we require that the convergence of differential ratios to the derivative \(f^{\prime}(a)\) is uniform in \(a\in D(f)\): \[\forall\,k\ \exists\,n\ \forall\,a\in D(f)\bigg{(}b\in M\wedge 0<|a-b|\leq 1/n \Rightarrow\bigg{|}\frac{f(b)-f(a)}{b-a}-f^{\prime}(a)\bigg{|}\leq 1/k\bigg{)}\;.\] **Proposition 2.32** (on uniform derivative): _Let \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) be a function such that the derivative_ \[f^{\prime}\colon D(f)\to\mathbb{R}\] _is uniform. The following hold._ 1. _If_ \(f^{\prime}\) _is bounded then the restriction of_ \(f\) _to_ \(D(f)\) _is_ UC_._ 2. _If_ \(M\) _is bounded and_ \(f^{\prime}\) _is_ UC _then the restriction of_ \(f\) _to_ \(D(f)\) _is_ UC_._ Proof.: Let \(M\) and \(f\) be as stated. 1. Let a \(k\) be given and \(C\) bound \(f^{\prime}[D(f)]\). We take an \(n\), \(n\geq k\), such that the condition in Definition 2.31 holds. Then for every \(a,b\in D(f)\) with \(|a-b|\leq\frac{1}{n}\), \[|f(b)-f(a)|\leq\big{(}|f^{\prime}(a)|+|\delta_{a,f}(b)|\big{)}\cdot|a-b|\leq \frac{C+1}{n}\leq\frac{C+1}{k}\] and we see that \(f\) is UC. 2. If \(M\) is bounded and \(f^{\prime}\) is UC then by Lemma 2.23\(f^{\prime}\) is bounded and we use part 1. 
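For the square function the error term \(\delta_{a,\,f}(b)\) equals \(b-a\) in closed form, independently of \(a\); this is precisely the uniformity of the derivative required in Definition 2.31. A Python sketch (an informal illustration; the sample points are our choices):

```python
from fractions import Fraction

def f(a):
    return a * a

# f(b) = f(a) + (2a + (b - a))(b - a), so delta_{a,f}(b) = b - a:
# the bound in Definition 2.31 holds uniformly in a, over all of Q
for a in [Fraction(-7, 3), Fraction(0), Fraction(5, 2)]:
    for h in [Fraction(1, 10), Fraction(-1, 1000)]:
        b = a + h
        delta = (f(b) - f(a)) / (b - a) - 2 * a
        assert delta == b - a
```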
**Proposition 2.33** (linearity of derivatives in CMA): _Let \(M\subset\mathbb{Q}\), \(x\) and \(y\) be real numbers and let \(f,g\colon M\to\mathbb{R}\) be functions such that their derivatives \(f^{\prime}\) and \(g^{\prime}\) are uniform on \(D(f)\) and \(D(g)\), respectively. Then_ \[D(f)\cap D(g)\subset D(xf+yg)\subset M\;,\] _the derivative_ \[(xf+yg)^{\prime}=xf^{\prime}+yg^{\prime}\] _and is uniform on \(D(f)\cap D(g)\)._ Proof.: Let \(M\), \(x\), \(y\), \(f\) and \(g\) be as stated and let a \(k\) be given. We take an \(n\) such that for any \(a\in D(f)\) and any \(b\in M\) with \(0<|a-b|\leq\frac{1}{n}\) we have \[\bigg{|}\frac{f(b)-f(a)}{b-a}-f^{\prime}(a)\bigg{|}\leq\frac{1}{k}\] and that the analogous inequality holds for the function \(g\). Then for any \(a\) in \(D(f)\cap D(g)\) and any \(b\in M\) with \(0<|a-b|\leq\frac{1}{n}\), \[\bigg{|}\frac{(xf+yg)(b)-(xf+yg)(a)}{b-a}-(xf^{\prime}+yg^{\prime })(a)\bigg{|}\leq\] \[\leq\,|x|\cdot\bigg{|}\frac{f(b)-f(a)}{b-a}-f^{\prime}(a)\bigg{|} +|y|\cdot\bigg{|}\frac{g(b)-g(a)}{b-a}-g^{\prime}(a)\bigg{|}\leq\frac{|x|+|y|} {k}\;.\] So the derivative of \(xf+yg\) equals \(xf^{\prime}+yg^{\prime}\) and is uniform on \(D(f)\cap D(g)\). \(\Box\) Already the classical Leibniz formula for the derivative of the product of two functions at a point is not completely straightforward: in the general form, when infinite derivatives are allowed, one has to assume that one of the functions is continuous at the point. In CMA this tendency is more pronounced and the assumptions below are non-obvious. This causes no problems in applications because we always apply the Leibniz formula to well-behaved functions. 
**Theorem 2.34** (Leibniz formula in CMA): _Let \(M\subset\mathbb{Q}\) and \(f,g\colon M\to\mathbb{R}\) be two functions such that (i) \(f\) and \(g\) are bounded, (ii) one of them is UC and the other has bounded derivative and (iii) the derivatives \(f^{\prime}\) and \(g^{\prime}\) are uniform on \(D(f)\) and \(D(g)\), respectively. Then_ \[D(f)\cap D(g)\subset D(fg)\subset M\;,\] _the derivative_ \[(fg)^{\prime}=f^{\prime}g+fg^{\prime}\] _and is uniform on \(D(f)\cap D(g)\)._ _Proof._ Let \(M\), \(f\) and \(g\) be as stated and let a \(k\) be given. We assume that \(f\) is UC and \(g^{\prime}\) is bounded, the other case is treated similarly. We take an \(n\) such that for any \(a\in D(f)\) and any \(b\in M\) with \(0<|a-b|\leq\frac{1}{n}\) we have \[\left|\frac{f(b)-f(a)}{b-a}-f^{\prime}(a)\right|\leq\frac{1}{k}\;,\] that the analogous inequality holds for \(g\) and that for \(a,b\in M\) with \(|a-b|\leq\frac{1}{n}\) always \(|f(b)-f(a)|\leq\frac{1}{k}\). We also take a constant \(C\) bounding the sets \(f[M]\), \(g[M]\) and \(g^{\prime}[D(g)]\). Then for any \(a\in D(f)\cap D(g)\) and any \(b\in M\) with \(0<|a-b|\leq\frac{1}{n}\), \[\left|\frac{(fg)(b)-(fg)(a)}{b-a}-(f^{\prime}g+fg^{\prime})(a) \right|=\] \[=\left|\frac{f(b)(g(b)-g(a))+(f(b)-f(a))g(a)}{b-a}-f^{\prime}(a) g(a)-f(a)g^{\prime}(a)\right|\] \[\leq|f(b)|\cdot\left|\frac{g(b)-g(a)}{b-a}-g^{\prime}(a)\right|+ |f(b)-f(a)|\cdot|g^{\prime}(a)|+\] \[+\left|\frac{f(b)-f(a)}{b-a}-f^{\prime}(a)\right|\cdot|g(a)|\leq C \cdot\frac{1}{k}+\frac{1}{k}\cdot C+\frac{1}{k}\cdot C=\frac{3C}{k}\;.\] Thus the derivative of \(fg\) is \(f^{\prime}g+fg^{\prime}\) and is uniform on \(D(f)\cap D(g)\). \(\Box\) For the proof of Theorem 3.2 we need two families of uniform derivatives; the first is the following. 
**Corollary 2.35** (derivative of \(yx^{n}\)): _For every \(a<b\) in \(\mathbb{Q}\), real number \(y\) and \(n\in\mathbb{N}_{0}\), the function_ \[f(x):=yx^{n}\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _has for \(n>0\) the derivative_ \[f^{\prime}(x)=nyx^{n-1}\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\;.\] _For \(n=0\), \(f^{\prime}(x)\equiv 0\) identically on \([a,b]_{\mathbb{Q}}\). The derivative \(f^{\prime}(x)\) is uniform on \([a,b]_{\mathbb{Q}}\)._ _Proof._ It is easy to see that for \(f(x)=y\) and \(f(x)=x\) the result holds. Otherwise we use induction on \(n\geq 1\) and apply the previous theorem to the product \(yx^{n}=x\cdot yx^{n-1}\). The assumptions (i), (ii) and (iii) of Theorem 2.34 are satisfied due to the inductive assumption, boundedness of \([a,b]_{\mathbb{Q}}\) and due to Corollary 2.26. In (ii) both combinations of assumptions are actually valid. \(\Box\) In the following theorem we arrive at the heart of the proof of Theorem 3.2. We prove Theorem 3.2 via integration by parts in Theorem 2.67, which follows from the Fundamental Theorem of Analysis in Theorem 2.66. That theorem follows from the Lagrange mean value Theorem 2.38, which in turn follows from its particular case, the Rolle Theorem 2.37. And the Rolle theorem follows from the next theorem, which gives the necessary condition (vanishing of the derivative at the point) for a function to attain its global extreme at a point. It is one of the best known results in Mathematical Analysis, but its CMA version here appears for the first time. **Theorem 2.36** (global extremes and derivative): _Let \(c<d\) be in \(\mathbb{Q}\), \(x\) be a real number with \(c<x<d\) and \(f\colon[c,d]_{\mathbb{Q}}\to\mathbb{R}\) be a function such that the derivative_ \[f^{\prime}\colon D(f)=[c,\,d]_{\mathbb{Q}}\to\mathbb{R}\] _is uniform, \(\mathrm{UC}\) and \(f^{\prime}(x)\neq 0\). 
Then \(f\) does not attain at \(x\) its global extreme, i.e.,_ \[\exists\,b,\,b^{\prime}\in[c,\,d]_{\mathbb{Q}}\left(f(b)<f(x)<f(b^{\prime}) \right)\,.\] _The values \(f(x)\) and \(f^{\prime}(x)\) are of the extensions of \(f\) and \(f^{\prime}\), respectively, by Theorem 2.27._ _Proof._ Let \(c\), \(d\), \(x\) and \(f\) be as stated and let \(f^{\prime}(x)<0\), the case that \(f^{\prime}(x)>0\) is treated similarly. Note that \(f\) is UC by part 2 of Proposition 2.32. For \(a,b\in[c,d]_{\mathbb{Q}}\) near \(x\) we use the approximation \[f(b)=f(a)+\left(f^{\prime}(a)+\delta_{a,\,f}(b)\right)\cdot(b-a)\] to obtain the above points \(b\) and \(b^{\prime}\) showing that \(f(x)\) is not a global extremal value of \(f\). We describe in detail how to obtain a \(b\) with \(f(b)<f(x)\), the argument showing that there is a \(b\) with \(f(b)>f(x)\) is similar and is outlined at the end. We take a \(k\) such that \(\frac{1}{k}\leq|f^{\prime}(x)|/3\). Since \(f^{\prime}\) is uniform, there is an \(n\) such that \(x+\frac{1}{n}<d\) and that for every \(a\in[c,d]_{\mathbb{Q}}\) the implication \[b\in[c,d]_{\mathbb{Q}}\wedge a\leq b\leq a+1/n\Rightarrow|\delta_{a,\,f}(b)| \leq 1/k\] holds. For every such \(a\) and \(b\) the implication \[f^{\prime}(a)\leq\frac{f^{\prime}(x)}{2}\wedge b\geq a+\frac{1}{2n}\Rightarrow \left(f^{\prime}(a)+\delta_{a,\,f}(b)\right)\cdot(b-a)\leq\frac{f^{\prime}(x) }{12n}\] holds as well. By the definition of the values \(f(x)\) and \(f^{\prime}(x)\) (see Theorem 2.27) we can take an \(a\) with \(a>x\) and near enough to \(x\) such that \[a+\frac{1}{n}\leq d\wedge f^{\prime}(a)\leq\frac{f^{\prime}(x)}{2}\wedge f(a)+ \frac{f^{\prime}(x)}{12n}<f(x)\;.\] Finally, we take any fraction \(b\) such that \(a+\frac{1}{2n}\leq b\leq a+\frac{1}{n}\). The two previous implications then give that \[f(b)=f(a)+\left(f^{\prime}(a)+\delta_{a,\,f}(b)\right)\cdot(b-a)\leq f(a)+ \frac{f^{\prime}(x)}{12n}<f(x)\;,\] as desired. 
We explain how to get a \(b\) in \([c,d]_{\mathbb{Q}}\) with \(f(b)>f(x)\). We select a \(k\) as before, replace the first inequality for \(n\) with \(c<x-\frac{1}{n}\), replace the next two ranges for \(b\) with \(a-\frac{1}{n}\leq b\leq a\) and \(b\leq a-\frac{1}{2n}\), respectively, and reverse the final inequality in the second implication to \(\geq-f^{\prime}(x)/12n\). Then we select some \(a<x\) such that \(c\leq a-\frac{1}{n}\), \(f^{\prime}(a)\) is as before but \(f(a)-f^{\prime}(x)/12n>f(x)\). Any fraction \(b\) with \(a-\frac{1}{n}\leq b\leq a-\frac{1}{2n}\) gives the value \(f(b)>f(x)\). \(\Box\) We say that a real number \(x\) is a _two-sided limit point of \(M\subset\mathbb{Q}\)_, or of \(M\subset\mathbb{R}\), if \[\forall\,k\;\exists\,b,\,b^{\prime}\in M\left(x-1/k\leq b^{\prime}<x<b\leq x+ 1/k\right)\,.\] The classical version of Theorem 2.36 is simpler and considerably more general: If \(x\in M\subset\mathbb{R}\), \(x\) is a two-sided limit point of \(M\), \(f\colon M\to\mathbb{R}\) and \(f^{\prime}(x)\neq 0\) (possibly \(f^{\prime}(x)=\pm\infty\)) then \(f\) does not attain at \(x\) its global (or local) extreme. In the proof of Theorem 2.36 we had to cope with the difficulty that \(f^{\prime}(x)\) is defined only as the limit of \(f^{\prime}(a_{n})\) for any \((a_{n})\subset[c,d]_{\mathbb{Q}}\) going to \(x\) and cannot be accessed directly as the limit of differential ratios. Thus we called on the uniformity of \(f^{\prime}\) for help. Another difficulty is that we have to take some \(b\) close to \(a\) but not too close to it, and this is hard to do in general definition domains \(M\subset\mathbb{Q}\) (such as we have in the quoted classical theorem). Fortunately, rational intervals \([c,d]_{\mathbb{Q}}\) suffice for our purposes. It is an intriguing question how much the definition domain of \(f\) in Theorem 2.36 can be generalized but we will not pursue it here. Instead we proceed to the two classical mean value theorems, here in CMA versions. 
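Before the formal statements, a computational sketch of the mean value property they establish (outside the formal development; the function \(f(x)=x^{2}\), the interval, and the bisection routine are illustrative choices): since \(f^{\prime}(x)=2x\) is increasing, bisection on \(f^{\prime}\) locates the point \(x\) with \(f(b)-f(a)=f^{\prime}(x)\cdot(b-a)\), which for this \(f\) is exactly the midpoint \((a+b)/2\).

```python
from fractions import Fraction

def mvt_point(f, fprime, a, b, steps=60):
    """Bisect for an x in (a, b) with fprime(x) = (f(b) - f(a))/(b - a),
    assuming fprime is increasing (true for f(x) = x^2)."""
    target = (f(b) - f(a)) / (b - a)
    lo, hi = a, b
    for _ in range(steps):
        mid = (lo + hi) / 2
        if fprime(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a, b = Fraction(1, 3), Fraction(2, 3)
x = mvt_point(lambda t: t * t, lambda t: 2 * t, a, b)
# for f(x) = x^2 the mean value point is the midpoint (a + b)/2
assert abs(x - (a + b) / 2) < Fraction(1, 10**15)
```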
**Theorem 2.37** (Rolle, Cma): _Let \(a<b\) be in \(\mathbb{Q}\) and \(f\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\) be a function such that \(f(a)=f(b)\) and the derivative_ \[f^{\prime}\colon D(f)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _is uniform and \(\mathrm{UC}\). Then there exists a real number \(x\) (close to \([a,\,b]_{\mathbb{Q}}\)) such that_ \[a<x<b\wedge f^{\prime}(x)=0\;.\] _The value \(f^{\prime}(x)\) is of the extension of \(f^{\prime}\) by Theorem 2.27._ _Proof._ As we know, \(f\) is UC by part 2 of Proposition 2.32. If \(f\) is a constant function with \(f(c)=f(a)=f(b)\) for every \(c\in[a,b]_{\mathbb{Q}}\) then \(f^{\prime}(c)=0\) for every such \(c\) (Corollary 2.35) and we can take any \(x:=c\) with \(a<c<b\). Suppose that \(f(c)>f(a)=f(b)\) for some \(c\) with \(a<c<b\), the other case with \(f(c)<f(a)=f(b)\) is treated similarly. By Theorem 2.29 (and Proposition 2.8) there is an \(x\) with \(a\leq x\leq b\) such that \(f\) attains at \(x\) its global maximum. As \(f(x)\geq f(c)\), it follows that \(a<x<b\). By Theorem 2.36, \(f^{\prime}(x)=0\). \(\Box\) **Theorem 2.38** (Lagrange, Cma): _Let \(a<b\) be in \(\mathbb{Q}\) and \(f\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\) be a function such that the derivative_ \[f^{\prime}\colon D(f)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _is uniform and \(\mathrm{UC}\). Then there exists a real number \(x\) (close to \([a,\,b]_{\mathbb{Q}}\)) such that_ \[a<x<b\wedge f(b)-f(a)=f^{\prime}(x)\cdot(b-a)\;.\] _The value \(f^{\prime}(x)\) is of the extension of \(f^{\prime}\) by Theorem 2.27._ _Proof._ Let \(a\), \(b\) and \(f\) be as stated. We set \(y:=\frac{f(b)-f(a)}{b-a}\) and consider the function \[g=g(x):=f(x)-yx\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\;.\] By Propositions 2.25, 2.33 and Corollaries 2.26 and 2.35, the derivative \[g^{\prime}(x)=f^{\prime}(x)-y\colon D(g)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\;,\] is uniform and UC. Also, \(g(b)-g(a)=f(b)-f(a)-y(b-a)=0\) and \(g(a)=g(b)\). 
We apply Theorem 2.37 to \(g\) and get an \(x\) such that \(a<x<b\) and \(g^{\prime}(x)=f^{\prime}(x)-y=0\). Thus \[f^{\prime}(x)=y=\frac{f(b)-f(a)}{b-a}\;\text{ and }\;f(b)-f(a)=f^{\prime}(x) \cdot(b-a)\;,\] as stated. \(\Box\)

### The exponential function

In this subsection we introduce the countable exponential function and derive some properties of it.

**Definition 2.39** (\(\mathrm{e}^{x}\)): _The exponential function \(\mathrm{e}^{x}=\exp(x)\colon\mathbb{Q}\to\mathbb{R}\) is defined as the sum of the absolutely convergent series_ \[\mathrm{e}^{a}=\exp(a):=\sum_{n=0}^{\infty}\frac{a^{n}}{n!}\;.\]

For every fraction \(a\) and every \(n_{0}\geq 2|a|\), if \(n\geq n_{0}\) then \[\left|\frac{a^{n}}{n!}\right|\leq\frac{|a|^{n_{0}}}{n_{0}!}\left(\frac{1}{2} \right)^{n-n_{0}}=\frac{(2|a|)^{n_{0}}}{n_{0}!}\left(\frac{1}{2}\right)^{n}\;.\] By Propositions 2.17, 2.18 and 2.19, for every \(a\) the series \(\sum_{n=0}^{\infty}a^{n}/n!\) absolutely converges and hence converges. The previous definition is therefore correct.

**Corollary 2.40** (\(\exp\) near \(0\) 1): _It is true that_ \[\forall\,a\in\mathbb{Q}\left(|a|\leq\tfrac{1}{2}\Rightarrow|\mathrm{e}^{a}-1| \leq 2|a|\right)\;.\]

_Proof._ Let \(a\) satisfy that \(|a|\leq\tfrac{1}{2}\). Then by Propositions 2.16 and 2.17, the triangle inequality and Proposition 2.8, \[\left|\mathrm{e}^{a}-1\right|\leq\sum_{n=1}^{\infty}|a|^{n}\stackrel{{\text{Prop. 2.18}}}{{=}}\frac{|a|}{1-|a|}\leq 2|a|\;.\] \(\Box\)

We give this proof in more detail for the following stronger bound.

**Corollary 2.41** (\(\exp\) near \(0\) 2): _It is true that_ \[\forall\,a\in\mathbb{Q}\left(|a|\leq\tfrac{1}{2}\Rightarrow|\mathrm{e}^{a}-1- a|\leq a^{2}\right)\;.\]

_Proof._ Let \(a\) be as stated. 
Then by Propositions 2.16 and 2.17, the triangle inequality and Proposition 2.8, \[\left|\mathrm{e}^{a}-1-a\right|\leq\frac{1}{2}\sum_{n=2}^{\infty}|a|^{n}\stackrel{{\text{Prop. 2.18}}}{{=}}\frac{1}{2}\cdot\frac{a^{2}}{1-|a|}\leq a^{2}\;.\] \(\Box\)

We give the last proof in detail. Consider the series \(S:=\sum_{n=0}^{\infty}a^{n}/n!\) and \(T:=\sum_{n=0}^{\infty}a_{n}\) with \(a_{0}:=1\), \(a_{1}:=a\) and \(a_{n}:=0\) for \(n>1\). By Proposition 2.16, in the sense of sums \[\mathrm{e}^{a}-1-a=S-T=\sum_{n=2}^{\infty}a^{n}/n!\;.\] If \((u_{n})\) are partial sums of the last series and \((v_{n})\) are partial sums of the series \(\sum_{n=2}^{\infty}|a|^{n}\) then by the triangle inequality, \[|u_{n}|\leq v_{n}/2\] for every \(n\in\mathbb{N}_{0}\). Thus by Propositions 2.8, 2.17 and 2.18, \[\left|\mathrm{e}^{a}-1-a\right|=|\lim u_{n}|\leq\lim|u_{n}|\leq\frac{\lim v_{n} }{2}=\frac{a^{2}}{2(1-|a|)}\leq a^{2}\;.\]

**Definition 2.42** (Euler's number): _We define Euler's number \(\mathrm{e}\in\mathbb{R}\) as the value of the exponential function at \(1\),_ \[\mathrm{e}:=\exp(1)=\mathrm{e}^{1}=\sum_{n=0}^{\infty}\frac{1}{n!}=2.71828\ldots\;.\]

In the next section we prove transcendence of this real number. The binomial theorem is well known but can be stated in many ways. We fix the following form.

**Proposition 2.43** (binomial theorem): _Let \(x\) and \(y\) be real numbers and let \(n\in\mathbb{N}_{0}\). Then_ \[(x+y)\cdot(x+y)\cdot\ldots\cdot(x+y)=(x+y)^{n}=\sum_{i=0}^{n}\binom{n}{i}x^{i} y^{n-i}\;.\]

Proof.: Let \(n\in\mathbb{N}_{0}\) and \(i\in\{0,1,\ldots,n\}\). By the first ratio below, the binomial coefficient \[\binom{n}{i}=\frac{n(n-1)\ldots(n-i+1)}{i!}=\frac{n!}{i!(n-i)!}\] counts \(i\)-element subsets of the set \([n]\). Using the distributive law we multiply out the above displayed \(n\)-term product. The monomial \(x^{i}y^{n-i}\) then arises exactly when we select \(x\) in some \(i\) brackets and \(y\) in the remaining \(n-i\) brackets. 
This can be done in \(\binom{n}{i}\) ways and this is the resulting coefficient of \(x^{i}y^{n-i}\). \(\Box\)

**Theorem 2.44** (exponential identity): _For every \(a,b\in\mathbb{Q}\),_ \[\exp(a+b)=\exp(a)\cdot\exp(b)\;.\]

Proof.: Let \(a\) and \(b\) be arbitrary fractions. Each series defining \(\exp(x)\) absolutely converges, so their product can be computed by the Cauchy product of the two series. Using Proposition 2.43 we get that \[\exp(a)\cdot\exp(b)=\sum_{n=0}^{\infty}\frac{a^{n}}{n!}\cdot\sum_{n=0}^{\infty}\frac{b^{n}}{n!}=\sum_{n=0}^{\infty}\sum_{i=0}^{n}\frac{a^{i}}{i!}\cdot\frac{b^{n-i}}{(n-i)!}=\sum_{n=0}^{\infty}\frac{1}{n!}\sum_{i=0}^{n}\binom{n}{i}a^{i}b^{n-i}=\sum_{n=0}^{\infty}\frac{(a+b)^{n}}{n!}=\exp(a+b)\;.\] \(\Box\)

[The statements of Corollaries 2.45 and 2.46 are lost to extraction damage in this copy; Corollary 2.45 provides the bound \(\mathrm{e}^{a}\leq\mathrm{e}^{C}\) for \(|a|\leq C\) used below.]

**Proposition 2.47** (\(\exp(x)\) is UC): _For every bounded set \(M\subset\mathbb{Q}\), the restriction \(\mathrm{e}^{x}\,|\,M\) is UC._

Proof.: Let \(C\) bound \(M\) and let a \(k\) be given. We take an \(n\), \(n\geq 2\), such that \(2\mathrm{e}^{C}\leq n/k\). By Theorem 2.44 and Corollaries 2.40 and 2.45, for every \(a,b\in M\) with \(|a-b|\leq\frac{1}{n}\), \[\left|\mathrm{e}^{b}-\mathrm{e}^{a}\right|=\mathrm{e}^{a}\cdot\left|\mathrm{e}^{b-a}-1\right|\leq\mathrm{e}^{C}\cdot 2|b-a|\leq\frac{2\mathrm{e}^{C}}{n}\leq\frac{1}{k}\;.\] We see that \(\mathrm{e}^{x}\,|\,M\) is UC. \(\Box\)

For every \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) we denote \[-M:=\{a\;|\;-a\in M\}\;\mbox{ and }\;f^{-}\colon\,-M\to\mathbb{R},\;\;f^{-}(a):=f(-a)\;.\] Thus if \(M\) is bounded then so is \(-M\), and if \(f\) is UC then so is \(f^{-}\). From this and the last proposition we get the next corollary.

**Corollary 2.48** (\(\exp(-x)\) is UC): _For every bounded set \(M\subset\mathbb{Q}\), the restriction \(\mathrm{e}^{-x}\,|\,M\) is UC._

_Proof._ Suppose that \(M\subset\mathbb{Q}\) is bounded. Then \(\mathrm{e}^{-x}\,|\,M=\big{(}\mathrm{e}^{x}\,|\,-M\big{)}^{-}\). 
\(\Box\) **Corollary 2.49** (\(yx^{n}\exp(-x)\) is UC): _For every real number \(y\), bounded set \(M\subset\mathbb{Q}\) and every \(n\in\mathbb{N}_{0}\), the restriction of the function_ \[f(x):=yx^{n}\exp(-x)\] _to \(M\) is UC._ _Proof._ By Corollaries 2.48 and 2.26 and by Proposition 2.24. \(\Box\) The second uniform derivative for the proof of Theorem 3.2 is as follows. **Proposition 2.50** (derivative of \(-\exp(-x)\)): _For every \(c<d\) in \(\mathbb{Q}\), the function_ \[-\exp(-x)\colon[c,\,d]_{\mathbb{Q}}\to\mathbb{R}\] _has the uniform derivative_ \[\big{(}-\exp(-x)\big{)}^{\prime}=\exp(-x)\colon[c,\,d]_{\mathbb{Q}}\to\mathbb{ R}\;.\] _Proof._ Let \(f(x):=-\exp(-x)\), \(g(x):=-f(x)\), \(C\) be a constant bounding \([c,d]_{\mathbb{Q}}\) and let \(a,b\in[c,d]_{\mathbb{Q}}\) be two fractions with \(0<|a-b|\leq\frac{1}{n}\), \(n\geq 2\). Then, by Theorem 2.44 and Corollaries 2.41 and 2.45, \[\left|\frac{f(b)-f(a)}{b-a}-g(a)\right|=\left|\mathrm{e}^{-a}\cdot\frac{1-\mathrm{e}^{a-b}}{b-a}-\mathrm{e}^{-a}\right|=\left|\mathrm{e}^{-a}\right|\cdot\left|\frac{\mathrm{e}^{a-b}-1-(a-b)}{b-a}\right|\leq\left|\mathrm{e}^{-a}\right|\cdot|a-b|\leq\frac{\mathrm{e}^{C}}{n}\;.\] This goes to \(0\) with \(n\to\infty\) independently of \(a\in[c,d]_{\mathbb{Q}}\). Hence \(g(a)=f^{\prime}(a)\) and the derivative is uniform. \(\Box\) These results on the function \(\mathrm{e}^{-x}\colon\mathbb{Q}\to\mathbb{R}\) remind us that later (not now, as here we do not need them) we will have to derive results on UC and uniform derivatives of composite functions and of inverse functions. We define the operation of composition of functions with fractional arguments as follows. **Definition 2.51** (composing UC functions): _Suppose that \(M,N\subset\mathbb{Q}\) and that_ \[g\colon N\to\mathbb{R}\ \text{ and }\ f\colon M\to\mathbb{R}\] _are UC functions such that every number \(x\in g[N]\) is close to \(M\). 
Then we define by means of Theorem 2.27 the composite function \(h=f(g)=f\circ g\) by_ \[h\colon N\to\mathbb{R},\ \ h(a):=\lim_{n\to\infty}f(a_{n})\;,\] _where \(a\in N\) and \((a_{n})\subset M\) is any sequence with \(\lim a_{n}=g(a)\)._ In fact, it suffices that only the outer function \(f\) is UC. ### Integrals In this subsection we develop for CMA a fragment of the classical Riemann integration theory. For \(a<b\) in \(\mathbb{Q}\) consider the interval \(I:=[a,b]_{\mathbb{Q}}\). A _partition_ of \(I\) is an \((n+1)\)-tuple \(\overline{a}=(a_{0},a_{1},\ldots,a_{n})\) of fractions \(a_{i}\) such that \(a=a_{0}<a_{1}<\cdots<a_{n}=b\). Its _norm_\(\Delta(\overline{a})\in\mathbb{Q}\) is \[\Delta(\overline{a}):=\max(\{a_{i}-a_{i-1}\mid i\in[n]\})\;.\] A _tagged partition_\(P\) of \(I\) is any pair \[P=(\overline{a},\,\overline{b})\] of a partition \(\overline{a}\) of \(I\) and an \(n\)-tuple \(\overline{b}=(b_{1},\ldots,b_{n})\) of fractions \(b_{i}\in[a_{i-1},a_{i}]_{\mathbb{Q}}\). We set \(\Delta(P):=\Delta(\overline{a})\). For a function \(f\colon I\to\mathbb{R}\) and a tagged partition \(P=(\overline{a},\overline{b})\) of \(I\) we define the corresponding _Riemann sum_\(R(f,P)\in\mathbb{R}\) by \[R(f,\,P):=\sum_{i=1}^{n}f(b_{i})\cdot(a_{i}-a_{i-1})\;.\] The next theorem is fundamental for defining integrals in CMA and we simplify its proof by a lemma. 
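A computational sketch of the Riemann sums just defined (outside the formal development; the function \(f(x)=x^{2}\) on \([0,1]_{\mathbb{Q}}\) and the partitions are our illustrative choices): for tagged partitions of shrinking norm the sums approach a common value, here \(1/3\), no matter how the tags \(b_{i}\) are chosen inside their intervals.

```python
from fractions import Fraction

def riemann_sum(f, pts, tags):
    """R(f, P) for the tagged partition P = (pts, tags)."""
    return sum(f(t) * (pts[i + 1] - pts[i]) for i, t in enumerate(tags))

f = lambda x: x * x
n = 500
pts = [Fraction(i, n) for i in range(n + 1)]              # uniform partition of [0, 1]
left = riemann_sum(f, pts, pts[:-1])                      # tags b_i = a_{i-1}
mid = riemann_sum(f, pts,
                  [(pts[i] + pts[i + 1]) / 2 for i in range(n)])  # midpoint tags
# different tags, same limit: both sums are close to 1/3 for small norm
assert abs(left - Fraction(1, 3)) < Fraction(1, n)
assert abs(mid - Fraction(1, 3)) < Fraction(1, n * n)
```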
**Lemma 2.52**: _Let \(a<b\) be in \(\mathbb{Q}\), \(k\in\mathbb{N}\), \((a_{0},\ldots,a_{n})\) and \((b_{0},\ldots,b_{m})\) be partitions of \([a,b]_{\mathbb{Q}}\) such that_ \[\{a_{0},\,a_{1},\,\ldots,\,a_{n}\}\subset\{b_{0},\,b_{1},\,\ldots,\,b_{m}\}\;,\] _and \((x_{1},\ldots,x_{n})\) and \((y_{1},\ldots,y_{m})\) be real tuples such that for every \(i\in[n]\) and every \(j\in[m]\),_ \[a_{i-1}\leq b_{j-1}<b_{j}\leq a_{i}\Rightarrow|x_{i}-y_{j}|\leq 1/k\;.\] _Then_ \[\bigg{|}\sum_{i=1}^{n}x_{i}\cdot(a_{i}-a_{i-1})-\sum_{j=1}^{m}y_{j}\cdot(b_{j} -b_{j-1})\bigg{|}\leq\frac{b-a}{k}\;.\] Proof.: This bound holds by the triangle inequality if \(n=1\): \[\left|x_{1}\cdot(b-a)-\sum_{j=1}^{m}y_{j}\cdot(b_{j}-b_{j-1})\right|=\] \[=\,\left|\,\sum_{j=1}^{m}x_{1}\cdot(b_{j}-b_{j-1})-\sum_{j=1}^{m}y _{j}\cdot(b_{j}-b_{j-1})\right|\leq\] \[\leq\,\sum_{j=1}^{m}|x_{1}-y_{j}|\cdot(b_{j}-b_{j-1})\leq\frac{1}{ k}\sum_{j=1}^{m}(b_{j}-b_{j-1})=\frac{b-a}{k}\;.\] For \(n\geq 1\) we apply this case to the partitions of the \(n\) intervals \([a_{i-1},a_{i}]_{\mathbb{Q}}\), \(i\in[n]\), by the \(b_{j}\) lying in the intervals, and get by the triangle inequality that the stated displayed absolute value is again at most \[\sum_{i=1}^{n}\frac{a_{i}-a_{i-1}}{k}=\frac{b-a}{k}\;.\] \(\Box\) For two tagged partitions \(P=(\overline{a},\overline{c})\) and \(Q=(\overline{b},\overline{d})\) of \([a,b]_{\mathbb{Q}}\) with \(\overline{a}=(a_{0},a_{1},\ldots,a_{n})\) and \(\overline{b}=(b_{0},b_{1},\ldots,b_{m})\) we define their union \[P\cup Q=(\overline{a}\cup\overline{b},\,\overline{c}\cup\overline{d})\] by taking \(\overline{a}\cup\overline{b}\) to be the partition \((z_{0},z_{1},\ldots,z_{n^{\prime}})\) of \([a,b]_{\mathbb{Q}}\) with the fractions \[\{z_{0},\,z_{1},\,\ldots,\,z_{n^{\prime}}\}=\{a_{0},\,a_{1},\,\ldots,\,a_{n}\} \cup\{b_{0},\,b_{1},\,\ldots,\,b_{m}\}\;,\] and setting \(\overline{c}\cup\overline{d}=(z_{1},\ldots,z_{n^{\prime}})\). 
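A computational sketch of the bound in Lemma 2.52 under refinement (outside the formal development; the function and the partitions are our illustrative choices): \(f(x)=x^{2}\) satisfies \(|f(c)-f(c^{\prime})|\leq 2|c-c^{\prime}|\) on \([0,1]_{\mathbb{Q}}\), so tags lying in a common coarse interval of length \(1/n\) have \(f\)-values within \(1/k:=2/n\), and the lemma bounds the change of the Riemann sum by \((b-a)/k=2/n\).

```python
from fractions import Fraction

def riemann_sum(f, pts, tags):
    return sum(f(t) * (pts[i + 1] - pts[i]) for i, t in enumerate(tags))

f = lambda x: x * x                  # |f(c) - f(c')| <= 2|c - c'| on [0, 1]
n = 100
coarse = [Fraction(i, n) for i in range(n + 1)]
fine = [Fraction(i, 2 * n) for i in range(2 * n + 1)]   # refines coarse
r_coarse = riemann_sum(f, coarse, coarse[1:])           # right-endpoint tags
r_fine = riemann_sum(f, fine, fine[:-1])                # left-endpoint tags
# tags sharing a coarse interval differ by at most 1/n, hence their f-values
# by at most 2/n, and Lemma 2.52 gives |r_coarse - r_fine| <= (1 - 0) * 2/n
assert abs(r_coarse - r_fine) <= Fraction(2, n)
```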
**Theorem 2.53** (existence of integrals): _Let \(a<b\) be in \(\mathbb{Q}\),_ \[f\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _be a UC function, and let \((P_{n})\) and \((Q_{n})\) be two sequences of tagged partitions of \([a,b]_{\mathbb{Q}}\) such that \(\lim\Delta(P_{n})=\lim\Delta(Q_{n})=0\). Then the two sequences of corresponding Riemann sums converge and have equal (real) limits,_ \[\lim_{n\to\infty}R(f,\,P_{n})=\lim_{n\to\infty}R(f,\,Q_{n})\;.\] Proof.: Let \(a\), \(b\), \(f\), \((P_{n})=((\overline{a_{n}},\overline{b_{n}}))\) and \((Q_{n})\) be as stated and let a \(k\) be given. Since \(f\) is UC, we can take an \(l\) such that \[a\leq c,\,c^{\prime}\leq b\wedge|c-c^{\prime}|\leq 1/l\Rightarrow|f(c)-f(c^{ \prime})|\leq 1/k\;.\] First we show that \((R(f,P_{n}))\) and \((R(f,Q_{n}))\) are Cauchy sequences. Then we show that \[\lim(R(f,P_{n})-R(f,Q_{n}))=0\,.\] By this and Theorem 2.13 we will be done. Using the assumption on \((P_{n})\) and \((Q_{n})\) we take an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow\Delta(P_{n}),\Delta(Q_{n})\leq\frac{1}{l}\). Let \(m,n\geq n_{0}\). The previous lemma, the triangle inequality, the choice of \(m\) and \(n\) and the definition of \(l\) imply that \[|R(f,\,P_{m})-R(f,\,P_{n})|\leq\] \[\leq|R(f,\,P_{m})-R(f,\,P_{m}\cup P_{n})|+|R(f,\,P_{m}\cup P_{n})- R(f,\,P_{n})|\leq\] \[\leq\frac{b-a}{k}+\frac{b-a}{k}=\frac{2(b-a)}{k}\:.\] The same holds for \(Q_{m}\) and \(Q_{n}\) and we see that both sequences of Riemann sums are Cauchy. Let \(n\geq n_{0}\). For the second claim we take the tagged partition \(P_{n}\cup Q_{n}\) of \([a,b]_{\mathbb{Q}}\) and get in the same way that \[|R(f,\,P_{n})-R(f,\,Q_{n})|\leq\] \[\leq|R(f,\,P_{n})-R(f,\,P_{n}\cup Q_{n})|+|R(f,\,P_{n}\cup Q_{n})- R(f,\,Q_{n})|\leq\] \[\leq\frac{b-a}{k}+\frac{b-a}{k}=\frac{2(b-a)}{k}\:.\] Thus \(\lim(R(f,P_{n})-R(f,Q_{n}))=0\). \(\Box\) Like in the standard theory, we define integrals of countable real functions by limits of sequences of Riemann sums. 
An interesting question, which we leave aside here, is for which functions this integral exists. **Definition 2.54** (definition of integrals 1): _Let \(a<b\) be in \(\mathbb{Q}\) and_ \[f\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _be a \(\mathrm{UC}\) function. Its (Riemann) integral is the (real) limit_ \[(\mathbb{Q})\int_{a}^{b}f=(\mathbb{Q})\int_{a}^{b}f(x)\,\mathrm{d}x:=\lim_{n \to\infty}R(f,\,P_{n})\;,\] _for any sequence \((P_{n})\) of tagged partitions of \([a,b]_{\mathbb{Q}}\) such that \(\lim\Delta(P_{n})=0\)._ By the previous theorem this limit always exists and does not depend on \((P_{n})\). We extend the definition slightly by setting \((\mathbb{Q})\int_{a}^{a}f:=0\) for any \(a\) and any \(f\), and setting \((\mathbb{Q})\int_{a}^{b}f:=-(\mathbb{Q})\int_{b}^{a}f\) for \(a>b\) if the latter integral is defined. **Proposition 2.55** (definition of integrals 2): _Let \(a<b\) be in \(\mathbb{Q}\) and_ \[f\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _be a \(\mathrm{UC}\) function. Then for every \(k\in\mathbb{N}\) there is an \(n\in\mathbb{N}\) such that for every tagged partition \(P\) of \([a,b]_{\mathbb{Q}}\),_ \[\Delta(P)\leq 1/n\Rightarrow\left|R(f,\,P)-(\mathbb{Q})\int_{a}^{b}f\right| \leq 1/k\:.\] Proof.: Suppose that \(a\), \(b\) and \(f\) are as stated and that a \(k\) is given. The integral exists by Theorem 2.53. We take an \(l\) such that \(\frac{2(b-a)+1}{l}\leq\frac{1}{k}\) and then an \(n\) such that \[c,\,c^{\prime}\in[a,\,b]_{\mathbb{Q}}\wedge|c-c^{\prime}|\leq 1/n\Rightarrow|f(c)-f (c^{\prime})|\leq 1/l\;.\] Using Definition 2.54 we take a tagged partition \(Q\) of \([a,b]_{\mathbb{Q}}\) such that \(\Delta(Q)\leq 1/n\) and \(|R(f,Q)-(\mathbb{Q})\int_{a}^{b}f|\leq 1/l\). 
Then for every tagged partition \(P\) of \([a,b]_{\mathbb{Q}}\) with \(\Delta(P)\leq 1/n\) we have by Lemma 2.52 and the triangle inequality that \[\left|R(f,\,P)-(\mathbb{Q})\int_{a}^{b}f\right|\leq|R(f,\,P)-R(f, \,P\cup Q)|+\] \[+|R(f,\,P\cup Q)-R(f,\,Q)|+\left|R(f,\,Q)-(\mathbb{Q})\int_{a}^{ b}f\right|\leq\] \[\leq\frac{b-a}{l}+\frac{b-a}{l}+\frac{1}{l}=\frac{2(b-a)+1}{l} \leq\frac{1}{k}\;.\] \(\Box\) **Proposition 2.56** (linearity of integrals 1): _Let \(a<b\) be in \(\mathbb{Q}\), \(x\) and \(y\) be real numbers and_ \[f,\,g\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _be \(\mathrm{UC}\) functions. Then_ \[(\mathbb{Q})\int_{a}^{b}(xf+yg)=x\cdot(\mathbb{Q})\int_{a}^{b}f+y\cdot(\mathbb{ Q})\int_{a}^{b}g\;.\] Proof.: The three integrals exist by Proposition 2.25 and Theorem 2.53. It is easy to see that for any tagged partition \(P\) of \([a,b]_{\mathbb{Q}}\) it holds that \[R(xf+yg,\,P)=x\cdot R(f,\,P)+y\cdot R(g,\,P)\;.\] Thus the stated identity follows from Theorem 2.53 and Proposition 2.6. \(\Box\) This identity can be extended to any pair \(a,b\in\mathbb{Q}\). **Proposition 2.57** (additivity of integrals 1): _Let \(a<b<c\) be in \(\mathbb{Q}\) and \(f\colon[a,c]_{\mathbb{Q}}\to\mathbb{R}\) be a \(\mathrm{UC}\) function. Then_ \[(\mathbb{Q})\int_{a}^{c}f=(\mathbb{Q})\int_{a}^{b}f+(\mathbb{Q})\int_{b}^{c}f\;.\] Proof.: The three integrals exist by Theorem 2.53. Any two tagged partitions \(P\) and \(P^{\prime}\) of \([a,b]_{\mathbb{Q}}\) and \([b,c]_{\mathbb{Q}}\), respectively, straightforwardly merge in a tagged partition \(Q\) of \([a,c]_{\mathbb{Q}}\) such that \[R(f,\,P)+R(f,\,P^{\prime})=R(f,\,Q)\;.\] Thus the stated identity follows from Theorem 2.53 and Proposition 2.6. \(\Box\) This identity can be extended to any triple \(a,b,c\in\mathbb{Q}\). **Proposition 2.58** (comparing integrals): _Let \(a<b\) be in \(\mathbb{Q}\) and let_ \[f,\,g\colon[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _be UC functions such that \(f\leq g\) on \([a,b]_{\mathbb{Q}}\). 
Then the inequality_ \[(\mathbb{Q})\int_{a}^{b}f\leq(\mathbb{Q})\int_{a}^{b}g\] _holds._ Proof.: The two integrals exist by Theorem 2.53. For every tagged partition \(P\) of \([a,b]_{\mathbb{Q}}\) the implication \[f\leq g\text{ on }[a,b]_{\mathbb{Q}}\Rightarrow R(f,\,P)\leq R(g,\,P)\] holds. The stated inequality therefore follows from Theorem 2.53 and Proposition 2.8. This inequality can be adapted for any pair \(a,b\in\mathbb{Q}\). **Corollary 2.59** (absolute value and integrals): _Let \(a<b\) be in \(\mathbb{Q}\) and let \(f\colon[a,b]_{\mathbb{Q}}\to\mathbb{R}\) be a UC function. Then the inequality_ \[\left|(\mathbb{Q})\int_{a}^{b}f\right|\leq(\mathbb{Q})\int_{a}^{b}|f|\] _holds._ Proof.: The two integrals exist by Proposition 2.22 and Theorem 2.53. The inequality follows from Propositions 2.56 and 2.58 and the two inequalities \[-f\leq|f|\wedge f\leq|f|\] that hold on \([a,b]_{\mathbb{Q}}\). This inequality can be adapted for any pair \(a,b\in\mathbb{Q}\). The next inequality is often used. **Corollary 2.60** (LM bound): _If \(a<b\) are in \(\mathbb{Q}\) and \(f\colon[a,b]_{\mathbb{Q}}\to\mathbb{R}\) is a UC function and \(x\geq 0\) is a real number such that \(|f|\leq x\) on \([a,b]_{\mathbb{Q}}\) then the bound_ \[\left|(\mathbb{Q})\int_{a}^{b}f\right|\leq(b-a)\cdot x\] _holds._ Proof.: This proof is similar to the previous one: we use that \(f(c)\leq x\) and \(-f(c)\leq x\) for every \(c\in[a,b]_{\mathbb{Q}}\) and use the trivial integral \[(\mathbb{Q})\int_{a}^{b}x\,\mathrm{d}y=x(b-a)\;.\] Also the last inequality can be adapted for any pair \(a,b\in\mathbb{Q}\). ### Improper integrals We introduce absolutely convergent improper integrals with upper integration limit \(+\infty\) and derive some of their properties. The next proposition is analogous to Proposition 2.17. 
**Proposition 2.61** (improper integrals): _Let \(f\colon[a,+\infty)_{\mathbb{Q}}\to\mathbb{R}\) be a function that is \(\mathrm{UC}\) on every interval \([a,b]_{\mathbb{Q}}\) with \(b\geq a\) and such that the real sequence_ \[\big{(}(\mathbb{Q})\int_{a}^{n}|f|\big{)}=\big{(}(\mathbb{Q})\int_{a}^{n}|f| \;\big{|}\;n=1,\,2,\,\dots\big{)}\] _converges. For any \(n\geq a\) the integral exists by Proposition 2.22 and Theorem 2.53, and for \(n<a\) the term of the sequence is defined arbitrarily. Then the sequence \(((\mathbb{Q})\int_{a}^{n}f)\) converges as well._ Proof.: Let \(f\) be as stated, \(I_{n}:=(\mathbb{Q})\int_{a}^{n}|f|\) and \(J_{n}:=(\mathbb{Q})\int_{a}^{n}f\). Then \((I_{n})\) is Cauchy (by Theorem 2.13) and for a given \(k\) we take an \(n_{0}>a\) such that, by Proposition 2.57, \[n\geq m\geq n_{0}\Rightarrow(\mathbb{Q})\int_{m}^{n}|f|=|I_{n}-I_{m}|\leq 1/k\;.\] By Proposition 2.57 and Corollary 2.59, for any \(n\geq m\geq n_{0}\) we have that \[|J_{n}-J_{m}|=\left|(\mathbb{Q})\int_{m}^{n}f\right|\leq(\mathbb{Q})\int_{m}^{ n}|f|\leq 1/k\] as well. Thus the sequence \((J_{n})\) is Cauchy and by Theorem 2.13 the limit \(\lim_{n\to\infty}\,(\mathbb{Q})\int_{a}^{n}f\) exists. 
\(\Box\) **Definition 2.62** (integrals over \([a,+\infty)_{\mathbb{Q}}\)): _In the situation of the previous proposition we proclaim the (real) limit of \(((\mathbb{Q})\int_{a}^{n}f)\) to be the integral of \(f\) (over the interval \([a,+\infty)_{\mathbb{Q}}\)),_ \[(\mathbb{Q})\int_{a}^{+\infty}f=(\mathbb{Q})\int_{a}^{+\infty}f(x)\,\mathrm{d }x:=\lim_{n\to\infty}\,(\mathbb{Q})\int_{a}^{n}f\;,\] _and we say that the (improper) integral \((\mathbb{Q})\int_{a}^{+\infty}f\) absolutely converges._ **Proposition 2.63** (linearity of integrals 2): _Let \(a\) be in \(\mathbb{Q}\), \(x\) and \(y\) be real numbers, and_ \[f,\,g\colon[a,\,+\infty)_{\mathbb{Q}}\to\mathbb{R}\] _be functions that are \(\mathrm{UC}\) on every interval \([a,b]_{\mathbb{Q}}\) with \(b\geq a\) and are such that the improper integrals \((\mathbb{Q})\int_{a}^{+\infty}f\) and \((\mathbb{Q})\int_{a}^{+\infty}g\) absolutely converge. Then the next improper integral absolutely converges and_ \[(\mathbb{Q})\int_{a}^{+\infty}(xf+yg)=x\cdot(\mathbb{Q})\int_{a}^{+\infty}f+y \cdot(\mathbb{Q})\int_{a}^{+\infty}g\;.\] Proof.: We denote the first improper integral by \(I\). Propositions 2.58, 2.56, 2.22 and 2.25 show that for every \(n>a\), \[0\leq(\mathbb{Q})\int_{a}^{n}|xf+yg|\leq|x|\cdot(\mathbb{Q})\int_{a}^{n}|f|+|y| \cdot(\mathbb{Q})\int_{a}^{n}|g|\;.\] By the assumption on \(f\) and \(g\) and by Definition 2.62 and Proposition 2.9, the integral \(I\) absolutely converges. For every \(n>a\), again by Proposition 2.56, \[(\mathbb{Q})\int_{a}^{n}(xf+yg)=x\cdot(\mathbb{Q})\int_{a}^{n}f+y\cdot( \mathbb{Q})\int_{a}^{n}g\;.\] The stated identity follows from Propositions 2.61 and 2.6. **Proposition 2.64** (additivity of integrals 2): _Let \(a\,<\,b\) be in \(\mathbb{Q}\) and let \(f\colon[a,+\infty)_{\mathbb{Q}}\to\mathbb{R}\) be a function that is \(\mathrm{UC}\) on every interval \([a,c]_{\mathbb{Q}}\) with \(c\geq a\). 
Then the identity_ \[(\mathbb{Q})\int_{a}^{+\infty}f=(\mathbb{Q})\int_{a}^{b}f+(\mathbb{Q})\int_{b }^{+\infty}f\] _holds whenever one of the two improper integrals absolutely converges._ Proof.: Let \(a\), \(b\) and \(f\) be as stated and let the first improper integral be \(I\) and the last one be \(J\). We assume that \(I\) absolutely converges. Then for every \(n>b\) Propositions 2.22, 2.57 and Theorem 2.53 give that \[(\mathbb{Q})\int_{a}^{n}|f|=(\mathbb{Q})\int_{a}^{b}|f|+(\mathbb{Q})\int_{b}^ {n}|f|\;.\] By the assumption on \(I\) and by Proposition 2.6 we see that \(J\) absolutely converges. For every \(n>b\), we get again by Proposition 2.57 that \[(\mathbb{Q})\int_{a}^{n}f=(\mathbb{Q})\int_{a}^{b}f+(\mathbb{Q})\int_{b}^{n}f\;.\] By the limit transition \(n\to\infty\) we get by Proposition 2.6 that the identity \(I=(\mathbb{Q})\int_{a}^{b}f+J\) holds. If \(J\) absolutely converges we use a very similar argument. This identity is easily adapted for any pair \(a,b\in\mathbb{Q}\). **Proposition 2.65** (shifting the interval): _Let \(a\) and \(b\geq 0\) be fractions and let \(f\colon[a,+\infty)_{\mathbb{Q}}\to\mathbb{R}\) be a function that is \(\mathrm{UC}\) on every interval \([a,c]_{\mathbb{Q}}\) with \(c\geq a\). Then the identity_ \[(\mathbb{Q})\int_{a}^{+\infty}f(x+b)\,\mathrm{d}x=(\mathbb{Q})\int_{a+b}^{+ \infty}f(x)\,\mathrm{d}x\] _holds whenever one of the two improper integrals absolutely converges._ Proof.: This is immediate by Definition 2.62 from the equality \[(\mathbb{Q})\int_{a}^{c}f(x+b)\,\mathrm{d}x=(\mathbb{Q})\int_{a+b}^{c+b}f(x)\, \mathrm{d}x\] which holds for every \(c\geq a\). The equality of integrals follows in turn from the equality \[R(f(x+b),\,P)=R(f(x),\,Q)\] for Riemann sums, where \(P\) is any tagged partition of \([a,c]_{\mathbb{Q}}\) and \(Q\) is the tagged partition of \([a+b,c+b]_{\mathbb{Q}}\) obtained by shifting \(P\) by \(b\). 
That is, if \(P=(\overline{a},\overline{b})\) with \(\overline{a}=(a_{0},a_{1},\ldots,a_{n})\) and \(\overline{b}=(b_{1},\ldots,b_{n})\) then \(Q=(\overline{a^{\prime}},\overline{b^{\prime}})\) with \((i\in[n])\) \[a_{i}^{\prime}:=a_{i}+b\ \ \text{and}\ \ b_{i}^{\prime}:=b_{i}+b\;.\] \(\Box\) ### Integration by parts We present variants in CMA of the Fundamental Theorem of Analysis (Calculus) and of the formula for integration by parts. **Theorem 2.66** (FTA in CMA): _Let \(a<b\) be in \(\mathbb{Q}\) and \(f\colon[a,b]_{\mathbb{Q}}\to\mathbb{R}\) be a function such that the derivative_ \[f^{\prime}\colon D(f)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _is uniform and \(\mathrm{UC}.\) Then_ \[(\mathbb{Q})\int_{a}^{b}f^{\prime}=f(b)-f(a)\;.\] Proof.: Let \(a\), \(b\) and \(f\) be as stated and let a \(k\) be given. The integral exists by Theorem 2.53. Let \(\overline{a}=(a_{0},a_{1},\ldots,a_{n})\) be a partition of \([a,b]_{\mathbb{Q}}\) such that \[\left|R(f^{\prime},\,P)-(\mathbb{Q})\int_{a}^{b}f^{\prime}\right|\leq 1/k\] holds for any \(\overline{b}\) in \(P=(\overline{a},\overline{b})\) (by Proposition 2.55 it suffices to take any \(\overline{a}\) with small enough \(\Delta(\overline{a})\)). 
By Theorem 2.38, for every \(i\in[n]\) there is a real number \(x_{i}\) such that (for the extended \(f^{\prime}\)) \[a_{i-1}<x_{i}<a_{i}\wedge f(a_{i})-f(a_{i-1})=f^{\prime}(x_{i})\cdot(a_{i}-a_ {i-1})\;.\] Using Theorem 2.27 we take a \(b_{i}\in[a_{i-1},a_{i}]_{\mathbb{Q}}\) such that for every \(i\in[n]\), \[|f^{\prime}(b_{i})-f^{\prime}(x_{i})|\leq 1/k\;.\] Then, with \(\overline{b}:=(b_{1},\ldots,b_{n})\) and \(P:=(\overline{a},\overline{b})\), \[\left|f(b)-f(a)-R(f^{\prime},\,P)\right| = \left|\,\sum_{i=1}^{n}\left(f(a_{i})-f(a_{i-1})-f^{\prime}(b_{i})( a_{i}-a_{i-1})\right)\right|\] \[= \left|\,\sum_{i=1}^{n}\left(f^{\prime}(x_{i})-f^{\prime}(b_{i}) \right)\cdot(a_{i}-a_{i-1})\right|\] \[\leq \frac{1}{k}\sum_{i=1}^{n}(a_{i}-a_{i-1})=\frac{b-a}{k}\;.\] The triangle inequality gives that \[\left|f(b)-f(a)-(\mathbb{Q})\int_{a}^{b}f^{\prime}\right|\leq|f(b) -f(a)-R(f^{\prime},\,P)|+\] \[+\left|R(f^{\prime},\,P)-(\mathbb{Q})\int_{a}^{b}f^{\prime}\right| \leq\frac{b-a}{k}+\frac{1}{k}=\frac{b-a+1}{k}\;.\] For \(k\to\infty\) we obtain the stated identity. \(\Box\) Recall the shorthand \[[f]_{a}^{b}:=f(b)-f(a)\;.\] **Theorem 2.67** (integration by parts in CMA): _Let \(a<b\) be in \(\mathbb{Q}\) and let \(f,g\colon[a,b]_{\mathbb{Q}}\to\mathbb{R}\) be functions such that their derivatives_ \[f^{\prime}\colon D(f)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\;\text{ and }\;g^{ \prime}\colon D(g)=[a,\,b]_{\mathbb{Q}}\to\mathbb{R}\] _are uniform and \(\mathrm{UC}\). Then_ \[(\mathbb{Q})\int_{a}^{b}fg^{\prime}=[fg]_{a}^{b}-(\mathbb{Q})\int_{a}^{b}f^{ \prime}g\;.\] Proof.: We note that \(f\) and \(g\) are UC by part 2 of Proposition 2.32. Thus both integrals exist by Proposition 2.24 and Theorem 2.53. All functions \(f\), \(g\), \(f^{\prime}\) and \(g^{\prime}\) are bounded by Lemma 2.23. Thus by Theorem 2.34\(fg\) is differentiable on \([a,\,b]_{\mathbb{Q}}\) and the derivative \((fg)^{\prime}=f^{\prime}g+fg^{\prime}\) is uniform. By Propositions 2.24 and 2.25 it is UC. 
Thus by Theorem 2.66 and Proposition 2.56 we have that \[[fg]_{a}^{b}=(\mathbb{Q})\int_{a}^{b}(fg)^{\prime}=(\mathbb{Q})\int_{a}^{b}(f^{\prime}g+fg^{\prime})=(\mathbb{Q})\int_{a}^{b}f^{\prime}g+(\mathbb{Q})\int_{a}^{b}fg^{\prime}\;.\] Rearranging, we get the stated identity. \(\Box\) Transcendence of Euler's number in CNT In this section we give Hilbert's proof of the transcendence of \(\mathrm{e}\) in CNT. Recall that a real number \(x\) is _algebraic_ if there exist \(n+1\) fractions \(a_{0}\), \(a_{1}\),..., \(a_{n}\), \(n\in\mathbb{N}_{0}\), such that \(a_{n}\neq 0\) and \[\sum_{i=0}^{n}a_{i}x^{i}=0\;.\] Thus \(x\) is a root of a nonzero polynomial with rational coefficients. If \(x\) is not algebraic, we call it a _transcendental_ number. The transcendence of \(\mathrm{e}=2.71828\ldots\) was proved first by Ch. Hermite in [4] in 1873; see [15] for a discussion of Hermite's method. **Theorem 3.1** (Hermite): _Euler's number \(\exp(1)=\mathrm{e}=2.71828\ldots\) is transcendental._ We simplify Hilbert's simplification [5] of Hermite's proof even more by performing the proof in CNT. It is a simplification even though our article is ten times longer than Hilbert's, where moreover the transcendence of \(\pi\) is proven. We start from the key integral identity. **Theorem 3.2** (key integral identity): _For every \(n\in\mathbb{N}_{0}\),_ \[(\mathbb{Q})\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=n!\;\;(=1\cdot 2\cdot\ldots\cdot n)\;,\] _with \(0!:=1\)._ _Proof._ For every real number \(y\), every \(a<b\) in \(\mathbb{Q}\) and every \(n\in\mathbb{N}_{0}\), the restriction of the function \(f(x)=yx^{n}\exp(-x)\) to \([a,b]_{\mathbb{Q}}\) is UC by Corollary 2.49 and therefore the integral \((\mathbb{Q})\int_{a}^{b}f\) exists. All proper integrals considered in this proof are of this form. Let \(n\in\mathbb{N}_{0}\) and let \(m\) be arbitrary. We start with \(n=0\). 
Corollary 2.48, Proposition 2.50 and Theorem 2.66 imply that \[(\mathbb{Q})\int_{0}^{m}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=(\mathbb{Q})\int_{0}^{m}\mathrm{e}^{-x}\,\mathrm{d}x=[-\mathrm{e}^{-x}]_{0}^{m}=1-\mathrm{e}^{-m}\;.\] By Proposition 2.6, Corollary 2.46 and Definition 2.62, \[(\mathbb{Q})\int_{0}^{+\infty}\mathrm{e}^{-x}\,\mathrm{d}x=1\;.\] Thus the stated identity holds for \(n=0\). For \(n>0\) we proceed by induction. We assume that \(n\in\mathbb{N}\) and that the improper integral \[(\mathbb{Q})\int_{0}^{+\infty}x^{n-1}\mathrm{e}^{-x}\,\mathrm{d}x\] absolutely converges. Corollaries 2.26 and 2.48, Proposition 2.50 and Corollary 2.35 imply that \[(\mathbb{Q})\int_{0}^{m}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=(\mathbb{Q})\int_{0}^{m}x^{n}\left(-\mathrm{e}^{-x}\right)^{\prime}\,\mathrm{d}x\] \[\stackrel{\text{Thm.\,2.67}}{=}\left[x^{n}\left(-\mathrm{e}^{-x}\right)\right]_{0}^{m}+(\mathbb{Q})\int_{0}^{m}\left(x^{n}\right)^{\prime}\mathrm{e}^{-x}\,\mathrm{d}x\] \[\stackrel{\text{Prop.\,2.56}}{=}0-m^{n}\mathrm{e}^{-m}+n\cdot(\mathbb{Q})\int_{0}^{m}x^{n-1}\mathrm{e}^{-x}\,\mathrm{d}x\;.\] Using Corollary 2.46, the inductive assumption, Proposition 2.6 and Definition 2.62 we get that \((\mathbb{Q})\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x\) absolutely converges and \[I_{n}:=(\mathbb{Q})\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=n\cdot(\mathbb{Q})\int_{0}^{+\infty}x^{n-1}\mathrm{e}^{-x}\,\mathrm{d}x=n\cdot I_{n-1}\;.\] We know that \(I_{0}=1\). Using the obtained recurrence \(I_{n}=nI_{n-1}\) we deduce that \(I_{n}=n!\). \(\Box\) We generalize the key identity. **Corollary 3.3** (generalization): _For every polynomial_ \[p(x)=a_{n}x^{n}+\cdots+a_{1}x+a_{0}\;,\] _where \(n\in\mathbb{N}_{0}\) and \(a_{i}\in\mathbb{Z}\), one has that_ \[(\mathbb{Q})\int_{0}^{+\infty}p(x)\mathrm{e}^{-x}\,\mathrm{d}x=\sum_{i=0}^{n}a_{i}\cdot i!\in\mathbb{Z}\;.\] _Proof._ This follows from Proposition 2.63 and Theorem 3.2. 
\(\Box\) **Corollary 3.4** (congruence): _For every polynomial_ \[p(x)=a_{n}x^{n}+\cdots+a_{1}x+a_{0}\] _such that \(n\in\mathbb{N}_{0}\), \(a_{i}\in\mathbb{Z}\) and \(a_{0}=a_{1}=\cdots=a_{m}=0\) for some \(m\in\mathbb{N}_{0}\) with \(0\leq m\leq n\), the integral_ \[(\mathbb{Q})\int_{0}^{+\infty}p(x)\mathrm{e}^{-x}\,\mathrm{d}x\] _is an integer divisible by \((m+1)!\)._ _Proof._ This follows from the previous corollary. \(\Box\) We start the proper proof of the transcendence of \(\mathrm{e}\). We assume for the contrary that \(\mathrm{e}=\exp(1)\) is algebraic: \[a_{n}\mathrm{e}^{n}+\cdots+a_{1}\mathrm{e}+a_{0}=0\] for some \(n\in\mathbb{N}_{0}\), \(a_{i}\in\mathbb{Q}\) and \(a_{n}\neq 0\). We multiply the equation by a common denominator of the coefficients \(a_{i}\) and get that \(a_{i}\in\mathbb{Z}\). We take out the maximum possible power of \(\mathrm{e}\) from the left-hand side and get that \(a_{0}\neq 0\). We see that \(n\in\mathbb{N}\). We take the integral polynomials (with \(n\) as above) \[p_{m}(x):=x^{m}\big{(}(x-1)(x-2)\ldots(x-n)\big{)}^{m+1},\ \ m\in\mathbb{N}\;,\] and split the integral \[I := (\mathbb{Q})\int_{0}^{+\infty}p_{m}(x)\mathrm{e}^{-x}\,\mathrm{d}x\] \[= (\mathbb{Q})\int_{0}^{i}p_{m}(x)\mathrm{e}^{-x}\,\mathrm{d}x+( \mathbb{Q})\int_{i}^{+\infty}p_{m}(x)\mathrm{e}^{-x}\,\mathrm{d}x\;,\] \[=: I_{i}+J_{i},\ \ i\in\mathbb{N}_{0}\;,\] by Proposition 2.64. Then \[0=\big{(}a_{n}\mathrm{e}^{n}+\cdots+a_{1}\mathrm{e}+a_{0}\big{)}\cdot I=\sum_ {i=0}^{n}a_{i}\mathrm{e}^{i}\cdot I_{i}+\sum_{i=0}^{n}a_{i}\mathrm{e}^{i}\cdot J _{i}=:A(m)+B(m)\;.\] We get the desired contradiction by showing in the next two propositions that \(|A(m)|\) has an exponential upper bound in \(m\) but \(|B(m)|\geq m!\) for infinitely many \(m\). **Proposition 3.5** (bounding \(A(m)\)): _There exist real numbers \(y,z>0\) such that_ \[\forall\,m\in\mathbb{N}\left(|A(m)|\leq yz^{m}\right)\,.\] _Proof._ Let \(w:=\sum_{i=0}^{n}|a_{i}|\mathrm{e}^{i}\). 
Since \(0<\mathrm{e}^{-a}\leq 1\) for \(a\geq 0\) (by Theorem 2.44) and \(|p_{m}(x)|\leq n^{(n+1)(m+1)}\) on \([0,n]_{\mathbb{Q}}\), Corollary 2.60 implies that for every \(i=0,1,\ldots,n\), \[|I_{i}|=\bigg{|}\int_{0}^{i}p_{m}(x)\mathrm{e}^{-x}\,\mathrm{d}x\bigg{|}\leq n\cdot n^{(n+1)(m+1)}\;.\] Thus \[|A(m)|\leq w\sum_{i=0}^{n}|I_{i}|\leq w(n+1)n^{n+2}\cdot n^{(n+1)m}\] and we get the stated bound with \(y:=w(n+1)n^{n+2}\) and \(z:=n^{n+1}\). \(\Box\) **Proposition 3.6** (bounding \(B(m)\)): _It is true that_ \[\forall\,m\in\mathbb{N}\left(B(m)\in\mathbb{Z}\wedge m!\text{ divides }B(m)\right)\] _and that \(|B(m)|\geq m!\) for infinitely many \(m\)._ Proof.: For every \(i=0,1,\ldots,n\) we have by Theorem 2.44 and Proposition 2.63 that \[a_{i}\mathrm{e}^{i}\cdot J_{i}=a_{i}\mathrm{e}^{i}\cdot(\mathbb{Q})\int_{i}^{+\infty}p_{m}(x)\mathrm{e}^{-x}\,\mathrm{d}x=a_{i}\cdot(\mathbb{Q})\int_{i}^{+\infty}p_{m}(x)\mathrm{e}^{-x+i}\,\mathrm{d}x\stackrel{\text{Prop.\,2.65}}{=}a_{i}\cdot(\mathbb{Q})\int_{0}^{+\infty}p_{m}(x+i)\mathrm{e}^{-x}\,\mathrm{d}x\;.\] The polynomial \(p_{m}(x+i)\) has integer coefficients. For \(i=1,\ldots,n\) it is divisible by \(x^{m+1}\), because \(p_{m}(x)\) contains the factor \((x-i)^{m+1}\), and Corollary 3.4 shows that \(a_{i}\mathrm{e}^{i}\cdot J_{i}\) is an integer divisible by \((m+1)!\). For \(i=0\), the coefficient of \(x^{m}\) in \(p_{m}(x)=x^{m}\big{(}(x-1)(x-2)\ldots(x-n)\big{)}^{m+1}\) is \((-1)^{n(m+1)}(n!)^{m+1}\), so Corollaries 3.3 and 3.4 give that \(a_{0}J_{0}\) is an integer and \[a_{0}J_{0}\equiv a_{0}(-1)^{n(m+1)}(n!)^{m+1}\cdot m!\pmod{(m+1)!}\;.\] Hence \(B(m)=\sum_{i=0}^{n}a_{i}\mathrm{e}^{i}\cdot J_{i}\) is an integer divisible by \(m!\) and \[\frac{B(m)}{m!}\equiv a_{0}(-1)^{n(m+1)}(n!)^{m+1}\pmod{m+1}\;.\] If \(m+1\) is a prime number larger than both \(n\) and \(|a_{0}|\) then \(m+1\) divides neither \(a_{0}\) nor \((n!)^{m+1}\), so that \(B(m)/m!\not\equiv 0\) modulo \(m+1\), \(B(m)\neq 0\) and \(|B(m)|\geq m!\). Since there are infinitely many primes, \(|B(m)|\geq m!\) holds for infinitely many \(m\). \(\Box\) Since \(A(m)+B(m)=0\), Proposition 3.5 gives \(|B(m)|=|A(m)|\leq yz^{m}\) for every \(m\), while \(m!>yz^{m}\) for every sufficiently large \(m\). This contradicts the previous proposition and completes the proof of Theorem 3.1. Concluding remarks As a set-theorist, I am puzzled by the idea (page 5) that "uncountable sets possess much less definite and certain being compared to countable sets". 
You can't go very far in analysis or topology without uncountable sets. And Cantor's proof that the reals are uncountable is treasured by mathematicians of virtually all persuasions. In mathematical logic there is a long tradition of efforts, see for example [16, 17], to formalize mathematics within countable formal systems and models. This is not very important here because our approach is different. This article shows that with some patience, and using only ordinary means rather than sophisticated tools of mathematical logic, one can go quite far in analysis without uncountable sets. Based on our experience we are optimistic about further development of CNT (Countable Number Theory). We think that our approach via CMA (Countable Mathematical Analysis) to exorcising uncountable sets from Number Theory is more practical for number theorists than the approach of mathematical logic. Mathematical Analysis on \(p\)-adic numbers has been developing; see for example [11]. It can be of interest to see what our approach gives here: one just replaces the Euclidean metric \(|a-b|\) on \(\mathbb{Q}\) with the \(p\)-adic one \(|a-b|_{p}\). Another topic for a possible chapter in CNT is FLT (Fermat's Last Theorem). This theorem says that it is true that \[k,\,l,\,m,\,n\in\mathbb{N}\wedge k^{n}+l^{n}=m^{n}\Rightarrow n\leq 2\;.\] It was proved in [14, 18]; see the book [3] or the series of five booklets [7, 8, 6, 12, 13] for surveys of the proof. There was a discussion of whether the use of so-called Grothendieck universes, which are sets of extremely large cardinality, can be avoided in the proof. See [10] for more information. We conclude our article with a natural problem. Could one cast the whole proof of FLT by means of CMA in CNT?1 Footnote 1: We are greeting Ivan Mádek, a little-known member of Oulipo. If not, where does one get stuck?
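The key integral identity of Theorem 3.2, \((\mathbb{Q})\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{d}x=n!\), lends itself to a quick numeric sanity check. The sketch below is our own illustration, not part of the article: it approximates the truncated integral by left-tagged Riemann sums with rational endpoints and tags, in the spirit of the sums \(R(f,P)\) over \([a,b]_{\mathbb{Q}}\); the helper names are ours.

```python
from fractions import Fraction
from math import exp, factorial

def riemann_sum(f, a, b, steps):
    # Left-tagged partition of [a, b]_Q: rational endpoints a_i = a + i*h,
    # tags b_i = a_{i-1}, mimicking the sums R(f, P) of the text.
    h = Fraction(b - a, steps)
    return sum(f(Fraction(a) + i * h) * float(h) for i in range(steps))

def integrand(n):
    # x^n * e^{-x}, the integrand of the key identity (Theorem 3.2)
    return lambda x: float(x) ** n * exp(-float(x))

# The (Q)-integral of x^n e^{-x} over [0, +infinity) equals n!; we truncate
# at 40, where the discarded tail is negligible for the small n tested here.
for n in range(5):
    approx = riemann_sum(integrand(n), 0, 40, 40_000)
    assert abs(approx - factorial(n)) < 0.05
```

With step \(h=1/1000\) the left sums already match \(0!,\ldots,4!\) to within a few parts in a thousand; refining the partition tightens the agreement.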
2308.04273
Double-gluon charmonium hybrid states with various (exotic) quantum numbers
We study the double-gluon charmonium hybrid states with various quantum numbers, each of which is composed of one valence charm quark and one valence charm antiquark as well as two valence gluons. We concentrate on the exotic quantum numbers $J^{PC} =0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}$ that the conventional $\bar q q$ mesons can not reach. We apply the QCD sum rule method to calculate their masses to be $7.28^{+0.38}_{-0.43}$ GeV, $5.19^{+0.36}_{-0.46}$ GeV, $5.46^{+0.41}_{-0.62}$ GeV, $4.48^{+0.25}_{-0.31}$ GeV, and $5.54^{+0.35}_{-0.43}$ GeV, respectively. We study their possible decay patterns and propose to search for the $J^{PC}=2^{+-}/3^{-+}$ states in the $D^*\bar D^{(*)}/D^{*}_s \bar D^{(*)}_s/\Sigma_c^* \bar \Sigma_c^{(*)}/\Xi_c^* \bar \Xi_c^{(\prime,*)}$ channels. Experimental investigations on these states and decay channels can be useful in classifying the nature of the hybrid state, thus serving as a direct test of QCD in the low energy sector.
Niu Su, Hua-Xing Chen, Wei Chen, Shi-Lin Zhu
2023-08-08T14:12:34Z
http://arxiv.org/abs/2308.04273v2
# Double-gluon charmonium hybrid states with various (exotic) quantum numbers ###### Abstract We study the double-gluon charmonium hybrid states with various quantum numbers, each of which is composed of one valence charm quark and one valence charm antiquark as well as two valence gluons. We concentrate on the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\) that the conventional \(\bar{q}q\) mesons can not reach. We apply the QCD sum rule method to calculate their masses to be \(7.28^{+0.38}_{-0.43}\) GeV, \(5.19^{+0.36}_{-0.46}\) GeV, \(5.46^{+0.41}_{-0.62}\) GeV, \(4.48^{+0.25}_{-0.31}\) GeV, and \(5.54^{+0.35}_{-0.43}\) GeV, respectively. We study their possible decay patterns and propose to search for the \(J^{PC}=2^{+-}/3^{-+}\) states in the \(D^{*}\bar{D}^{(*)}/D^{*}_{s}\bar{D}^{(*)}_{s}/\Sigma^{*}_{c}\bar{\Sigma}^{(*)}_{c}/\Xi^{*}_{c}\bar{\Xi}^{(\prime,*)}_{c}\) channels. Experimental investigations on these states and decay channels can be useful in classifying the nature of the hybrid state, thus serving as a direct test of QCD in the low energy sector. hybrid state, exotic hadron, exotic quantum number, QCD sum rules _Introduction_ -- A hybrid state is composed of one valence quark and one valence antiquark as well as one or more valence gluons. Especially, the hybrid states with \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}/\cdots\) are of particular interest, since these exotic quantum numbers can not be reached by the conventional \(\bar{q}q\) mesons [1]. Up to now there are four structures observed in experiments with the exotic quantum number \(J^{PC}=1^{-+}\), _i.e._, the \(\pi_{1}(1400)\)[2], \(\pi_{1}(1600)\)[3], \(\pi_{1}(2015)\)[4], and \(\eta_{1}(1855)\)[5]. They are good candidates for the single-gluon hybrid states that contain only one valence gluon, while they may also be explained as compact tetraquark states or hadronic molecular states [6; 7; 8; 9; 10]. 
In the past half century there have been a lot of experimental and theoretical investigations on these hybrid states [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. However, their nature still remains elusive, partly due to the difficulty in differentiating the hybrid and multiquark pictures [23; 24; 25; 26]. This tough problem needs to be solved in the future by experimentalists and theorists together. In this letter we investigate the double-gluon charmonium hybrid states, each of which is composed of one valence charm quark and one valence charm antiquark as well as two valence gluons. We construct twenty double-gluon charmonium hybrid currents with various quantum numbers, and use them to perform QCD sum rule analyses. Especially, these currents can reach the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\), whose masses are calculated to be \(7.28^{+0.38}_{-0.43}\) GeV, \(5.19^{+0.36}_{-0.46}\) GeV, \(5.46^{+0.41}_{-0.62}\) GeV, \(4.48^{+0.25}_{-0.31}\) GeV, and \(5.54^{+0.35}_{-0.43}\) GeV, respectively. These mass values are accessible in the LHC experiments. We further study their possible decay patterns from the two-/three-meson and two-baryon decay processes. Since these three processes are all at the \(\mathcal{O}(\alpha_{\rm s})\) order, the three-meson and two-baryon decay patterns are generally not suppressed severely compared to the two-meson decay pattern. Especially, we propose to search for the \(J^{PC}=2^{+-}/3^{-+}\) states in the \(D^{*}\bar{D}^{(*)}/D^{*}_{s}\bar{D}^{(*)}_{s}/\Sigma^{*}_{c}\bar{\Sigma}^{(*)}_{c}/\Xi^{*}_{c}\bar{\Xi}^{(\prime,*)}_{c}\) channels directly at LHC, given that they may have relatively smaller widths due to their limited decay patterns. Experimental investigations on these states and decay channels can be useful in classifying the nature of the hybrid state, thus serving as a direct test of QCD in the low energy sector. 
_Double-gluon charmonium hybrid currents_ -- As the first step, we combine the charm quark field \(c_{a}(x)\), the charm antiquark field \(\bar{c}_{a}(x)\), the gluon field strength tensor \(G^{n}_{\mu\nu}(x)\), and the dual gluon field strength tensor \(\tilde{G}^{n}_{\mu\nu}(x)=G^{n,\rho\sigma}(x)\times\epsilon_{\mu\nu\rho\sigma}/2\) to construct the double-gluon charmonium hybrid currents. Here \(a=1\cdots 3\) and \(n=1\cdots 8\) are color indices, and \(\mu\cdots\sigma\) are Lorentz indices. These currents can be generally constructed by combining the color-octet quark-antiquark fields \[\bar{c}_{a}\lambda^{ab}_{n}c_{b}\,,\,\bar{c}_{a}\lambda^{ab}_{n} \gamma_{5}c_{b}\,,\] \[\bar{c}_{a}\lambda^{ab}_{n}\gamma_{\mu}c_{b}\,,\,\bar{c}_{a} \lambda^{ab}_{n}\gamma_{\mu}\gamma_{5}c_{b}\,,\,\bar{c}_{a}\lambda^{ab}_{n} \sigma_{\mu\nu}c_{b}\,, \tag{1}\] and the color-octet double-gluon fields \[d^{npq}G^{\alpha\beta}_{p}G^{\gamma\delta}_{q}\,,\,f^{npq}G^{\alpha\beta}_{p}G ^{\gamma\delta}_{q}\,, \tag{2}\] where \(d^{npq}\) and \(f^{npq}\) are the totally symmetric and antisymmetric \(SU(3)\) structure constants, respectively. In the present study we shall investigate as many as twenty double-gluon charmonium hybrid currents with various quantum numbers \(J^{PC}\). 
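The totally symmetric and antisymmetric \(SU(3)\) structure constants \(d^{npq}\) and \(f^{npq}\) entering the color-octet double-gluon fields above can be generated numerically from the Gell-Mann matrices via \(f^{npq}=-\frac{i}{4}\,\mathrm{Tr}([\lambda_{n},\lambda_{p}]\lambda_{q})\) and \(d^{npq}=\frac{1}{4}\,\mathrm{Tr}(\{\lambda_{n},\lambda_{p}\}\lambda_{q})\). A short sketch (Python with NumPy; our own illustration, not taken from the paper):

```python
import numpy as np

# Gell-Mann matrices lambda_1..lambda_8, normalized by Tr(l_n l_p) = 2 delta_np.
s3 = 1 / np.sqrt(3.0)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
], dtype=complex)

# T[n, p, q] = Tr(l_n l_p l_q); then
#   f^{npq} = -i/4 * Tr([l_n, l_p] l_q)   (totally antisymmetric),
#   d^{npq} =  1/4 * Tr({l_n, l_p} l_q)   (totally symmetric).
T = np.einsum('nab,pbc,qca->npq', lam, lam, lam)
f = np.real(-0.25j * (T - T.transpose(1, 0, 2)))
d = np.real(0.25 * (T + T.transpose(1, 0, 2)))

assert abs(f[0, 1, 2] - 1.0) < 1e-12          # f^{123} = 1
assert abs(d[0, 0, 7] - s3) < 1e-12           # d^{118} = 1/sqrt(3)
assert np.allclose(f, -f.transpose(1, 0, 2))  # antisymmetry in n <-> p
assert np.allclose(d, d.transpose(0, 2, 1))   # symmetry in p <-> q
```

Array index 0 in the code corresponds to the color index \(n=1\) in the text.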
We write them as \(J^{\alpha_{1}\beta_{1}\cdots\alpha_{j}\beta_{j}}_{J^{\alpha_{j}\bar{A}/B}_{A/B} }\) or \(J^{\alpha_{1}\cdots\alpha_{j}}_{J^{PC}}\), where the subscripts \(A\), \(B\), and \(C\) denote the quark-antiquark fields \(\bar{c}_{a}\lambda^{ab}_{n}\gamma_{5}c_{b}\), \(\bar{c}_{a}\lambda^{ab}_{n}\sigma_{\mu\nu}c_{b}\), and \(\bar{c}_{a}\lambda^{ab}_{n}\gamma_{\mu}c_{b}\), respectively: \[J_{0^{+}_{+}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\,\,d^{npq}\,\,g^{2}_{s} G^{\mu\nu}_{p}\tilde{G}_{q,\mu\nu}\,,\] \[J_{0^{-+}_{-}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\,\,d^{npq}\,\,g^{2}_{s} G^{\mu\nu}_{p}G_{q,\mu\nu}\,,\] \[J^{\alpha\beta}_{1^{+-}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\,\,f^{npq}\,\,g^{2}_{s} G^{\alpha\mu}_{p}\tilde{G}^{\beta}_{q,\mu}-\left\{\alpha\leftrightarrow\beta\right\},\] \[J^{\alpha\beta}_{1_{1}^{--}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\ f^{npq}\ g_{s}^{2}G^{ \alpha\mu}_{p}G^{\beta}_{q,\mu}-\left\{\alpha\leftrightarrow\beta\right\},\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2^{++}} = \bar{c}_{a}\sigma_{5}\lambda^{ab}_{n}c_{b}\ f^{npq}\ S[g_{s}^{2}G^ {\alpha_{1}\beta_{1}}_{p}\tilde{G}^{\alpha_{2}\beta_{2}}_{q}]\,,\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2^{+}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\ d^{npq}\ S[g_{s}^{2}G ^{\alpha_{1}\beta_{1}}_{p}G^{\alpha_{2}\beta_{2}}_{q}]\,,\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2^{+}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\ f^{npq}\ S[g_{s}^{2}G ^{\alpha_{1}\beta_{1}}_{p}G^{\alpha_{2}\beta_{2}}_{q}]\,,\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2^{+}} = \bar{c}_{a}\gamma_{5}\lambda^{ab}_{n}c_{b}\ f^{npq}\ S[g_{s}^{2}G ^{\alpha_{1}\beta_{1}}_{p}G^{\alpha_{2}\beta_{2}}_{q}]\,,\] \[J_{0_{B}^{++}} = \bar{c}_{a}\sigma^{\mu\nu}\lambda^{ab}_{n}c_{b}\ f^{npq}\ g_{s}^{ 2}G_{p,\nu\rho}G^{\rho}_{q,\mu}\,,\] \[J^{\alpha\beta}_{0_{2}^{+}} = \bar{c}_{a}\sigma^{\mu\nu}\lambda^{ab}_{n}c_{b}\ f^{npq}\ g_{s}^{ 
2}G_{p,\nu\rho}\tilde{G}^{\rho}_{q,\mu}\,,\] \[J^{\alpha\beta}_{1_{B}^{++}} = \mathcal{S}[\bar{c}_{a}\sigma_{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ f^{npq}\ g_{s}^{2}G_{p,\alpha_{2}\mu}G^{\mu}_{q,\beta_{2}}]\] \[\qquad\times\,g^{\beta_{1}\beta_{2}}(g^{\alpha\alpha_{1}}g^{ \beta\alpha_{2}}-g^{\beta\alpha_{1}}g^{\alpha\alpha_{2}})\,,\] \[J^{\alpha\beta}_{1_{2}^{+}} = \mathcal{S}[\bar{c}_{a}\sigma_{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ f^{npq}\ g_{s}^{2}G_{p,\alpha_{2}\mu}\tilde{G}^{\mu}_{q,\beta_{2}}]\] \[\qquad\times\,g^{\beta_{1}\beta_{2}}(g^{\alpha\alpha_{1}}g^{ \beta\alpha_{2}}-g^{\beta\alpha_{1}}g^{\alpha\alpha_{2}})\,,\] \[J^{\alpha\beta}_{1_{B}^{-}} = \bar{c}_{a}\sigma^{\alpha\beta}\lambda^{ab}_{n}c_{b}\ d^{npq}\ g_{s}^{ 2}G^{\mu\nu}_{p}G_{q,\mu\nu}\,,\] \[J^{\alpha\beta}_{1_{B}^{-}} = \bar{c}_{a}\sigma^{\alpha\beta}\lambda^{ab}_{n}c_{b}\ d^{npq}\ g_{s }^{2}G^{\mu\nu}_{p}\tilde{G}_{q,\mu\nu}\,,\] \[J^{\alpha_{1}\beta_{1}}_{2_{B}^{+}} = \bar{c}_{a}\sigma^{\alpha\beta_{1}}\lambda^{ab}_{n}c_{b}\ d^{npq} \ g_{s}^{2}G^{\alpha\mu\nu}_{p}\tilde{G}_{q,\mu\nu}\,,\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2_{B}^{+}} = \mathcal{S}[\bar{c}_{a}\sigma^{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ f^{npq}\ g_{s}^{2}G^{\alpha_{2}\mu}_{p}G^{\beta_{2}}_{q,\mu}]\,,\] \[J^{\alpha_{1}\beta_{1},\alpha_{2}\beta_{2}}_{2_{B}^{+}} = \mathcal{S}[\bar{c}_{a}\sigma^{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ f^{npq}\ g_{s}^{2}G^{\alpha_{2}\mu}_{p}\tilde{G}^{\beta_{2}}_{q,\mu }]\,,\] \[J^{\alpha_{1}\beta_{1}\cdots\alpha_{3}\beta_{3}}_{3_{B}^{+}} = \mathcal{S}[\bar{c}_{a}\sigma^{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ f^{npq}\ g_{s}^{2}G^{\alpha_{2}\beta_{2}}_{p}G^{\alpha_{3}\beta_{3}}_{q }]\,,\] \[J^{\alpha_{1}\beta_{1}\cdots\alpha_{3}\beta_{3}}_{3_{B}^{+}} = \mathcal{S}[\bar{c}_{a}\sigma^{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ d^{npq}\ g_{s}^{2}G^{\alpha_{2}\beta_{2}}_{p}G^{\alpha_{3}\beta_{3}}_{ q}]\,,\] 
\[J^{\alpha_{1}\beta_{1}\cdots\alpha_{3}\beta_{3}}_{3_{B}^{--}} = \mathcal{S}[\bar{c}_{a}\sigma^{\alpha_{1}\beta_{1}}\lambda^{ab}_ {n}c_{b}\ d^{npq}\ g_{s}^{2}G^{\alpha\beta_{2}}_{p}\tilde{G}^{\alpha_{3}\beta_{ 3}}_{q}]\,,\] \[J^{\alpha}_{1_{1}^{+-}} = \bar{c}_{a}\gamma^{\alpha}\lambda^{ab}_{n}c_{b}\ d^{npq}\ g_{s}^{ 2}G^{\mu\nu}_{p}\tilde{G}_{q,\mu\nu}\,,\] \[J^{\alpha}_{1_{--}^{-}} = \bar{c}_{a}\gamma^{\alpha}\lambda^{ab}_{n}c_{b}\ d^{npq}\ g_{s}^{ 2}G^{\mu\nu}_{p}G_{q,\mu\nu}\,. \tag{3}\] Here \(\mathcal{S}\) represents the symmetrization and subtracting trace terms in the two sets \(\{\alpha_{1}\cdots\alpha_{J}\}\) and \(\{\beta_{1}\cdots\beta_{J}\}\) as well as the anti-symmetrization in the sets \(\{\alpha_{1}\beta_{1}\}\cdots\{\alpha_{J}\beta_{J}\}\), simultaneously. The double-gluon hybrid currents with the light quark-antiquark fields \(\bar{q}_{a}\lambda^{ab}_{n}\gamma_{5}q_{b}\) and \(\bar{q}_{a}\lambda^{ab}_{n}\sigma_{\mu\nu}q_{b}\) (\(q=u,d,s\)) have been systematically investigated in Refs. [27; 28; 29], and in the present study we just need to replace the light quark fields by the charm quark fields. However, these currents can only reach the exotic quantum numbers \(J^{PC}=1^{-+}/2^{+-}/3^{-+}\), and we need the other two currents \(J^{\alpha}_{1_{\bar{c}}^{-}}\) with the quark-antiquark field \(\bar{c}_{a}\lambda^{ab}_{n}\gamma_{\mu}c_{b}\) in order to study the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}\), as discussed below. _QCD sum rule analyses_ -- The QCD sum rule method has been widely applied in the study of hadron physics [30; 31; 32; 33]. In this letter we apply this method to study the double-gluon charmonium hybrid currents listed in Eqs. (3). 
We use the current \(J^{\alpha}_{1_{C}^{+-}}\) as an example and calculate its two-point correlation function \[\Pi^{\alpha\beta}(q^{2}) \equiv i\int d^{4}xe^{iqx}\langle 0|\mathbf{T}[J^{\alpha}_{1_{C}^{+-}}(x)J^{\beta\dagger}_{1_{C}^{+-}}(0)]|0\rangle \tag{4}\] \[= (q^{\alpha}q^{\beta}-q^{2}g^{\alpha\beta})\ \Pi_{1}(q^{2})+q^{\alpha}q^{\beta}\Pi_{0}(q^{2})\,,\] at both the hadron and quark-gluon levels. The correlation functions \(\Pi_{1}(q^{2})\) and \(\Pi_{0}(q^{2})\) are respectively contributed by the \(J^{PC}=1^{+-}\) and \(0^{--}\) states through \[\langle 0|J^{\alpha}_{1_{C}^{+-}}|X;1_{C}^{+-}\rangle = \epsilon^{\alpha}f_{1_{C}^{+-}}\,, \tag{5}\] \[\langle 0|J^{\alpha}_{1_{C}^{+-}}|X;0_{C}^{--}\rangle = q^{\alpha}f_{0_{C}^{--}}\,.\] At the quark-gluon level we have taken into account the Feynman diagrams depicted in Fig. 1, and calculated the OPE spectral density \(\rho(s)\) up to the dimension eight (\(D=8\)) condensates. The gluon field strength tensor \[G_{\mu\nu}^{n}=\partial_{\mu}A_{\nu}^{n}-\partial_{\nu}A_{\mu}^{n}+g_{s}f^{npq}A_{p,\mu}A_{q,\nu}\,, \tag{10}\] can be naturally separated into two parts: we use the single-gluon-line to describe the former two terms, and the double-gluon-line with a red vertex to describe the third term. In the present study we have calculated all the diagrams proportional to \(\alpha_{s}^{2}\times g_{s}^{0}\) and \(\alpha_{s}^{2}\times g_{s}^{1}\), while we have partly calculated the diagrams proportional to \(\alpha_{s}^{2}\times g_{s}^{n\geq 2}\). For completeness, we summarize in the supplementary file "OPE.nb" all the spectral densities calculated in the present study. After performing the Borel transformation to Eq. (7) at both the hadron and quark-gluon levels, we obtain \[\Pi(s_{0},M_{B}^{2})\equiv f_{X}^{2}e^{-M_{X}^{2}/M_{B}^{2}}=\int_{s_{<}}^{s_{0}}\rho(s)e^{-s/M_{B}^{2}}ds\,, \tag{11}\] where the continuum has been approximated as the OPE spectral density above the threshold value \(s_{0}\). Eq.
(11) can be used to calculate the mass of \(|X;0_{C}^{--}\rangle\) through \[M_{X}^{2}(s_{0},M_{B})=\frac{\int_{s_{<}}^{s_{0}}s\,\rho(s)e^{-s/M_{B}^{2}}ds}{\int_{s_{<}}^{s_{0}}\rho(s)e^{-s/M_{B}^{2}}ds}\,. \tag{12}\] _Numerical analyses_ -- We study the spectral density \(\rho(s)\) numerically using the following values for various QCD parameters at the QCD scale \(\Lambda_{\rm QCD}=300\) MeV and the renormalization scale 2 GeV [1; 34; 35]: \[\alpha_{s}(Q^{2}) = \frac{4\pi}{11\ln(Q^{2}/\Lambda_{\rm QCD}^{2})}\,,\] \[m_{c}(m_{c}) = 1.27\pm 0.02\ {\rm GeV}\,,\] \[\langle\alpha_{s}GG\rangle = (6.35\pm 0.35)\times 10^{-2}\ {\rm GeV}^{4}\,,\] \[\langle g_{s}^{3}G^{3}\rangle = (8.2\pm 1.0)\times\langle\alpha_{s}GG\rangle\ {\rm GeV}^{2}\,.\] As shown in Eq. (12), the mass of \(|X;0_{C}^{--}\rangle\) depends on two free parameters: the Borel mass \(M_{B}\) and the threshold value \(s_{0}\). Firstly, we investigate the OPE convergence by requiring a) the \(\alpha_{s}^{2}\times g_{s}^{n\geq 2}\) terms to be less than 5%, b) the \(D=8\) terms to be less than 10%, and c) the \(D=6\) terms to be less than 20%: \[{\rm CVG}_{A} \equiv \left|\frac{\Pi^{g_{s}^{n\geq 2}}(s_{0},M_{B}^{2})}{\Pi(s_{0},M_{B}^{2})}\right|\leq 5\%\,, \tag{13}\] \[{\rm CVG}_{B} \equiv \left|\frac{\Pi^{\rm D=8}(s_{0},M_{B}^{2})}{\Pi(s_{0},M_{B}^{2})}\right|\leq 10\%\,, \tag{14}\] \[{\rm CVG}_{C} \equiv \left|\frac{\Pi^{\rm D=6}(s_{0},M_{B}^{2})}{\Pi(s_{0},M_{B}^{2})}\right|\leq 20\%\,. \tag{15}\] Secondly, we investigate the one-pole-dominance assumption by requiring the pole contribution (PC) to be larger than 40%: \[{\rm PC}\equiv\left|\frac{\Pi(s_{0},M_{B}^{2})}{\Pi(\infty,M_{B}^{2})}\right|\geq 40\%\,. \tag{16}\] Altogether, we determine the Borel window to be 8.86 GeV\({}^{2}\leq M_{B}^{2}\leq 10.30\) GeV\({}^{2}\) when setting \(s_{0}=64.0\) GeV\({}^{2}\). We redo the same procedures and find that Borel windows exist as long as \(s_{0}\geq s_{0}^{\rm min}=58.3\) GeV\({}^{2}\).
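Numerically, Eq. (12) is just a ratio of two Borel-weighted integrals of \(\rho(s)\), and the pole contribution in Eq. (16) is the denominator integral truncated at \(s_{0}\). A minimal sketch in Python, using a toy power-law density \(\rho(s)\propto s^{4}\) purely for illustration (the actual OPE densities are those in the supplementary file; all names below are hypothetical):

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal rule (avoids version-specific numpy names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def sum_rule_mass(rho, s_lo, s0, MB2, n=4000):
    """Hadron mass from the ratio in Eq. (12): Borel-weighted moments
    M_X^2 = int s rho(s) e^{-s/M_B^2} ds / int rho(s) e^{-s/M_B^2} ds."""
    s = np.linspace(s_lo, s0, n)
    w = rho(s) * np.exp(-s / MB2)
    return np.sqrt(_trapz(s * w, s) / _trapz(w, s))

def pole_contribution(rho, s_lo, s0, MB2, n=4000, s_inf_factor=20.0):
    """Pole contribution of Eq. (16), Pi(s0)/Pi(inf), with 'infinity'
    approximated by a large cutoff s_inf_factor * s0."""
    s_all = np.linspace(s_lo, s_inf_factor * s0, n)
    s_pole = np.linspace(s_lo, s0, n)
    total = _trapz(rho(s_all) * np.exp(-s_all / MB2), s_all)
    pole = _trapz(rho(s_pole) * np.exp(-s_pole / MB2), s_pole)
    return pole / total

# Toy spectral density rho(s) ~ s^4 (illustrative assumption only).
rho = lambda s: s ** 4
s_lo = 4 * 1.27 ** 2          # threshold s_< = 4 m_c^2 (GeV^2)
s0, MB2 = 64.0, 9.5           # values quoted in the text (GeV^2)

M_X = sum_rule_mass(rho, s_lo, s0, MB2)
PC = pole_contribution(rho, s_lo, s0, MB2)
```

With these toy inputs the extracted mass and pole contribution land in physically sensible ranges; the analysis in the text performs the same ratio with the true OPE spectral densities.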
Accordingly, we set \(s_{0}\) to be slightly larger than \(s_{0}^{\rm min}\) and determine the working regions to be 51.0 GeV\({}^{2}\leq s_{0}\leq 77.0\) GeV\({}^{2}\) and 8.86 GeV\({}^{2}\leq M_{B}^{2}\leq 10.30\) GeV\({}^{2}\), where the mass of \(|X;0_{C}^{--}\rangle\) is calculated to be \[M_{|X;0_{C}^{--}\rangle}=7.28^{+0.38}_{-0.43}\ {\rm GeV}\,. \tag{17}\] Its uncertainty comes from the threshold value \(s_{0}\), the Borel mass \(M_{B}\), and the various QCD parameters listed above. Similarly, we apply the QCD sum rule method to study the other nineteen double-gluon charmonium hybrid currents listed in Eqs. (3). The obtained results are summarized in Table 1. _Decay analyses_ -- As depicted in Fig. 2, the double-gluon charmonium hybrid states can decay after exciting two \(\bar{q}q\) (\(q=u,d,s\)) pairs from the two gluons, followed by recombining the three color-octet \(\bar{c}c/\bar{q}q\) pairs into two/three color-singlet mesons or two color-singlet baryons: \[(\bar{c}c)_{{\bf 8}_{C}}\times(\bar{q}q)_{{\bf 8}_{C}} \rightarrow (\bar{c}q)_{{\bf 1}_{C}}(\bar{q}c)_{{\bf 1}_{C}}\,, \tag{18}\] \[(\bar{c}c)_{{\bf 8}_{C}}\times(\bar{q}q)^{2}_{{\bf 8}_{C}} \rightarrow (\bar{q}q)_{{\bf 1}_{C}}(\bar{c}q)_{{\bf 1}_{C}}(\bar{q}c)_{{\bf 1}_{C}}\,, \tag{19}\] \[(\bar{c}c)_{{\bf 8}_{C}}\times(\bar{q}q)^{2}_{{\bf 8}_{C}} \rightarrow (\bar{c}\bar{q}\bar{q})_{{\bf 1}_{C}}(cqq)_{{\bf 1}_{C}}\,. \tag{20}\] These three decay processes are all at the \({\cal O}(\alpha_{s})\) order, so the three-meson and two-baryon decay patterns are generally not suppressed severely compared to the two-meson decay patterns. Comparatively speaking, their decays into one charmonium meson and light mesons are at the \({\cal O}(\alpha_{s}^{2})\) order and thus suppressed, but these channels can be observed in experiments more easily, such as \(J/\psi\pi\pi\), \(J/\psi\pi\pi\pi\), and \(J/\psi K\bar{K}\), etc.
We list in Table 2 possible \(S\)-wave and \(P\)-wave as well as several \(D\)-wave decay patterns of the double-gluon charmonium hybrid states with the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\), separately for the two-,three-meson and two-baryon decay processes. _Summary_ -- In this letter we study the double-gluon charmonium hybrid states with various quantum numbers. We construct twenty double-gluon charmonium hybrid currents and use them to perform QCD sum rule analyses. These currents can reach the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\) that the conventional \(\bar{q}q\) mesons can not reach, from which we obtain \[M_{|X;0^{--}\rangle} = 7.28^{+0.38}_{-0.43}\,\,{\rm GeV}\,,\] \[M_{|X;0^{+-}\rangle} = 5.19^{+0.36}_{-0.46}\,\,{\rm GeV}\,,\] \[M_{|X;1^{--}\rangle} = 5.46^{+0.41}_{-0.62}\,\,{\rm GeV}\,,\] \begin{table} \begin{tabular}{l l l} \hline \hline \(J^{PC}\) & Two-Meson & Three-Meson \\ \hline \(0^{--}\) & \(D^{+}\bar{D},D_{s}^{*}\bar{D}_{s}\) & \(D\bar{D}\pi/\eta^{(\prime)},D^{*}\bar{D}^{*}\pi/\eta^{(\prime)},D^{*}\bar{D}^{ (*)}\rho/\omega\) \\ & \(\Sigma_{c}^{*}\bar{\Sigma}_{c},\Xi_{c}^{*}\bar{\Xi}_{c}^{(\prime)}\) & \(D_{s}\bar{D}_{s}\eta^{(\prime)},D_{s}^{*}\bar{D}_{s}^{*(\prime)}\rho_{s}^{( *)}D_{s}^{*(\phi)}\phi\) \\ & & \(D\bar{D}_{s}K,D^{*}\bar{D}_{s}^{*}K,D\bar{D}_{s}^{*}K^{*},D\bar{D}_{s}^{*}K^{*}\) \\ & & \(D^{*}\bar{D}_{s}^{*}\bar{\Xi}_{c}^{*}\bar{\Xi}_{c}^{(\prime)}\) & \(D^{*}\bar{D}^{(*)}\pi/\eta^{(\prime)},D^{(*)}\bar{D}^{(*)}\rho/\omega\) \\ & & \(D^{*}\bar{D}_{s}^{*}K,D^{*}\bar{D}_{s}^{(\prime)}K,D^{(*)}\bar{D}_{s}^{(*)}\phi\) \\ \(1^{-+}\) & \(D^{*}\bar{D}^{(*)},D_{s}^{*}\bar{D}_{s}^{(\prime)}\) & \(D^{*}\bar{D}^{(*)}\pi/\eta^{(\prime)},D^{(*)}\bar{D}^{(*)}\rho/\omega\) \\ & \(\Sigma_{c}^{*}\bar{\Sigma}_{c},\Xi_{c}^{*}\bar{\Xi}_{c}^{(\prime)}\) & \(D^{*}\bar{D}_{s}^{(*)}\eta^{(\prime)},D^{(*)}\bar{D}_{s}^{(*)}\phi\) \\ & & 
\(D^{*}\bar{D}_{s}^{*}K,D^{*}\bar{D}_{s}^{(*)}K,D^{(*)}\bar{D}_{s}^{(*)}K^{*}\) \\ \(2^{+-}\) & \(D^{*}\bar{D}^{(*)},D_{s}^{*}\bar{D}_{s}^{(*)}\) & \(D^{*}\bar{D}^{(*)}\pi/\eta,D^{(*)}\rho/\omega,D_{s}\bar{D}_{s}^{*}\eta\) \\ & & \(D\bar{D}_{s}^{*}K,D^{*}\bar{D}_{s}^{(*)}K,D\bar{D}_{s}^{(*)}K,D\bar{D}_{s}K^{*}\) \\ \(3^{-+}\) & \(\Sigma_{c}^{*}\bar{\Sigma}_{c}^{(*)},\Xi_{c}^{*}\bar{\Xi}_{c}^{(\prime)}\) & \(D^{*}\bar{D}^{*}\rho/\omega,D_{s}^{*}\bar{D}_{s}^{*}\phi,D^{*}\bar{D}_{s}^{*}K^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Possible \(S\)-wave (red) and \(P\)-wave (blue) as well as several \(D\)-wave (green) decay patterns of the double-gluon charmonium hybrid states with the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\), separately for the two-,three-meson and two-baryon decay processes. Some charge-conjugated decay patterns are omitted for simplicity. \begin{table} \begin{tabular}{l l l l l l} \hline \hline & & \multicolumn{4}{c}{Working Regions} \\ \(J^{PC}\) & \(s_{0}^{min}[{\rm GeV}^{2}]\) & \(\overline{M_{B}^{2}[{\rm GeV}^{2}]}\) & \(s_{0}[{\rm GeV}^{2}]\) & Pole[\%] & Mass[GeV] \\ \hline \(0^{++}_{A}\) & 56.3 & 7.89–9.36 & \(62\pm 12\) & 40–54 & \(7.29^{+0.33}_{-0.26}\) \\ \(0^{+-}_{B}\) & 48.0 & 6.38–7.63 & \(53\pm 11\) & 40–54 & \(6.76^{+0.32}_{-0.24}\) \\ \(0^{--}_{A}\) & 41.2 & 6.73–7.30 & \(45\pm 9\) & 40–49 & \(5.70^{+0.43}_{-0.57}\) \\ \(0^{-+}_{B}\) & 39.7 & 5.47–6.26 & \(44\pm 9\) & 40–53 & \(5.87^{+0.38}_{-0.56}\) \\ \(0^{--}_{C}\) & 32.9 & 5.03–5.53 & \(36\pm 7\) & 40–50 & \(5.19^{+0.36}_{-0.46}\) \\ \(0^{--}_{C}\) & 58.3 & 8.86–10.30 & \(64\pm 13\) & 40–52 & \(7.28^{+0.38}_{-0.43}\) \\ \(1^{++}_{B}\) & 47.5 & 6.29–7.47 & \(52\pm 10\) & 40–53 & \(6.74^{+0.30}_{-0.18}\) \\ \(1^{--}_{B}\) & 36.3 & 5.18–5.77 & \(40\pm 8\) & 40–51 & \(5.46^{+0.41}_{-0.62}\) \\ \(1^{+-}_{A}\) & 49.4 & 6.70–8.01 & \(54\pm 11\) & 40–53 & \(6.92^{+0.32}_{-0.42}\) \\ \(1^{+-}_{B}\) & 34.2 & 5.28–5.72 & \(38\pm 8\) & 40–49 & 
\(5.15^{+0.44}_{-0.54}\) \\ \(1^{+-}_{C}\) & 55.1 & 7.73–9.26 & \(61\pm 12\) & 40–54 & \(7.22^{+0.33}_{-0.27}\) \\ \(1^{--}_{A}\) & 37.7 & 6. \[M_{|X;2^{+-}\rangle} = 4.48^{+0.25}_{-0.31}\ {\rm GeV}\,,\] \[M_{|X;3^{-+}\rangle} = 5.54^{+0.35}_{-0.43}\ {\rm GeV}\,. \tag{21}\] The above mass values are accessible in the LHC experiments. We further study possible decay patterns of the double-gluon charmonium hybrid states with the exotic quantum numbers \(J^{PC}=0^{--}/0^{+-}/1^{-+}/2^{+-}/3^{-+}\), separately for the two-/three-meson and two-baryon decay processes. We propose to search for them experimentally in their possible decay channels \(D^{(*)}\bar{D}^{(*)}(\pi/\eta/\eta^{\prime}/\rho/\omega)\), \(D_{s}^{(*)}\bar{D}_{s}^{(*)}(\eta/\eta^{\prime}/\phi)\), \(D^{(*)}\bar{D}_{s}^{(*)}K^{(*)}\), and \(\Lambda_{c}\bar{\Lambda}_{c}/\Sigma_{c}^{(*)}\bar{\Sigma}_{c}^{(*)}\), \(\Xi_{c}^{(*)}\bar{\Xi}_{c}^{(*)}/\Omega_{c}^{(*)}\bar{\Omega}_{c}^{(*)}\), etc. In particular, the \(J^{PC}=2^{+-}/3^{-+}\) states may have relatively smaller widths due to their limited decay patterns, so we propose to search for them in the \(D^{*}\bar{D}^{(*)}/D_{s}^{*}\bar{D}_{s}^{(*)}/\Sigma_{c}^{*}\bar{\Sigma}_{c}^{(*)}/\Xi_{c}^{*}\bar{\Xi}_{c}^{(*)}\) channels directly at LHC. Experimental investigations on these states and decay channels can be useful in clarifying the nature of the hybrid state, thus serving as a direct test of QCD in the low energy sector.

###### Acknowledgements.

This project is supported by the National Natural Science Foundation of China under Grant No. 11975033, No. 12075019, No. 12175318, and No. 12070131001, the National Key R&D Program of China under Contracts No. 2020YFA0406400, the Jiangsu Provincial Double-Innovation Program under Grant No. JSSCRC2021488, and the Fundamental Research Funds for the Central Universities.
2306.04907
Estimation of Poverty Measures for Small Areas Under a Two-Fold Nested Error Linear Regression Model: Comparison of Two Methods
Demand for reliable statistics at a local area (small area) level has greatly increased in recent years. Traditional area-specific estimators based on probability samples are not adequate because of small sample size or even zero sample size in a local area. As a result, methods based on models linking the areas are widely used. World Bank focused on estimating poverty measures, in particular poverty incidence and poverty gap called FGT measures, using a simulated census method, called ELL, based on a one-fold nested error model for a suitable transformation of the welfare variable. Modified ELL methods leading to significant gain in efficiency over ELL also have been proposed under the one-fold model. An advantage of ELL and modified ELL methods is that distributional assumptions on the random effects in the model are not needed. In this paper, we extend ELL and modified ELL to two-fold nested error models to estimate poverty indicators for areas (say a state) and subareas (say counties within a state). Our simulation results indicate that the modified ELL estimators lead to large efficiency gains over ELL at the area level and subarea level. Further, modified ELL method retaining both area and subarea estimated effects in the model (called MELL2) performs significantly better in terms of mean squared error (MSE) for sampled subareas than the modified ELL retaining only estimated area effect in the model (called MELL1).
Maryam Sohrabi, J. N. K. Rao
2023-06-08T03:22:59Z
http://arxiv.org/abs/2306.04907v1
Estimation of poverty measures for small areas under a two-fold nested error linear regression model: comparison of two methods

###### Abstract

Demand for reliable statistics at a local area (small area) level has greatly increased in recent years. Traditional area-specific estimators based on probability samples are not adequate because of small sample size or even zero sample size in a local area. As a result, methods based on models linking the areas are widely used. World Bank focused on estimating poverty measures, in particular poverty incidence and poverty gap called FGT measures, using a simulated census method, called ELL, based on a one-fold nested error model for a suitable transformation of the welfare variable. Modified ELL methods leading to significant gain in efficiency over ELL also have been proposed under the one-fold model. An advantage of ELL and modified ELL methods is that distributional assumptions on the random effects in the model are not needed. In this paper, we extend ELL and modified ELL to two-fold nested error models to estimate poverty indicators for areas (say a state) and subareas (say counties within a state). Our simulation results indicate that the modified ELL estimators lead to large efficiency gains over ELL at the area level and subarea level. Further, modified ELL method retaining both area and subarea estimated effects in the model (called MELL2) performs significantly better in terms of mean squared error (MSE) for sampled subareas than the modified ELL retaining only estimated area effect in the model (called MELL1).

Keywords: Areas and subareas, ELL and modified ELL methods, Poverty incidence and gap, Two-fold nested error model.

## 1 Introduction

Data collected from probability samples can provide reliable estimates of parameters of interest for domains (subpopulations) with large enough sample sizes to permit direct, domain-specific estimators of desired precision. We call such domains large areas.
On the other hand, sample sizes can be very small or even zero for local areas (called small areas) and direct estimators are not adequate or feasible. Demand for reliable statistics at the level of small areas has increased greatly and it is necessary to use model-based methods that can yield reliable estimates for small areas by integrating information across areas through linking models. Rao and Molina (2015) provide a comprehensive account of model-based small area estimation of means, totals, and more complex parameters like poverty measures. In this paper, we focus on the estimation of FGT poverty measures, proposed by Foster, Greer and Thorbecke (1984). Poverty incidence, gap and severity belong to the family of FGT measures. The World Bank widely used a method proposed by Elbers, Lanjouw and Lanjouw (2003), called the ELL method, to provide FGT poverty measures for specified local areas in many developing countries. The ELL method involves the following steps: (1) Simulate multiple censuses of the welfare variable of interest based on an assumed model relating the welfare variable to auxiliary variables obtained from a recent census. (2) Calculate the FGT measure for specified local areas from each simulated census and then take the average over the censuses as the ELL estimator. (3) Variance of the simulated census estimators is taken as the estimator of mean squared error (MSE) of the ELL estimator. An advantage of the ELL method is that it is free of parametric distributional assumptions and computationally simple. However, Molina and Rao (2010) showed that the ELL method can lead to large MSE compared to an optimal method, called the Empirical Best (EB) method, assuming a one-fold nested error linear regression model with normally distributed random effects. Diallo and Rao (2018) developed a modification to the ELL method that leads to substantial reduction in MSE and compares favorably to the normality-based EB method.
As in the ELL method, the modified ELL method is free of parametric distributional assumptions. The proposed ELL, modified ELL and EB methods are based on a one-fold nested error linear regression model relating a suitable function of the welfare variable to the census variables and a random area effect. Sample survey data observing the welfare variable and the census variables, based on two-stage cluster sampling, are used to fit the one-fold model. In the traditional ELL method, random cluster effects are included in the model and simulated censuses are generated. From a simulated census, a desired poverty measure is calculated for any desired small area. Note that it is not necessary to specify the areas in advance because area effects are not included in the ELL one-fold model. Hossain et al. (2020) used a two-stage sample of districts and households within districts to estimate a food insecurity measure at the district level in Bangladesh. In this case, clusters are areas. In this paper, we focus on two-fold random effect models involving area and subarea random effects. For example, an area could refer to a state and a subarea to a county within a state. Marhuenda et al. (2017) studied EB estimation of FGT poverty measures under the two-fold model, assuming that the random effects in the model are normally distributed, as in the case of the one-fold model studied by Molina and Rao (2010). In their application to Spanish survey data, areas are provinces and subareas are comarcas, and it is of interest to obtain estimates of poverty measures at the domain as well as subdomain level. Section 2 introduces the two-fold model and the associated FGT poverty measures for domains and subdomains. Section 3 extends the ELL and modified ELL methods to two-fold models with no distributional assumptions on the random effects in the model, as in the case of the one-fold model.
Section 4 presents some results of a simulation study on the performance of ELL and modified ELL estimators. Finally, some remarks on the estimation of MSE of the estimators are given in Section 5.

## 2 Two-fold Nested Error Model

The finite population of interest consists of \(D\) areas (domains) \(d=1,\ldots,D\), and area \(d\) is divided into \(M_{d}\) subareas (subdomains) \(j=1,\ldots,M_{d}\). The subdomain \(j\) within the domain \(d\) contains \(N_{d\!j}\) elements \(k=1,\ldots,N_{d\!j}\). The population data are denoted by \(\{(E_{djk},{\bf x}_{djk}^{T}),d=1,\ldots,D;j=1,\ldots,M_{d};k=1,\ldots,N_{d\!j}\}\), where \(E_{djk}\) is the welfare variable of interest and \({\bf x}_{djk}^{T}=(x_{1djk},\ldots,x_{pdjk})\) is a \(p\)-vector of known census variables. If an intercept term is needed, then we set \(x_{1djk}=1\) for all the population units. To reduce positive skewness of the welfare variable we make a log transformation \(y_{d\!jk}=\log(E_{d\!jk})\). A two-fold nested error population model relating the transformed variable \(y_{d\!jk}\) to the census variables \({\bf x}_{d\!jk}\) is given by \[y_{djk}={\bf x}_{djk}^{T}\boldsymbol{\beta}+u_{d}+v_{d\!j}+e_{djk};\ \ d=1,\ldots,D,\ j=1,\ldots,M_{d},\ k=1,\ldots,N_{d\!j}, \tag{2.1}\] where \(\boldsymbol{\beta}\) is a \(p\times 1\) vector of unknown regression parameters, \(u_{d}\) are the area effects, \(v_{d\!j}\) are the subarea effects, and \(e_{d\!jk}\) are the residual errors. The three random errors \(u_{d}\), \(v_{d\!j}\), and \(e_{d\!jk}\) are independent with \({\rm E}(u_{d})={\rm E}(v_{d\!j})={\rm E}(e_{d\!jk})=0\). Parametric distributions on the two random effects and the unit errors are not assumed. We assume two-stage sampling in each area: a sample, \(s_{d}\), of \(m_{d}(\leq M_{d})\) subareas is selected from area \(d\) and, if subarea \(j\) is sampled, a subsample, \(s_{d\!j}\), of \(n_{d\!j}\) elements is selected from subarea \(j\).
We further assume that the population model (2.1) also holds for the sample data \(\{(y_{d\!jk},{\bf x}_{d\!jk}),d=1,\ldots,D;j=1,\ldots,m_{d};k=1,\ldots,n_{d\!j}\}\). Therefore, the model for sample data is given by \[y_{d\!jk}={\bf x}_{d\!jk}^{T}\boldsymbol{\beta}+u_{d}+v_{d\!j}+e_{d\!jk};\ \ d=1,\ldots,D,\ j=1,\ldots,m_{d},\ k=1,\ldots,n_{d\!j}. \tag{2.2}\] The FGT population measure for area \(d\) is given by \[F_{\alpha d}(z)=\frac{1}{N_{d}}\sum_{j=1}^{M_{d}}\sum_{k=1}^{N_{d\!j}}F_{\alpha djk}, \tag{2.3}\] where \(N_{d}=\sum_{j}N_{d\!j}\) and \[F_{\alpha d\!jk}=\left(\frac{z-E_{d\!jk}}{z}\right)^{\alpha}I(E_{d\!jk}<z). \tag{2.4}\] In (2.4), \(z\) is the known poverty line and \(I(E_{d\!jk}<z)\) is the indicator variable taking the value \(1\) when \(E_{d\!jk}\) is smaller than \(z\) and 0 otherwise. Poverty incidence, poverty gap and poverty severity correspond to \(\alpha=0\), \(\alpha=1\), and \(\alpha=2\), respectively. Also, the FGT measure for subarea \(j\) within area \(d\) is given by \[F_{\alpha d\!j}(z)=\frac{1}{N_{d\!j}}\sum_{k=1}^{N_{d\!j}}F_{\alpha djk}. \tag{2.5}\]

## 3 Estimators of FGT poverty measures

In this section, we describe how to estimate FGT poverty measures (2.3) and (2.5) for areas and subareas, respectively. Suppose that there is a one-to-one transformation \(y_{djk}=\log(E_{djk}+c)\) of the welfare variables \(E_{djk}\), for a known constant \(c\geq 0\) (\(c=0\) gives the log transformation used above). Then we can express \(F_{\alpha djk}\) given in (2.4) in terms of \(y_{djk}\): \[F_{\alpha djk}=\left(\frac{z-\exp(y_{djk})+c}{z}\right)^{\alpha}I(\exp(y_{djk})-c<z):=h_{\alpha}(y_{djk}).\]

### **ELL Method**

Elbers, Lanjouw and Lanjouw (2003) consider a linear mixed model for a log-transformation of the variable measuring welfare of individuals, with random effects for the sampling clusters. In the small area context, we can assume that the sampling clusters are the areas.
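The FGT quantities in Eqs. (2.3)–(2.5) reduce to simple averages of the per-unit term (2.4). A minimal numerical sketch in Python (the welfare values and function names below are hypothetical):

```python
import numpy as np

def fgt(E, z, alpha):
    """FGT poverty measure: mean of ((z - E)/z)^alpha over units with
    welfare E below the poverty line z (Eq. 2.4 averaged as in Eq. 2.5)."""
    E = np.asarray(E, dtype=float)
    poor = E < z                              # indicator I(E < z)
    return float(np.mean(((z - E) / z) ** alpha * poor))

# Toy welfare values for one subarea (hypothetical numbers):
E = np.array([0.3, 0.8, 1.2, 2.0, 0.5])
z = 1.0
incidence = fgt(E, z, alpha=0)   # share of units below z  -> 0.6
gap = fgt(E, z, alpha=1)         # average relative shortfall -> 0.28
```

Area-level measures (2.3) are the same averages taken over all units in the area; severity uses `alpha=2`.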
In this case, the model becomes the one-fold nested error model of Battese, Harter and Fuller (1988) for the log-transformation of the welfare variables, that is, \(y_{dj}=\log(E_{dj})\). The World Bank applied the ELL method extensively to obtain poverty and inequality measures for many countries: for more details see Elbers, Lanjouw and Lanjouw (2003). In this section, we extend the ELL method under the one-fold nested error model to the two-fold nested error model (2.1), which has an area level random effect, a subarea level random effect, and a unit level error term. The ELL method consists of drawing from the estimated area, subarea and unit level residuals to create a simulated census. The steps of the ELL method can be summarized as follows: 1. Estimate \(\boldsymbol{\beta}\) from the nested error model given by (2.2), using the ordinary least squares (OLS) method, and obtain unit level residuals \(\hat{r}_{djk}=y_{djk}-\mathbf{x}_{djk}^{T}\hat{\boldsymbol{\beta}}_{OLS}\), where \(\hat{\boldsymbol{\beta}}_{OLS}\) denotes the estimator of \(\boldsymbol{\beta}\). 2. The area effect \(u_{d}\), the subarea effect \(v_{dj}\), and the unit level errors \(e_{djk}\) are estimated as \[\hat{u}_{d}=\frac{1}{n_{d}}\sum_{j=1}^{m_{d}}\sum_{k=1}^{n_{dj}}\hat{r}_{djk},\] where \(n_{d}=\sum_{j=1}^{m_{d}}n_{dj}\), \[\hat{v}_{dj}=\frac{1}{n_{dj}}\sum_{k=1}^{n_{dj}}\hat{r}_{djk}-\hat{u}_{d},\] and \[\hat{e}_{djk}=\hat{r}_{djk}-\frac{1}{n_{dj}}\sum_{k=1}^{n_{dj}}\hat{r}_{djk}.\] 3. Draw \(\hat{\boldsymbol{\beta}}^{*(b)}\), \(u_{d}^{*(b)}\), \(v_{dj}^{*(b)}\), and \(e_{djk}^{*(b)}\), \(b=1,\ldots,B\) from \(N(\hat{\boldsymbol{\beta}}_{OLS},Cov(\hat{\boldsymbol{\beta}}_{OLS}))\), the empirical distribution of \(\hat{u}_{d}\), the empirical distribution of \(\hat{v}_{dj}\), and the empirical distribution of \(\hat{e}_{djk}\), respectively. 4.
Construct \(B\) simulated census values \(\{y^{*(b)}_{djk};k=1,\ldots,N_{dj},j=1,\ldots,M_{d},d=1,\ldots,D\}\) as follows: \(y^{*(b)}_{djk}=\mathbf{x}^{T}_{djk}\hat{\boldsymbol{\beta}}^{*(b)}+u^{*(b)}_{d}+v^{*(b)}_{dj}+e^{*(b)}_{djk}\), using the census values of the covariates. 5. The simulated population measures \(F^{*(b)}_{\alpha d}=\frac{1}{N_{d}}\sum_{j=1}^{M_{d}}\sum_{k=1}^{N_{dj}}F^{*(b)}_{\alpha djk}\) and \(F^{*(b)}_{\alpha dj}=\frac{1}{N_{dj}}\sum_{k=1}^{N_{dj}}F^{*(b)}_{\alpha djk}\) are calculated from each simulated census \(b\), where \(F^{*(b)}_{\alpha djk}=h_{\alpha}(y^{*(b)}_{djk})\), \(b=1,\ldots,B\). 6. The ELL estimators of \(F_{\alpha d}\) and \(F_{\alpha dj}\) are calculated by averaging over the \(B\) simulated measures as follows: \[\hat{F}^{ELL}_{\alpha d}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha d}\] and \[\hat{F}^{ELL}_{\alpha dj}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha dj}.\]

### Modified ELL

**Method 1.** This modification retains \(\hat{u}_{d}\) in constructing the predictors \(y^{*(b)}_{djk}\), unlike the use of \(u^{*(b)}_{d}\) in the ELL method. We have the following modified ELL method: 1. From the nested error model given by (2.2), estimate the fixed effects \(\boldsymbol{\beta}\) using OLS. 2. Estimate \(u_{d}\), \(v_{dj}\), and \(e_{djk}\) as in the traditional ELL method. 3. Draw \(v^{*(b)}_{dj}\) and \(e^{*(b)}_{djk}\), \(b=1,\ldots,B\) from the empirical distributions of \(\hat{v}_{dj}\) and \(\hat{e}_{djk}\), respectively. 4. Construct \(B\) simulated census values \(\{y^{*(b)}_{djk};k=1,\ldots,N_{dj},j=1,\ldots,M_{d},d=1,\ldots,D\}\) as follows: \[y^{*(b)}_{djk}=\mathbf{x}^{T}_{djk}\hat{\boldsymbol{\beta}}_{OLS}+\hat{u}_{d}+v^{*(b)}_{dj}+e^{*(b)}_{djk}.\] 5. Then, the simulated population measures \(F^{*(b)}_{\alpha d}\) and \(F^{*(b)}_{\alpha dj}\) are calculated as in the traditional ELL method from each simulated census \(b\).
The modified ELL estimators of \(F_{\alpha d}\) and \(F_{\alpha dj}\), denoted by \(\hat{F}^{MELL1}_{\alpha d}\) and \(\hat{F}^{MELL1}_{\alpha dj}\), respectively, are as follows: \[\hat{F}^{MELL1}_{\alpha d}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha d}\] and \[\hat{F}^{MELL1}_{\alpha dj}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha dj}.\]

**Method 2.** This modification retains \(\hat{u}_{d}\) and \(\hat{v}_{d\!j}\), for \(j\in s_{d}\), and uses \(v^{*(b)}_{d\!j}\) for subareas \(j\) not sampled from area \(d\) in constructing the predictors \(y^{*(b)}_{d\!jk}\). Then, the modification is as follows: 1. From the nested error model given by (2.2), estimate the fixed effects \(\boldsymbol{\beta}\) using OLS. 2. Estimate \(u_{d}\), \(v_{d\!j}\), and \(e_{d\!jk}\) as in the traditional ELL method. 3. Draw \(e^{*(b)}_{d\!jk}\), \(b=1,\ldots,B\) from the empirical distribution of \(\hat{e}_{d\!jk}\). 4. Construct \(B\) simulated census values \(y^{*(b)}_{d\!jk}\) for the units in the sampled subareas as \[y^{*(b)}_{d\!jk}=\mathbf{x}^{T}_{d\!jk}\hat{\boldsymbol{\beta}}_{OLS}+\hat{u}_{d}+\hat{v}_{d\!j}+e^{*(b)}_{d\!jk}\] and for subareas that are not sampled \(y^{*(b)}_{d\!jk}\) are generated from \[y^{*(b)}_{d\!jk}=\mathbf{x}^{T}_{d\!jk}\hat{\boldsymbol{\beta}}_{OLS}+\hat{u}_{d}+v^{*(b)}_{d\!j}+e^{*(b)}_{d\!jk},\] where \(v^{*(b)}_{d\!j}\), \(b=1,\ldots,B\), are drawn from the empirical distribution of \(\hat{v}_{d\!j}\). 5.
Then, the simulated population measures \(F^{*(b)}_{\alpha d}\) and \(F^{*(b)}_{\alpha d\!j}\) are calculated as in the traditional ELL method from each simulated census \(b\), and the second modified ELL estimators of \(F_{\alpha d}\) and \(F_{\alpha d\!j}\), denoted by \(\hat{F}^{MELL2}_{\alpha d}\) and \(\hat{F}^{MELL2}_{\alpha d\!j}\), respectively, are as follows: \[\hat{F}^{MELL2}_{\alpha d}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha d}\] and \[\hat{F}^{MELL2}_{\alpha d\!j}=\frac{1}{B}\sum_{b=1}^{B}F^{*(b)}_{\alpha d\!j}.\]

## 4 Simulation Study

A simulation study is undertaken to examine the performance of the two modified ELL methods under the two-fold nested error linear regression model (2.1). Marhuenda et al. (2017) conducted a simulation study on the performance of EB estimators of FGT measures for areas and subareas under a two-fold nested error model assuming \(u_{d}\), \(v_{d\!j}\) and \(e_{d\!jk}\) are normally distributed. We follow their simulation set-up but also consider skew normal scenarios: (1) \((u_{d},v_{d\!j})\) normal (N) and \(e_{d\!jk}\) skew normal (SN). (2) \(u_{d}\) normal and \((v_{d\!j},e_{d\!jk})\) skew normal. Section 4.1 reports results for case 1 and results for case 2 are given in Section 4.2. We also include the case of \((u_{d},v_{dj},e_{djk})\) normal (N) studied by Marhuenda et al. (2017). We generated \(I=1000\) populations, each of size \(N=20,000\), composed of \(D=40\) areas, each containing \(M_{d}=10\) subareas of \(N_{d\!j}=50\) units each. We first generated the covariate vector \(\mathbf{x}_{djk}=(1,x_{1djk},x_{2djk})^{{}^{\prime}}\) for each population unit, based on \(x_{1djk}\sim\text{B}(1,p_{1d\!j})\) and \(x_{2djk}\sim\text{B}(1,p_{2d\!j})\) with probabilities \(p_{1d\!j}=0.2+\frac{0.4d}{D}+\frac{0.4j}{M_{d}}\) and \(p_{2d\!j}=0.2,\,j=1,\ldots,10,\,d=1,\ldots,40\).
The generated population covariate values are held fixed and used to generate the dependent variable \(y_{djk}\) from the two-fold model using \(\beta=(3,0.03,-0.04)^{\prime}\) and specified distributions for \(u_{d}\), \(v_{dj}\), and \(e_{djk}\) with mean zero and standard deviations \(\sigma_{u}=0.5\), \(\sigma_{v}=0.25\) and \(\sigma_{e}=0.50\), respectively. In scenario (1), \(u_{d}\sim\text{N}(0,\sigma_{u}^{2})\), \(v_{dj}\sim\text{N}(0,\sigma_{v}^{2})\) and \(e_{djk}\sim\text{SN}(\mu,\sigma^{2},\lambda)\), with \(\mu\) and \(\sigma\) chosen to make the mean and standard deviation of \(e_{djk}\) equal to zero and \(\sigma_{e}\), and \(\lambda=\lambda_{e}=3\), which leads to moderate skewness. Note that the two-fold model is applied to \(y_{djk}=\log(E_{djk})\), which reduces the skewness in the welfare variable \(E_{djk}\). As a result, moderate skewness in the errors \(e_{djk}\) is realistic. In scenario (2), \(u_{d}\sim\text{N}(0,\sigma_{u}^{2})\) and \((v_{dj},e_{djk})\) are skew normal with mean zero, standard deviations \(\sigma_{v}=0.25\) and \(\sigma_{e}=0.50\), and shape parameters \(\lambda_{v}=1\) and \(\lambda_{e}=3\), respectively. The above process was repeated to generate \(I=1000\) populations with values \(\{y^{(i)}_{djk},\,i=1,\ldots,1000\}\). We calculated the FGT measures \(F_{\alpha d}^{(i)}\) for each area and \(F_{\alpha dj}^{(i)}\) for each subarea from each of the simulated populations \(i=1,\ldots,1000\). We focus on the poverty incidence \((\alpha=0)\) and the poverty gap \((\alpha=1)\). Following Marhuenda et al. (2017), we took the poverty line as \(z=0.6\,\text{med}(E_{djk})\) for a population generated as above, where \(E_{djk}=\exp(y_{djk})\). We considered two cases for generating a sample of units. In case 1, all subareas are sampled \((m_{d}=M_{d}=10)\), with \(n_{dj}=10\) units selected from each subarea by simple random sampling.
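The skew-normal errors used above can be generated without specialized libraries via the standard representation \(X=\delta|Z_{0}|+\sqrt{1-\delta^{2}}\,Z_{1}\) with \(\delta=\lambda/\sqrt{1+\lambda^{2}}\), then centered and scaled so that the mean is zero and the standard deviation is \(\sigma_{e}\). This is one standard construction, not necessarily the code the authors used:

```python
import numpy as np

def skew_normal(n, sd, lam, rng):
    """Draw n skew-normal errors with mean 0, standard deviation sd,
    and shape (skewness) parameter lam, via X = d|Z0| + sqrt(1-d^2) Z1
    with d = lam / sqrt(1 + lam^2)."""
    delta = lam / np.sqrt(1.0 + lam**2)
    z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
    x = delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1  # SN(0, 1, lam)
    mean = delta * np.sqrt(2.0 / np.pi)                    # E[X]
    std = np.sqrt(1.0 - 2.0 * delta**2 / np.pi)            # SD[X]
    return sd * (x - mean) / std                           # mean 0, sd `sd`

rng = np.random.default_rng(1)
e = skew_normal(200_000, sd=0.50, lam=3.0, rng=rng)  # e_djk in scenario (1)
```

With \(\lambda_{e}=3\) the resulting skewness is about \(0.67\), i.e. the "moderate skewness" referred to in the text.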
In case 2, a simple random sample of \(m_{d}=5\) subareas is selected from each area and then a simple random sample of \(n_{dj}=20\) units is drawn from each sampled subarea. In both cases, the overall sample size within each area is equal to \(100\). We used a model-based set-up by conditioning on the selected sample of units and extracting the corresponding sample data \((y_{djk}^{(i)},\mathbf{x}_{djk})\) from each simulated population \(i\). Using the sample data, we then obtained the desired estimates for areas and subareas from the assumed two-fold model. Denoting the estimators for areas and subareas for any given method by \(\hat{F}_{\alpha d}\) and \(\hat{F}_{\alpha dj}\) respectively, we computed the empirical biases of the estimators for areas and subareas as \[B(\hat{F}_{\alpha d})=I^{-1}\sum_{i=1}^{I}(\hat{F}_{\alpha d}^{(i)}-F_{\alpha d}^{(i)}),\,\,\,B(\hat{F}_{\alpha dj})=I^{-1}\sum_{i=1}^{I}(\hat{F}_{\alpha dj}^{(i)}-F_{\alpha dj}^{(i)}),\] where \(\hat{F}_{\alpha d}^{(i)}\) and \(\hat{F}_{\alpha dj}^{(i)}\) denote the estimators for the simulated population \(i\). Similarly, we computed the empirical MSEs of the estimators for areas and subareas as \[MSE(\hat{F}_{\alpha d})=I^{-1}\sum_{i=1}^{I}(\hat{F}_{\alpha d}^{(i)}-F_{\alpha d}^{(i)})^{2},\,\,\,MSE(\hat{F}_{\alpha dj})=I^{-1}\sum_{i=1}^{I}(\hat{F}_{\alpha dj}^{(i)}-F_{\alpha dj}^{(i)})^{2}.\]

### 4.1 \(e_{djk}\) skew normal

Figure 1 presents box plots of bias \((\%)\) for ELL, modified ELL1 (MELL1), modified ELL2 (MELL2), and EBtwo estimators of FGT poverty incidence and poverty gap for areas and subareas under scenario (1), with skew normal errors \(e_{djk}\). Here EBtwo denotes the empirical best estimator of Marhuenda et al. (2017), which assumes \((u_{d},v_{dj},e_{djk})\) normal. The box plots in Figure 1 show that ELL performs significantly worse than the other methods, leading to substantial underestimation in all cases, particularly for poverty incidence.
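The empirical bias and MSE formulas above are plain Monte Carlo averages over the \(I\) simulated populations. A minimal sketch (the numbers in the toy check are invented for illustration):

```python
import numpy as np

def empirical_bias_mse(est, true):
    """Monte Carlo bias and MSE over I simulated populations.
    est, true: arrays of \\hat F^{(i)} and F^{(i)}, i = 1..I."""
    err = np.asarray(est, dtype=float) - np.asarray(true, dtype=float)
    return float(err.mean()), float(np.mean(err**2))

# Toy check with I = 4 simulated populations.
bias, mse = empirical_bias_mse([0.22, 0.18, 0.25, 0.20],
                               [0.20, 0.20, 0.20, 0.20])
# bias = 0.0125, mse = 0.000825
```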
Overall, MELL2 and EBtwo perform better than MELL1, although EBtwo leads to slight overestimation for the poverty gap.

Figure 1: Boxplots of biases \((\times 100)\) over simulated populations of EBtwo, MELL1, MELL2, and ELL estimators of the poverty incidence (left side) and the poverty gap (right side), for the areas and subareas in Case 1, presented in (a) and (b), and for non-sampled subareas of Case 2, presented in (c) (\(e_{djk}\) is SN).

Table 1 reports results on average MSE for the areas, sampled subareas and non-sampled subareas. Table 1 shows that ELL leads to very large average MSE in all cases compared to the other methods. For areas, MELL2 and MELL1 are comparable and slightly better than EBtwo in terms of average MSE. For the case where all subareas are sampled (case 1), MELL2 is significantly better than MELL1. This is to be expected because MELL1, unlike MELL2, does not use a subarea-specific method. Also, EBtwo seems to be somewhat better than MELL2 in terms of average MSE: \(8.81\) for EBtwo vs. \(11.39\) for MELL2 in the case of the poverty gap. Turning to case 2, where not all subareas are sampled, results for sampled subareas are similar to those for case 1, where all subareas are sampled. Note that the average MSE is significantly decreased for sampled subareas because the sample size in those subareas is doubled relative to case 1. On the other hand, for areas the average MSE is significantly increased in case 2 compared to case 1 because the number of sampled subareas is reduced by half. For non-sampled subareas (case 2), MELL1, MELL2 and EBtwo are comparable in terms of average MSE. This is expected because for non-sampled subareas MELL1 and MELL2 are similar. Note that the average MSE is significantly increased for non-sampled subareas compared to the corresponding values for sampled subareas. Figure 2 presents box plots of MSE for areas, sampled subareas and non-sampled subareas.
Conclusions from those plots are similar to those drawn from the average MSE values.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & Poverty & \multicolumn{4}{c}{Estimation method} \\ \cline{4-7} & & indicator & EBtwo & MELL1 & MELL2 & ELL \\ \hline \multirow{4}{*}{Case 1} & Area & inc & 8.70 & 7.88 & 7.28 & 557.44 \\ & & gap & 1.28 & 1.30 & 1.17 & 95.33 \\ \cline{2-7} & Subarea & inc & 56.09 & 187.69 & 69.35 & 737.22 \\ & & gap & 8.81 & 33.52 & 11.39 & 127.53 \\ \hline \multirow{6}{*}{Case 2} & Area & inc & 24.15 & 23.69 & 23.11 & 562.45 \\ & & gap & 4.22 & 4.27 & 4.17 & 96.46 \\ \cline{2-7} & Sampled- & inc & 27.55 & 167.68 & 34.49 & 742.50 \\ & subarea & gap & 3.97 & 29.81 & 5.02 & 128.94 \\ \cline{2-7} & Nonsampled- & inc & 233.85 & 236.05 & 236.59 & 738.95 \\ & subarea & gap & 41.77 & 42.39 & 42.43 & 127.65 \\ \hline \hline \end{tabular} \end{table} Table 1: Average of MSEs \((\times 10^{4})\). Case 1: all subareas are sampled; Case 2: not all subareas are sampled (\(e_{djk}\) is SN).

### 4.2 \((v_{dj},e_{djk})\) skew normal

We also considered the case where both \(v_{dj}\) and \(e_{djk}\) are skew normal and \(u_{d}\) is normal. The average MSEs and box plots of MSEs for areas, sampled subareas and non-sampled subareas, reported in Table 2 and Figure 3 respectively, are very similar to those reported for the case where only \(e_{djk}\) is SN. Therefore, our conclusions for the two cases are similar.

### 4.3 \((u_{d},v_{dj},e_{djk})\) Normal

We also considered the case where \(u_{d}\), \(v_{dj}\) and \(e_{djk}\) are normally distributed. Marhuenda et al. (2017) studied this case in the context of EBtwo estimators.
Again, our results on MSE for areas, sampled subareas and non-sampled subareas indicate similarity with the results in Sections 4.1 and 4.2, corresponding to \(e_{djk}\) skew normal and \((v_{dj},e_{djk})\) skew normal. We report results only on average MSE in Table 3. We note that EBtwo leads to a substantial reduction in average MSE over MELL2 for subareas in case 1, where all subareas are sampled: \(49.77\) vs. \(64.30\) for incidence and \(8.84\) vs. \(11.38\) for gap. This is to be expected because EBtwo is optimal under normality.

Figure 2: Boxplots of MSEs \((\times 10^{4})\) over simulated populations of EBtwo, MELL1, MELL2, and ELL estimators of the poverty incidence (left side) and the poverty gap (right side), for the areas and subareas in Case 1, presented in (a) and (b), and for non-sampled subareas of Case 2, presented in (c) (\(e_{djk}\) is SN).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & Poverty & \multicolumn{4}{c}{Estimation method} \\ \cline{4-7} & & indicator & EBtwo & MELL1 & MELL2 & ELL \\ \hline \multirow{4}{*}{Case 1} & Area & inc & 8.89 & 7.92 & 7.36 & 564.12 \\ & & gap & 1.32 & 1.31 & 1.19 & 95.71 \\ \cline{2-7} & Subarea & inc & 55.79 & 182.97 & 69.11 & 739.27 \\ & & gap & 8.74 & 32.29 & 11.43 & 126.68 \\ \hline \multirow{6}{*}{Case 2} & Area & inc & 23.76 & 23.48 & 22.76 & 561.67 \\ & & gap & 4.13 & 4.23 & 4.09 & 96.70 \\ \cline{2-7} & Sampled- & inc & 27.28 & 164.16 & 34.18 & 739.02 \\ & subarea & gap & 3.91 & 28.82 & 5.00 & 128.50 \\ \cline{2-7} & Nonsampled- & inc & 230.19 & 232.75 & 232.49 & 734.29 \\ & subarea & gap & 40.69 & 41.35 & 41.27 & 126.62 \\ \hline \hline \end{tabular} \end{table} Table 2: Average of MSEs \((\times 10^{4})\). Case 1: all subareas are sampled; Case 2: not all subareas are sampled (\(v_{dj}\) and \(e_{djk}\) are SN).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & Poverty & \multicolumn{4}{c}{Estimation method} \\ \cline{4-7} & & indicator & EBtwo & MELL1 & MELL2 & ELL \\ \hline \multirow{4}{*}{Case 1} & Area & inc & 5.87 & 6.82 & 6.42 & 508.88 \\ & & gap & 1.07 & 1.23 & 1.14 & 93.07 \\ \cline{2-7} & Subarea & inc & 49.77 & 168.60 & 64.30 & 670.71 \\ & & gap & 8.84 & 32.14 & 11.38 & 123.98 \\ \hline \multirow{6}{*}{Case 2} & Area & inc & 19.92 & 21.73 & 21.32 & 514.96 \\ & & gap & 3.81 & 4.11 & 4.03 & 94.63 \\ \cline{2-7} & Sampled- & inc & 24.62 & 151.31 & 31.31 & 677.23 \\ & subarea & gap & 4.11 & 29.04 & 4.83 & 126.31 \\ \cline{2-7} & Nonsampled- & inc & 209.02 & 214.87 & 215.08 & 675.51 \\ & subarea & gap & 40.22 & 41.28 & 41.32 & 125.06 \\ \hline \hline \end{tabular} \end{table} Table 3: Average of MSEs \((\times 10^{4})\). Case 1: all subareas are sampled; Case 2: not all subareas are sampled (\(u_{d}\), \(v_{dj}\) and \(e_{djk}\) all N).

### 4.4 Comparison with one-fold model

In this section, we study the effect of ignoring the area effect by using a one-fold model containing only subarea random effects. In particular, we are interested in the performance of MSE for non-sampled subareas when the true model is the two-fold model. Denoting the subareas by a single index \(t\), the one-fold model may be written as \(y_{tk}=\mathbf{x}_{tk}^{\prime}\boldsymbol{\beta}+u_{t}+e_{tk}\), analogous to the model studied by Diallo and Rao (2017). We can then use their results to get ELL estimators for non-sampled subareas. They have also studied modified ELL under the one-fold model, but for non-sampled subareas it is essentially the same as ELL. For sampled subareas, modified ELL was shown to be more efficient than ELL under the one-fold model.
We used the case 2 set-up of our simulation study, fitted the one-fold model to the sample observations, and obtained ELL estimators for non-sampled subareas for each simulated sample. The resulting box plots of MSE of ELL based on the one-fold model, denoted ELL1, and of ELL, MELL1 and MELL2 based on the two-fold model, for the non-sampled subareas, are reported in Figure 4. Average MSE values are reported in Table 4. It is clear from the box plots and average MSE values that MELL1 and MELL2 behave similarly in terms of MSE and lead to a large reduction in MSE relative to ELL1 and ELL. We also note that ELL based on the two-fold model and ELL1 based on the one-fold model give similar results in terms of MSE.

Figure 3: Boxplots of MSEs \((\times 10^{4})\) over simulated populations of ELL, MELL1, MELL2, and EBtwo estimators of the poverty incidence (left side) and the poverty gap (right side), for the areas and subareas in Case 1, presented in (a) and (b), and for non-sampled subareas of Case 2, presented in (c) (\(v_{dj}\) and \(e_{djk}\) are SN).

\begin{table} \begin{tabular}{c c c c c c} \hline & Poverty & \multicolumn{4}{c}{Estimation method} \\ \cline{3-6} & indicator & ELL & MELL1 & MELL2 & ELL1 \\ \hline Nonsampled- & inc & 738.95 & 236.05 & 236.59 & 740.31 \\ subarea & gap & 127.65 & 42.39 & 42.43 & 127.94 \\ \hline \end{tabular} \end{table} Table 4: Average of MSEs \((\times 10^{4})\) over all non-sampled subareas (\(e_{djk}\) is SN).

Figure 4: Boxplots of MSEs \((\times 10^{4})\) over simulated populations of two-fold and one-fold estimators of the poverty incidence (left side) and the poverty gap (right side) for each non-sampled subarea of Case 2 (\(e_{djk}\) is SN).

## 5 MSE Estimation

In the ELL method for the one-fold model, the variability of the simulated census measures is taken as the estimator of the MSE of the ELL estimator. Similarly, under the two-fold model the corresponding MSE estimators of ELL for areas and subareas are given by
\[MSE(\hat{F}_{\alpha d}^{ELL})=B^{-1}\sum_{b=1}^{B}(F_{\alpha d}^{*(b)}-\hat{F}_{\alpha d}^{ELL})^{2} \tag{5.1}\] and \[MSE(\hat{F}_{\alpha dj}^{ELL})=B^{-1}\sum_{b=1}^{B}(F_{\alpha dj}^{*(b)}-\hat{F}_{\alpha dj}^{ELL})^{2}. \tag{5.2}\] MSE estimators similar to (5.1) and (5.2) are applicable to MELL1 and MELL2, using their simulated census measures. The proposed MSE estimators are simple, but they can lead to significant underestimation of the true MSE because the model parameters and the random effects in the model are not re-estimated in each replicate from the replicated sample data \((y_{djk}^{*(b)},\mathbf{x}_{djk})\). Marhuenda et al. (2017) proposed a proper parametric bootstrap MSE estimator for EBtwo estimators, based on re-estimating model parameters and random effects in the two-fold model under normality. A similar procedure may be developed for ELL and MELL using a distribution-free bootstrap, as in the ELL method.

## 6 Concluding Remarks

We considered the estimation of FGT poverty measures under a two-fold nested error model. We developed extensions of the ELL method and the modified ELL method of Diallo and Rao (2017) to two-fold models. The methods are free of parametric distributional assumptions on the random effects in the two-fold model. Our simulation results indicate that the proposed modified ELL methods lead to large efficiency gains over ELL for both areas and subareas. Further, MELL2 leads to a significant reduction in MSE over MELL1 for sampled subareas, and it is comparable to the EBtwo method of Marhuenda et al. (2017) under the normality assumption. An advantage of MELL2 over EBtwo is that it is applicable to more complex parameters that are not necessarily additive in the individual values, unlike FGT measures. Bootstrap MSE estimation for MELL methods, along the lines of Marhuenda et al. (2017) but without the normality assumption, needs a detailed investigation.
2308.12411
A Theory of Intelligences
Intelligence is a human construct to represent the ability to achieve goals. Given this wide berth, intelligence has been defined countless times, studied in a variety of ways and represented using numerous measures. Understanding intelligence ultimately requires theory and quantification, both of which have proved elusive. I develop a framework -- the Theory of Intelligences (TIS) -- that applies across all systems from physics, to biology, humans and AI. TIS likens intelligence to a calculus, differentiating, correlating and integrating information. Intelligence operates at many levels and scales and TIS distils these into a parsimonious macroscopic framework centered on solving, planning and their optimization to accomplish goals. Notably, intelligence can be expressed in informational units or in units relative to goal difficulty, the latter defined as complexity relative to system (individual or benchmarked) ability. I present general equations for intelligence and its components, and a simple expression for the evolution of intelligence traits. The measures developed here could serve to gauge different facets of intelligence for any step-wise transformation of information. I argue that proxies such as environment, technology, society and collectives are essential to a general theory of intelligence and to possible evolutionary transitions in intelligence, particularly in humans. I conclude with testable predictions of TIS and offer several speculations.
Michael E. Hochberg
2023-08-23T20:18:43Z
http://arxiv.org/abs/2308.12411v2
# A Theory of Intelligences: Concepts, Models, Implications

###### Abstract

Intelligence is a human construct to represent the ability to achieve goals. Given this wide berth, intelligence has been defined countless times, studied in a variety of ways and quantified using numerous measures. Understanding intelligence ultimately requires theory and quantification, both of which are elusive. My main objectives are to identify some of the central elements in and surrounding intelligence, discuss some of its challenges and propose a theory based on first principles. I focus on intelligence as defined by and for humans, frequently in comparison to machines, with the intention of setting the stage for more general characterizations in life, collectives, human designs such as AI and in non-designed physical and chemical systems. I discuss key features of intelligence, including path efficiency and goal accuracy, intelligence as a Black Box, environmental influences, flexibility to deal with surprisal, the regress of intelligence, the relativistic nature of intelligence and difficulty, and temporal changes in intelligence including its evolution. I present a framework for a first principles Theory of IntelligenceS (TIS), based on the quantifiable macro-scale system features of difficulty, surprisal and goal resolution accuracy. The key advances of this theory are the partitioning of intelligence into (1) uncertainty reduction ("solving") and goal accuracy ("understanding"); (2) challenges in the forms of goal difficulty and goal surprisal; (3) temporal spaces, including past sources, present proxies, environments and the core system, present and near-future transmission, and distant evolution.
The proposed partitioning of uncertainty/solving and accuracy/understanding is particularly novel, since it predicts that paths to a goal not only function to accurately achieve goals, but also serve as experimentations leading to higher probabilities for future attainable goals and increased breadth to enter new goal spaces. TIS can therefore explain endeavors that do not necessarily affect Darwinian fitness, such as leisure, politics, games and art. I conclude with several conceptual advances of TIS, including a compact mathematical form of surprisal and difficulty, the theoretical basis of TIS, and open questions.

**Keywords:** affordance, complexity, crystallized intelligence, difficulty, efficiency, evolution, first principles, fluid intelligence, goals, information, knowledge, life, models, priors, recombination, skills, surprisal, system, uncertainty

## 1 Defining Intelligence

The famous quote "_information is the resolution of uncertainty_" is often attributed to Claude Shannon1. This statement is somewhat tautological, since information relates to the inverse of uncertainty. But what Shannon's seminal 1948 article "A mathematical theory of communication" and the above quote more interestingly indicate is that information is a _potential_: order in the environment that can be perceived, filtered, deciphered, recombined and applied towards a goal. Shannon had previously equated information with intelligence in his 1939 correspondence with Vannevar Bush2, but as we will see below this is but one (important) element of intelligence. By associating 'the use of information in resolving uncertainty' with 'the resolution of a goal', one arrives at the conjunction: _intelligence is the resolution of uncertainty towards the resolution of a goal_. One also arrives at the implication that intelligence is an operator that increases information.
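Shannon's notion can be made concrete: the information delivered by resolving uncertainty is the drop in entropy between a prior and a posterior distribution over candidate resolutions of a goal. A toy illustration (the eight-candidate goal is invented for the example):

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# A goal with 8 equally likely candidate solutions: 3 bits of uncertainty.
prior = [1/8] * 8
# Evidence rules out six candidates; the two survivors stay equally likely.
posterior = [1/2, 1/2]
info_gained = entropy(prior) - entropy(posterior)  # 3.0 - 1.0 = 2.0 bits
```

Full resolution of the goal (a posterior concentrated on one candidate, entropy zero) would deliver the entire 3 bits, matching the reading of intelligence as an operator that increases information.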
My objective is to unpack this simple statement towards a more inclusive definition of intelligence and propose the Theory of IntelligenceS (TIS). I do not discuss the evaluation of intelligence in detail, for which the recent overview by Hernandez-Orallo [49] sets the stage for AI, but also yields insights into animal intelligence and in particular, humans. I do not discuss specifics of different abilities such as creativity, emotional, social and collective intelligence, and physical agility, nor goals such as expression, problem solving and seeking opportunities. Footnote 1: This and related quotes although implied by the 1948 article, are not actually found in the document [99]. A quoted exchange between Shannon and John von Neumann on the terms information, uncertainty and entropy suggest the interrelations of these concepts [109]. Footnote 2: Container 102, Vannevar Bush Papers, Manuscript Division, Library of Congress, Washington, D.C. See [92]. The above and other definitions of intelligence3 have in common the ability to deal with uncertainty (see **Glossary** for definitions). Key is the arc of acquiring and using knowledge and skills in new situations. Producing a correct answer to what _others_ would regard as a difficult problem does not necessarily invoke intelligence. Think of a computer simply executing an algorithm to correctly generate the next prime number beyond one inputted by a programmer. The programmer may very well be impressed by the computer's performance, particularly if the answer is rendered in less time than for smaller prime numbers! In this narrow comparative sense, if the programmer did not know that the faster computation was in fact made possible by _another_ programmer surreptitiously updating the computer's internal algorithm, then the computer would appear to the impressed programmer as having learned something from past experience - an indicator of intelligence. 
This concocted scenario turns out to contain elements - notably _relativity_ and _regress_ - complicating assessments of intelligence. Footnote 3: François Chollet [18] defines intelligence of a system as “a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty”. Similarly, Pei Wang defines it as adaptation with insufficient knowledge and resources [113]. Karl Friston stresses the roles of active inference, planning, curiosity (evaluating alternatives) and resolution of uncertainty [38]. Many have defined intelligence as the ability to plan for and to predict the future. See [69] for a survey of definitions including theirs: “an agent’s ability to achieve goals in a wide range of environments.” As we will see below, relativity and regress are central to the larger picture of intelligence. The most familiar demarcation of _relative_ comes from the Turing test - the assessment by a human as to whether a hidden entity (Turing himself took this as a computer) could pass for a human [110]. Beyond the subjectivity of such an assessment and therefore its statistical nature when evaluated by a large sample of observers, the question is whether intelligence is a property completely intrinsic to the entity being assessed. This is the problem of _regress_. Is a machine programmed to solve a problem no human can solve, intelligent? After all, the machine is just a robot following orders. But if not in the machine, where lies the intelligence? The programmers who wrote the algorithm? Or perhaps the engineers who designed the machine? The person or entity posing the problem in the first place? Or the person who realizes that only _this_ machine can solve the problem? 
An entity's intelligence necessarily derives - completely or in part - from multiple sources, each of which derives from its own sources, and so on.4 Footnote 4: Arguably the first glints of intelligence on Earth were the implication of the laws of physics and chemistry in the very first negative entropic systems [98]. I will refer to computers throughout this essay since they embody what we often associate with intelligence5: process efficiency and accuracy. Computers and artificial intelligence more generally - even if performing impressive feats from a human perspective - are still far simpler than biological systems and humans in particular [93]. The main features added in the huge and fuzzy steps from machine to human intelligence are goal definition, reasoning and flexibility, but also environmental sensitivity associated with active inference [58]. By virtue of our large brains, problem-solving abilities and planetary dominance, we as humans are arguably _ipso facto_ the most intelligent entities on Earth. But even if correlating with certain definitions of intelligence, brain size, cognitive performance and world influence do not define intelligence. Rather, as discussed below, intelligence is the ability to operate in the _relevant_ world: in ecological parlance, single or multi-dimensional niches, and have the latitude to explore and push the boundaries of these niches. Footnote 5: This is related to the ELIZA Effect [25], where humans imbue human-like qualities to functional objects. Similar to the Reverse Turing Test, a Reverse ELIZA Effect is a computer that imbues computer-like qualities to humans. Intelligent entities have a model of their world6. Active intelligence becomes possible when the model has the flexibility to address goals relevant to the entity. Simply possessing a model does not invoke intelligence. 
Thus, systems from atoms to molecules to gasses, liquids and solids are all governed by physical laws - models of a sort - but these laws, although capable of generating fantastic structures as diverse as crystals, water bodies and stars, are reactive and immutable: at sufficiently macroscopic scales, they always generate a finite number of behaviors. The laws of physics and chemistry can lead to a local reduction in information uncertainty (\(\approx\) entropy) and therefore a most primitive instantiation of intelligence, for example in the formation of crystals [22], [35]. Applying the observation above that intelligence increases information and knowing that overall entropy increases in instantiations ranging from crystal formation to machine computation to biological organism problem solving, we come to the hypothesis that all intelligent systems _decrease_ information entropy in some bounded space strictly relevant to the goal, but _increase_ entropy overall. Minerals, computers and brains assemble information and in doing so, do work. Footnote 6: In the book _A Thousand Brains_ Jeff Hawkins argues that humans have multiple world models [46]. Laws apply to software employing immutable, deterministic, algorithmic models. Software functions in hardware environments with the user-defined objective of accuracy in goal completion based on data input, the extraction of useful information from the data and information processing by algorithmic models. In so doing, uncertainty is reduced typically in a stepwise forward, lateral or recursive fashion7. Goal achievement depends on the capacities and integration of hardware and software. Any given input _always_ results in the same output as long as both software and hardware function are unaffected by random errors and there is no intentional randomness in the algorithm. 
Thus, a purely unthinking, deterministic computer _can_ meet the barebones definition of intelligence as _the resolution of uncertainty towards a goal_. To apply broadly across both unthinking and thinking systems, a more inclusive and nuanced definition is needed.

### 1.1 _A Working Definition_

Consider the following working definition **A** for intelligence:

**A: The Use of Knowledge and Data Towards the Achievement of a Goal8**

Footnote 8: Although important contrasts exist, I loosely use the following terms interchangeably: entity, solver, system, individual; goal, problem, task, challenge, opportunity; and process, method, path, computation.

**A** makes plain the temporality in intelligence. Uncertainty is reduced or abrogated through how knowledge and data are used towards a goal. The process requires integrating _previously acquired_ knowledge and skills9 and _current_ data in an existing model towards a _future_ goal10, producing a desired result. Thus, the model is the pivot that both simulates and predicts. Although complex, the description of **A** could be considerably expanded (e.g., [18]), but as developed below, these and other similar definitions can be distilled into a small number of abstract, yet meaningful parameters and processes.

Footnote 9: Unless otherwise specified, hereafter I lump the informational concepts of knowledge and priors into the term 'knowledge'. Thus, the important instruments employed in **B** are knowledge (abstract concepts) and skills (the practical application of knowledge). See [18] for discussion.

Footnote 10: Goals may include the non-mutually exclusive: planning for the future (e.g., goal definition), addressing a problem, challenge or opportunity, or acquiring the knowledge or skills necessary to address future goals. Thus, intelligence extends to knowledge and skill acquisition.

Consider _Achievement of a Goal_, which can be restated as:
**B: Goal \(\rightarrow^{y}\) Path \(\rightarrow^{z}\) Result** Regardless of how hard the problem and ingenious the path underlying **B**, seen from the outside, the system might appear to be a mere reporter. If we could look inside of **B** in the limit of no computation, then the system is simply using input to find output from an existing list or **Goal \(\rightarrow\) Result**. When some form of computation occurs, that part of intelligence operating in the latter part of **A** begins in **B** at \(\rightarrow^{y}\): signal processing, simulation, interpretation and understanding, and eventual reformulation of the problem. It then engages in the decided method of computation to \(\rightarrow^{z}\): predictive processing and checking for decision errors, prediction errors or insufficiency, and decides whether to go back to \(\rightarrow^{y}\) and possibly use what previously appeared to be useless information (or computational methods), or continue on and render the result. Intelligence is reflected in abilities to navigate novel situations so as to winnow alternative paths, extract useful information and detect and not waste time on useless confetti and noise [94]. Minimizing decision and prediction errors through path checking and eventual correction and doing this at different organizational levels is part of what differentiates thinking from non-thinking systems [52],[107]. Path choices in thinking entities may take the non-mutually exclusive forms of abstraction, stochastic choices, lateral thinking and experimenting new paths, transiently accepting absurdity, counter-intuition or sub-optimality, and looping backwards to revisit previous paths. Thus the prevailing view that trajectories are constrained to affordances (environmentally-dependent opportunities and impediments) is an oversimplification of what in reality may be huge sets of alternatives based on the many ways that environments can be informative [94]. 
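As a toy instantiation of **B**, consider a solver that reformulates a goal (\(\rightarrow^{y}\)) as a search over an interval and then repeatedly checks candidates and loops back (\(\rightarrow^{z}\)) until the goal is resolved; each bisection step removes one bit of uncertainty about the answer. The function and the number-guessing goal below are purely illustrative, not part of the text's formalism:

```python
def solve(goal_check, lo, hi):
    """Toy instantiation of B: Goal ->y Path ->z Result.
    ->y: reformulate the goal as a search over [lo, hi];
    ->z: check each candidate and loop back until resolved.
    goal_check(x) returns -1 / 0 / +1 (too low / resolved / too high)."""
    path = []                        # record the path nodes visited
    while lo <= hi:
        mid = (lo + hi) // 2         # candidate node on the path
        path.append(mid)
        verdict = goal_check(mid)    # ->z: prediction check
        if verdict == 0:
            return mid, path         # goal resolved
        lo, hi = (mid + 1, hi) if verdict < 0 else (lo, mid - 1)
    return None, path                # goal not resolvable on this path

target = 37                          # the hidden "goal"
result, path = solve(lambda x: (x > target) - (x < target), 0, 100)
```

The recorded `path` is a crude stand-in for the node trajectories of Figure 1: an efficient solver resolves the goal in few, large uncertainty-reducing steps, while a reticulated path visits many nodes before (or without) resolution.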
Path trajectories become more difficult to predict as goals become more complex and agents less able to resolve goals of a given complexity. Planning horizons are central to path efficiency and accuracy. In some ways analogous to the "adjacent possible" [56], non-planned, myopic strategies tend to decrease the predictability of future path nodes and goal outcomes (Figure 1). A myopic path can as such manifest in anything from ingenious routes leading to a satisfactory resolution, to truncated or omitted routes leading to approximate or incorrect answers, to highly reticulated (stupid) routes that may or may not lead to any answer at all... or may indeed produce a correct answer. Thus, a tendency towards myopic decisions may either decrease or increase result sensitivity to the path taken depending on the topology of the continuously evolving landscape of alternative paths, such that more than one distinct path might lead to the same result and imperceptible differences among alternative paths may produce very different results. These observations reflect the microscopic basis for the macroscopic Theory of Intelligences proposed below. **EXAMPLE 1**. To see how parameters and context affect \(\mathbf{B}\) consider the following example. You are a birder and walking through a forest, bird watching. Your eyes are in the forest canopy. Suddenly you hear a growl in the distance. You freeze and listen. You turn your head from side to side to gauge direction and distance. You estimate that the source is at about 100m, 45\({}^{\circ}\) to your right. By the nature of the growl and based on past experience you deduce it is likely a dog, but do not see the dog nor see or hear a possible owner. You wait motionless, now looking for an exit, either a path of escape or a tree to climb. There's no clear path to outrun an attacking dog and the closest tree that will support your weight is 10m in front of you. 
You quickly realize you are not a good climber, but there is little time to think, since the animal is now running and, from the sounds, it must be coming in your direction. You spot it and see it is a bear. Without thinking, you execute what you were told when you entered what is in fact a national park: stand perfectly still. After a few scary minutes the bear leaves. You wait a few more and although in shock, you come away unscathed. Although unverifiable since a specific course of action was taken and no alternative could be tested, this appeared to be an intelligent response relative to others, including ignoring the growl, rolling up in a ball on the ground, running away when seeing the bear approaching, or flailing at the bear once it was upon you. Your response was based on experience of estimating direction and distance and on what an expert informed you to do in case of a bear attack. You now have a great story to tell.

Figure 1: Time courses of path node changes for three hypothetical scenarios. Each point corresponds to a transient resolution in the path (potential node changes represented by a continuous line). The magnitude of the node change corresponds to the reduction in information entropy. Goal resolution is indicated by an open circle. A: Monotonic decrease in path node changes leading to an intermediate resolution time. B: Complex trajectory in path node change indicative of the discovery of a more accurate path and a consequential delay in resolution (not shown). C: Efficient path corresponding to a rapid resolution.

### 1.2 _Four Phases_

Statement **A** says that **B** is embedded in something greater. Specifically, factors preceding and surrounding **B** predicate what happens in **B** [94]. Factors preceding **B** include innate priors, accumulated knowledge and skills. Factors surrounding **B** are goal context and the environment. These sets of factors have been hypothesized to influence intelligence in complementary yet complex ways [18]. 
Thus, what frustrates the descriptions in **A** and **B** - and for that matter any multifactorial definition of intelligence - is how each component is weighted in decreeing or quantifying intelligence. In this respect - and again simplifying - a central ambiguity of proposals **A** and **B** is the relative importance of marshalling priors, inventing, choosing or being confronted with a goal, articulating the path, and achieving the desired result. Clearly, each phase depends to some extent on those preceding it. More interestingly, each phase can depend on predictions of those not yet having occurred. We can therefore further generalize **B** to how information is accessed, stored, processed and oriented towards a goal:

### 1.3 _The Challenge of Modularity_

Assuming the inner-workings of a goal-seeking system can be understood, without a theory of intelligence (and even then) there is no objective way to apportion the relative weights among two or more of the phases in **C**. To better appreciate issues surrounding the modular nature of intelligence, I focus below on the path and the result. Many questions emerge. Is the path-component of intelligence the minimization of prediction errors, computational length, time or energy expenditure? Path simplicity? Understanding the path and why it works? Path creativity? Beauty? What if the path is ingenious, but the result does not meet the goal, due to, for example, either insufficient data or a momentary lapse in thought or processing? There are other possibilities. I may apply someone else's brilliant insight correctly (as might a computer) and swiftly solve a hard problem. Or, I may take an inefficient, reticulated path towards the correct answer.

**EXAMPLE 2**. For illustration, consider two different algorithms available to an entity - an efficient (smart) one and an inefficient (stupid) one - each yielding an answer to the same problem. There are two outcomes for each algorithm. 
Stupid algorithm, wrong answer (0,0); Smart algorithm, but wrong answer (1,0); Smart, correct (1,1); And yes, especially for multiple-choice questions, Stupid, correct (0,1). Undoubtedly, (1,1) and (0,0) are the maximal and minimal scores respectively. But what about (1,0)'s rank compared to (0,1)? On a multiple-choice test, one gets full points for (0,1), which may be no more than a random guess. If the path taken is judged much more important than the answer, then an interrogator would be more impressed by (1,0) than (0,1).

## 2 Inside Intelligence

### Difficulty

Active intelligence can only be expressed if there is some degree of difficulty involved in attaining a goal. In the above example, the birder had never before encountered a bear, thus increasing the surprise and effective difficulty of the situation, but the birder had the tools to assess, react to and resolve her predicament. The solution to the bear problem is actually very simple - don't move and stay silent - but the path to the solution is not at all obvious. In addition to understanding bear behavior, the solution requires overcoming fear. It would be unlikely for the birder to solve the problem based on related experiences. Rather, a short-cut is taken by applying the solution learned when entering the park. Circumstances can be complicated and cloud the assessment of difficulty, but even should extrinsic influences be eliminated, it is an open question as to whether a first-principles theory of difficulty is even possible. As such, difficulty is the _outcome_ of the intrinsic feature of complexity, the latter being goal characteristics independent of the solver. Loosely speaking, complexity is associated with some combination of path length (decision node number, internode (edge) lengths), multi-order paths (integration of multiple variables, inclusion of higher dimensions), and/or disjoint path grammar (network structure conditional on path segments so far taken). 
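The statistical reading of difficulty (the fraction of agents failing to attain a goal of a given complexity) can be given a minimal computational sketch. Everything below is an illustrative assumption, not a model proposed in the text: solvers are blind guessers, complexity is reduced to path depth in a decision tree, and the parameter values are arbitrary.

```python
import random

def reaches_goal(depth: int, branching: int, budget: int) -> bool:
    """One blind solver: guess complete paths down a decision tree in which
    exactly one of branching**depth leaves is the goal."""
    goal = tuple(random.randrange(branching) for _ in range(depth))
    return any(
        tuple(random.randrange(branching) for _ in range(depth)) == goal
        for _ in range(budget)
    )

def difficulty(depth: int, branching: int, budget: int, agents: int = 2000) -> float:
    """Statistical difficulty: fraction of agents failing to attain the goal."""
    failures = sum(not reaches_goal(depth, branching, budget) for _ in range(agents))
    return failures / agents

random.seed(1)
# Longer paths (one crude proxy for complexity) make the same goal harder
# for the same population of solvers with the same search budget.
for depth in (1, 3, 6):
    print(depth, difficulty(depth, branching=3, budget=20))
```

The point of the sketch is only that difficulty, so defined, is a population-level quantity that rises with intrinsic complexity while remaining relative to solver capacity (here, the guessing budget).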
Unless agent capacities are low, dedicated intelligence is not necessary for simple goals. Similarly, even at high capacity, an agent might waste time addressing extremely complex or complicated goals (but see §§7.2, 7.3 below). Although unexplored to my knowledge, I predict intelligence is most functional in some intermediate range of uncertainties and complexities. Difficulty can refer to an individual's ability to achieve a given goal in a given context/environment, or be statistical, that is, the fraction of agents capable of attaining a given goal in a given context/environment [5], [86]. Assuming a gradient in efficient solutions to each of a range of problems (equivalent to graded complexity), problem difficulty is reflected as the distribution in individual abilities for each of the problems and over the range of problems [6], [23]. Thus, difficulty can be attributed to a single individual or a group of individuals confronting either a single, given goal or a graded sample of goal complexities. A potential issue in attributing difficulty is the extent to which the underlying system has ordered structure and can potentially be understood (i.e., complex), or rather is disordered and cannot be understood (i.e., complicated). Intelligence applies to complex systems and (arguably) cannot be meaningfully expressed in complicated ones.

### Efficiency and Generality

Intelligence is often evaluated as the result without sufficient regard to how the result was obtained. This is because the path can be complex or reticulated and - importantly as developed below - partly or completely hidden from observers, whereas the result is the external, process-free, often tightly-packaged product11. 
As a consequence, an observer cannot always appreciate the level of ingenuity (or lack thereof) in the solver's method [103] - and indeed sometimes the solver can't appreciate it either!12 Because the ingenuity of a method should negatively relate to the time it takes to complete a task, time is sometimes the only way to infer ingenuity. Ingenuity implies path intelligence, but a better, more measurable metric is efficiency. Efficiency is the tendency towards fewer path nodes and/or less time to achieve a given result (Figure 1C). More so than the ambiguous notion of ingenuity13, efficiency measures are expected to correlate strongly with the time it takes to achieve a goal.

Footnote 13: Whereas accessing pieces of a puzzle in a stepwise, logical, efficient manner satisfies many definitions of intelligence, path ingenuity has no first-principles basis, since it implies taking forbidden, illogical, experimental or random turns, segments, or jumps to arrive at _a priori_ unexpected trajectories towards a resolution.

To the extent that intelligence is reflected in path efficiency and result accuracy, a multi-layered understanding of the path - although fostering accuracy - could compromise efficiency. This is why using short-cuts based on prior knowledge (memory, reflexes, intuitions, skills, etc.) can be superior to needlessly computing or attempting to understand path decisions, be they second-to-second calculations or contemplative future planning [60]. A magician, or for that matter a musician, accumulates knowledge and skills and may achieve understanding through learning, training and practice, but once mastered, uses skilled recall (i.e., unconscious automation for magicians; muscle memory for musicians) in execution. This suggests intelligence is both the higher-level _knowing_ when to actively infer and/or reactively recall, and the consequential lower-level _applying_ of one or both strategies. 
All else being equal, there is an expected bias towards already available, fast, cheap memory rather than slower, more uncertain inference and reasoning14. This will be revisited below.

Footnote 14: Although to my knowledge not investigated, the _free energy principle_ would predict an optimum allocation between these strategies. Free energy reduction through Bayesian updating and active inference delimits, gathers, filters and processes information using a world model [84]. Uncertainty is thereby reduced by predicting and anticipating environments and by triaging or even altering environments to better match model predictions.

To understand efficiency as an indicator of path intelligence, consider designed systems such as immutable computers and adaptive artificial systems. At one extreme, a computational device receives input, which might include data and algorithms, and then executes a program, producing output. This is what occurs everywhere from the simplest calculator to powerful but nonadaptive super-computers. At the other extreme (as I write) is the constellation of machine learning, adaptive machine learning, and deep learning [67]. These systems improve scope and performance by training on what may be huge, but nevertheless finite, databases and/or learning from free experiences. Today's machine learning platforms include AlphaZero (deep learning), which can defeat any human in either Go or Chess. Certain machine learning platforms have been referred to as "pointillistic" [73], meaning they are only as good as their trained coverage of the space of possibilities. Indeed, AlphaZeroGo was defeated by a human [74], but this failing is likely a one-off since (similar to humans) the program can learn to avoid previously unencountered traps. 
All else being equal15, such lacunae will be greater for open learning systems such as self-driving cars compared to closed games such as AlphaZeroGo, the former thus requiring the sampling of more experiences and environmental contingencies than the latter for similar levels of performance.

Footnote 15: All else is not necessarily equal. The learning curves will ultimately depend on probability distributions of experience types and the consequences of making errors.

An important mediator of efficiency is the scope of attainable goals. Current AI systems show _narrow_ intelligence relative to humans, since the former are limited to a finite (and usually small) set of specific goals (e.g., [18]). For example, self-driving cars live in the finite universe of what drivers can ask of a car. Even if there are many intermediate decisions (right turn, speed up, avoid collisions...) leading to the ultimate goal (arriving at a designated place), the universe is still the limited set of decisions and goals set for cars. The modes and implications of generality in intelligence are little understood [18], [32], [81], but to the extent that predictions from evolutionary ecology apply [39], we might expect greater adaptation and robustness for the limited number of local goals in specialized systems, whereas general intelligence systems should have greater flexibility (interpolation, exploration, extrapolation) to achieve a range of goals, but be subject to tradeoffs [4] - the latitude to be highly adapted (similar to specialized intelligence systems) on certain important tasks at a cost to performance on other more difficult, more peripheral, or less important tasks. Of course, many systems are not either/or narrow/general - there will be a continuum, and the recently launched large language model GPT-4 is a good example of narrow AI having elements of artificial general intelligence [11], [115]16. 
Akin to humans and ecological systems more generally, it is therefore reasonable to assume variation in capabilities across task types for AGI systems.

Footnote 16: But see [https://www.nature.com/articles/d41586-023-02361-7](https://www.nature.com/articles/d41586-023-02361-7)

Despite ideas that generality is more advanced than specificity, debates about the importance or superiority of general intelligence risk being sterile, since the evaluation of intelligence needs to be in the context of relevant goals [70], that is, the _intelligence niche_. Intelligence niches can be unidimensional (e.g., find prime numbers) or multi-dimensional (e.g., abilities to hunt and escape predators) and have distinct, fuzzy or discontinuous edges (e.g., difficulty trapping certain prey). Intelligence could be extended by (i) extending capacity niche boundaries (_cf._ pointillistic [73]) or (ii) adapting to novel goal niches (general intelligence). The general intelligence promoting such extensions increases within human lifetimes [17], [100], [108] and through population time [12]. Both phenomena are particularly relevant to the current transition from AI to AGI, where humans are attempting to develop platforms with (human) thinking-like abilities [113], and once achieved, it is an open question whether such systems could autonomously acquire additional novel capacities.

### Black Boxes

Intelligence is making what an observer regards as a difficult problem look easy17. The processes inside the problem-solving system however could be anything from mechanical computation to imaginative reasoning, or some of both. Excepting the simplest systems and those amenable to revealing their inner logic, how can we really know what happens inside?

Footnote 17: Quote from David Krakauer [https://www.samharris.org/blog/complexity-stupidity](https://www.samharris.org/blog/complexity-stupidity)

**EXAMPLE 3**. Consider the quintessential magic trick of the rabbit pulled out of a hat. 
As in **B**: Goal, method, result. Only the first and third are known by spectators. The magician appears other-worldly because he makes a hard if not impossible problem look easy. But the magician's resolution is just appearance: he learned the trick from others and practiced hundreds of times before going on stage. He is effectively a sophisticated pocket calculator doing the same calculation again and again. But should an untrained spectator be asked to do the trick, she may succeed, and indeed there are several ways to accomplish this illusion [63]. Regardless, the spectator used intellectual flexibility to pull off this unexpected challenge, and she is arguably more intelligent in this respect than the trained magician. If the audience did not know who was who, then the perceived intelligences of the magician and spectator would be equal! Interrogation would suffice to understand the spectator's ingenuity and how the magician simply executed what his magic professors taught him. The magician would have elements of intelligence since he had the capacity to learn the skills enabling the trick. Thus, his performance reveals a past (transmitted) accomplishment, whereas the spectator had to rapidly apply non-specific knowledge to model the sequence of physical and visual illusions needed to pull off the trick. Being somewhat destabilized by the sudden challenge, the spectator's detailed decision diagram would likely require more description than the magician's efficient recipe.

The phenomenon of intelligence implies capacities that are not immediately transparent to the observer. This applies both to biological [29] and artificial [8], [14] systems. System opacity depends on goal difficulty, data availability, the complexity of the system hardware, and the ability of the observer to decipher inner-system states and process causality18. 
Moreover, the Black Box feature of intelligence is not necessarily limited to understanding how and why a given instance of a goal is or is not achieved - it extends to a given goal in alternative contexts or using alternative paths. An example of the latter, applying both to humans and AI, is the game of chess, where the number of possible legal positions is unimaginably huge (e.g., [102]). Observed positions are many orders of magnitude fewer, constrained by the games actually played and by hypothetical coherent suites of moves (actually played or not). Chess Grandmasters can coarsely describe the logic for each move and possible uncertainties regarding alternatives, since a move needs to factor in anticipation of the opponent's next moves and one's longer-term strategy. Insofar as moves are a culmination of instinct, knowledge and calculation, it would be laborious if not impossible to describe every detail of what goes through a Grandmaster's mind.

Footnote 18: This relates to ideas that intelligence is an emergent feature of goal-seeking systems (e.g., [50]).

A challenge to understanding efficiency is mapping the inner workings of \(\mathbf{C}\), that is \(\rightarrow^{x,x'}\cdots\rightarrow^{y}\cdots\rightarrow^{z}\) [103], and how processes and their structure contribute to different attributes of intelligence [4], [27]. _Network neuroscience theory_ holds the promise of linking network structure and crystallized and fluid components of intelligence [4], but is impractical for dissecting reasoning pathways and short-cuts. A stand-alone or complementary approach is the use of interrogation [20], where the assessor asks informative questions to evaluate path steps based on what may be some combination of complete/partial, accurate/biased and correct/erroneous answers. Thus, akin to the Turing test, unpacking the Black Box depends on both the assessor and on interactive channels with and in the Black Box. 
Given a discoverable structure inside the Black Box, determination of its inner-workings would depend on variation in questioning by a single entity of a given ability, or by a population of entities of diverse capacities. To the extent that the Black Box is complex, identifying and understanding its inner-workings would require commensurate intelligence on the part of the interrogator(s) [20]. Limited inference due to the convolution of finite capacities of interrogation with opaque, complex systems has implications for understanding and overseeing AI. Although process in AI is currently generally more accessible than in biological systems - and in humans in particular - humans do not always understand AI decision making [8]. Beyond challenges stemming from deterministic system complexity [106], [116], artificial systems can possess probabilistic components (e.g., in machine learning [41]). This is true for certain search algorithms (e.g., [87]) and can also manifest in hardware and software, the prime example being the emerging technology of quantum computers. Inferring probabilities may enhance task achievement (e.g., [40]) and, in the realm of biology, there is some evidence for stochastic processes in cellular decision-making [3]. Similar to higher-order deterministic system behavior, _intelligent randomness_19 could make Turing-like test assessments challenging, such that what appears to be a signature of thinking may in fact be huge, difficult-to-predict yet hard-wired repertoires.

Footnote 19: See the brief observation in 1955 by McCarthy and colleagues [76]

## 3 Past, Future, Present

### Preparedness

The ability to forge a path towards resolution \(\mathbf{A}\) depends on preparedness. Preparedness includes the priors, knowledge and skills that will form the parametric substrate upon which models feed and articulate, but also the genesis of the models themselves as the ultimate substrate reducing uncertainty in enabling **C**. 
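The division of labor between stored knowledge and on-the-fly reasoning that runs through this discussion can be caricatured in a few lines of code. This is a toy sketch under assumed names (`make_solver`, a cache standing in for crystallized knowledge, a from-scratch computation standing in for fluid inference); it is not a mechanism proposed in the text.

```python
def make_solver(crystallized: dict):
    """Toy solver: recall from accumulated knowledge (crystallized) when
    possible, otherwise fall back on slower, step-by-step inference (fluid)."""
    counts = {"recall": 0, "inference": 0}

    def solve(n: int) -> int:
        # Toy "goal": the sum 0 + 1 + ... + n.
        if n in crystallized:               # crystallized: cheap lookup
            counts["recall"] += 1
            return crystallized[n]
        counts["inference"] += 1
        answer = sum(range(n + 1))          # fluid: compute from scratch
        crystallized[n] = answer            # experience feeds back into priors
        return answer

    return solve, counts

solve, counts = make_solver({10: 55})       # one prior already in place
results = [solve(10), solve(100), solve(100)]
print(results, counts)  # prints [55, 5050, 5050] {'recall': 2, 'inference': 1}
```

The second call to `solve(100)` is recalled rather than recomputed, which is the feedback loop at issue: successful inference becomes a prior, gradually replacing the need for inference on familiar goals.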
Raymond Cattell parsed acquired and active intelligence, defining crystallized intelligence as the ability to accumulate and recall knowledge (much like a computer), and fluid intelligence as abilities to learn new skills and to apply knowledge to new situations (a thinking entity) [16]. Although an oversimplification of the many factors and interactions forming intelligence discussed by Cattell and others [32], [79], this basic dichotomy is useful in differentiating the functional significance of storage/recall and active decision making/future planning [84]. The amount and nature of previous experience at an agent's disposal potentially enters into accomplishing each segment of **C**. It is thus tempting to conclude that intelligence must increase with the quantity, quality and diversity of previously acquired parametric substrate. This will be true to a point [2], but without abilities to parse and apply abundant, contrasting and possibly conflicting information, cognitive load may result in sub-optimal resolution or even failure [26], [91]. This suggests a Goldilocks range of parametric substrate and environmental data for any given solver addressing a given goal (Figure 2). A sham path or an erroneous result occurs when information is either too sparse or too dense. More speculatively, information levels across the Goldilocks range could produce interesting, contrasting outcomes: from sparseness producing an inefficient, time-consuming path, to intermediate density giving an efficient, possibly ingenious path, to information completeness leading to a rapid, linear path. Fluid intelligence would be most useful with intermediate data/information, whereas the usefulness of crystallized intelligence would generally increase with data/information availability.

Figure 2: The hypothetical effects of data richness on the accuracy of a resolution. Below a threshold (dashed line) data is too sparse to produce an accurate resolution. 
Increasing data under fixed search permits greater accuracy up to a point, but beyond this, too much data challenges abilities to efficiently parse information (convex curve). The three cases to the left of the fixed search curve represent abilities to bootstrap sparse data so as to increase accuracy. Intelligence corresponds to an increase in both elevation and slope from cases _i_ to _iii_. The three cases to the right of the fixed search case represent abilities to compress and sort dense or noisy data. Intelligence corresponds to an increase in both elevation and slope from cases _iv_ to _vi_.

### Planning

Intelligence is the ability to set goals and attain them rapidly and efficiently despite constraints. All else being equal, the most difficult goals are either strongly time-constrained or those projected far into the future. The bear encounter is a good example of an immediate problem - unfamiliarity, uncertainty of what to do and the necessity to control fear. Goals in the far future, on the other hand, generate their own uncertainties stemming from either greater challenges or unpredictable future environments. Uncertainty in achieving future goals is reduced through prediction and planning, which tend to optimize information use and foresee and dynamically adapt to environmental conditions, thereby reducing surprises and achieving more efficient paths [72], [112]. Planning can also reduce goal difficulty by engineering the surrounding environment or altering unachievable goals en route, meaning, for example, that satisficing may emerge as the most intelligent outcome [9], [47].

### Flexibility and Surprisal

Flexibility is important to fluid intelligence and a hallmark of general intelligence [12]. Flexibility becomes increasingly important to goal resolution as a solver goes from familiar to unfamiliar goals and from high to low preparedness. 
In the bear example, the birder quickly jettisoned the various inferred plans of action and opted for the remembered knowledge provided by an official whom the birder regarded as an expert. The intelligence here is weighing the uncertainty of the birder's own plans and the consequences of failure against the more certain but counterintuitive action advised by the official, and trusting that person's expertise. Risk is compounded by the unfamiliar environmental setting, the surprise of the situation and the time constraint. Flexibility is also important on the path to goal resolution, for example, the capacity to find alternative paths when predicting or encountering a road-block [55]. Low preparedness introduces the important notion of surprisal, that is, an unfamiliar or unexpected goal, context, or environmental situation. Greater surprisal is usually associated with greater difficulty, particularly if a goal is both complex _and_ unexpected, though some individuals deal well with unexpected challenges and may even achieve better solutions when surprised [28]. Simply repeatedly executing the same algorithm towards the same goal in the same environment with the same knowledge reveals nothing more than situation recognition and memory recall20.

Footnote 20: This relates to the "AI effect": once a problem is solved and the solution understood, its solution is no longer in the domain of intelligence [77].

How does surprisal enter into intelligence? Recall the unthinking computer. It makes path choices based on existing, proximal, immutable, deterministic alternatives21,22. The computer might be able to solve problems out of human reach, yet the computer's surprisal level is _zero_. But if running the optimal routine and quickly producing a correct answer is not an indication of intelligence, then surely the opposite of this - correct answers despite complete surprise, environmental adversity and time-limits - is. 
This hints at a paradox: more knowledge makes surprise less likely _and_ better equips the entity to handle any remaining surprise. In other words, increased crystallized intelligence tends to reduce the need for fluid intelligence23. So, some degree of incompleteness or perturbation is necessary for flexibility in path reasoning - that is, fluid intelligence - to be relevant [18]. Moreover, mastering environments and goals may make an entity _appear_ intelligent, but this is only true insofar as the entity has previously laid the groundwork through accumulated experience and crystallized intelligence. The observer may not be able to discern the actual inner workings behind an accurate resolution.

## 4 Temporality

### 4.1 _Regress_

An unresolved question is whether intelligence is a _de novo_ property of a system, that is, one with no antecedents whatsoever. Clearly non-thinking systems such as calculators and computers cannot create goals or resolve them with imagination, invention and insight. Still, given the inevitability of past sources in AI, in biological systems and in humans in particular, it remains an open question how to attribute potential _de novo_ intelligence beyond apparent invention or the recombination of learned, time-worn knowledge [94]24. Thus, consider a human confronting a very difficult problem. If the person were to believe that the problem is soluble by AI, submit the problem to an AI platform and get back an accurate answer, then does what is a routine task for the AI reveal intelligence in the human, the AI platform, or some of both?

Footnote 24: Moreover, whereas social learning through imitation alone does not create novelty, individual learning in combination with social learning can [78].

Despite challenges in parsing sources, intelligence evolves ([12] and see below) in biological and artificial systems. This means that phenotypic variants, be they genetically, culturally or technologically based, potentially contribute to future abilities. 
In the case of human social learning, both established and novel resolutions are the raw material for others to observe, record and emulate, thereby contributing to the diffusion and cumulative evolution of knowledge [94], [48], [80]. In embodying knowledge and its transmission, culture and society are at the foundation of acquired - and what is interpreted as _de novo_ - intelligence. This raises the question of the extent to which the substrates of intelligence such as priors, knowledge and skills are transmitted - or whether, in the extreme, the receiver is born a 'blank slate' [65]. Or, the extent to which facets of intelligence come from genes, the environment and interactions between them. In the extreme case of a non-thinking computer, all intelligence is crystallized and stems from those responsible for the hardware and software algorithms and from the concordance between a user's goals and the computer's abilities. But then, what are the sources of the computer engineer's and programmer's knowledge and skills? Unaccountably large numbers of people through time have ultimately contributed to the capacities of each individual computing system and, through culture, technology, education systems and society, of each individual human being. Analogous to vertical regress is the horizontal transmission of factors facilitating intelligence [33], [66], [71], [104]. Here, an entity benefits from the contributions of social interactions (e.g., collectives [61]) or from technology (e.g., tools [7]), and thus the embodiment of shared intelligence can extend beyond the usual notions of the individual [59]25. 
Collectives are particularly interesting since they can range from an individual benefiting from or depending on information from one or more others (e.g., outsourcing), to a transient or more persistent group, with or without borders, where individuals exchange information and act towards a goal (e.g., certain primitive social insects), to a division of intelligence labor where different functions in goal attainment are distributed among individuals (e.g., eusocial insects, any corporation). The intelligence substrate provided by these and other proxies could complement, substitute, enhance, or extend an entity's existing facilities [68]. Vertical and horizontal factors are not always independent, since, for example, technology evolves, and a teacher or software engineer, although achieving their skills prior to educating or programming, respectively, does interact with the receiving entity contemporaneously. Returning to the computer example above, new hardware and new software can be introduced (horizontally) to an experienced computer, suggesting that problem solving is a dynamic process of interaction among computer, programmer, technological possibilities and human goals. These dependencies and ameliorations are not static. Whereas both regress and contemporaneous association can increase an entity's intelligence, only the latter can generate a reciprocal dynamic and coevolve with the entity. For example, computers and AI complement or even replace existing intelligence functions in humans, similar in some ways to crystallized mechanisms lessening the need for certain fluid ones. The intelligence symbioses among humans and between humans and technology coevolve through invention (new ideas), recombination (repurposing existing ideas), lateral transfer (information sharing), and sorting and selection (preferences, performance).

### Change

Human intelligence changes temporally both in the population and over individual lifetimes. 
Evidence for changes in population intelligence comes from studies showing increases in IQ scores through time [34]. Cumulative invention, learning and their increased accessibility through social networks lead to greater, more general knowledge, innovation, increased skill-sets, and more flexible application to achieving goals [97]. Similarly, at the level of human society, culture, urban development and technological evolution [104] constitute _intelligence resources_ that enable more ambitious goals. Evidence also exists for increases in individual crystallized intelligence into adulthood [53], [83]. Humans gain general intelligence faculties [17] through childhood and adolescence, but they encounter fewer never before seen problems as they age and are less able to maintain processing speeds [96], speculatively suggesting for humans a relative shift from the employment of proxies (parents, social) and memory when very young, to fluid (thinking, flexibility), to crystallized (knowledge, skills) intelligence (with more proxies) when old. In other words, even if nuanced [45], the accumulation of knowledge and skills enabling goal attainment in a predictable spectrum of environments and contexts anticipates likely future experiences and gradually replaces the necessity to employ novel reasoning26.

Footnote 26: This concords with the idea of "conjuring the child within us" when referring to creativity in older individuals.

These elements indicate that intelligence is a dynamic phenomenon through life, both in the definition of and the ability to attain goals. We can modify \(\mathbf{C}\) to account for temporal feedbacks that ameliorate both crystallized and fluid intelligences and that expand existing and introduce new intelligence niches (general intelligence).
Consider system \(\mathbf{D}\), where we have not scripted the new arrows with letters, but rather used dots for how goal attainments in the form of results can reinforce future paths taken (crystallized intelligence) and introduce novel setpoints for future goals (general intelligence). Likewise, the dashed line indicates how path experimentation (model flexibility; recombining alternatives) could influence future paths (fluid intelligence) and priors, knowledge and skills (crystallized intelligence). These additions are clearly oversimplifications of the complexity of real feedbacks and interactions, but make the point that even in the absence of significant proxies and environmental inputs, individual experience influences future operations through a selective feedback process.

## 5 Relativity

### Observer

However defined, intelligence is always relative27 to zero intelligence, an initial state or an arbitrary reference, benchmark problem or assessment. Arbitrary intelligence metrics can be relative to either a threshold decree (yes/no), a quantitative reference (points), or a statistically-calibrated population distribution on one or more benchmark tests (percentiles). In the first two, relativity stems from an assessor's personal experience or application of a correlate. "He must be intelligent because he answered the hard question correctly" (personal comparison) or "...because she has a PhD" (correlate). Respectively, the assessor subjectively calibrated question difficulty (yes/no) or the significance of a label of achievement (PhD). This has implications for assessment _value_, with the expectation that as an assessor's own cognitive abilities are increasingly limited, the perception of intelligence in others grows, but becomes less accurate. Even should the assessor be a group of individuals leading to greater accuracy through consensus, the problem of the arbitrary nature of references or benchmarks remains.
Footnote 27: Consider a Boolean definition "true/false". This is not absolute since the result is necessarily relative to a query, which necessarily has a defined reference. For example, "The person did not run 30km/hr. True or false?".

The relativity problem therefore comes down to the abilities of observers and observed. An example. In 1995, Andrew Wiles published his proof applying to Fermat's Last Theorem. Many people with a university STEM education will know of the conjecture, but few could claim to have even a basic understanding of the proof, and only a vanishingly small number of these had the ability to actually check the proof. This highly skewed distribution of expertise is a reflection of why it took more than 300 years to prove the conjecture despite many attempts. Although Wiles's proof is an extreme example, distributions in abilities to attain non-trivial goals and to understand goal attainment are a general expectation in populations. Back to the bear example. It is quite possible that given an extra minute to think, our birder would have climbed a nearby tree. This would be better than running and risking being attacked by the bear, but if up in a tree and the supporting branch were to break with the bear waiting underneath, then the tree-climb could spell the worst possible outcome. The many unknowns of the situation and the inability to actually evaluate alternative paths mean that intelligence here is not independent reasoning, but rather low-risk memory recall and the outcome (i.e., the birder is either dead, some degree of scathed, or unscathed). Inferring paths minimizing risk of injury would require a statistical sample of birding situations similar to that in the example; indeed, the advice from the expert at the park entrance was likely based on such a sample. This highlights the statistical nature of outcomes for a given path and of outcomes over alternative paths.
For the former, a large enough sample of birders following the stand-still strategy would show a distribution in outcomes. This is due to random variables, the two main ones being a complex forest environment and a bear's unpredictable reaction. Not surprisingly, random variables lessen the accuracy of basing intelligence assessments on a single or a small number of events.

### Standardization

Standardized assessments of intelligence are based on resolving tasks over a range of difficulties in controlled environments [103]. Stratified difficulty serves to delimit the transitions from ease (little time consumed; correct answers), to challenge (time consuming; some correct answers), to impossibility (either little or very time consuming; either unanswered, or answered and either incorrect or randomly correct) for a given individual. The accuracy of assessments depends on how problems represent hypothesized components of general intelligence and on the graining of difficulty so as to identify accomplishment thresholds (e.g., [15]). Although reasoning becomes more influential with problem difficulty, the relative contributions of crystallized and fluid intelligence to outcomes as a function of problem difficulty are little understood. Standardization has an inevitable built-in limitation. Insofar as it attempts to accurately represent certain features of intelligence outside of the test facility and insofar as the test is implemented in a fair, controlled and consistent way across a population of test takers, the latter (environmental control) necessarily limits interpretation of the former (accurate representation).
Real-life contexts and environments can be highly variable and differ considerably from sterile testing rooms and standardized test objectives, meaning the relevance of standardized metrics hinges on the assumption that rank problem difficulty reflects real-life goal attainment and does not appreciably change across non-controlled environments for different test takers. Thus, for example, I may perform better than you in stressful environments and you better than me in relaxed environments, but a standardized test cannot control for either. Test scores are often unreliable indicators of achievement in the real world.

## 6 A Theory of Intelligences (TIS)

The above discussion only scratches the surface of the concepts and massive literature on intelligence. Although intelligence is a multi-factorial, multi-scale and multi-dimensional phenomenon, simplifying assumptions can produce a core framework upon which theory can be developed. My objective below is to propose such a framework and bare-bones theoretical models as prototypes for more accurate, general models of intelligence. Theoretical developments to characterize and predict intelligence are scattered among many disciplines and, given the absence of fundamental theory, these frameworks are necessarily epistemological. For example, LeCun and colleagues focused on learning and the successive resolution of prediction errors in AI [67], and Chollet developed a theory of intelligence as realized ability relative to prior knowledge [18]. The contrasts among these and other conceptual, statistical, structural and mathematical models (e.g., [1], [43], [89], [105], [114]) reflect the complexities discussed in the previous sections and the many ways in which intelligence, its correlates and components can be represented in theoretical developments. The multi-faceted nature of intelligence challenges prospects for a general theory based on first principles.
Here I take a macroscopic perspective to propose phenomena that could contribute to such a theory. In more microscopic perspectives, factors in the temporal sequence from access to priors and accumulating knowledge and skills through learning [18], or prediction error-checking in seeking a goal [67], are explicitly modeled. The macroscopic theory presented here - the Theory of Intelligences (TIS) - in contrast considers two higher system-level quantities which are central to the definitions of intelligence developed above. They are reducing uncertainty along a path (i.e., _solving_) and increasing accuracy in its resolution (i.e., _understanding_). Importantly, understanding need not evoke higher-level thinking, nor even cognitive processes. Understanding is the capacity to causally link the solving process to attainment of goals (i.e., accuracy). Thus, the functioning of a computer program based on logic invokes some form of embodiment of understanding on the part of programmers, even though the computer has no emergent understanding _per se_28.

Footnote 28: For recent discussion of the debate over understanding in AI Large Language Models, see [82].

TIS is based on the observation that goals can be attained with little or no insight or computation, and that great solving abilities do not ensure goal realization. Previous theory on cognition partitions goal attainment into perception and action (though these need not be independent, see e.g., [10]), both of which are optimized by minimizing free energy [36]. TIS generalizes these insights to any uncertainty/entropy-reducing process by positing that both perception and action may enter into either or both of the two partitions: solving and understanding.

### _Efficiently reducing uncertainty through solving_

We assume the goal system is a network of nodes corresponding to information states. The only given information state at the start is the goal \(g\) at node \(n\).
The remaining nodes are probabilistic possibilities that each depend on successive decisions, analogous to the concept of the "adjacent possible". We do not explicitly model the microscopic processes forging the path through the information network. The agent enters the network at _n=g_ with stored information (memory) in the form of priors, knowledge and skills, and thereafter uses (and possibly recombines) this stored information with (and actively imports) ambient and targeted data through the path nodes to resolution at node _n=r_. Data importation and information interpretation, understanding and recombination are leveraged in forging the path. They are not explicitly modelled below. Path intelligence relates to how path decisions - ostensibly to achieve a specific goal - lower the information uncertainty component of the total change in entropy. We assume that entropy change has two parts: the change in information uncertainty \(\Delta U\) and the total work done. The total work done will include both the reduction in information uncertainty \(-\Delta U\) and any additional work in the path to possible resolution (see below). Maximal path intelligence is formally the maximization of uncertainty reduction (towards a putative resolution) and the minimization of both information entropy and total work expended. The change in that part of entropy relating purely to changes in path uncertainty can be represented as:

\(\Delta U = U_{r} - U_{g}\) (1a)

where \(U_{g}\) is the initial level of information uncertainty presented by the goal given reference factors (assumed implicit here). These reference factors correspond to an agent in a given context and environment. \(U_{r}\) is the level of information uncertainty once a resolution is complete, and therefore path intelligence requires \(U_{r}<U_{g}\). Equation (1a) is a gross oversimplification of resolution processes. First, a path will typically have multiple nodes.
Each node will depend on the previous path segments taken, current assessments and future predictions over different network scales. Thus, again inspired by the "adjacent possible" [56], for a highly complex goal, regional and possibly even local network geometry will be difficult if not impossible to predict from the start node _n=g_ and subsequent key junctures in the path. Second and related, equation (1a) considers the resolution of uncertainty as the simple difference from the start to the end of the full sequence of intermediate resolutions. This assumes a linear descent in uncertainty, whereas more realistic non-linear trajectories would reveal that effective difficulty is better represented as some integration of challenges along the path to resolution. Path intelligence is the efficient reduction in uncertainty along the path to resolution. As above, the work done is both what is useful to \(\Delta U\) and energy lost in inefficiencies (e.g., the number and optimality of trajectories to \(U_{r}\), time spent (needlessly) evaluating alternative paths...). We assume that work can be expressed as a function of the time \(T\) it takes to resolve the goal (even should the resolution be suboptimal or a failure; see below). Path intelligence, \(\mathbb{U}_{rg}\), is simply expressed as

\(\mathbb{U}_{rg} = -\Delta U/T^{\alpha}\) (1b)

where \(T^{\alpha}\geq 1\) and constant \(\alpha\geq 0\). When \(\alpha=0\), time and energy do not enter into the assessment of path intelligence. For \(0<\alpha<1\), fixed increases in time are of increasingly less importance to path intelligence, whereas for \(\alpha>1\), increased time and energy expended have an increasingly negative association with path intelligence. Equation (1b) does not explicitly account for the possible influence of efficiency on the path actually taken and _vice versa_ (i.e., \(\Delta U\) and \(T^{\alpha}\) are not explicit functions of one another).
Thus, for example, all else being equal, rapid solutions (_T_\(\rightarrow\) 1) expend little energy, but are also expected to (though not necessarily will) have a negligible influence on uncertainty reduction. We therefore implicitly assume that \(U\) and \(T\) are functions of one another. Equation (1b) can be expanded to account for uncertainty reduction with each successive node, _n=g_, _g+1_,...

\(\mathbb{U}_{rg} = -\sum_{n=g}^{r-1}\big(U_{n+1}-U_{n}\big)/T^{\alpha}\) (2)

where \(n\) is the node in the series from start (_n=g_, goal set) to resolution (_n=r-1_ being the final summand before goal resolution at _n=r_). Equation (2) reflects both the differential and integrative nature of intelligence. In general, \(U_{n}\) will depend on current informational states and predictions of how future decisions will solve adjacent unknowns and bring the system closer to the goal. Although not explored here, a possible way forward is to base path decisions, solving and accuracy checking on key past and current informational states and their predicted futures [37]. Path intelligence \(\mathbb{U}_{rg}\) requires that at resolution \(\mathbb{U}_{rg}>0\), and therefore, because \(\mathbb{U}_{rg}\) in (2) is the simple sum of uncertainty differences, uncertainty may increase, decrease or remain unchanged in any subset of the _r-g-1_ nodes. Equation (2) simplifies a process whereby future nodes are contingent on past choices and current inference (e.g., whether a small step in uncertainty reduction is easily found and taken, or a larger step with more difficulty is taken). It also ignores the possibility that path intelligence can be iteratively determined for each node pair (i.e., \(\Delta U\) is normalized by \(T^{\alpha}_{n,n+1}\)).
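As a minimal numerical sketch of path intelligence per equations (1b) and (2): the uncertainty sequence, resolution time \(T\) and exponent \(\alpha\) below are hypothetical illustrations, not values prescribed by the theory.

```python
def path_intelligence(uncertainties, T, alpha=1.0):
    """Path intelligence U_rg per eqn (2): summed nodal reductions in
    information uncertainty from n = g to n = r, normalized by T^alpha.
    Requires T^alpha >= 1 (hypothetical inputs for illustration)."""
    if T ** alpha < 1:
        raise ValueError("T^alpha must be >= 1")
    # The sum of nodal differences telescopes to U_r - U_g, so over the
    # full path eqn (2) agrees with eqns (1a)-(1b).
    delta_U = sum(uncertainties[n + 1] - uncertainties[n]
                  for n in range(len(uncertainties) - 1))
    return -delta_U / T ** alpha

# Hypothetical path: uncertainty falls, rises transiently, then resolves.
U = [10.0, 8.0, 9.0, 4.0, 2.0]
print(path_intelligence(U, T=4.0))  # -(2 - 10) / 4 = 2.0
```

A positive value indicates net uncertainty reduction; the transient rise at the third node (8 to 9) illustrates the point above that uncertainty may increase over subsets of nodes while \(\mathbb{U}_{rg}>0\) overall.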
Finally, although a satisfactory resolution requires \(\mathbb{U}_{rg}>0\), complete resolution of uncertainty \(\mathbb{U}_{rg}\rightarrow U_{g}/T^{\alpha}\) does not ensure goal optimality nor even goal sufficiency, since, for example, one may execute a complex series of computations but introduce a numerical error that carries over to the resolution.

### Accurately assembling information through understanding

Importantly, reduction in uncertainty does not necessarily result in a resolution that qualitatively matches or quantitatively maximizes a specific goal. Thus, for example, a contestant may be on a treasure hunt and correctly solve all of a series of riddles but arrive too late to claim the treasure (i.e., high \(T^{\alpha}\)). Alternatively, the contestant may claim the treasure but achieve this based on luck, outside help or cheating (i.e., partly or completely non-intelligent maximization of \(-\Delta U\)). Accurate goal resolution can involve _matching_ a target or _maximizing_ a quantity. An example of the former is the game of chess, where each move tends to reduce alternative games towards the ultimate goal of checkmate, but a brilliant move or game does not necessarily result in victory. An example of the latter is maximizing the number of units acquired, for example profits in investments or points in sporting events (where there is both a victory threshold (profit, winning) and quantity beyond (accumulation, self-esteem)). We do not distinguish matching from maximization (or satisficing) in the present study, but rather highlight that resolution could involve one, the other, or some combination of both these objectives. Cognitive phenomena such as embarking on tangents or transiently pursuing hopeless dead-ends may indeed reduce uncertainty, depending on how intermediate tasks and ultimate goals are framed. That is, an accurate resolution to a complex goal requires an understanding of the goal.
Understanding may or may not be complete, and can vary during the course of addressing a goal (e.g., there may be a Eureka! moment). Thus, in the game of chess a "gambit" is a path (usually sacrificing a piece) that supposes a deep understanding of the game, despite a transient point and possible positional disadvantage. Although a longer-term strategy, a brilliant chess-piece sacrifice may ultimately fail, with reasons including errors or a blunder in future moves and/or a brilliantly adaptive or unpredictable adversary. Lowered uncertainty is associated with information gain. Recognizing that this may or may not correlate with goal accuracy, we define an observable cofactor \(\Delta A\) of resolution accuracy, where \(A_{g}\) is the accuracy of information at entry into the goal network and \(A_{r}\) is the accuracy at resolution:

\(\Delta A = A_{r} - A_{g}\) (3)

Since, as a cofactor, efficiency is already accounted for in \(\mathbb{U}_{rg}\), this means

\(\mathbb{A}_{rg}=\Delta A\) (4)

In terms of implementation of intelligence measures, isolating (4) as a separate measure differentiates \(\mathbb{U}\) (black box) from \(\mathbb{A}\) (the observable result).

### Intelligences

Path intelligence and goal resolution may be co-dependent to some extent. Clearly, an accurate answer based wholly on understanding is indicative of the path and resolution being tightly connected. Importantly, the extent to which \(\mathbb{U}\) and \(\mathbb{A}\) are correlated reflects either goal simplicity, or goal complexity where the agent nevertheless understands how to resolve the goal (i.e., the goal is not difficult for that agent). We do not model codependences explicitly; rather, I claim that, by definition, difficult goals tend to lower associations between \(\mathbb{U}\) and \(\mathbb{A}\). We also recognize that certain goal types can only be attained if particular path nodes are taken and all nodes are satisfied, whereas other goals have many alternative paths, where failure to satisfy a node may or may not impact goal attainment.
As discussed in §5, intelligence is most meaningful if relative to an initial state or a reference, even if the reference is simply zero intelligence. A reference can be either a theoretical or an empirical (population) minimum, mean expectation, maximum or an arbitrary point. Figure 3 illustrates some possibilities. We might, for example, determine the relevant measures of the outcomes of \(\mathbb{U}\) and \(\mathbb{A}\) not to be with respect to starting values \(U_{g}\) and \(A_{g}\), but rather the optimally efficient resolution of uncertainty \(U^{\prime}_{r}/T^{\prime\alpha}\) and maximum accuracy \(A^{\prime}_{r}\):

\(\mathbb{U}^{\prime}_{r} = (U_{r}/T^{\alpha})\,/\,(U^{\prime}_{r}/T^{\prime\alpha})\) (5a)

\(\mathbb{A}^{\prime}_{r} = A_{r}/A^{\prime}_{r}\) (5b)

However, as above, there is no assurance that maximal path intelligence \(U^{\prime}_{r}/T^{\prime\alpha}\) will yield the optimal goal resolution \(A^{\prime}_{r}\). The reverse is also true: the most accurate resolution to the goal might be attainable via a sub-optimal path \(\mathbb{U}_{rg}\). Moreover, it is possible that there is more than one feasible resolution to the goal, in which case there can be many candidate \(\mathbb{U}^{\prime},\mathbb{A}^{\prime}\) pairs, where one, the other or both of each pair is equal or superior to the path actually taken and/or the final resolution. Figure 3 shows how an entity's \(\mathbb{U}\), \(\mathbb{A}\) pair plots onto a hypothetical universe of achievements. Any one of the points in this universe could serve as a reference, and both, one or the other of \(\mathbb{U}\) and \(\mathbb{A}\) might be superior to one or more of these references.
We can partition the universe into approximate domains, whereby low \(\mathbb{U}\), low \(\mathbb{A}\) corresponds to a random guess or an oblique hunch, low \(\mathbb{U}\), high \(\mathbb{A}\) to an educated guess, high \(\mathbb{U}\), low \(\mathbb{A}\) to a process error leading to a suboptimal or sham resolution, and finally high \(\mathbb{U}\), high \(\mathbb{A}\) being the direction of intelligence.

#### 6.3.1 An evolutionary index

Consider first an intelligence index \(\breve{\mathbb{I}}^{e}_{rg}\) for a single event \(r,g\) with respect to an arbitrary reference, denoted \(\breve{\mathbb{U}}_{rg}\), \(\breve{\mathbb{A}}_{rg}\):

\(\breve{\mathbb{I}}^{e}_{rg}=\psi\,(\mathbb{U}_{rg}/\breve{\mathbb{U}}_{rg})\,(\mathbb{A}_{rg}/\breve{\mathbb{A}}_{rg})\) (6)

where \(\psi\geq 0\) is the magnitude of the achievement and both \(\breve{\mathbb{U}}_{rg}\) and \(\breve{\mathbb{A}}_{rg}\) are assumed positive. The introduction of \(\psi\) reflects the utility of attaining the goal and therefore gives goal resolution a functional interpretation. When \(\psi=0\), achieving the goal is meaningless.

Figure 3: The space of changes in information uncertainty \(\mathbb{U}\) and in goal-useful information \(\mathbb{A}\). The global optimal solution is denoted _ra_. Other possible outcomes depending on agent ability and goal complexity are labelled _rb_, _rc_, _rd_ and _re_. Path trajectories can vary in terms of direction, length, and the number of nodes from debut to resolution. The segments \(g\) to _ra_ reflect an efficient trajectory and accurate resolution. The agent resolves unknowns and has an actual (e.g., human) or embedded (e.g., AI) understanding of how to attain the goal. The greater number of smaller segments from \(g\) to _rd_ correspond to an agent that is both less able to move forward and less accurate in moves. Compound arrows show general directions corresponding to expected outcomes of random guessing, educated guessing, process errors, and intelligence.
Three things to note. First, as above, there is no explicit interaction term. To the extent that nodal changes in \(\mathbb{U}\) correlate with those in \(\mathbb{A}\) (e.g., trajectory \(g\rightarrow\rightarrow\rightarrow\) _ra_ in Figure 3), an additional interaction term in eqn. (6) would be needed to discount redundancy from principal factor effects. Second, because index \(\breve{\mathbb{I}}_{rg}^{e}\) decreases as either \(\mathbb{U}_{rg}\to 0\) or \(\mathbb{A}_{rg}\to 0\), this index can be used to reflect the potential for evolution on traits associated with goal resolution. As such, (i) a random guess and correct answer (\(\mathbb{U}_{rg}=0\)) will produce no selection for any underlying intelligence traits, and (ii) a brilliant path (maximal \(\mathbb{U}_{rg}/\breve{\mathbb{U}}_{rg}\)) and poor resolution (\(\mathbb{A}_{rg}\to 0\)) result in no trait selection either. (Note that equation (6) would need to be appropriately modified to accurately model the evolution of traits underlying \(\mathbb{U}\) and/or \(\mathbb{A}\).) And third, each of the two terms in parentheses can take on any positive value. Should a term be less than one, then we would expect negative selection on one or more of its underlying traits relative to the arbitrary reference. Greater than one would result in positive selection relative to the arbitrary reference. These and other considerations (e.g., trait linkage) will require future dedicated investigation.

#### 6.3.2 A comparative index

An alternative index for intelligence \(\breve{\mathbb{I}}_{rg}^{c}\) relates to practical measures, whereby the observer weights path (\(\alpha\)) and resolution (\(\beta\)) components:

\(\breve{\mathbb{I}}_{rg}^{c} = \frac{\alpha\,(\mathbb{U}_{rg}/\breve{\mathbb{U}}_{rg})+\beta\,(\mathbb{A}_{rg}/\breve{\mathbb{A}}_{rg})}{\alpha+\beta}\) (7)

Greater than baseline performance increasingly manifests in (7) as the relevant coefficient takes on larger values.
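To make the two indices concrete, here is a hedged Python sketch; the reference pair \(\breve{\mathbb{U}}\), \(\breve{\mathbb{A}}\), the weights and all numerical values are hypothetical choices for illustration, not quantities specified by the theory.

```python
def evolutionary_index(U, A, U_ref, A_ref, psi=1.0):
    """Eqn (6): multiplicative index; it vanishes if either the path
    term or the accuracy term vanishes, or if utility psi = 0."""
    return psi * (U / U_ref) * (A / A_ref)

def comparative_index(U, A, U_ref, A_ref, alpha=1.0, beta=1.0):
    """Eqn (7): observer-weighted average of path and resolution
    performance relative to the same references."""
    return (alpha * (U / U_ref) + beta * (A / A_ref)) / (alpha + beta)

# Hypothetical case: brilliant path, failed resolution.
print(evolutionary_index(U=2.0, A=0.0, U_ref=1.0, A_ref=1.0))  # 0.0
print(comparative_index(U=2.0, A=0.0, U_ref=1.0, A_ref=1.0))   # 1.0
```

The contrast illustrates why eqn (6) suits evolutionary arguments: a brilliant path with a failed resolution scores zero multiplicatively, whereas the additive comparative index (7) still credits the path component.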
Analogous to equation (2), expressions (6) and (7) can be decomposed into a sequence of nodes to produce more microscopic-based measures of intelligence. Thus, a more microscopic perspective would integrate capacities for how crystallized (memory) and fluid (reasoning, inference) mechanisms generate and are arbitrated in alternative strategies to reduce \(U_{n}\) and increase \(A_{n}\). These decisions would be based in part on the iterative path and its perceived accuracy leading up to \(n\), and the current assembly of information towards the goal, whilst taking into account how each alternative might influence future decisions along the path (Figure 3).

#### 6.3.3 Incorporating Difficulty and Surprisal

The evolutionary (6) and comparative (7) indices normalize intelligence with respect to an arbitrary reference or benchmark. In the case of an evolutionary index, the reference could be a mean value of a heritable trait influencing fitness [12]. For the comparative index, the reference could be benchmarked achievements [8], [49]. Using concepts developed in earlier sections, goal complexity, surrounding environment and agent ability can be integrated into a measure of intelligence corresponding to task (or goal) difficulty or surprisal as

\(\breve{\mathbb{I}}_{rg}^{xy} = \big(C_{y}\{U_{g},A_{g}\}-E\,Q_{x\to y}^{\max}\big)\,\mathbb{U}_{rg}\,\mathbb{A}_{rg}\) (8)

with a minimal value of 0 should the term in parentheses be negative. \(C_{y}\) is the complexity of task \(y\) and is a function of initial path uncertainty and goal accuracy, \(E\) is environmental conditions, with \(E\rightarrow 0\) indicating poor and \(E\rightarrow 1\) optimal conditions, and \(Q_{x\to y}^{\max}>0\) is the maximum ability of the agent with expertise \(x\) to achieve task \(y\) in an optimal environment (\(E=1\)). Note that as \(x\) deviates from \(y\), the task has greater surprisal.
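A minimal sketch of the difficulty-weighted measure in eqn (8); the complexity, environment and ability values below are hypothetical inputs chosen only to illustrate the arithmetic, including the floor at zero.

```python
def difficulty_weighted_intelligence(C, E, Q_max, U_rg, A_rg):
    """Eqn (8): intelligence weighted by task difficulty.
    C is task complexity C_y{U_g, A_g}, E in [0, 1] is the quality of
    environmental conditions, and Q_max is the agent's maximum ability
    for the task under optimal conditions (E = 1). The difficulty term
    is floored at 0 when discounted ability meets or exceeds complexity."""
    difficulty = max(C - E * Q_max, 0.0)
    return difficulty * U_rg * A_rg

# Hypothetical values: complex task, good environment, modest ability.
print(difficulty_weighted_intelligence(C=5.0, E=0.8, Q_max=2.0,
                                       U_rg=1.5, A_rg=0.9))
# (5 - 0.8*2) * 1.5 * 0.9, approximately 4.59
```

When ability discounted by the environment exceeds complexity, the measure is zero regardless of \(\mathbb{U}_{rg}\) and \(\mathbb{A}_{rg}\), reflecting that trivially easy goals are uninformative about intelligence.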
When the system achieves its maximum ability for task \(y\), given environmental states, \(E\,Q_{y}^{\max} = \mathbb{U}_{rg}\,\mathbb{A}_{rg}\). All else being equal, \(Q_{x\to y}^{\max}\) is expected to decrease as \(x\) and \(y\) diverge (i.e., greater surprisal). Regardless of whether or not there is surprisal, the term in parentheses is a concise measure of difficulty. Equation (8) could be expanded to account for vertical or horizontal influences in abilities (§4), path dynamics (_cf._ eqn. 2) and entropies [35], [95], and appropriately modified to partition the independent effects of information uncertainty and goal accuracy as in eqn. (7). Note that if \(\mathbb{U}_{rg}\) is objectively assessed (i.e., there are no benchmarks, _cf._ §5.2, and no Black Box, _cf._ §2.3), then eqn. (8) is an absolute measure of intelligence (i.e., it is only relative to initial states \(U_{g}\) and \(A_{g}\)). Finally, the potential of proxies such as social interactions and technology can be incorporated to give the general form:

\(\mathbb{I}_{rg}^{xy}=\big(C_{y}\{U_{g},A_{g}\}-\big(P_{x\to y}^{\max}+E\,Q_{x\to y}^{\max}\big)\big)\,\mathbb{U}_{rg}\,\mathbb{A}_{rg}\) (9)

where \(P_{x\to y}^{\max}\) is analogous to \(Q_{x\to y}^{\max}\), and in the limit of dependence on proxies alone (\(P_{x\to y}^{\max}>0\), \(E\,Q_{x\to y}^{\max}\to 0\)) the system approximates AI. Thus, if \(C_{y}\{U_{g},A_{g}\}-P_{x\to y}^{\max}>0\), then equation (9) is the intelligence of the AI system. Equation (9) assumes no environmental influence on proxies and that the use of proxies may implicitly enter into resolution of the path \(\mathbb{U}_{rg}\) and accuracy \(\mathbb{A}_{rg}\). Equation (9) could be modified to account for ability degeneracy associated with proxy robustness [24], that is, \(Q\propto 1/P\).

Figure 4: Schematic diagram of the intelligence ecosystem integrating TIS. The SYSTEM phenotype is composed of the CONTROLLER, PROCESSOR and MEMORY.
The SYSTEM interacts with the ENVIRONMENT both in setting and addressing GOALS. The extended phenotype comprises PROXIES such as social interactions, technology and culture. The current SYSTEM is based on past TRANSMISSION and intelligence trait EVOLUTION (not shown), develops and integrates intelligence traits over the SYSTEM's lifetime (not shown), and influences future TRANSMISSION and EVOLUTION. Intelligence resources are augmented by SYSTEM EVOLUTION and possibly SYSTEM-dependent PROXY EVOLUTION, and this occasionally produces intelligence innovations [51]. Intelligence can be codified in the SYSTEM as phenotypic traits, stored MEMORY and hard-wired or plastic behaviors or active inference. Intelligence can also be codified outside the SYSTEM in (non-mutually exclusive) PROXIES, such as society, culture, artefacts, technology and institutions. TIS is central in bringing the SYSTEM to a GOAL via searching, solving and understanding.

## 7 Conclusions

Until recently, the study of intelligence has largely focused on psychometrics in humans. With the development of AI and a greater emphasis on interdisciplinarity in scientific inquiry, progress is being made towards theories of intelligence, with the prospect of a general theory based on first principles. Here, I have surveyed recent advances towards conceptual unification of definitions of intelligence, arguing that a general framework needs to recognize scale in the nature of intelligence. This introduces the daunting challenge of explicitly accounting for events and interactions over different system scales, time scales, among different systems, in multi-dimensional heterogeneous environments, and for a diverse range of system goals. The approach taken here is to model intelligence at macroscopic scales based on implicits of information theory and thermodynamics, recognizing the underlying influences of more microscopic states and processes.
Importantly, I propose a compact mathematical expression for the concept of goal difficulty, integrating goal complexity, environment and agent maximal ability to the goal, given past experience (i.e., surprisal). Based on model objectives (eqns. 6-9), the Theory of Intelligences proposed here is represented as a central postulate and basic features. ### Central Postulate of TIS _Intelligence integrates two fundamental processes: reducing information uncertainty (identifying and solving) and attaining goal accuracy (understanding)_. TIS is general, encompassing physical, biological and artificial domains, and in humans in particular, applies to a wide scope of endeavors, including intellectual and physical performance, art, social interactions and political influence. ### Basic Features of TIS _7.2a Information and Processing_ (SS1): Intelligence systems control information. Because external environments require dedicated interpretation and coding, the greater system-environment ensemble manifests at a coarse level as a 2x2 fundamental structure (Figure 4): (1) Information: (1a) External free data and proxies, (1b) Internal free data and stored constructions, and (2) Information processing: (2a) Internal and proxy lower-level receptors, processors, (2b) Higher-level controller and integrator. _Implications_: Intelligence integrates system, proxies and environment, and codes all three into the greater individual centered on but not limited to the controller [59]. _7.2b Information Uncertainty and Entropy_ (SS1): Intelligent systems are dissipative and therefore increase entropy overall, but in the system subsets relevant to goal attainment, information uncertainty is decreased. System and system-environment partitions are only beginning to be understood [37],[59]. Uncertainty (or goal-useful entropy) reduction may manifest as some combination of sorting (i.e., prioritizing), selecting (i.e., eliminating), morphing and/or recombining code. 
In the extreme of passive intelligence, physical laws express in environments so as to produce locally stable uniqueness, for example, the formation of extremely long-lived inorganic crystals such as diamonds [88]. Towards the extreme of active intelligence, intentionality, inference and creativity - together with passive instruments - generate dissipative structures, for example, AI systems or a work of art, leading to entropy in users or admirers, respectively. _Implications_: Intelligence generates novel objects, pattern, complexity and diversity. _7.2c Development and Complexity_ (SSSS2.2, 3.1, 3.3, 4): Intelligence progressively builds on existing information and novel substrates, including prior achievements [101], tending towards greater capacities (efficiency and generality). Greater capacity permits successful encounters with new, more complex goals and therefore the need for enhanced higher-level abilities (e.g., reasoning and inference in thinking systems). _Implications_: Intelligence accumulates and is cumulative through the development of individuals, collectives, and proxies. Speculatively, capacity categories are expected to manifest in the following (overlapping) sequence (for computers; for humans): execution of laws or recipes (algorithms, programs; DNA expression), contribution of proxies (user, programmer; parents), innate priors (input data compatibility; error checking; nourishment, avoid dangers), acquired knowledge and skills (memory, training in AI; memory, learning, experience), active inference (unknown as present; defining and addressing goals), proxies (computational networks; social interactions, technology). Systems evolve complexity, permitting both increased efficiencies to address attainable goals and extended capabilities to increasingly complex environments and goals [42]. 
_7.2d Tradeoffs_ (SSSS2.2, 3, 4.2): Capacities have costs (time, energy), logistical constraints (limited experience) and structural limitations (computational abilities). Tradeoffs emerge which suggest the existence of one or more locally optimal goal resolution strategies. The deployment of acquired skills, novel inference, reasoning and proxies will therefore depend on an agent's evaluation of their relative utilities to achieve goals. Two expected and related tradeoffs are between specialization and generality at a macro level and, correspondingly, between efficiency and exploration at a more micro level. _Implications_: The interdependencies of intelligence traits and costs and constraints on their deployment will constrain tradeoff surfaces, producing a limited set of intelligence strategy motifs. _7.2e Relativity and Regress_ (SSSS2.1, 2.3, 3.3, 5): Active intelligence is relative to levels of difficulty and surprisal, that is, the integration of the complexity of the goal, agent abilities, the environment and observer perception (arbitrary references). References may include consensual benchmarks, maximal intelligence or zero intelligence. Observer effects can be partially discounted as in equations (8) and (9), but not completely if perceptual uncertainty remains in the evaluation of the path (i.e., the Black Box). Although not invoking active intelligence, goals without difficulty or surprisal _implicitly_ evoke past intelligence, either in evolutionary adaptations or previous experience and memory. Actively inferring intelligence systems derive certain capacities from the environment, in particular proxies such as social interactions and technology. _Implications_: There is no known pure "Boltzmann Brain" of intelligence. More probable would be the sudden emergence of a non-functional object with random features and no intelligent capacities.
Similar to the evolution of life [75], intelligence therefore requires some previous assembly beyond purely physical scaffolding. ### Status as a Theory TIS integrates tenets and facts about intelligence. TIS captures key micro- and meso-level concepts implicitly at a macro level and can explain different observations. TIS is, however, limited in not explicitly integrating the causal roles of more microscopic levels and therefore lacks criteria for a general theory. Specifically, the Black Box described in SS2.3 is an impediment to a general theory, since caveats are necessary to explain what might appear to be random effects and performances that deviate from expectations (i.e., when \(E\ Q_{y}^{\max}\neq\ \mathbb{U}_{rg}\ \mathbb{A}_{rg}\) from eqn. 8). TIS replies to the following: _Why Intelligences?_ Intelligence signifies the ability to attain goals and therefore maintain and reproduce otherwise dissipative structures, without the necessity of storing huge amounts of information. Storing information in the form of, e.g., memory takes time and energy, and given hard-to-predict environments and the quantity and diversity of future novel challenges, intelligence - if diversified in multiple capacities - is both more efficient and enables exploration of new niches (Figure 4). Partitioning different facets of intelligence underlines the role that uncertainty reduction plays _regardless_ of goal accuracy. The prevailing view that inference (thinking) is necessary and even dominant in intelligence neglects the finer partition of how a processor uses multiple instruments (including inference) in local and regional path resolution and more multi-scale integrations towards the global goal (accuracy). I suggest that paths not only function to accurately achieve goals, but also as experiments providing feedback, improvement and therefore both higher accuracy and efficiency for future attainable goals and the necessary latitude to attempt new goal spaces.
As such, TIS can explain human undertakings that do not necessarily affect Darwinian fitness, such as leisure, politics, games and art. TIS is therefore applicable to theories of evolution (genetic, cultural, technological) and explains forms of success that are not directly under selection. _Causality._ Many theories address the origins of life systems and the evolution of their complexities. TIS, in positing that intelligence fosters simplicity (efficiency) on reachable goals and promotes the evolution of complexity (division of labor) to attain difficult or otherwise unreachable goals, _provides a causal framework_ and therefore a greater understanding of biological assembly and diversification. The empirical facts supporting these claims are (1) on a micro/mesoscale, path decisions are influenced by past experience and influence sequential path decisions and future goals [85], and (2) on a macroscale, natural selection drives responses to needs and opportunities - successful responses (goals met) invoke intelligence. As above, even partly successful or unsuccessful responses can have an evolutionary effect, suggesting the robustness of intelligence as a primary force in understanding life. _Falsifiability._ The main prediction of TIS is that uncertainty reduction and accuracy gain, although expected to be correlated, each involve unique traits. Although the genetics and heritability of these traits are unknown, the traits would affect micro- and meso-scale identification and arbitration of decision alternatives and multi-scale assessments of the accuracy of path decisions and trajectories with respect to the goal. TIS predicts that accuracy generally becomes easier to assess as a path is forged, whereas more myopic uncertainty reduction may or may not show a temporal (path) pattern. Evidence to the contrary would falsify TIS, for example, clear views from the start of what accurately attaining a goal entails.
(Though in such cases the goal is neither very difficult nor surprising and so active intelligence according to equation (8) is not needed). A final test of TIS regards its evolutionary significance, with predictions including (i) impact of intelligence traits on reproductive fitness [12] as evidenced by (ii) assortative mating [90], and a decline in capacities, particularly fluid capacities, with old age [19]. ### Future Research and Open Questions TIS is a macroscopic framework that omits explicit consideration of uncertainty and information entropy dynamics as well as the dynamics of system components, controller strategies and interactions with proxies and the environment [21], [111]. Incorporating these and other features is a daunting challenge and although surely question-dependent, it is currently not clear how these will significantly change our understanding of intelligence. Future research should address the following questions: 1. To what extent are microscopic processes necessary to understand macroscopic patterns in intelligences? 2. Can we identify a core set of phenotypic traits explaining the main effects and variances in intelligences? 3. Are there typical patterns in hierarchical influences [57],[67] and entropy and accuracy changes along paths to goal resolutions? 4. How do intelligence and goal complexity change during a lifetime and coevolve in populations? 5. Does the age-related expression of intelligence traits parallel the age-dependent force of selection [44], and if not, then why? 6. Does TIS apply across physical, biological and designed systems, and across biological levels and levels of individuality? [31],[54] 7. Will intelligence in humans invariably evolve towards the domination of proxies, such as information storage (books, cultures), social interactions and collectives, and artificial intelligence? 8. What do scaling relationships resemble across different intelligence systems? 9. What are minimally intelligent life systems? 
[13] ## Glossary **Ability**: The potential to successfully apply instruments to complete a task or achieve a goal **Accuracy**: How close an achievement is to a goal **Complexity**: The structure and dynamics of objects with a function, goal or purpose **Computation**: Use of an algorithm to manipulate data and produce information, e.g., a calculation **Crystallized intelligence**: Priors, knowledge and skills that can be applied to goal attainment **Data**: Patterns, e.g., numbers, symbols, language... that can be processed to become information **Difficulty**: The convolution of agent ability, environment and goal complexity **Efficiency**: The amount of work required to reduce uncertainty by a fixed amount **Entropy**: A change in information content **Environment**: Contexts, settings, structures and conditions potentially interacting with a system **Experience**: Accomplishing a task resulting in learning **Flexibility**: The ability to deal with difficulty or surprise **Fluid intelligence**: Active use of reasoning, invention, creativity applied to goal attainment **Information**: Non-random structure of significance **Ingenuity**: A creative, unexpected or indescribable method or process towards a goal **Goal**: The objective in addressing a challenge or opportunity **Knowledge**: Acquired information that may involve understanding, in the form of interrelations, facts or skills **Learning**: Gain in knowledge or understanding associated with experience **Model**: Abstraction of reality used to simulate situations or make predictions **Path**: The choices made in following or forging a (possibly evolving) decision tree **Path intelligence**: Efficiently resolving uncertainty **Priors**: Foundations of knowledge, e.g., numeracy, spatial relations, physical properties...
**Reasoning**: Use of logic towards a conclusion, involving induction, deduction or abduction **Skill**: A specific inferred or learned repertoire applied to perform a task **Surprisal**: Unfamiliarity due to inexperience, improbability, or lack of relevant priors, knowledge or skills **Task**: Bounded process leading to a goal **Thinking**: Contemplation involving reasoning, leading to an opinion, decision or result **Uncertainty**: Unresolved questions stemming from latent, missing, unknown, or noisy information **Understanding**: Insight based on knowledge, logic or causal reasoning
2307.02087
Different Games in Dialogue: Combining character and conversational types in strategic choice
In this paper, we show that investigating the interaction of conversational type (often known as language game or speech genre) with the character types of the interlocutors is worthwhile. We present a method of calculating the decision-making process for selecting dialogue moves that combines character type and conversational type. We also present a mathematical model that illustrates these factors' interactions in a quantitative way.
Alafate Abulimiti
2023-07-05T07:51:47Z
http://arxiv.org/abs/2307.02087v1
# Different Games in Dialogue: Combining character and conversational types in strategic choice ###### Abstract In this paper, we show that investigating the interaction of _conversational type_ (often known as language game or speech genre) with the character types of the interlocutors is worthwhile. We present a method of calculating the decision-making process for selecting dialogue moves that combines character type and conversational type. We also present a mathematical model that illustrates these factors' interactions in a quantitative way. ## 1 Introduction Wittgenstein (1953); Bakhtin (2010) introduced language games/speech genres as notions tying diversity of linguistic behaviors to activity. Building on this, and on insights of pragmatists such as Hymes (1974); Allwood (2000), and earlier work in AI by Allen and Perrault (1980); Cohen and Perrault (1979), Larsson (2002); Ginzburg (2012); Wong and Ginzburg (2018) showed how to associate global structure with conversations in a way that captures the range of possible topics and idiosyncratic moves. Larsson (2002) is also the basis for an approach to building spoken dialogue systems (SDS) which is essentially domain general, offers a fine-grained treatment of grounding interaction, and which was extended to clarification interaction in Purver (2004). This body of work does not address, however, the issue of strategic choice in conversation, which is the core issue underlying work in Game Theory. Asher et al. (2017) used game theoretic tools to develop a theory of strategic choice for dialogue. Although there are a variety of interesting insights captured in this approach, it is based on two assumptions that apply only to a restricted class of language games--games continue indefinitely and there exists a jury that assigns winning conditions to participants.
We seek to develop an approach to strategic choice applicable to the general case of dialogical interaction, where termination is an important consideration and where assessment is internal to the participants. Strategic choice is modelled by combining structure from conversational types with psychological and cognitive notions associated with the players. Character type, as a relatively independent factor abstracted out of the conversational type, is important for dialogue. Although there is some analysis concerning both character effects and conversational types in dialogue, combining them and analyzing their interactions in a quantitative way has not, to our knowledge, been carried out before. The purpose of this paper is, hence, to combine character type effects and conversational type analysis to yield a method that could help to analyse strategic choice in dialogue. ## 2 Background The starting point of Asher et al. (2017) is the framework of Banach-Mazurkiewicz games. They modify this framework to make it more amenable to analyzing certain kinds of NL dialogue, the emergent framework being _BM messaging games_. Asher et al. (2017) argued that each dialogue potentially continues indefinitely and has a winner adjudged by a third party jury. This is useful for modelling political discourse between rival groups or individual contenders in the public domain. But clearly this sort of conception is not appropriate for a variety of, arguably most, types of dialogue.1 These have beginnings (\(InitState\)) and a variety of distinct terminations (\(FinState\)) (Wong, 2018), and there is no 'external jury' in most cases. Burnett (2019) developed a formal model called _social meaning games_ which explains how social variants affect the linguistic style of dialogue participants and, conversely, how the speaker's intrinsic linguistic style affects dialogue moves.
Pennebaker and King (1999) show that linguistic style is an independent and meaningful way of exploring personality. There is evidence that people's personality traits influence their use of language. For instance, extroverted individuals are able to deceive others more easily, while neurotic individuals are less likely to be deceived (Riggio et al., 1988). The charisma of a speaker has been shown to be closely related to an extroverted character (Bono and Judge, 2004). There is also a strong relation between extroversion and conscientiousness and positive influences, as well as between neuroticism and dissent and various negative influences (Watson and Clark, 1992). Thus, an individual's personality does affect the decision-making process in dialogue. Cooperation in dialogue is a widespread phenomenon, and Allwood et al. (2000) identified four features of cooperation: cognitive consideration, joint purpose, ethical consideration and trust. When engaging in a collaborative dialogue, where the interlocutor decides his next move based on the intentions of the other and a variety of information deriving from the context of the dialogue, it seems that character has a broad influence on the course of the dialogue. Thus, it seems natural that a dialogue participant (DP) should also take into account the other's character traits in order to choose the appropriate move. In the next section, we will explain the method we propose for combining character type effects and conversational type. ## 3 Methodology In this section, we wish to explore the interaction between character type and conversational type, by considering how, given a possible range of moves, a quantitative analysis can be provided for move selection. ### Character Type Researchers have developed a relatively unanimous consensus on personality description patterns, proposing the Big Five model of personality (Goldberg, 1992).
Within this model there are five traits that can be used to capture many aspects of human personality. The Big Five personality traits (OCEAN) can be assessed by the NEO-PI-R (Costa Jr and McCrae, 2008). The traits can be described as follows: * Openness: the ability to be imaginative, aesthetic, emotional, creative, intelligent, etc. * Conscientiousness: displays characteristics such as competence, fairness, organization, due diligence, achievement, self-discipline, caution, restraint, etc. * Extroversion: exhibits qualities such as enthusiasm, sociability, decisiveness, activity, adventure, optimism, etc. * Agreeableness: the qualities of trust, altruism, straightforwardness, compliance, humility, empathy, etc. * Neuroticism: difficulty balancing emotional traits such as anxiety, hostility, repression, self-awareness, impulsivity, vulnerability, etc. Goldberg (1992) gave a pertinent method to quantify character types in terms of a 5-dimensional vector \([o,c,e,a,n]\). We define \(\chi_{s}\) as the self character type scale vector and \(\chi_{o}\) as the other's character type scale vector. In addition, with the development of machine learning and deep learning methods within NLP, a variety of approaches have been implemented for automatic recognition of personality in conversation and text (Mairesse et al., 2007). Jiang et al. (2019) used attentive networks and contextual embeddings to detect personality traits from monologues and multiparty dialogues. Given a text (or utterance), one can calculate the character type scale vector of this sentence with a robust prediction model. We define \(c_{i}\) as the predicted character type vector of the \(i\)th dialogue move. We note that by calculating the similarity between \(\chi\) and \(c_{i}\), we obtain the extent to which a given dialogue move fits either the self character type or the other character type.
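The similarity-based "fit" just described can be sketched concretely: compare a move's predicted OCEAN vector \(c_{i}\) against both character vectors. All numbers below are hypothetical; the cosine measure matches eqn. (1) of SS3.4.2.

```python
import math

# Sketch of comparing a candidate move's predicted OCEAN vector
# against the self and other character vectors. All values are
# hypothetical illustrations, not taken from the paper.

def cosine(a, b):
    """Cosine similarity between two trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

chi_s = [0.6, 0.8, 0.3, 0.7, 0.2]   # self character type scale vector
chi_o = [0.3, 0.4, 0.8, 0.3, 0.6]   # estimate of the other's vector
c_i   = [0.5, 0.9, 0.2, 0.6, 0.3]   # predicted vector for one candidate move

fit_self, fit_other = cosine(c_i, chi_s), cosine(c_i, chi_o)
print(fit_self, fit_other)  # with these values the move fits the self type better
```

A move scoring high against \(\chi_{s}\) expresses the speaker's own style; a high score against \(\chi_{o}\) accommodates the other's style.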
Note that \(\chi_{s}\) is an intrinsic property of a dialogue interlocutor which does not change greatly during conversation; by contrast, since a DP has imperfect information about the other, \(\chi_{o}\) will change once new evidence arises and can be modified by applying Bayes' rule. ### Conversational Type Pennebaker and King (1999) also indicated that linguistic style is influenced by the situation in which the interlocutors find themselves. Wong and Ginzburg (2018) provided a topological perspective on the space of conversational types based on the distribution of Non-Sentential Utterances (NSU) within each type. Wong (2018) developed a model of a conversational type couched in TTR (Cooper, 2005). On this view, a conversational type is a 4-tuple \(\{ConvRules\), \(InitState\), \(FinState\), \(G\}\), where \(ConvRules\) represents a set of conversational rules, i.e., transition rules between different dialogue states (_dialogue gameboards_) of type \(DGBType\) (Ginzburg, 2012), and \(InitState\) and \(FinState\) are the initial and final DGB states.
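One minimal way to realize the Bayesian revision of \(\chi_{o}\) mentioned above is to treat each OCEAN trait estimate as a Gaussian belief that is conjugate-updated with the trait score predicted from each new utterance. This particular scheme and all numbers are our illustrative assumptions, not specified in the paper.

```python
# Illustrative Gaussian conjugate update of one trait belief in chi_o.
# The update rule, observation noise and all numbers are hypothetical.

def update_trait(mu, var, obs, obs_var):
    """Update a Gaussian belief (mu, var) with one noisy observation."""
    k = var / (var + obs_var)            # Kalman-style gain
    return mu + k * (obs - mu), (1 - k) * var

# Prior belief about, say, the other's extroversion and its uncertainty:
mu, var = 0.5, 0.25
# Extroversion scores predicted from three successive utterances:
for obs in [0.8, 0.7, 0.9]:
    mu, var = update_trait(mu, var, obs, obs_var=0.1)
print(mu, var)   # the belief drifts toward the evidence; variance shrinks
```

Each incoming move thus sharpens the DP's estimate of the other's character while leaving \(\chi_{s}\) fixed, as the text assumes.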
\(DGBType\mapsto\) [...] BERT (Devlin et al., 2018) exemplifies NLG capability with fine-tuning. We define here \(A\) as a dialogue move space vector composed of elements \(a_{i}\), with \(a_{i}\) representing the \(i\)th move. In the current work we will not discuss how to specify the possible moves, but will leave this for future work. ### Decision Making #### 3.4.1 Global View Levelt (1993) proposed that speakers monitor themselves while speaking. In other words, individuals have a self-criticism mechanism enabling them to reflect on their behaviors, emotions and speaking. We dub this the _SelfMonitor_. Given our analysis in the first two subsections, a DP's move choice in the dialogue is influenced by her character type, the other interlocutor's character type, and the conversational type.
When a DP tries to respond to the other party's dialogue move, she first constructs a dialogue move space, which yields a set of possible utterances that the DP can use. The DP typically makes a conjecture about the other's character type in terms of individual personality traits, based on her a priori knowledge of that individual and the current state of the dialogue. In addition, the DP has a probabilistic assumption about the present conversational type given her cognitive state. Subsequently, the DP's _SelfMonitor_ determines which factor is more valuable in this context. Eventually a move is selected on the basis of its value, which combines the affinity of the move for each factor with the weight the _SelfMonitor_ assigns to that factor. #### 3.4.2 Mathematical Modeling After the above analysis, we offer a mathematical model to explicate this process. In evaluating possible moves, we have three important factors: self character type, other character type, and the conversational type. We want to provide a real-valued function \(\rho\) to evaluate each move in the dialogue move space. * \(a_{i}\): \(i\)th move. * \(\overrightarrow{\chi_{s}}\): Self character type vector. * \(\overrightarrow{c_{i}}\): character type vector for the \(i\)th move. * \(\alpha\): Weight for the self character type effect. * \(\overrightarrow{\chi_{o}}\): Current other character type vector estimation. * \(\beta\): Weight for the other character effect. * \(p\): Probabilistic conjecture of the conversational type. * \(d_{i}\): conformity of the \(i\)th move with the conversational type, from -1 to 1. * \(\gamma\): Weight for the probabilistic conversational type. * \(W=[\alpha,\beta,\gamma]\). Conformity represents the degree to which a dialogue move conforms to the current conversational type. In other words, it can be modeled as an evaluation score for this dialogue move in the dialogue context.
In order to calculate the "affinity" between the character type vectors \(\{\overrightarrow{\chi_{s}},\overrightarrow{\chi_{o}}\}\) and a move vector \(\overrightarrow{c_{i}}\), we use _cosine_ similarity, defined as follows: \[simi(A,B)=\cos(\theta)=\frac{A\cdot B}{\|A\|\|B\|} \tag{1}\] Then we define the function \(\rho(a_{i})\): \[\rho(a_{i})=\alpha\cdot simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{s }})+\beta\cdot simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{o}})+ \gamma d_{i}\cdot p \tag{2}\] and let \(X=[simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{s}}),\;simi( \overrightarrow{c_{i}},\overrightarrow{\chi_{o}}),\;d_{i}\cdot p]^{T}\) be a decision factor matrix, obtaining: \[\rho(A)=W\cdot X \tag{3}\] where \(\alpha+\beta+\gamma=1\). \(\alpha\), \(\beta\) and \(\gamma\) are in fact estimated by the _SelfMonitor_ based on information deriving from the information state. We believe these weights are mainly fixed at the beginning of the conversation, because great changes in strategic choice lead to the suspicion of deception in some cases Riggio et al. (1988). This estimation process can be reverse-engineered by observing the DP's selection of moves in the dialogue. After the calculation of \(\rho\), we get a score for each move. This score alone cannot determine the final decision--we need to take into account features that we have not yet discussed or observed, so we probabilize the scores, i.e., convert them into a probability distribution using the \(softmax\) function: \[softmax(a_{i})=\frac{\exp(\rho(a_{i}))}{\sum_{j}\exp(\rho(a_{j}))} \tag{4}\] We then obtain a probability estimate for each move: the greater the probability, the more inclined the DP is to choose that move. ### Example Here we illustrate our proposed approach with an example. #### 3.5.1 Scenario In a bakery, we observe a customer and a baker's buying and selling process during the COVID-19 pandemic. Goals: For simplicity we fix the final goals as follows: 1. Customer goal: buy two croissants. 
2. Baker goal: sell the two croissants and obtain the desired price. It is worth noting that both players may have more complex goals or a series of goals. For example, the customer might want to have appropriate desserts for dinner. In order to achieve such goals, we often divide them into several specific and simple sub-goals. In this case, the sub-goals might be: 1. What are the best desserts in this bakery? 2. Among those best desserts, which one will fit the dishes I prepare tonight? For our current purposes, however, we will avoid such complications, despite their importance in a variety of cases. #### 3.5.2 Move Space We assess the baker's possible responses to the customer's initial utterance: "2 croissants". We assume the following possible responses of the baker: * client: 2 croissants. * baker: (i) 1.90. (ii) Get out of the bakery, you're not wearing a mask. (iii) Please would be nice. (iv) 1.90 and please would be nice. (i) would lead to a quick end that meets both parties' needs. This "style" is used in most everyday cases, disregarding other conversational factors and attending only to the baker's final goals. If the baker gives particular weight to the conversational type's impact, he would choose (i) to advance the conversation, thereby looking to finalize the conversation early and achieve his goal. (ii) would lead to an "unpleasant" conversation. The baker thinks that the customer's behavior is disrespectful (a short demand lacking politeness). He therefore uses the lack of a mask as a pretext, or has in mind a competing goal of fighting against this disrespect. In the end, neither participant's goals are achieved. (iii) shows that the baker wants to have a pleasant and respectful conversation above all. It is clear that the baker does not assign much weight to his final goals or the conversational type. Instead he prioritizes his psychological needs. 
The question under discussion would shift to another topic and the conversation might evolve into a dispute, though with lower probability than in (ii). (iv) indicates that the baker wants to persist with the trade without compromising either his final goals or his psychological needs. This seems very much a compromise move, but the final state would still depend on the customer's interpretation of the dialogue pragmatics. #### 3.5.3 A worked example For this example, we use a character type vector of the form [o, c, e, a, n]. We assume the baker's character vector is \(\chi_{s}\) = [0.0, 0.3, 0.0, 0.0, 0.5], showing conscientiousness and relatively high neuroticism, and that _Conv-prob_ is \(p\) = 0.98, i.e., the baker's confidence in the bakery conversational type is high. Following the initial state \(S_{0}\), the baker hears "2 croissants" from the client, after which the baker updates the information state and the _SelfMonitor_ chooses values for \(\alpha,\beta,\gamma\). For example, we assume \(\alpha\) = 0.1, \(\beta\) = 0.1 and \(\gamma\) = 0.8; in this circumstance, the baker attends more to the conversational type rule than to his own character type or the other's character type. Then: \[CharacterType.Other=\chi_{o}\] \[\chi_{o}=\mu(\Uparrow\ Tmp.CharacterType.Other,\Uparrow\ dgb)\] \(\mu\) is the other's character type updating function, with parameters the previous state's character type vector and the current DGB. We assume that after the other's character type update we have \(\chi_{o}\) = [0.0, 0.0, -0.1, -0.4, 0.2], i.e., the customer is not very agreeable (-0.4) and a little neurotic (0.2). (i): We assume that "1.90" shows disagreeableness (-0.4) and slight neuroticism (0.2), so \(c_{1}\) = [0.0, 0.0, -0.1, -0.4, 0.2], and \(d_{1}\) = 0.8, showing that "1.90" has high conformity with the bakery conversational type. 
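The score for move (i) can be reproduced directly from equation (2); a minimal sketch in Python, using the vectors and weights assumed above (function names are illustrative, not part of the formal model):

```python
import numpy as np

def simi(a, b):
    """Cosine similarity between two character/move vectors (eq. 1)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rho(c_i, d_i, chi_s, chi_o, p, alpha, beta, gamma):
    """Move evaluation function rho (eq. 2)."""
    return (alpha * simi(c_i, chi_s)
            + beta * simi(c_i, chi_o)
            + gamma * d_i * p)

# Values assumed in the worked example, in [o, c, e, a, n] order:
chi_s = np.array([0.0, 0.3, 0.0, 0.0, 0.5])    # baker's character type
chi_o = np.array([0.0, 0.0, -0.1, -0.4, 0.2])  # estimated customer type
c_1 = np.array([0.0, 0.0, -0.1, -0.4, 0.2])    # move (i): "1.90"
score = rho(c_1, d_i=0.8, chi_s=chi_s, chi_o=chi_o,
            p=0.98, alpha=0.1, beta=0.1, gamma=0.8)
print(round(score, 4))  # 0.7646
```

Note that \(c_{1}\) coincides with \(\chi_{o}\), so the second similarity term is exactly 1; the dominant contribution comes from the conformity term \(\gamma d_{1}p\).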
Then according to function (2) we have \(\rho(a_{1})\) = 0.7646. (ii): We assume that "Get out of the bakery, you're not wearing a mask" shows high disagreeableness (-0.7), low conscientiousness (-0.5) and high neuroticism (0.8), so \(c_{2}\) = [0.3, -0.5, 0.0, -0.7, 0.8], and \(d_{2}\) = -1.0, showing incongruity with the bakery type. As a result, we obtain \(\rho(a_{2})\) = -0.7080. (iii): We assume that "Please would be nice" shows high agreeableness (0.7) and slight extroversion (0.3), so \(c_{3}\) = [0.2, 0.0, 0.3, 0.7, -0.2], and \(d_{3}\) = 0.3, showing low conformity with the conversational type. Consequently, we obtain \(\rho(a_{3})\) = 0.1201. (iv): We assume that "1.90 and please would be nice" shows relatively high openness (0.5), conscientiousness (0.6) and extroversion (0.4), high agreeableness (0.7) and low neuroticism (-0.4), so \(c_{4}\) = [0.5, 0.6, 0.4, 0.7, -0.4], and \(d_{4}\) = 0.7, showing high conformity with the bakery type. So we obtain \(\rho(a_{4})\) = 0.4727. We then apply the \(softmax\) function to these preference scores: \(softmax(\overrightarrow{\rho(a_{i})})\) = [0.3998, 0.0917, 0.2099, 0.2986]. This indicates that the probability that the baker selects (i) is 39.98%; (iv) has the next-ranked score. All this assumes that the baker's character type shows relatively high neuroticism and that his commitment to the conversational type is high. Given his character type, it is reasonable to conclude that he would not be extremely polite (option iv), but not rude either, because of the constraint of conformity with the conversational type. Now we want to illustrate that if we change the distribution of certain of the parameters, things can flip significantly. We change (\(\alpha,\beta,\gamma\)) to [0.3, 0.1, 0.6], indicating that the baker cares a little more about his own character type (0.3 rather than 0.1) but slightly less than before about the conversational type (0.6). 
We also assume the baker's character type vector is [0.5, 0.7, 0.3, 0.8, -0.5], which shows high conscientiousness (0.7), high agreeableness (0.8) and low neuroticism (-0.5). Then we obtain \(\rho(\overrightarrow{A})\) = [0.3457, -0.7277, 0.3217, 0.6946], and the resulting probability distribution is \(softmax(\overrightarrow{\rho(a_{i})})\) = [0.2677, 0.0915, 0.2613, 0.3795]. This indicates that the baker would choose option (iv) for the next move with about 38% probability, an option which balances the baker's final goal and his personal psychological needs (high conscientiousness). Finally, we modify (\(\alpha,\beta,\gamma\)) to [0.8, 0.1, 0.1] and assume that the baker's character type vector is [0.2, -0.3, 0, -0.5, 0.8], which shows high neuroticism (0.8) and low agreeableness (-0.5). We then obtain \(\rho(\overrightarrow{A})\) = [0.8007, 0.7652, -0.5229, -0.5032], and the resulting distribution is \(softmax(\overrightarrow{\rho(a_{i})})\) = [0.3996, 0.3856, 0.1064, 0.1085]. This indicates that the probabilities of choosing (i) and (ii) are now about the same. In this scenario the baker focuses more on his own character type. (ii) shows complete incongruity with the bakery type, and choosing it would amount to a complete violation of the baker's original final goal. Hence, at this point, the baker is in a dilemma. It is worth noting that this state will lead to a "breakthrough point": if the baker chooses (ii), it means that the baker chooses to change his final goal or the conversational type. The consequence is that \(Goals\), \(ConvType\) and \(Conv-prob\) should be changed in the private part of the information state. How to effect this in a formal way, we leave to future work. ## 4 Conclusion Character is a person's stable attitude towards reality, and it also affects one's performance in dialogue. Conversational type has been, in one way or another, one of the principal notions of dialogue research since the early days of AI work on dialogue. 
It reflects domain-specific interaction, in particular the move space given to the conversational participants. We have tried to show in this paper that investigating the interaction of these two factors is worthwhile. In particular, we present a method for the decision-making process for moves that combines character type and conversational type. We present a mathematical model that combines these factors after assigning numerical values to the various parameters involved, and demonstrate by means of an example how this works. Future Work In this paper, we have made a preliminary proposal concerning the modelling of character types and their combination with conversational types. We aim to refine this in future work in ways that include the following: * Wong (2018) gave a formal method to classify conversations into types. We believe that under realistic conditions there are often multiple conversational types involved in a single conversation, which may involve sequential transformations or overlapping phenomena. * In the approach sketched here, we use probabilistic TTR to classify the conversational type. However, in practice this assessment can change as the dialogue unfolds. We hope to develop methods that incorporate such dynamism. * Personality analysis based on the Big Five theory is robust, but interlocutors' inferences about each other's character types are also in flux during conversations; examining the impact of this change process on our approach is worthwhile future work. * Ginzburg et al. (2019) provided a categorical approach to responses to questions. We hypothesize that the conversational type has the role of delimiting the range of possible moves. We aim to characterize this variability. * This article introduces the concept of move conformity, which future work on different types of dialogue will need to detect automatically. 
This could be achieved by modifying an existing NLG evaluation model (e.g. BLEU Papineni et al. (2002), ADEM Lowe et al. (2017)). * This paper discussed several cases in detail, but they are all constructed examples, so fitting this model to actual conversations (e.g. the British National Corpus) or scripted dialogues annotated for character types (e.g. FriendsQA Yang and Choi (2019)) with experimental predictions is desirable. ## Acknowledgments This work was supported by an internship funded by the Institut Universitaire de France, within Jonathan Ginzburg's project _Unifying Verbal and Non-verbal Interaction_. We also acknowledge the support of the French Investissements d'Avenir-Labex EFL program (ANR-10-LABX-0083). Thanks to Prof. Ginzburg for his patient guidance during the internship. Thanks also to three anonymous reviewers for SemDial for their detailed and perceptive comments.
2302.08412
Steric engineering of point defects in lead halide perovskites
Due to their high photovoltaic efficiency and low-cost synthesis, lead halide perovskites have attracted wide interest for application in new solar cell technologies. The most stable and efficient ABX$_3$ perovskite solar cells employ mixed A-site cations, however the impact of cation mixing on carrier trapping and recombination -- key processes that limit photovoltaic performance -- is not fully understood. Here we analyse non-radiative carrier trapping in the mixed A-cation hybrid halide perovskite MA$_{1-x}$Cs$_x$PbI$_3$. By using rigorous first-principles simulations we show that cation mixing leads to a hole trapping rate at the iodine interstitial that is eight orders of magnitude greater than in the single cation system. We demonstrate that the same defect in the same material can display a wide variety of defect activity -- from electrically inactive to recombination centre -- and, in doing so, resolve conflicting reports in the literature. Finally, we propose a new mechanism in which steric effects can be used to determine the rate of carrier trapping; this is achieved by controlling the phase and dynamical response of the lattice through the A-site composition. Our findings elucidate crucial links between chemical composition, defect activity and optoelectronic performance, and suggest a general approach that can help to rationalise the development of new crystalline materials with target defect properties.
Lucy D. Whalley
2023-02-16T16:39:27Z
http://arxiv.org/abs/2302.08412v2
# Steric engineering of point defects in lead halide perovskites ###### Abstract Due to their high photovoltaic efficiency and low-cost synthesis, lead halide perovskites have attracted wide interest for application in new solar cell technologies. The most stable and efficient ABX\({}_{3}\) perovskite solar cells employ mixed A-site cations, however the impact of cation mixing on carrier trapping and recombination--key processes that limit photovoltaic performance--is not fully understood. Here we analyse non-radiative carrier trapping in the mixed A-cation hybrid halide perovskite MA\({}_{1\text{-}\text{x}}\)Cs\({}_{\text{x}}\)PbI\({}_{3}\). By using rigorous first-principles simulations combined with techniques initially developed for organic molecular materials, we show that cation mixing leads to a hole trapping rate at the iodine interstitial that is seven orders of magnitude greater than in the single cation system. We demonstrate that the same defect in the same material can display a wide variety of defect activity--from electrically inactive to recombination centre--and, in doing so, resolve conflicting reports in the literature. Finally, we propose a new mechanism in which A-site composition can be used to determine the rate of carrier trapping; this is achieved by controlling the phase and dynamical response of the lattice through the steric size of the molecular cations. Our findings elucidate crucial links between chemical composition, defect activity and optoelectronic performance, and present a general approach that can help to rationalise the development of new materials with target defect properties. ## 1 Introduction Organic-inorganic lead halide perovskites (OLHPs) have attracted wide interest for their application in optoelectronics. 
Single junction halide perovskite solar cells now exceed 25% power conversion efficiency [1], with the most stable and efficient devices employing mixed organic (methylammonium, MA and formamidinium, FA) or inorganic (Cs, Rb) cations on the A-site of the ABX\({}_{3}\) perovskite structure. A-site cation engineering is primarily used to improve thermal and chemical stability [2]; the impact of A-site mixing on defect activity is not fully understood, despite this being crucial for the development of devices with increased efficiency [3]. The A-site cation indirectly determines various optoelectronic, transport and defect properties through an influence on the symmetry and dynamic response of the crystal lattice [3, 4]. Although defect formation and activity are sensitive to the exact system under consideration, the common observation across OLHPs is that halide ions form abundant, mobile point defects which are active in carrier trapping and recombination [5, 6]. Furthermore, the bonding in OLHPs is relatively weak, leading to an easily distorted ('soft') lattice and large lattice relaxation after carrier capture at a defect site [7, 8]. Large lattice relaxation is not commonly observed in all-inorganic optoelectronic materials. It is more akin to what occurs in organic molecular materials [9], where steric engineering through the incorporation of bulky ligands is used to restrict the vibrational modes associated with lattice relaxation after electronic excitation [9, 10]. Here we propose that a similar approach can be used to rationalise the design of defect-tolerant OLHPs. Using first-principles methods and techniques adapted from the field of organic semiconductors, we show that cation mixing in MA\({}_{1\text{-}x}\)Cs\({}_{x}\)PbI\({}_{3}\) can be used to adjust the hole trapping rate at the iodine interstitial by seven orders of magnitude. 
Combining this with symmetry mode analysis, we demonstrate that defect activity can be tuned by controlling phase formation through the steric size of the molecular cations. This provides a route for engineering defect properties without altering the metal-halide chemistry that is beneficial for photovoltaic performance. ## 2 Results ### Carrier capture rates with molecular rotations We focus our analysis on the iodine interstitial defect in the negative (I\({}_{\mathrm{i}}^{-}\)) and neutral (I\({}_{\mathrm{i}}^{0}\)) charge states as these have been found to be most detrimental to solar cell efficiency [11]. The iodine interstitial is a negative-U defect so that I\({}_{\mathrm{i}}^{0}\) is metastable [5, 8, 12, 13, 14]. However, I\({}_{\mathrm{i}}^{0}\) can still be formed through electron capture at I\({}_{\mathrm{i}}^{+}\) or hole capture at I\({}_{\mathrm{i}}^{-}\). As shown in Figure 1a, the neutral iodine interstitial I\({}_{\mathrm{i}}^{0}\) bonds with a lattice iodine to produce a molecular I\({}_{2}^{-}\) H-centre with a trapped hole [14]. After electron capture or hole release the negative charge state I\({}_{\mathrm{i}}^{-}\) is formed in a split-interstitial configuration. This is accompanied by tilting and distortions of the inorganic PbI\({}_{6}\) octahedra to accommodate the I-I bond lengthening. The inorganic structural changes are coupled to MA rotations, similar to the behaviour observed during thermal phase transitions [15]. To model the kinetics of charge capture at a defect site we must consider the coupling of electronic and atomic structure. We map the potential energy surface (PES) between two charge states as a function of a collective coordinate \[Q=\sqrt{\sum_{i}m_{i}\Delta r_{i}^{2}},\] where the sum is over atoms \(i\) with mass \(m_{i}\) and a displacement from equilibrium of \(\Delta r_{i}\). 
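As a concrete illustration, the collective coordinate above reduces to a few lines of NumPy given two geometries; the function name and the single-atom toy geometry below are hypothetical, not taken from this work:

```python
import numpy as np

def collective_coordinate(r_init, r_final, masses):
    """Mass-weighted collective coordinate Q = sqrt(sum_i m_i |dr_i|^2),
    for coordinate arrays of shape (N, 3) in Angstrom and masses in amu."""
    dr = r_final - r_init               # per-atom displacement vectors
    dr2 = np.sum(dr**2, axis=1)         # squared displacement per atom
    return np.sqrt(np.sum(masses * dr2))  # units: amu^(1/2) Angstrom

# Toy example: a single iodine atom (126.9 amu) displaced by 1 Angstrom
r0 = np.array([[0.0, 0.0, 0.0]])
r1 = np.array([[1.0, 0.0, 0.0]])
q = collective_coordinate(r0, r1, np.array([126.9]))
print(q)  # sqrt(126.9), roughly 11.3 amu^(1/2) Angstrom
```

In practice \(r_{\mathrm{init}}\) and \(r_{\mathrm{final}}\) would be the relaxed supercell geometries of the two defect charge states.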
For inorganic and hybrid materials the standard procedure is to conduct a linear interpolation between each atomic position of the two equilibrium structures [16, 17, 18], and this is the procedure that has been previously applied to hybrid perovskites [8, 12, 13]. Contributions from the rotation of the MA cation have previously been ignored as linear interpolation adjusts the intramolecular bond lengths to give unphysical energies [8]. We introduce an interpolation method (which we will term 'Kabsch interpolation') most commonly used for molecular materials, but transferred here for the first time to a hybrid inorganic-organic material. To interpolate the molecular species we first express atomic positions in the initial and final geometries as a set of vectors, \(\mathbf{a}_{i}\) and \(\mathbf{a}_{f}\) respectively. We use the Kabsch algorithm to calculate an optimal rotation axis \(e\) and angle \(\theta\) that maps between \(\mathbf{a}_{i}\) and \(\mathbf{a}_{f}\) whilst minimising the root mean squared deviation between each vector pair [19]. This allows us to rotate the molecule around \(e\) using a linear interpolation of \(\theta\). Finally, we combine this with a linear interpolation of the molecular centre of mass and the inorganic framework. This method generates a more accurate energy surface that allows us to consider molecular translations and rotations. Unphysical energies are avoided as the molecule is treated as a rigid object that cannot be deformed. Figure 1b shows configuration coordinate diagrams for the \(\mathrm{I}_{\mathrm{i}}^{0}\Leftrightarrow\mathrm{I}_{\mathrm{i}}^{-}\) transitions in MAPbI\({}_{3}\), using standard linear interpolation (in grey) and Kabsch interpolation (in colour). We use the Heyd-Scuseria-Ernzerhof hybrid functional [20] alongside spin-orbit coupling to obtain accurate defect energetics. 
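The rigid-body interpolation scheme described above can be sketched with a few NumPy functions; this is an illustrative implementation under simplifying assumptions (equal atomic masses for the centre of mass, \(0<\theta<\pi\)), not the production code used in this work:

```python
import numpy as np

def kabsch_rotation(p, q):
    """Optimal rotation R (Kabsch algorithm) such that q ~ p @ R.T,
    for centred coordinate arrays p, q of shape (N, 3)."""
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def axis_angle(r):
    """Rotation axis e (unit vector) and angle theta of a proper
    rotation matrix r, assuming 0 < theta < pi."""
    theta = np.arccos(np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0))
    e = np.array([r[2, 1] - r[1, 2], r[0, 2] - r[2, 0], r[1, 0] - r[0, 1]])
    return e / np.linalg.norm(e), theta

def rodrigues(coords, e, theta):
    """Rotate row vectors in coords about unit axis e by angle theta."""
    k = np.array([[0, -e[2], e[1]], [e[2], 0, -e[0]], [-e[1], e[0], 0]])
    r = np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)
    return coords @ r.T

def kabsch_interpolate(a_i, a_f, t):
    """Rigid interpolation of a molecule between geometries a_i and a_f:
    rotate by t*theta about the optimal axis, and linearly interpolate
    the centre of mass (equal masses assumed for simplicity)."""
    com_i, com_f = a_i.mean(axis=0), a_f.mean(axis=0)
    e, theta = axis_angle(kabsch_rotation(a_i - com_i, a_f - com_f))
    return rodrigues(a_i - com_i, e, t * theta) + (1 - t) * com_i + t * com_f
```

Linearly interpolating the inorganic framework alongside `kabsch_interpolate` for each molecule then yields the full intermediate geometries at each value of \(t\).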
\(Q=0\) corresponds to the equilibrium configuration of \(\mathrm{I}_{\mathrm{i}}^{0}\) and \(Q=\Delta Q=36\,\mathrm{amu}^{1/2}\mathrm{\AA}\) corresponds to that of \(\mathrm{I}_{\mathrm{i}}^{-}\). A nonradiative recombination process begins at \(\mathrm{I}_{\mathrm{i}}^{0}\) with an electron at the conduction band minimum (CBM) and a hole at the valence band maximum (VBM). This is represented with the orange dashed line in Figure 1b. After electron capture there is a transition to \(\mathrm{I}_{\mathrm{i}}^{-}\) (blue dotted line); in the semiclassical picture this requires overcoming the energy barrier \(E_{\mathrm{n}}\). Subsequent hole capture over the energy barrier \(E_{\mathrm{p}}\) to \(\mathrm{I}_{\mathrm{i}}^{0}\) (green solid line) completes the recombination cycle. For Kabsch interpolation the energy surface is softened as energy is dissipated through rotations of the MA cation, resulting in a significant reduction of \(E_{\mathrm{n}}\) (\(0.15\,\mathrm{eV}\) to \(0.025\,\mathrm{eV}\)) and \(E_{\mathrm{p}}\) (\(0.92\,\mathrm{eV}\) to \(0.63\,\mathrm{eV}\)). In semiclassical models the capture rate has an exponential dependence on the ratio of barrier height to \(k_{\mathrm{B}}T\), so changes of \(\sim\)100 meV, as seen here, can have a significant impact on defect activity. For accurate predictions of capture rates, we use a quantum chemical theory to calculate coefficients for electron capture (\(C_{\mathrm{n}}\)) and hole capture (\(C_{\mathrm{p}}\)) [16, 17]. This moves beyond the semiclassical picture to include the strength of coupling between the defect state and CBM (\(W_{\mathrm{if}}^{\mathrm{n}}\), for electron capture), or the defect state and VBM (\(W_{\mathrm{if}}^{\mathrm{p}}\), for hole capture). It also considers the vibronic overlap between each PES, thus allowing quantum tunnelling below the classical barrier. The capture coefficients determine the capture rate \(R\) at a defect. 
To take electron capture at a neutral defect as an example, \[R_{n}=C_{n}N_{0}n, \tag{1}\] where \(N_{0}\) is the neutral defect density and \(n\) is the electron density. Figure 1: Carrier capture processes in single and mixed cation perovskites. Predicted energies are also given in Table S1. (a) Crystal structures for: i) a pristine (defect-free) lead halide perovskite material; ii) the neutral iodine interstitial in a H-centre configuration with a localised hole; iii) the negative iodine interstitial in a split-interstitial configuration. (b) Configuration coordinate diagram for the neutral and negative iodine interstitial in \(\mathrm{MAPbI}_{3}\). Each scatter point represents a DFT-calculated total energy. The solid grey lines are the potential energy surfaces (PES) generated using a lower-accuracy interpolation method that does not account for molecular rotations. (c) Nonradiative carrier capture coefficients for the iodine interstitial in \(\mathrm{MAPbI}_{3}\), calculated for electron capture at a neutral iodine interstitial (yellow solid line), hole capture at a negative iodine interstitial (blue solid line) and electron capture followed by hole capture (red dashed line). (d) Ratio of electron capture rate (from the conduction band) and hole emission rate (into the valence band) for the neutral iodine interstitial in \(\mathrm{MAPbI}_{3}\). (e) Configuration coordinate diagram for the neutral and negative iodine interstitial in \(\mathrm{MA}_{0.875}\mathrm{Cs}_{0.125}\mathrm{PbI}_{3}\). The grey dotted line is the PES of the negative iodine interstitial in \(\mathrm{MAPbI}_{3}\), given for comparison. (f-g) As in (c-d), but for \(\mathrm{MA}_{0.875}\mathrm{Cs}_{0.125}\mathrm{PbI}_{3}\). We consider electron capture followed by hole capture, the total rate of which is quantified using \[C_{\mathrm{np}}=\frac{C_{\mathrm{n}}C_{\mathrm{p}}}{C_{\mathrm{n}}+C_{\mathrm{p}}}. 
\tag{2}\] We note that this does not correspond to the total rate of non-radiative recombination at the iodine interstitial, as we do not consider non-radiative recombination with hole capture as the initial step. Figure 1c shows that although electron capture is fast at 300 K, the non-radiative recombination process is limited by the slow rate of hole capture. Importantly, competing with the first step of this process (electron capture) is hole emission from the localised state associated with \(\mathrm{I}_{\mathrm{i}}^{0}\) (green solid line) to a delocalised state in the valence band (forming \(\mathrm{I}_{\mathrm{i}}^{-}\), blue dotted line). This process does not require photoexcitation so can happen 'in the dark'. The ratio of the electron capture rate \(R_{\mathrm{c,n}}\) to the hole emission rate \(R_{\mathrm{e,p}}\) is given by: \[\frac{R_{\mathrm{c,n}}}{R_{\mathrm{e,p}}}=\frac{nC_{n}}{N_{\mathrm{v}}C_{ \mathrm{p}}}, \tag{3}\] where \(N_{\mathrm{v}}\) is the density of occupied states in the valence band and \(n\) is the electron concentration [21]. Assuming a hole effective mass value of 0.2 \(m_{\mathrm{e}}\)[22] and that the electron concentration is \(1\times 10^{15}\,\mathrm{cm}^{-3}\)[23], hole emission at the neutral iodine interstitial will occur faster than electron capture at temperatures above 160 K (Figure 1d). It is important to highlight that once the negatively charged iodine interstitial is formed, whether through electron capture or hole emission, it is limited by the slow rate of hole capture (\(C_{\mathrm{p}}=6.0\times 10^{-17}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\) at 300 K) and is electrically inactive. ### Carrier capture in mixed A-cation systems We now investigate the impact of cation mixing on nonradiative trapping and recombination processes. We consider the mixed cation system MA\({}_{\mathrm{1-x}}\)Cs\({}_{\mathrm{x}}\)PbI\({}_{3}\) and compare this against the control case of MAPbI\({}_{3}\). 
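The rate bookkeeping in equations (1)-(3) above is simple arithmetic; a short sketch, where the hole capture coefficient is the value quoted in the text for MAPbI\({}_{3}\) at 300 K but the electron capture coefficient is an illustrative placeholder (no numerical \(C_{\mathrm{n}}\) is quoted above):

```python
def c_np(c_n, c_p):
    """Coefficient for electron capture followed by hole capture,
    eq. (2): dominated by the slower of the two processes."""
    return c_n * c_p / (c_n + c_p)

def capture_to_emission_ratio(n, c_n, n_v, c_p):
    """Ratio of electron capture rate to hole emission rate, eq. (3)."""
    return (n * c_n) / (n_v * c_p)

c_p = 6.0e-17           # cm^3 s^-1, MAPbI3 at 300 K (from the text)
c_n = 1.0e-8            # cm^3 s^-1 -- illustrative placeholder only
print(c_np(c_n, c_p))   # ~6.0e-17: limited by the slow hole capture
```

The form of equation (2) guarantees \(C_{\mathrm{np}}\leq\min(C_{\mathrm{n}},C_{\mathrm{p}})\), which is why the seven-orders-of-magnitude change in \(C_{\mathrm{p}}\) discussed below matters so much for the overall recombination rate.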
Initially we focus our analysis on the mixed A-cation system MA\({}_{0.875}\)Cs\({}_{0.125}\)PbI\({}_{3}\), which is close to the Cs concentration reported to be optimal for device efficiency [24]. We find that a H-centre defect is formed with charge localisation around the iodine dimer (Figure S1), indicating that the basic defect activity is comparable to MAPbI\({}_{3}\). Figure 1e shows that the total lattice relaxation \(\Delta Q\) is suppressed through Cs incorporation (36.8 amu\({}^{1/2}\)Å to 28.9 amu\({}^{1/2}\)Å). Despite this reduction, lattice relaxation is still relatively large for the mixed A-cation systems, as \(\Delta Q\) is typically less than 10 amu\({}^{1/2}\)Å for all-inorganic materials [17, 18]. Analysis of the defect structure for each charge state shows that the \(\Delta Q\) reduction can be primarily attributed to reduced displacements of Pb and I. In particular, the change in Pb-I-Pb bond angle after charge capture is reduced, suggesting that octahedral rotations are suppressed in the mixed cation materials (Table S3). Figure 1e also shows an increase in the energy difference \(\Delta E\) between the neutral and negatively charged defect states at their equilibrium configurations (\(-0.23\,\mathrm{eV}\) to \(0.06\,\mathrm{eV}\)). \(\Delta E\) corresponds to the thermodynamic charge transition level relative to the valence band edge (for hole capture) or conduction band edge (for electron capture), and is a key parameter for classifying defect activity in the semi-classical defect model. For example, defects within a few \(k_{\mathrm{B}}T\) of the band edge typically show single carrier trapping and de-trapping behaviour, whilst 'deep' defects towards the middle of a band gap may successively trap both carrier species and form a site for non-radiative recombination [21]. After cation mixing we find that \(\Delta E\) increases so that the two charge states become close to thermodynamic equilibrium. 
As a result, fast hole trapping and de-trapping behaviour is expected. We find that the shape of the PES is largely unchanged after Cs mixing. This is especially true at small displacements around the equilibrium structures, where there is a slight softening of the harmonic PES after Cs incorporation for both the negative (\(43\,\mathrm{cm}^{-1}\) to \(35\,\mathrm{cm}^{-1}\)) and neutral (\(52\,\mathrm{cm}^{-1}\) to \(50\,\mathrm{cm}^{-1}\)) charge states. For comparison, the _DX_-centre in GaAs has an effective harmonic frequency of \(81\,\mathrm{cm}^{-1}\). The reduction in \(\Delta E\), combined with a rigid relative displacement of the \(I_{\mathrm{i}}^{-}\) and \(I_{\mathrm{i}}^{0}\) PES in _E_-_Q_ space, leads to a reduction in the hole capture barrier \(E_{\mathrm{p}}\) (\(0.63\,\mathrm{eV}\) to \(0.23\,\mathrm{eV}\)) and a small increase in the electron barrier height \(E_{\mathrm{n}}\) (\(0.025\,\mathrm{eV}\) to \(0.045\,\mathrm{eV}\)). The reduction in \(E_{\mathrm{p}}\) indicates that there will be an increased rate of hole trapping at the negatively charged iodine interstitial after cation mixing. To quantify the impact of cation mixing we analyse the same multi-phonon capture and recombination processes as outlined in Section 2.1. We find that the defect state couples more strongly with the valence band (\(W_{\mathrm{if}}^{\mathrm{p}}=0.034\,\mathrm{eV}\,\mathrm{amu}^{-1/2}\,\mathrm{\AA}^{-1}\)) compared to the conduction band (\(W_{\mathrm{if}}^{\mathrm{n}}=0.002\,\mathrm{eV}\,\mathrm{amu}^{-1/2}\,\mathrm{\AA}^{-1}\)); this is expected as the VBM has a larger density of electronic states derived from iodine \(p\)-orbitals (Figure S2). This, combined with the reduction in \(E_{\mathrm{p}}\), results in a carrier recombination process that is more balanced between electron and hole capture, and that shows two clear regimes. 
At temperatures below \(200\,\mathrm{K}\) the process is limited by low vibronic overlap between occupied states of the \(\mathrm{I}_{\mathrm{i}}^{-}\) and \(\mathrm{I}_{\mathrm{i}}^{0}\) energy surfaces, leading to slow hole capture at \(\mathrm{I}_{\mathrm{i}}^{-}\) and a recombination coefficient \(C_{\mathrm{np}}\) that is strongly temperature dependent. At higher temperatures the small \(W_{\mathrm{if}}^{\mathrm{n}}\) makes electron capture at \(\mathrm{I}_{\mathrm{i}}^{0}\) the limiting process, and \(C_{\mathrm{np}}\) has a reduced dependence on temperature (Figure 1f). As in the single cation material, we must also consider hole emission from the localised state associated with \(\mathrm{I}_{\mathrm{i}}^{0}\). We find that the rate of hole emission exceeds the rate of electron capture across the whole temperature range (Figure 1g). Furthermore, in contrast to the single cation material, hole trapping is also fast, with \(C_{\mathrm{p}}=1.1\times 10^{-10}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\) at \(300\,\mathrm{K}\); an increase of over seven orders of magnitude compared to single cation MAPbI\({}_{3}\). We conclude that \(\mathrm{I}_{\mathrm{i}}\) will form a site for successive hole trapping and de-trapping in this system. Our results indicate that Cs incorporation leads to a significant increase in the rate of non-radiative trapping and de-trapping at the iodine interstitial. At first this may appear to contradict the well-established improvement in performance for mixed cation perovskite materials, which are used in cells with the highest power conversion efficiencies (PCEs) [3]. However, for high-performance PV materials, both high PCE and good stability (chemical, thermal and mechanical) are prerequisites. Figure 1 suggests that the high PCEs for mixed A-cation perovskites do not necessarily derive from a reduced rate of non-radiative trapping or recombination in the material as-synthesised, but follow from their increased stability. 
### Steric engineering of point defect properties

The increase in \(\Delta E\) after cation mixing can be rationalised using the concept of reorganisation energy from electron transfer theory [25]. This is the energy released from the lattice relaxation \(\Delta Q\) between equilibrium geometries; a smaller geometric relaxation is associated with a smaller reorganisation energy. \(\Delta E\) and reorganisation energy \(\lambda\) are related through the expression \(\Delta E=E_{\rm abs}-\lambda\), where \(E_{\rm abs}\) is the Franck-Condon absorption energy for a vertical transition with no change in lattice geometry. For relaxation after electron capture in MAPbI\({}_{3}\), \(\lambda>E_{\rm abs}\), so that there is a large thermodynamic penalty for re-forming the neutral charge state. After Cs incorporation there is a reduction in \(\Delta Q\) and, as a result, \(\lambda\) is also reduced. This leads to an increase in \(\Delta E\) so that the two charge states become close to thermodynamic equilibrium. Across a wide range of molecular systems there is a linear relationship between \(\lambda\) and \(\Delta Q\) for charge transfer reactions. This relationship enables charge transport engineering, whereby the steric size of the molecule is used to determine \(\Delta Q\) and, following this, \(\lambda\)[10, 26]. In order to evaluate the extension of steric engineering to lead halide perovskites we begin by investigating the relationship between \(\lambda\) and \(\Delta Q\) for carrier capture at the iodine interstitial in MA\({}_{1-x}\)Cs\({}_{x}\)PbI\({}_{3}\) across a range of stoichiometries (\(x=0,0.125,0.25\) and \(0.5\)). At higher stoichiometries (\(x>0.5\)) we found that the valence band maximum increases so that there is no localised defect state for hole capture within the band gap, in agreement with previous reports for CsPbI\({}_{3}\)[27]. 
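The sign structure of the relation \(\Delta E=E_{\rm abs}-\lambda\) is easy to make concrete; the energies in the sketch below are illustrative placeholders, not the computed values for this system.

```python
def transition_level(e_abs, lam):
    """Charge transition level from the Franck-Condon absorption energy
    and the reorganisation energy: dE = E_abs - lambda (all in eV)."""
    return e_abs - lam

# lambda > E_abs: the level is pushed below the VBM (electrically inactive).
print(f"{transition_level(1.0, 1.2):+.2f} eV")
# A reduced lattice relaxation lowers lambda and raises dE into the gap.
print(f"{transition_level(1.0, 0.8):+.2f} eV")
```

The same arithmetic underlies the trend discussed above: shrinking \(\Delta Q\) shrinks \(\lambda\), which moves \(\Delta E\) upwards at fixed \(E_{\rm abs}\).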
In Figure 2a we show that after Cs incorporation \(\Delta Q\) is reduced relative to the single cation system (\(Fm\overline{3}m\) phase) for all compositions. We also find that there is a strong positive linear correlation between \(\lambda\) and \(\Delta Q\) for the mixed A-cation systems (lower dashed line). This analysis includes the point (0,0), as required by the definition of relaxation energy. To allow comparison between different A-site compositions we scale \(\Delta Q\) by \(\sqrt{V_{c}/V}\) where \(V\) is the volume of the defect supercell and \(V_{c}\) is the volume of the control supercell (MAPbI\({}_{3}\) in the \(Fm\overline{3}m\) phase). We combine our results with data from the literature at the same level of theory for the single cation system [13]. This enables us to consider three lattice relaxation processes in MAPbI\({}_{3}\), each corresponding to a different regime of electron-lattice coupling: i) a vertical transition with no relaxation (\(\Delta Q=0\)); ii) relaxation in the \(Pnma\) phase (strong coupling, \(\Delta Q\sim 20\)); and iii) relaxation in the \(Fm\overline{3}m\) phase (giant coupling, \(\Delta Q\sim 35\)). Figure 2a confirms a positive correlation between \(\lambda\) and \(\Delta Q\) for the single cation systems (upper dashed line). The mixed cation systems are at intermediate \(\Delta Q\) values between the strong- and giant-coupling regimes, and we find a positive linear correlation (Pearson correlation coefficient \(r(4)=0.96\) with \(P<0.01\)) across all data points (single and mixed, dotted line). To understand how \(\Delta Q\) may be used to tune the charge state and activity of I\({}_{i}\) we investigate the relationship between \(\Delta E\) and \(\Delta Q\). In Figure 2b we show a strong negative correlation (\(r(7)=-0.96\) with \(P<0.001\)) between \(\Delta E\) and \(\Delta Q\) (dotted line), indicating that \(\Delta E\) can be tuned through variation of \(\Delta Q\). 
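The correlation analysis described above (a Pearson coefficient plus a first-order polynomial fit through the data, including the (0,0) point) can be sketched minimally as follows; the \((\Delta Q,\lambda)\) pairs below are placeholder values for illustration, not the calculated data.

```python
import numpy as np

# Placeholder (dQ, lambda) pairs in amu^(1/2) Angstrom and eV, including the
# (0, 0) point required by the definition of the relaxation energy.
dq = np.array([0.0, 10.0, 15.0, 20.0, 35.0])
lam = np.array([0.0, 0.35, 0.55, 0.70, 1.30])

r = np.corrcoef(dq, lam)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(dq, lam, 1)  # first-order (linear) fit
print(f"r = {r:.3f}, slope = {slope:.4f} eV per amu^(1/2) Angstrom")
```

A strongly positive `r`, as found for the real data, supports fitting a single linear \(\lambda(\Delta Q)\) trend across compositions.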
When there is no relaxation, \(\Delta E=E_{\rm abs}\), and this is shown to be composition dependent. As in Figure 2a, if we confine our analysis to a single composition and \(E_{\rm abs}\) value, the trend becomes stronger (dashed line). The inset in Figure 2b compares \(\Delta E\) values calculated using the semilocal PBEsol exchange-correlation functional (empty scatter points), and the HSE06 exchange-correlation functional with spin-orbit coupling (HSE06-SOC, filled scatter points). We observe a systematic increase in \(\Delta E\) for both the single cation (square scatter points) and mixed cation (circular scatter points) systems at the higher HSE06-SOC level of theory. This is due to an increased accuracy for the predicted electronic band edge positions [5]. Figure 2b explains the significant \(0.7\,\mathrm{eV}\) discrepancy between previously published values for \(\Delta E\), which has led to conflicting predictions that the iodine interstitial is electrically inactive [8] or a site for non-radiative recombination [13]. Our results suggest that the source of this discrepancy is the different perovskite phases used for modelling, and the influence this has on the predicted \(\Delta Q\): whilst Reference [8] uses the high symmetry \(Pm\bar{3}m\) pseudo-cubic phase, Reference [13] uses the lower symmetry orthorhombic phase, which is formed from condensation of the \(M_{3}^{+}\) and \(R_{4}^{+}\) phonon modes associated with octahedral tilting. If a subset of the distortions associated with lattice relaxation in the cubic phase is not available in the orthorhombic phase, this will lead to the observed reduction in \(\Delta Q\) and increase in \(\Delta E\).

Figure 2: Defect properties for the MA\({}_{\text{1-x}}\)Cs\({}_{\text{x}}\)PbI\({}_{\text{3}}\) series of materials. The purple circle, green triangle, orange cross and blue square denote \(x=0,0.125,0.25\) and \(0.5\) compositions respectively. Values for the single cation \(Pnma\) phase are taken from Reference [13]. (a) Relaxation energy \(\lambda\) and \(\Delta Q\). The lower dashed line is a first-order polynomial fit to the mixed cation data, the upper dashed line is a fit to the single cation data. The dotted line is a fit to all scatter points. (b) Charge transition level \(\Delta E\) and \(\Delta Q\). The dashed line is a fit to the single cation data. The dotted line is a fit to all scatter points. The inset shows energies calculated with the PBEsol exchange-correlation functional (empty scatter points) and the hybrid HSE06 functional with spin-orbit coupling (filled scatter points).

Our analysis suggests that a symmetry lowering mechanism might be responsible for the reduction in \(\Delta Q\) observed for the mixed cation perovskites. To test this hypothesis we use symmetry mode analysis [28]. This allows us to decompose the structural changes after A-cation mixing into the normal phonon modes of the single cation cubic structure, giving insight into the specific atomic displacements that are contributing to any lattice distortion. Figure 3a shows that for all three mixed cation systems the most dominant phonon mode is \(\Gamma_{4}^{-}\). This is a polar zone centre displacement corresponding to an off-centering of Pb and I. The second most dominant phonon mode is \(M_{3}^{+}\). This is a zone boundary mode and a dominant component of thermal phase transitions in perovskite materials [29]. It corresponds to rigid rotations of the PbI\({}_{6}\) octahedra, and consequently describes motions of I only. The amplitudes for all phonon modes are given in Figures S3-S5. Inspection of the \(\Gamma_{4}^{-}\) polarisation vector shows that this mode is primarily a rigid shift relative to the A-site cation and so does not lead to a reduction in volume. 
In contrast, mapping along the \(M_{3}^{+}\) mode reduces the metal-halide-metal bonding angle and the cubo-octahedral volume around the A-site. This is in agreement with our measured volumes and Pb-I-Pb bond angle after Cs incorporation (Table S2). We conclude from this that the reduced steric size of the Cs cation (1.81 Å) compared to MA (2.70 Å) leads to a volume contraction primarily mediated via condensation of the \(M_{3}^{+}\) octahedral tilting mode.

Figure 3: Symmetry mode analysis of MA\({}_{\text{1-x}}\)Cs\({}_{\text{x}}\)PbI\({}_{3}\) with the cubic perovskite phase \(Pm\bar{3}m\) used as a reference (parent) structure. a) Amplitude of the two most dominant phonon modes \(M_{3}^{+}\) and \(\Gamma_{4}^{-}\). Green (Pb) and orange (I) are used to indicate the contribution from each atomic species; the A-site species is not displaced for either mode. b) Schematic of the atomic displacements that correspond to each phonon mode. For easier visualisation of the octahedral tilt patterns the iodine atoms are not shown.

Group theoretical analysis confirms that displacement along \(M_{3}^{+}\) reduces the symmetry from cubic to tetragonal (\(P4/mbm\)), which is in agreement with the structures reported for FA/Cs and FA/MA/Cs mixed cation systems [30, 31]. These experimental observations of mixed A-cation perovskites in the tetragonal phase suggest two important points. Firstly, that this symmetry lowering behaviour is common across other systems where cation mixing leads to a reduction in unit cell volume. Secondly, that any dynamic disorder leading to an effective pseudo-cubic phase is suppressed; the PbX\({}_{6}\) octahedral rotations are 'locked-in' and do not time-average to a higher symmetry structure. The latter point is supported by molecular dynamics simulations which show that a low concentration of Cs or Rb in FAPbI\({}_{3}\) suppresses octahedral tilting [32]. 
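At its core, the symmetry mode analysis above amounts to projecting the relaxed distortion onto orthonormal symmetry-adapted displacement vectors. The toy two-atom sketch below illustrates that projection step only; it is not the ISODISTORT implementation, and the mode vectors are invented stand-ins.

```python
import numpy as np

def mode_amplitudes(displacement, mode_vectors):
    """Amplitude of each symmetry-adapted mode in a 3N-dimensional distortion,
    obtained by projecting onto orthonormal mode vectors."""
    return np.asarray(mode_vectors) @ np.asarray(displacement)

# Toy example: 2 atoms x 3 Cartesian components = 6-dimensional space.
tilt = np.array([1.0, 0, 0, -1.0, 0, 0]) / np.sqrt(2)   # stand-in tilt-like mode
polar = np.array([1.0, 0, 0, 1.0, 0, 0]) / np.sqrt(2)   # stand-in polar-like mode
distortion = 0.3 * tilt + 0.1 * polar

amps = mode_amplitudes(distortion, [tilt, polar])
print(amps)  # ~[0.3, 0.1]: the distortion decomposed into mode amplitudes
```

Because the mode vectors are orthonormal, each projection recovers the weight of that mode in the total distortion, which is exactly the quantity plotted in Figure 3a.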
## 3 Discussion

Our results demonstrate that \(\Delta E\) is highly sensitive to \(\Delta Q\), and that the same defect in the same material can show a wide variety of defect behaviour. As shown schematically in Figure 4, the iodine interstitial can vary from being electrically inactive (with \(\Delta E\) in the valence band) through being a site for non-radiative hole trapping and de-trapping (with \(\Delta E<k_{\rm B}T\)), to being a site for non-radiative recombination (with \(\Delta E\) in the band gap). This conclusion is supported by time-resolved photoluminescence measurements of MAPbI\({}_{3}\) which show that regions with greater compressive strain are associated with increased non-radiative decay [33]. In addition, first-principles molecular dynamics studies demonstrate that the Kohn-Sham eigenvalues associated with defect states in hybrid halide perovskites can oscillate by as much as 1 eV, reinforcing our finding that defects in this system have an unusually high level of sensitivity to lattice distortions [34, 35].

Figure 4: Schematic illustration outlining the impact of steric engineering and crystal phase on I\({}_{i}\) defect activity. \(\Delta E\) is the (0/-) charge transition level, \(\Delta Q\) is a measure of lattice relaxation between the equilibrium geometries of I\({}_{i}^{0}\) and I\({}_{i}^{-}\). The control case of single cation MAPbI\({}_{3}\) in the pseudo-cubic phase shows the largest lattice relaxation after carrier capture, leading to an electrically inactive defect (orange illustrations). A-site mixing with a smaller cation reduces lattice relaxation (\(\Delta Q^{\prime}\)) and the hole capture barrier (E\({}_{p}^{\prime}\)), leading to fast hole trapping and de-trapping (blue). Full symmetry lowering to the orthorhombic phase leads to further reductions (\(\Delta Q^{\prime\prime}\), E\({}_{p}^{\prime\prime}\)) and a charge transition level that is in the electronic band gap (green). 
Our results also reveal the possibility of tuning \(\Delta E\) through control of \(\Delta Q\), which is a new approach to defect engineering for hybrid and inorganic materials. Doping to determine the Fermi level and defect charge species is well established [36], as is adjusting the chemical potentials of reactants during growth to increase the formation energy (and thus decrease the concentration) of harmful defects [37]. Instead, we propose doping (in this case, at the perovskite A-site) to determine the available lattice relaxation pathways (\(\Delta Q\)). This in turn adjusts the charge transition level and carrier capture barriers. In this study we find that the doping-induced volume contraction leads to a phase transition which 'locks in' the accepting phonon modes that are active in non-radiative carrier recombination. Furthermore, there are other mechanisms which could be used to engineer the available lattice relaxation pathways; for example, applied hydrostatic pressure and epitaxial growth have been shown to induce phase transformations in perovskite materials [38, 39]. To conclude, we have analysed non-radiative carrier trapping in single and mixed A-cation perovskite systems. We have introduced an interpolation method for hybrid materials that describes the coupling between electronic states and molecular rotations, and applied this to MA\({}_{1-x}\)Cs\({}_{x}\)PbI\({}_{3}\) for accurate predictions of defect activity. We find that cation mixing leads to a significant increase in the rate of hole trapping and de-trapping compared to the single cation system in the pseudo-cubic phase. Importantly, we find a linear relationship between \(\Delta E\) and \(\Delta Q\), demonstrating that the same defect in the same material can display a wide range of defect activity depending on _e.g._ film morphology or temperature. 
The reduction in \(\Delta Q\) is associated with phase transitions to a lower symmetry phase; in the case of mixed A-site cations this transition is induced through the decreased steric size of Cs compared to MA. Furthermore, our results suggest that \(\Delta Q\) can be used to tune the carrier trapping activity. This is a general approach for materials that display large lattice relaxation after carrier capture, and may prove a useful approach for rationalising the design of other hybrid organic-inorganic materials including metal-organic frameworks.

## 4 Methods

### Electronic structure calculations

The underlying electronic structures were calculated using density functional theory (DFT) as implemented in VASP [40] using a plane wave basis set with an energy cutoff of 400 eV. A 2\(\times\)2\(\times\)2 gamma-centred Monkhorst-Pack mesh was used for the Brillouin zone integration. The interstitial was placed in a 192-atom supercell. We calculated \(\Delta Q\) for the iodine interstitial in a 768-atom unit cell and found it to be converged within 1 amu\({}^{1/2}\) Å. A-site ordering for the mixed cation systems was determined using Special Quasi-random Structures [41] as implemented in the ICET code [42]. Ground state geometries were found using the PBEsol functional [43] with a force cutoff of 0.01 eV \(\mathrm{\AA}^{-1}\). The interpolated geometries were generated with a custom script using the Atomic Simulation Environment [44]. The potential energy surface was calculated using the screened-exchange HSE06 functional [20] with \(\alpha=0.43\) and spin-orbit coupling. The convergence criterion for the total energy was \(10^{-5}\) eV. We used a uniform reduction factor to evaluate HSE06 energies at the gamma point only. We calculated the neutral defect formation energy in MAPbI\({}_{3}\) using the full 2\(\times\)2\(\times\)2 k-point grid and found it to be converged within 0.01 eV per formula unit. 
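The \(\Delta Q\) convergence check above uses the standard mass-weighted configuration coordinate between the two equilibrium geometries, \(\Delta Q=\sqrt{\sum_{i}m_{i}|\Delta\mathbf{r}_{i}|^{2}}\). A minimal sketch (with hypothetical coordinates, not the supercell geometries):

```python
import numpy as np

def delta_q(pos_a, pos_b, masses):
    """Mass-weighted displacement between two equilibrium geometries,
    dQ = sqrt(sum_i m_i |dr_i|^2), in amu^(1/2) Angstrom."""
    dr = np.asarray(pos_b) - np.asarray(pos_a)
    return float(np.sqrt(np.sum(np.asarray(masses)[:, None] * dr**2)))

# Hypothetical two-atom fragment (positions in Angstrom, masses in amu).
pos_neutral = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
pos_charged = [[0.1, 0.0, 0.0], [2.0, 0.2, 0.0]]
dq_val = delta_q(pos_neutral, pos_charged, [126.9, 126.9])  # iodine masses
print(dq_val)  # ~2.52 amu^(1/2) Angstrom
```

In practice the sum runs over all atoms in the defect supercell, which is why supercell-size convergence of \(\Delta Q\) needs to be checked as described above.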
We employed a delta self-consistent field approach to constrain the occupation of the defect states near energy level crossings. The electron-phonon coupling term was derived from wavefunctions calculated at the same level of theory.

### Defect properties

The formation energy of a defect in charge state \(q\) is given by \[E_{\mathrm{f}}(q)=E_{\mathrm{d}}(q)-E_{\mathrm{b}}-\sum_{i}\mu_{i}n_{i}+q( \epsilon_{\mathrm{VBM}}+E_{\mathrm{F}})+E_{\mathrm{corr}},\] where \(E_{\mathrm{d}}(q)\) is the total energy of the defect lattice in charge state \(q\), \(E_{\mathrm{b}}\) is the total energy of the pristine lattice, \(\mu_{i}\) is the chemical potential of species \(i\) and \(n_{i}\) is the number of atoms that are added or removed. \(E_{\mathrm{d}}(q)\), \(E_{\mathrm{b}}\) and \(\mu_{i}\) were calculated using DFT, as outlined in the previous section. \(E_{\mathrm{corr}}\), the correction term for charged defects, was calculated using sxdefectalign with a value of \(\bar{\epsilon}_{0}=22.67\) for the static dielectric constant [45, 46]. More details on the methodology as applied to hybrid halide perovskites have already been published in Reference [8]. A quantum mechanical treatment of electron capture was performed using the open-source CarrierCapture package [47] which builds on the approach outlined in Reference [16]. 
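The formation-energy expression above translates directly into a small helper function. The sketch below uses hypothetical energies (variable names are ours), with the sign convention that \(n_i>0\) for atoms added to the cell, as for an interstitial.

```python
def formation_energy(e_defect, e_bulk, mu, n, q, e_vbm, e_fermi, e_corr):
    """E_f(q) = E_d(q) - E_b - sum_i mu_i n_i + q(e_VBM + E_F) + E_corr (eV)."""
    chem = sum(mu_i * n_i for mu_i, n_i in zip(mu, n))
    return e_defect - e_bulk - chem + q * (e_vbm + e_fermi) + e_corr

# Hypothetical numbers for a negatively charged interstitial (one added I atom).
ef = formation_energy(e_defect=-101.2, e_bulk=-100.0, mu=[-1.5], n=[1],
                      q=-1, e_vbm=1.0, e_fermi=0.5, e_corr=0.1)
print(f"{ef:.2f} eV")
```

Evaluating this at two charge states and finding the Fermi energy at which the two \(E_{\mathrm{f}}(q)\) values cross gives the charge transition level \(\Delta E\) used throughout the text.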
In this model the carrier capture coefficient for capture from an initial state i to a final state f is given by \[C=V\frac{2\pi}{\hbar}gW_{\mathrm{if}}^{2}\sum_{m}\Theta_{m}\sum_{n}|\langle \chi_{\mathrm{im}}|Q-Q_{0}|\chi_{\mathrm{fn}}\rangle|^{2}\times\delta(\Delta E +m\hbar\omega_{\mathrm{i}}-n\hbar\omega_{\mathrm{f}}),\] where \(V\) is the supercell volume, \(g\) is the energetic degeneracy of the final state, \(W_{\mathrm{if}}\) is the electron-phonon coupling matrix element, \(\langle\chi_{\mathrm{im}}|Q-Q_{0}|\chi_{\mathrm{fn}}\rangle\) is the overlap of the vibrational wavefunctions \(\chi\), and the Dirac \(\delta\) ensures that there is conservation of energy. In practice the Dirac \(\delta\) term is replaced by a smearing function; for the calculations in this study this is a Gaussian function of width 0.01 eV. \(\Theta_{m}\) is the thermal occupation of the vibrational state \(m\). The electron-phonon coupling term was calculated using the Nonrad package [48]. Further details of the methodology can be found in the literature [16].

### Geometry analysis

Bond lengths and bond angles were analysed using the Atomic Simulation Environment [44]. Crystal structures were visualised using VESTA [49]. Symmetry mode analysis was conducted using ISODISTORT [28] with MAPbI\({}_{3}\) in the parent cubic phase \(Pm\bar{3}m\) used as a reference structure. The rotational motion of the MA molecule was not considered in this analysis; all A-sites were modelled as point particles. The phonon mode amplitudes were normalised to the parent cell volume to allow comparison between different compositions.

### Code and data availability

The custom analysis code and raw data used to generate plots are openly available at [https://github.com/NU-CEM/MACsPbI3_defects](https://github.com/NU-CEM/MACsPbI3_defects). 
The custom code and raw data used for Kabsch interpolation are openly available at [https://github.com/NU-CEM/Kabsch_interpolation/](https://github.com/NU-CEM/Kabsch_interpolation/).

Supplementary information. The Supplementary Information includes a table of DFT calculated energies, a contour plot of charge distribution around the H-centre defect, geometric analysis data for the pristine and defect structures, an electronic density of states, and further results from the symmetry mode analysis.

Acknowledgments. This work used the Oswald High Performance Computing facility operated by Northumbria University (UK). Via our membership of the UK's HEC Materials Chemistry Consortium, which is funded by EPSRC (EP/R029431), this work used the ARCHER2 UK National Supercomputing Service ([http://archer2.ac.uk](http://archer2.ac.uk)).
2301.11413
A massive quiescent galaxy at redshift 4.658
The extremely rapid assembly of the earliest galaxies during the first billion years of cosmic history is a major challenge for our understanding of galaxy formation physics. The advent of JWST has exacerbated this issue by confirming the existence of galaxies in significant numbers as early as the first few hundred million years. Perhaps even more surprisingly, in some galaxies, this initial highly efficient star formation rapidly shuts down, or quenches, giving rise to massive quiescent galaxies as little as 1.5 billion years after the Big Bang. However, due to their faintness and red colour, it has proven extremely challenging to learn about these extreme quiescent galaxies, or to confirm whether any exist at earlier times. Here we report the spectroscopic confirmation of a massive quiescent galaxy, GS-9209, at redshift $z=4.658$, just 1.25 billion years after the Big Bang, using JWST NIRSpec. From these data we infer a stellar mass of $M_* = 3.8\pm0.2\times10^{10}\ M_\odot$, which formed over a $\simeq200$ Myr period before this galaxy quenched its star formation activity at $z=6.5^{+0.2}_{-0.5}$, when the Universe was $\simeq800$ million years old. Based on the presence of broad H$\alpha$ in the spectrum and a high narrow-line [NII]/H$\alpha$ ratio, we infer the presence of an accreting supermassive black hole, with a mass of $M_\bullet = 5\pm1\times10^{8}\ M_\odot$. This large black hole mass relative to the stellar mass suggests that active galactic nucleus (AGN) feedback may have been responsible for quenching this galaxy. GS-9209 is also extremely compact, with an effective radius, $r_e=215\pm20$ parsecs. This galaxy is both a likely descendent of the highest-redshift submillimetre galaxies and quasars, and a likely progenitor for the dense, ancient cores of the most massive local galaxies.
A. C. Carnall, R. J. McLure, J. S. Dunlop, D. J. McLeod, V. Wild, F. Cullen, D. Magee, R. Begley, A. Cimatti, C. T. Donnan, M. L. Hamadouche, S. M. Jewell, S. Walker
2023-01-26T20:56:34Z
http://arxiv.org/abs/2301.11413v2
# A massive quiescent galaxy at redshift 4.658

###### Abstract

We report the spectroscopic confirmation of a massive quiescent galaxy, GS-9209 at a new redshift record of \(z=4.658\), just \(1.25\) Gyr after the Big Bang, using new deep continuum observations from JWST NIRSpec. From our full-spectral-fitting analysis, we find that this galaxy formed its stellar population over a \(\simeq\)\(200\) Myr period, approximately \(600-800\) Myr after the Big Bang (\(z_{\bf form}=7.3\pm 0.2\)), before quenching at \(z_{\bf quench}=6.7\pm 0.3\). GS-9209 demonstrates unambiguously that massive galaxy formation was already well underway within the first billion years of cosmic history, with this object having reached a stellar mass of \(\log_{10}(M_{\star}/{\rm M_{\sun}})>10.3\) by \(z=7\). This galaxy also clearly demonstrates that the earliest onset of galaxy quenching was no later than \(\simeq\)\(800\) Myr after the Big Bang. We estimate the iron abundance and \(\alpha\)-enhancement of GS-9209, finding \([{\bf Fe/H}]=-0.97^{+0.06}_{-0.07}\) and \([\alpha/{\bf Fe}]=0.67^{+0.25}_{-0.15}\), suggesting the stellar mass vs iron abundance relation at \(z\simeq 7\), when this object formed most of its stars, was \(\simeq 0.4\) dex lower than at \(z\simeq 3.5\). Whilst its spectrum is dominated by stellar emission, GS-9209 also exhibits broad H\(\alpha\) emission, indicating that it hosts an active galactic nucleus (AGN), for which we measure a black-hole mass of \(\mathbf{log_{10}(M_{\bullet}/M_{\odot})=8.7\pm 0.1}\). Although large-scale star formation in GS-9209 has been quenched for almost half a billion years, the significant integrated quantity of accretion implied by this large black-hole mass suggests AGN feedback plausibly played a significant role in quenching star formation in this galaxy. GS-9209 is also extremely compact, with an effective radius of just \(\mathbf{215\pm 20}\) parsecs. 
This intriguing object offers perhaps our deepest insight yet into massive galaxy formation and quenching during the first billion years of cosmic history.

## 1 Summary

The discovery of massive galaxies with old stellar populations at early cosmic epochs has historically acted as a key constraint on models for both galaxy formation physics and cosmology [1, 2, 3, 4]. Today, the extremely rapid assembly of the earliest galaxies during the first billion years of cosmic history continues to challenge our understanding of galaxy formation physics [5, 6]. The advent of the James Webb Space Telescope (JWST) has exacerbated this issue by confirming the existence of galaxies in significant numbers as early as the first few hundred million years [7, 8, 9]. Perhaps even more surprisingly, in some galaxies, this initial highly efficient star formation rapidly shuts down, or quenches, giving rise to massive quiescent galaxies as little as \(\sim 1.5\) billion years after the Big Bang, at redshifts up to \(z\simeq 4\)[4, 10]. Due to their faintness and red colour, it has proven extremely challenging to learn about these extreme quiescent galaxies, or to confirm whether any exist at earlier times. Here, we report the spectroscopic confirmation of a quiescent galaxy, GS-9209, at a new redshift record of 4.658, just 1.25 billion years after the Big Bang, using the NIRSpec instrument on JWST. The transformative power of JWST allows us to characterise the physical properties of this early massive galaxy in unprecedented detail. GS-9209 has a stellar mass of \(M_{*}=4.1\pm 0.2\times 10^{10}\) M\({}_{\odot}\), and quenched star formation at \(z=6.7\pm 0.3\), when the Universe was \(\simeq 800\) million years old. This intriguing object offers perhaps our deepest insight yet into massive galaxy formation and quenching during the first billion years of cosmic history. 
## 2 Results

GS-9209 was first highlighted in the early 2000s as an object with red optical to near-infrared colours and a photometric redshift of \(z\simeq 4.5\)[11]. An optical spectrum was taken in the mid-2010s as part of the VIMOS Ultra Deep Survey (VUDS) [12], showing tentative evidence for a Lyman break at \(\lambda\simeq 7000\) Å, but no Lyman \(\alpha\) emission. During the past 5 years, several studies have identified GS-9209 as a candidate high-redshift massive quiescent galaxy [13, 14], based on its blue colours at wavelengths \(\lambda=2-8\mu\)m and non-detection at millimetre wavelengths [15]. GS-9209 is also not detected in X-rays [16], at radio wavelengths [17], or at \(\lambda=24\mu\)m [18]. The faint, red nature of the source (with magnitudes \(H_{\rm AB}=24.7\) and \(K_{\rm AB}=23.6\)) means that near-infrared spectroscopy with ground-based instrumentation is prohibitively expensive.

### Spectroscopic data

On 16\({}^{\rm th}\) November 2022, we obtained medium-resolution spectroscopy (\(R=\lambda/\Delta\lambda=1000\)) through the JWST NIRSpec fixed slit, integrating for 3 hours with the G235M grating and 2 hours with the G395M grating, providing continuous wavelength coverage from \(\lambda=1.7-5.1\mu\)m. These data, shown in Fig. 1, reveal a full suite of extremely deep Balmer absorption features, from which we measure a spectroscopic redshift of \(4.6582\pm 0.0002\), consistent with previous photometric data and the VUDS spectrum. The spectrum strongly resembles that of an A-type star, and is reminiscent of lower-redshift post-starburst galaxies [19, 20, 21], with a H\(\delta\) equivalent width (EW), as measured by the H\(\delta_{\rm A}\) Lick index, of \(7.9\pm 0.3\) Å, comparable to the most extreme values observed in the local Universe [22]. These spectral features strongly indicate this galaxy has undergone a sharp decline in star-formation rate (SFR) during the preceding few hundred Myr. 
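The equivalent widths quoted throughout follow the standard definition \({\rm EW}=\int(1-F_{\lambda}/F_{\rm cont})\,{\rm d}\lambda\). A minimal sketch with a synthetic top-hat absorption line is given below; note that a Lick index such as H\(\delta_{\rm A}\) additionally fixes specific index and pseudo-continuum bandpasses, which are omitted here, and the function names are ours.

```python
import numpy as np

def equivalent_width(wav, flux, cont, band):
    """EW = integral of (1 - F/F_cont) over the bandpass, in units of `wav`."""
    sel = (wav >= band[0]) & (wav <= band[1])
    y = 1.0 - flux[sel] / cont[sel]
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wav[sel]))  # trapezoid rule

# Synthetic spectrum: flat continuum with a top-hat absorption line.
wav = np.arange(4000.0, 4201.0)          # Angstrom, 1 A sampling
flux = np.ones_like(wav)
flux[(wav >= 4090) & (wav <= 4110)] = 0.5
ew = equivalent_width(wav, flux, np.ones_like(wav), (4050.0, 4150.0))
print(ew)  # ~10.5 Angstrom
```

Positive EWs denote absorption in this convention, so deeper Balmer lines, as in a post-starburst spectrum, give larger H\(\delta_{\rm A}\) values.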
The observed continuum is relatively smooth, as is the case for A-type stars, with only two clearly detected metal absorption features: the Ca K line at 3934 Å and the Na D feature at 5895 Å. The Ca H line at 3969 Å is blended with the much stronger H\(\epsilon\) Balmer line. The spectrum exhibits only the merest suspicion of [O ii] 3727 Å and [O iii] 4959 Å, 5007 Å emission, and no apparent infilling of H\(\beta\) or any of the higher-order Balmer absorption lines. However, as can be seen in Fig. 2, both H\(\alpha\) and [N ii] 6584 Å are clearly albeit weakly detected in emission, with H\(\alpha\) also exhibiting an obvious broad component. This broad component, along with the relative strength of [N ii] compared with the narrow H\(\alpha\) line, indicates the presence of an accreting supermassive black hole: an active galactic nucleus (AGN). However, the extreme EWs of the observed Balmer absorption features indicate that the continuum emission must be strongly dominated by the stellar component. Nevertheless, the AGN contribution to GS-9209 must be carefully modelled when fitting the spectrum of this source to extract reliable stellar population properties (see Section 4.3).

Figure 1: JWST NIRSpec observations of GS-9209. Data were taken using the G235M and G395M gratings (\(R=1000\)), providing wavelength coverage from \(\lambda=1.7-5.1\mu\)m. The galaxy is at \(z=4.658\), and exhibits extremely deep Balmer absorption lines, similar to lower redshift post-starburst galaxies, clearly indicating this galaxy experienced a significant, rapid drop in star-formation rate (SFR) within the past few hundred million years. The spectral region from \(\lambda=2.6-4.0\mu\)m, containing H\(\beta\) and H\(\alpha\), is shown at a larger scale in Fig. 2.

### Full spectral fitting

To measure the stellar population properties of GS-9209, we perform full spectrophotometric fitting using the Bagpipes code. Full details of the methodology we employ are given in Section 4.3. 
Briefly, we combine our spectroscopic data with previously available CANDELS photometry, as well as new JWST NIRCam medium-band imaging in 5 filters from the Ultra Deep Field Medium-Band Survey (Programme ID: 1963; PI: Williams). We first mask the wavelengths corresponding to [O ii], [O iii], narrow H\(\alpha\) and [N ii], due to likely AGN contributions. We discuss the properties of these lines and their likely origin in Section 2.5. We then fit a 22-parameter model for the stellar, dust, nebular and AGN components, as well as spectrophotometric calibration. The resulting posterior median model is shown in black in Figs 1 and 2. We obtain a stellar mass of \(\log_{10}(M_{*}/\mathrm{M}_{\odot})=10.61\pm 0.02\), under the assumption of a Kroupa initial mass function (IMF) [23]. We additionally recover a very low level of dust attenuation, with \(A_{V}=0.04^{+0.05}_{-0.03}\). The SFR we measure averaged over the past 100 Myr is consistent with zero, with a very stringent upper bound, though this is largely a result of our chosen star-formation history (SFH) parameterisation [24]. We report a more-realistic upper bound on the SFR in Section 2.5 based on the narrow H\(\alpha\) line.

Figure 2: JWST NIRSpec observations of GS-9209: zoom in on H\(\beta\) and H\(\alpha\). Data are shown in blue, with their associated uncertainties visible at the bottom in purple. The full Bagpipes fitted model is shown in black, with the AGN component shown in red. The narrow H\(\alpha\) and [N ii] lines were masked during the Bagpipes fitting process, and subsequently fitted with Gaussian functions, shown in green. Key emission and absorption features are also marked.

### Star-formation history

The star-formation history (SFH) we recover is shown in Fig. 3. We find that GS-9209 formed its stellar population largely during a \(\simeq 200\) Myr period, from around \(600-800\) Myr after the Big Bang (\(z\simeq 7-8\)). 
We recover a mass-weighted mean formation time, \(t_{\rm form}=0.71^{+0.03}_{-0.02}\) Gyr after the Big Bang, corresponding to a formation redshift, \(z_{\rm form}=7.3\pm 0.2\). This is the redshift at which GS-9209 would have had half its current stellar mass, approximately \(\log_{10}(M_{*}/{\rm M}_{\odot})=10.3\). We find that GS-9209 quenched (which we define as the time at which its sSFR fell below 0.2 divided by the Hubble time, e.g., [25]) at time \(t_{\rm quench}=0.79^{+0.06}_{-0.04}\) Gyr after the Big Bang, corresponding to a quenching redshift, \(z_{\rm quench}=6.7\pm 0.3\). Our model predicts that the peak historical SFR for GS-9209 (at approximately \(z_{\rm form}\)) was within the range \({\rm SFR}_{\rm peak}=530^{+840}_{-310}\) M\({}_{\odot}\) yr\({}^{-1}\). This is similar to the SFRs of bright submillimetre galaxies (SMGs). The number density of SMGs with SFR \(>300\) M\({}_{\odot}\) yr\({}^{-1}\) at \(5<z<6\) has been estimated to be \(\simeq 3\times 10^{-6}\) Mpc\({}^{-3}\)[26]. Extrapolation then suggests that the SMG number density at \(z\simeq 7\) is \(\simeq 1\times 10^{-6}\) Mpc\({}^{-3}\), which equates to \(\simeq 1\) SMG at \(z\simeq 7\) over the \(\simeq 400\) square arcmin area from which GS-9209 and one other \(z>4\) quiescent galaxy were selected [14]. This broadly consistent number density suggests it is entirely plausible that GS-9209 went through a SMG phase at \(z\simeq 7\), shortly before quenching. In the right panel of Fig. 3, we show the positions of the massive, high-redshift galaxies recently reported by [7] in the first imaging release from the JWST CEERS survey. It can be seen that the positions of these galaxies are broadly consistent with the SFH of GS-9209 at \(z\simeq 8\). It should however be noted that, as previously discussed, GS-9209 was selected as one of only two robustly identified \(z>4\) massive quiescent galaxies in an area roughly 10 times the size of the initial CEERS imaging area [14]. It therefore seems unlikely that a large fraction of the objects reported by [7] will evolve in a similar way to GS-9209 over the redshift interval from \(z\simeq 5-8\).

Figure 3: The star-formation history of GS-9209. The SFR as a function of time is shown in the left panel, with the stellar mass as a function of time shown in the right panel. The blue lines show the posterior medians, with the darker and lighter shaded regions showing the \(1\sigma\) and \(2\sigma\) confidence intervals respectively. We find a formation redshift, \(z_{\rm form}=7.3\pm 0.2\) and a quenching redshift, \(z_{\rm quench}=6.7\pm 0.3\). The sample of massive \(z\simeq 8\) galaxy candidates from JWST CEERS reported by [7] is also shown in the right panel, demonstrating that these candidates are plausible progenitors for GS-9209.

### Stellar metallicity

We obtain a relatively low stellar metallicity for GS-9209 of \(\log_{10}(Z_{*}/\mathrm{Z}_{\odot})=-0.97^{+0.06}_{-0.07}\) (where we adopt a value of \(\mathrm{Z}_{\odot}=0.0142\) [27]). By re-running our fitting procedure at a range of fixed metallicity values, we find that metallicity is constrained mainly by the shape of the stellar continuum emission above the Balmer break (the \(\lambda=2.0-2.6\mu\)m region shown in the inset panel of Fig. 1), which is strongly incompatible with models at higher metallicities. This UV continuum shape is mostly sensitive to the Fe abundance [28, 29], and we therefore associate our measured \(Z_{*}\) value with the Fe abundance, [Fe/H] \(=-0.97^{+0.06}_{-0.07}\). This is \(\simeq 0.4\) dex below the mean \(z\simeq 3.5\) stellar mass vs iron abundance relationship for star-forming galaxies [30]. 
Given that GS-9209 formed its stellar population at \(z\simeq 7\), our result suggests that the stellar mass vs iron abundance relation continues to trend downwards over the redshift interval from \(z\simeq 3.5-7\), as is observed between the local Universe and \(z\simeq 3.5\). As can be seen from Figs 1 and 2, we do not obtain a good fit to either the Ca k or Na d absorption features, with our model significantly under-predicting the depths of both. Stellar populations that form and quench rapidly are known to be \(\alpha\)-enhanced [31], whereas the stellar population models we fit assume a fixed scaled-Solar abundance pattern (see Section 4.3). We therefore provisionally attribute the failure of our model to reproduce these \(\alpha\)-element absorption features to significant \(\alpha\)-enhancement in GS-9209. It should be noted however that both of these features (in particular Na d) can also arise from interstellar medium (ISM) absorption, though the low dust attenuation we infer from our spectral fit might be taken to suggest this effect should be small. Unfortunately, reliable empirical \(\alpha\)-enhanced models are not currently available for stellar populations with ages less than 1 Gyr. Therefore, to test this \(\alpha\)-enhancement hypothesis, we first measure the EWs of these two features from our data (see Section 4), obtaining a Ca k EW of \(2.15\pm 0.25\)A, and a Na d EW of \(2.09\pm 0.46\)A. For comparison, our posterior median model predicts values of 1.12A and 0.41A respectively. We then scale up the metallicity of our model, keeping all other parameters fixed, until the predicted EWs match our data. By this process, we obtain [Ca/Fe] \(=0.67^{+0.25}_{-0.15}\). We are however unable to reproduce the observed depth of Na d via this process, which we attribute to the known strong ISM component of this absorption feature [29; 32]. 
The Ca abundance we calculate is however fully consistent with both theoretical predictions [33] and observational evidence [34] for \(\alpha\)-enhancement in extreme stellar populations. In particular, [3] report a consistent value of [Ca/Fe] \(=0.59\pm 0.07\) for an extreme massive quiescent galaxy at \(z=2.1\). We therefore adopt our measured Ca abundance as our best estimate of the \(\alpha\)-enhancement of GS-9209, \([\alpha/\mathrm{Fe}]=0.67^{+0.25}_{-0.15}\). This extreme \(\alpha\)-enhancement supports our finding of an extremely short, \(\lesssim 200\) Myr formation timescale [31], as shown in Fig. 3. We caution however that this value could be artificially boosted by an ISM contribution to the Ca k absorption line.

### Evidence for AGN activity

From our Bagpipes full spectral fit, we measure an observed broad H\(\alpha\) flux of \(f_{\mathrm{H\alpha,\,broad}}=1.26\pm 0.08\times 10^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\) and full width at half maximum (FWHM) of \(10800\pm 600\) km s\({}^{-1}\) in the rest frame. This line width, whilst very broad, is consistent with rest-frame UV broad line widths measured for some \(z=6\) quasars (e.g., [35; 36]). We also recover an observed AGN continuum flux at rest-frame wavelength, \(\lambda_{\mathrm{rest}}=5100\)A of \(f_{5100}=0.040\pm 0.004\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\). This is approximately 5 per cent of the total observed flux from GS-9209 at \(\lambda=2.9\mu\)m. We measure a power-law index for the AGN continuum emission of \(\alpha_{\lambda}=-1.36\pm 0.08\) at \(\lambda_{\mathrm{rest}}<5000\)A, and \(\alpha_{\lambda}=0.69\pm 0.14\) at \(\lambda_{\mathrm{rest}}>5000\)A. These indices are broadly consistent with the average values observed for local quasars [37]. In combination with the non-detection of GS-9209 at longer wavelengths (see Section 2), this suggests the AGN component in GS-9209 is not significantly reddened.
The AGN contribution to the continuum flux from GS-9209 rises to \(\simeq 15\) per cent at the blue end of our spectrum (\(\lambda=1.7\mu\)m), and \(\simeq 20\) per cent at the red end (\(\lambda=5\mu\)m). Just above the Lyman break at \(\lambda\simeq 7000\)A, the AGN contribution is \(\simeq 35\) per cent of the observed flux. Given our measured \(f_{\mathrm{H\alpha,\,broad}}\), which is more direct than our AGN continuum measurement, the average relation for local AGN presented by [38] predicts \(f_{5100}\) to be \(\simeq 0.4\) dex brighter than we measure. However, given the intrinsic scatter of 0.2 dex they report, our measured \(f_{5100}\) is only \(2\sigma\) below the mean relation. The extreme equivalent widths of the observed Balmer absorption features firmly disfavour stronger AGN continuum emission. We fit the narrow H\(\alpha\) and [N ii] lines in our spectrum as follows. We first subtract from our observed spectrum the posterior median Bagpipes model from our full spectral fitting, described in Section 2.2. We then simultaneously fit Gaussian components to both lines, assuming the same velocity width for both, which is allowed to vary. This process is visualised in Fig. 2. We also show the broad H\(\beta\) line in our AGN model, for which we assume the same width as broad H\(\alpha\), as well as Case B recombination. It can be seen that the broad H\(\beta\) line peaks at around the noise level in our spectrum, and is hence too weak to be clearly observed in our data. We obtain a H\(\alpha\) narrow-line flux of \(1.58\pm 0.10\ \times\ 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) and a [N ii] flux of \(1.56\pm 0.10\ \times\ 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\), giving a line ratio of \(\log_{10}([\)N ii\(]/\mathrm{H}\alpha)=-0.01\pm 0.04\). This line ratio is significantly higher than would be expected as a result of ongoing star formation, and is consistent with excitation due to an AGN or shocks resulting from galactic outflows [39]. 
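The shared-width, two-Gaussian narrow-line fit just described can be sketched as follows. The wavelength grid, flux units and noise level are illustrative assumptions, and synthetic data stand in for the actual model-subtracted residual spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

# Narrow Halpha and [N ii] 6583 fit simultaneously with Gaussians sharing a
# free velocity width. Flux densities are in units of 1e-18 erg/s/cm^2/micron,
# wavelengths in observed-frame microns at z = 4.658; all values illustrative.
z = 4.658
mu_ha, mu_n2 = 0.65628 * (1 + z), 0.65835 * (1 + z)   # rest 6562.8 A, 6583.5 A

def two_lines(lam, f_ha, f_n2, sigma):
    # f_ha, f_n2 are integrated line fluxes; sigma is the shared Gaussian width
    g = lambda mu, f: f / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((lam - mu) / sigma)**2)
    return g(mu_ha, f_ha) + g(mu_n2, f_n2)

rng = np.random.default_rng(0)
lam = np.linspace(3.69, 3.74, 400)
data = two_lines(lam, 1.58, 1.56, 2.0e-3) + rng.normal(0.0, 10.0, lam.size)

popt, pcov = curve_fit(two_lines, lam, data, p0=(1.0, 1.0, 1.0e-3))
print(np.log10(popt[1] / popt[0]))   # recovered log10([N II]/Halpha) ratio
```

With real data, the per-pixel uncertainties from the error spectrum would additionally be passed to `curve_fit` via its `sigma` argument.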
Such outflows are commonly observed in post-starburst galaxies at \(z\gtrsim 1\) [40] without corresponding AGN signatures, suggesting either that these outflows are driven by stellar feedback, or that the AGN activity responsible for the outflow has since shut down. Even if we assume all the narrow H\(\alpha\) emission is driven by ongoing star formation, we obtain SFR \(=1.9\pm 0.1\) M\({}_{\odot}\) yr\({}^{-1}\) [41], corresponding to log\({}_{10}\)(sSFR/yr\({}^{-1}\)) \(=-10.3\pm 0.1\). This is under the assumption that dust attenuation is negligible, based on our finding of a very low \(A_{V}\) from full spectral fitting in Section 2.2. This is well below the commonly applied sSFR threshold for defining quiescent galaxies at this redshift [25], sSFR\({}_{\rm threshold}=0.2/t_{\rm H}\), corresponding to log\({}_{10}\)(sSFR\({}_{\rm threshold}\)/yr\({}^{-1}\)) \(=-9.8\), where \(t_{\rm H}\) is the age of the Universe. Given the multiple lines of evidence we uncover for a significant non-stellar component to this line, it is likely that the SFR of GS-9209 is considerably lower than this estimate. We estimate the black-hole mass for GS-9209, \(M_{\bullet}\), from our combined H\(\alpha\) flux and broad-line width, using the relation presented in Equation 6 of [38], obtaining log\({}_{10}(M_{\bullet}/{\rm M}_{\odot})=8.7\pm 0.1\). From our Bagpipes full spectral fit, we infer a stellar velocity dispersion, \(\sigma=247\pm 16\) km s\({}^{-1}\) for GS-9209, after correcting for the intrinsic dispersion of our template set, as well as instrumental dispersion. Given this measurement, the relationship between velocity dispersion and black-hole mass presented by [42] predicts log\({}_{10}(M_{\bullet}/{\rm M}_{\odot})=8.9\pm 0.1\). Given the broad agreement between these estimators, it seems reasonable to conclude that GS-9209 contains a supermassive black hole with a mass of approximately half a billion to a billion Solar masses.
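The broad-line black-hole mass estimate can be reproduced to order of magnitude with a short calculation. The calibration coefficients below follow the Greene & Ho (2005) form that we assume Equation 6 of [38] takes, and the flat \(\Lambda\)CDM parameters are likewise assumptions, since the paper's adopted cosmology is not restated here:

```python
import numpy as np

# Assumed cosmology: flat LambdaCDM with H0 = 70 km/s/Mpc, Omega_m = 0.3.
H0, Om, c_kms = 70.0, 0.3, 2.998e5

def trap(y, x):
    # simple trapezoidal rule, to keep the sketch dependency-free
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def lum_distance_mpc(z, n=5000):
    zs = np.linspace(0.0, z, n)
    Ez = np.sqrt(Om * (1.0 + zs)**3 + (1.0 - Om))
    return (1.0 + z) * trap(c_kms / (H0 * Ez), zs)

z = 4.658
f_ha = 1.26e-17 + 1.58e-18                  # broad + narrow Halpha flux, erg/s/cm^2
d_l_cm = lum_distance_mpc(z) * 3.086e24     # Mpc -> cm
L_ha = 4.0 * np.pi * d_l_cm**2 * f_ha       # erg/s
fwhm = 10800.0                              # rest-frame broad-line FWHM, km/s

# Assumed Greene & Ho (2005)-style calibration, not quoted from the paper:
M_bh = 2.0e6 * (L_ha / 1e42)**0.55 * (fwhm / 1e3)**2.06
print(np.log10(M_bh))                       # close to the quoted 8.7
```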
It is interesting to note that this is \(\simeq 4-5\) times the black-hole mass that would be expected given the stellar mass of the galaxy, assuming this is equivalent to the bulge mass. This is consistent with the observed increase in the average black-hole to bulge mass ratio for massive galaxies from \(0<z<2\) [43]. This large amount of historical AGN accretion relative to star formation strongly implies that AGN feedback may be responsible for quenching this galaxy.

### Size measurement and dynamical mass

GS-9209 is an extremely compact source, which is only marginally resolved in the highest-resolution available imaging data.

Figure 4: JWST NIRCam imaging of GS-9209. Each cutout image shows an area of \(1.5^{\prime\prime}\times 1.5^{\prime\prime}\). The RGB image in the first (leftmost) panel is constructed with F430M as red, F210M as green and F182M as blue. The second panel shows the F210M image, with our posterior median PetroFit model shown in the third panel. The residuals between model and data are shown in the right panel, on the same colour scale as the middle two panels.

The CANDELS/3DHST team [44] measured an effective radius, \(r_{e}=0.029\pm 0.002^{\prime\prime}\) for GS-9209 in the HST F125W filter via Sersic fitting, along with a Sersic index, \(n=6.0\pm 0.8\). At \(z=4.658\), this corresponds to \(r_{e}=189\pm 13\) parsecs. We update this size measurement using the newly available JWST NIRCam F210M-band imaging, which has a FWHM of \(\simeq 0.07^{\prime\prime}\) (see Section 4.4). Accounting for the AGN point-source contribution, we measure an effective radius, \(r_{e}=0.033\pm 0.003^{\prime\prime}\) for the stellar component of GS-9209, along with a Sersic index, \(n=2.3\pm 0.3\). At \(z=4.658\), this corresponds to \(r_{e}=215\pm 20\) parsecs. This is consistent with the CANDELS/3DHST measurement, and is \(\simeq 0.7\) dex below the mean relationship between \(r_{e}\) and stellar mass for quiescent galaxies at \(z\simeq 1\) [44; 45].
This is interesting given that post-starburst galaxies at \(z\simeq 1\) are known to be more compact than is typical for the wider quiescent population [46]. We calculate a stellar-mass surface density within \(r_{e}\) of \(\log_{10}(\Sigma_{\rm eff}/{\rm M}_{\odot}\ {\rm kpc}^{-2})=11.15\pm 0.08\), consistent with the densest stellar systems in the Universe [47]. We show the F210M data for GS-9209, along with our posterior-median model in Fig. 4. We estimate the dynamical mass using our size and velocity dispersion measurements (e.g., [40]), obtaining a value of \(\log_{10}(M_{\rm dyn}/{\rm M}_{\odot})=10.3\pm 0.1\). This is \(\simeq 0.3\) dex lower than the stellar mass we measure. As GS-9209 is only marginally resolved, even in JWST imaging data, and due to the presence of the AGN component, it is plausible that our measured \(r_{e}\) may be subject to systematic uncertainties. Deeper imaging data in the F200W or F277W bands (e.g., from the JWST Advanced Deep Extragalactic Survey; JADES) will provide a useful check on this, particularly given the lower AGN fraction in the F277W band. Furthermore, since the pixel scale of NIRSpec is \(0.1^{\prime\prime}\), our velocity dispersion measurement may not accurately represent the central velocity dispersion of GS-9209, leading to an underestimated dynamical mass. It should also be noted that the stellar mass we measure is strongly dependent on our assumed IMF. A final, intriguing possibility would be a high level of rotational support in GS-9209, as has been observed for quiescent galaxies at \(2<z<3\) [48]. Unfortunately, the extremely compact nature of the source makes any attempt at resolved studies extremely challenging, even with the JWST NIRSpec integral field unit. Resolved kinematics for this galaxy would be a clear use case for the High Angular Resolution Monolithic Optical and Near-infrared Integral field spectrograph (HARMONI) planned for the Extremely Large Telescope (ELT).
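Both the surface density and the dynamical mass follow from short arithmetic on the quoted measurements. The virial coefficient \(\beta\approx 5\) is an assumption; the exact estimator of [40] is not reproduced here:

```python
import numpy as np

G = 4.301e-6          # gravitational constant in kpc (km/s)^2 / Msun

M_star = 10**10.61    # stellar mass, Msun
r_e = 0.215           # effective radius, kpc
sigma = 247.0         # stellar velocity dispersion, km/s

# Stellar-mass surface density within r_e (half the stellar mass inside r_e)
Sigma_eff = (M_star / 2.0) / (np.pi * r_e**2)        # Msun / kpc^2
print(np.log10(Sigma_eff))                           # ~ 11.15, as quoted

# Dynamical mass M_dyn = beta sigma^2 r_e / G with an assumed beta = 5
beta = 5.0
M_dyn = beta * sigma**2 * r_e / G
print(np.log10(M_dyn))                               # ~ 10.2, close to the quoted 10.3
```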
## 3 Conclusion

We report the spectroscopic confirmation of a massive quiescent galaxy, GS-9209 at a new redshift record of \(z=4.6582\pm 0.002\), with a stellar mass of \(\log_{10}(M_{*}/{\rm M}_{\odot})=10.61\pm 0.02\). This galaxy formed its stellar population over a \(\simeq 200\) Myr period, approximately \(600-800\) Myr after the Big Bang (\(z_{\rm form}=7.3\pm 0.2\)), before quenching at \(z_{\rm quench}=6.7\pm 0.3\). GS-9209 demonstrates unambiguously that massive galaxy formation was already well underway within the first billion years of cosmic history, with this object having reached \(\log_{10}(M_{*}/\mathrm{M_{\odot}})>10.3\) by \(z=7\). This galaxy also clearly demonstrates that the earliest onset of galaxy quenching was no later than \(\simeq 800\) Myr after the Big Bang. We estimate the iron abundance and \(\alpha\)-enhancement of GS-9209, finding \(\mathrm{[Fe/H]}=-0.97^{+0.06}_{-0.07}\) and \(\mathrm{[\alpha/Fe]}=0.67^{+0.25}_{-0.15}\), suggesting the stellar mass vs iron abundance relation at \(z\simeq 7\), when this object formed most of its stars, was \(\simeq 0.4\) dex lower than at \(z\simeq 3.5\) [30]. Whilst its spectrum is dominated by stellar emission, GS-9209 also hosts an AGN, for which we measure a black-hole mass of \(\log_{10}(M_{\bullet}/\mathrm{M_{\odot}})=8.7\pm 0.1\) from the observed broad and narrow H\(\alpha\) emission [38]. We also predict a consistent value of \(\log_{10}(M_{\bullet}/\mathrm{M_{\odot}})=8.9\pm 0.1\) based on the stellar velocity dispersion of GS-9209 [42]. Whilst large-scale star formation in GS-9209 has been quenched for almost half a billion years, the significant integrated quantity of AGN accretion implied by this large black-hole mass (\(\simeq 4-5\) times what would be expected given the stellar mass of this galaxy) suggests that AGN activity plausibly played a significant role in quenching star formation in this galaxy.
Based on the properties we measure, GS-9209 seems likely to be associated with the most extreme galaxy populations currently known at \(z>5\), such as the highest-redshift submillimetre galaxies and quasars (e.g., [36; 49; 50]). GS-9209 is also plausibly descended from an object similar to the \(z\simeq 8\) massive galaxy candidates recently reported in the first data from the JWST CEERS programme [7], though the number density of these candidates is significantly higher than that of \(z>4\) quiescent galaxies. GS-9209 and similar objects (e.g., [9]) are also likely progenitors for the dense, ancient cores of the most massive galaxies in the local Universe. This study, which makes use of just 5 hours of on-source integration time, demonstrates the huge potential of JWST for revolutionising our understanding of the high-redshift Universe. It seems clear that this work will be followed rapidly by the confirmation and detailed spectroscopic exploration of large samples of \(z>4\) quiescent galaxies, to build up a detailed understanding of massive galaxy formation and quenching during the first billion years.

## 4 Methods

### Spectroscopic data reduction

We reduce our NIRSpec data using the JWST Science Calibration Pipeline v1.8.4, using version 1017 of the JWST calibration reference data. To improve the spectrophotometric calibration of our data, we also reduce observations of the A-type standard star 2MASS J18083474+6927286 [51], taken as part of JWST commissioning programme 1128 (PI: Lutzgendorf) [52] using the same instrument modes. We compare the resulting stellar spectrum against a spectral model for this star from the CALSPEC library [53] to construct a calibration function, which we then apply to our observations of GS-9209.

### Photometric data reduction

The majority of our photometric data are taken directly from the CANDELS GOODS South catalogue [54].
We supplement this with new JWST NIRCam photometric data taken as part of the Ultra Deep Field Medium-Band Survey [55] (Programme ID: 1963; PI: Williams). Data are available in the F182M, F210M, F430M, F460M and F480M bands. We reduce these data using the PRIMER Enhanced NIRCam Image-processing Library (PENCIL, e.g., [8]), a custom version of the JWST Science Calibration Pipeline (v1.8.0), and using version 1011 of the JWST calibration reference data. We measure photometric fluxes for GS-9209 in large, 1\({}^{\prime\prime}\)-diameter apertures to ensure we measure the total flux in each band (the object is isolated, with no other sources within this radius, see Fig. 4). We measure uncertainties as the standard deviation of flux values in the nearest 100 blank-sky apertures, masking out nearby objects (e.g., [56]).

### Bagpipes full spectral fitting

We fit the available photometry in parallel with our new spectroscopic data using the Bagpipes code [57]. Our model has a total of 22 free parameters, describing the stellar, dust, nebular and AGN components of the spectrum. A full list of these parameters, along with their associated priors, is given in Table 1. We fit our model to the data using the MultiNest nested sampling algorithm [58; 59; 60]. We use the 2016 updated version of the BC03 [61; 62] stellar population models, using the MILES stellar spectral library [63] and updated stellar evolutionary tracks [64; 65]. We assume a double-power-law star-formation-history model (e.g., [24; 57]). We allow the logarithm of the stellar metallicity, \(Z_{*}\) to vary freely from \(\log_{10}(Z_{*}/\mathrm{Z}_{\odot})=-2.45\) to 0.55. These are the limits of the range spanned by the BC03 model grid relative to our adopted Solar metallicity value (\(\mathrm{Z}_{\odot}=0.0142\) [27]).
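A sketch of the double-power-law SFH family used in the fit; the functional form below follows the common Bagpipes convention, and the parameter values are arbitrary illustrative choices rather than fitted values:

```python
import numpy as np

def dpl_sfh(t, tau, alpha, beta):
    """Double-power-law SFH (unnormalised): rises as (t/tau)^beta at early
    times and falls as (t/tau)^-alpha after the turnover time tau. This
    parameterisation is assumed, following e.g. [24; 57]."""
    return 1.0 / ((t / tau)**alpha + (t / tau)**(-beta))

t = np.linspace(0.01, 1.3, 1000)                    # Gyr after the Big Bang
sfr = dpl_sfh(t, tau=0.7, alpha=30.0, beta=5.0)     # sharply peaked, early-forming
t_peak = t[np.argmax(sfr)]
# analytic peak position: t/tau = (beta/alpha)^(1/(alpha+beta)), close to tau
# for steep indices; the SFH is effectively quenched well before t = 1.3 Gyr
```

Large falling indices \(\alpha\) produce the rapid quenching recovered for GS-9209, while large rising indices \(\beta\) compress the formation episode.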
We mask out the narrow emission lines in our spectrum during our Bagpipes fitting due to likely AGN contributions, as Bagpipes is only capable of modelling emission lines from star-forming regions. We do however still include a nebular model in our Bagpipes fit to allow for the possibility of nebular continuum emission from star-forming regions. We assume a stellar-birth-cloud lifetime of 10 Myr, and vary the logarithm of the ionization parameter, U, from \(\log_{10}(U)=-4\) to \(-2\). We also allow the logarithm of the gas-phase metallicity, \(Z_{\mathrm{g}}\), to vary freely from \(\log_{10}(Z_{\mathrm{g}}/\mathrm{Z}_{\odot})=-2.45\) to 0.55. Because our eventual fitted model only includes an extremely small amount of star formation within the last 10 Myr for GS-9209, this nebular component makes a negligible contribution to the fitted model spectrum. We model attenuation of the above components by dust using the model of [66; 67], which is parameterised as a power-law deviation from the Calzetti dust attenuation law [68], and also includes a Drude profile to model the 2175A bump. We allow the \(V-\)band attenuation, \(A_{V}\) to vary from \(0-4\) magnitudes. We further assume that attenuation is multiplied by an additional factor for all stars with ages below 10 Myr, and resulting nebular emission. This factor is commonly assumed to be 2; however, we allow it to vary from 1 to 5. We allow redshift to vary, using a narrow Gaussian prior with a mean of 4.66 and standard deviation of 0.01. We additionally convolve the spectral model with a Gaussian kernel in velocity space, to account for velocity dispersion in our target galaxy. The width of this kernel is allowed to vary with a logarithmic prior across a range from \(50-500\) km s\({}^{-1}\). Separately from the above components, we also include a model for AGN continuum, broad H\(\alpha\) and H\(\beta\) emission.
Following [37], we model AGN continuum emission with a broken power law, with two spectral indices and a break at \(\lambda_{\rm rest}=5000\)A in the rest frame. We vary the spectral index at \(\lambda_{\rm rest}<5000\)A using a Gaussian prior with a mean value of \(\alpha_{\lambda}=-1.5\) (\(\alpha_{\nu}=-0.5\)) and standard deviation of 0.1. We also vary the spectral index at \(\lambda_{\rm rest}>5000\)A using a Gaussian prior with a mean value of \(\alpha_{\lambda}=0.5\) (\(\alpha_{\nu}=-2.5\)) and standard deviation of 0.2. We parameterise the normalisation of the AGN continuum component using \(f_{5100}\), the flux at rest-frame 5100A, which we allow to vary with a linear prior from 0 to \(10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\). We model broad H\(\alpha\) with a Gaussian component, varying the normalisation from 0 to \(2.5\times 10^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\) using a linear prior, and the velocity dispersion from \(1000-5000\) km s\({}^{-1}\) in the rest frame using a logarithmic prior. We also include a broad H\(\beta\) component in the model, which has the same parameters as the broad H\(\alpha\) line, but with normalisation divided by the standard 2.86 ratio from Case B recombination theory. However, as shown in Fig. 2, this H\(\beta\) model peaks at around the noise level in our spectrum, so it is unsurprising that the line is not obviously detected in the observed spectrum. We include intergalactic medium (IGM) absorption using the model of [69]. To allow for imperfect spectrophotometric calibration of our spectroscopic data, we also include a second-order Chebyshev polynomial (e.g., [70; 71]), which the above components of our combined model are all divided by before being compared with our spectroscopic data. We finally fit an additional white noise term, which multiplies the spectroscopic uncertainties from the JWST pipeline by a factor, \(a\), which we vary with a logarithmic prior from \(1-10\).
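The broken power-law continuum model can be written down directly. The sketch below uses the posterior-median indices quoted earlier in the text and normalises the curve at rest-frame 5100A; the continuity convention at the break is our own assumption:

```python
import numpy as np

def agn_continuum(lam_rest, f5100, alpha_blue=-1.36, alpha_red=0.69, lam_break=5000.0):
    """Broken power-law f_lambda, continuous at lam_break and normalised so
    that f_lambda(5100 A) = f5100. Default indices are the posterior medians
    quoted in the text; units of f5100 are whatever the caller uses."""
    lam_rest = np.asarray(lam_rest, dtype=float)
    # value at the break, fixed by extending the red-side slope through 5100 A
    f_break = f5100 * (lam_break / 5100.0)**alpha_red
    blue = f_break * (lam_rest / lam_break)**alpha_blue
    red = f_break * (lam_rest / lam_break)**alpha_red
    return np.where(lam_rest < lam_break, blue, red)

lam = np.array([2000.0, 5000.0, 5100.0, 8000.0])     # rest-frame Angstroms
f = agn_continuum(lam, f5100=0.040)                  # f5100 in units of 1e-19
```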
### Size measurement from F210M-band imaging

We model the light distribution of GS-9209 in the JWST NIRCam F210M imaging data using PetroFit [72]. We fit these PetroFit models to our data using the MultiNest nested sampling algorithm [58; 59; 60]. We use F210M in preference to the F182M band due to the smaller AGN contribution in F210M and the fact that it sits above the Balmer break, therefore being more representative of the stellar mass present rather than any ongoing star formation. As our spectroscopic data contains strong evidence for an AGN, we fit both Sersic and delta-function components simultaneously, convolved by an empirically estimated PSF, derived by stacking bright stars. In preliminary fitting, we find that the relative fluxes of these two components are entirely degenerate with the Sersic parameters. We therefore predict the AGN contribution to the flux in this band based on our full-spectral-fitting result, obtaining a value of \(8\pm 1\) per cent. We then impose this as a Gaussian prior on the relative contributions from the Sersic and delta function components. The 11 free parameters of our model are the overall flux normalisation, which we fit with a logarithmic prior, the effective radius, \(r_{e}\), Sersic index, \(n\), ellipticity and position angle of the Sersic component, the x and y centroids of both components, the position angle of the point spread function, and the fraction of light in the delta-function component, which we fit with a Gaussian prior with a mean of 8 per cent and standard deviation of 1 per cent, based on our full spectral fitting result.

## Acknowledgements

The authors would like to thank James Aird for helpful discussions. A. C. Carnall thanks the Leverhulme Trust for their support via a Leverhulme Early Career Fellowship. R. J. McLure, J. S. Dunlop, D. J. McLeod, V. Wild, R. Begley, C. T. Donnan and M. L. Hamadouche acknowledge the support of the Science and Technology Facilities Council. F.
Cullen acknowledges support from a UKRI Frontier Research Guarantee Grant (grant reference EP/X021025/1). A. Cimatti acknowledges support from the grant PRIN MIUR 2017 - 20173ML3WW_001.

## Statement of Author Contributions

ACC led the preparation of the observing proposal, reduction and analysis of the data, and preparation of the manuscript. RJM, JSD, VW, FC and AC provided advice and assistance with data reduction, analysis and interpretation, as well as consulting on the preparation of the observing proposal. DJM, DM, RB and CTD reduced the JWST imaging data and prepared the empirical PSF. DJM, MLH and SMJ assisted with measurement of the size and morphology of GS-9209. SW assisted with selection of GS-9209 from the CANDELS catalogues prior to the observing proposal being submitted. All authors assisted with preparation of the final published manuscript.
2303.00367
A limiting model for a low Reynolds number swimmer with N passive elastic arms
We consider a low Reynolds number artificial swimmer that consists of an active arm followed by $N$ passive springs separated by spheres. This setup generalizes an approach proposed in Montino and DeSimone, Eur. Phys. J. E, vol. 38, 2015. We further study the limit as the number of springs tends to infinity and the parameters are scaled conveniently, and provide a rigorous proof of the convergence of the discrete model to the continuous one. Several numerical experiments show the performances of the displacement in terms of the frequency or the amplitude of the oscillation of the active arm.
François Alouges, Aline Lefebvre-Lepot, Jessie Levillain
2023-03-01T09:49:44Z
http://arxiv.org/abs/2303.00367v1
# A limiting model for a low Reynolds number swimmer with \(N\) passive elastic arms

###### Abstract

We consider a low Reynolds number artificial swimmer that consists of an active arm followed by \(N\) passive springs separated by spheres. This setup generalizes an approach proposed in Montino and DeSimone, Eur. Phys. J. E, vol. 38, 2015. We further study the limit as the number of springs tends to infinity and the parameters are scaled conveniently, and provide a rigorous proof of the convergence of the discrete model to the continuous one. Several numerical experiments show the performances of the displacement in terms of the frequency or the amplitude of the oscillation of the active arm.

## 1 Introduction

As stated by Purcell's _Scallop Theorem_ [3], reciprocal shape changes in a swimmer never lead to a net displacement of the system in a low Reynolds number setting. Indeed, a microscopic scallop opening and closing its valve would be completely unable to swim, due to negligible inertial forces in this situation [4]. Several simple mechanisms have then been introduced (see e.g. [5]) to overcome this obstruction, most of them using two degrees of freedom in order to create closed curves with nonzero surface in the shape space of the swimmer. One of the simplest mechanisms introduced in the literature is probably Najafi and Golestanian's three-sphere swimmer [6], which consists of three spheres linked by two extensible arms of negligible thickness, moving along a single direction. This model is much simpler than Purcell's original three-link swimmer [3], or Purcell's rotator [7], as there is no rotational motion involved. This swimmer has two degrees of freedom, activated periodically in time with a phase lag in order to produce the loop. Both Purcell's and Najafi and Golestanian's swimmers have been extensively studied in [8, 9, 10, 11, 12, 13].
As an extension of this three-sphere swimmer, Montino and DeSimone then introduced a three-sphere swimmer with a passive elastic arm [14]. This adaptation has only one degree of freedom, which is the length of the non-elastic arm. Thanks to a resonant effect at the natural frequency of the system (which depends on the viscosity of the fluid, the masses and the spring constant), an out-of-phase oscillation of the spring is created, which ultimately leads to a net motion of the swimmer. However, at very low or very high frequency, no net motion is possible after a stroke. Having this passive elastic arm also confines net motion to a single direction along the swimming axis: the swimmer can only move with its passive arm ahead. This was also noted by Passov [15], when looking at Purcell's three-link swimmer with a passive elastic tail. In this paper, Montino and DeSimone's swimmer is extended by adding a large number \(N\) of passive elastic arms to their one-dimensional swimmer, thus turning it into an \((N+2)\)-sphere swimmer. This simple swimmer then leads to a limit model with an elastic tail resembling a one-dimensional flagellum along which compressive waves propagate. The paper is organized as follows. In sect. 2, we describe the \(N\)-spring swimmer, and its equations of motion, before looking at the limit model, when the number of springs tends to infinity, in sect. 2.3. We prove the convergence of the discrete model to the continuous one in sect. 3, using the fact that it is found to be a non-conventional mass lumping discretization of the limit model. Sect. 4 introduces two formulas in order to compute the net displacement of both swimmers, discrete and continuous. Finally, in sect. 5 we study numerically the movement and displacement of our swimmer depending on various system parameters, in order to find optimal swimming parameters to obtain the largest net displacement possible.
## 2 Problem's formulation and study: \(N\)-spring discrete model and its continuous limit

The swimmer studied in this paper is an extension of the three-sphere swimmer with a single passive elastic arm [14], to a swimmer with \(N+2\) spheres and \(N\) passive elastic arms, presented in figure 1. The first arm of this artificial swimmer is a rod of negligible thickness, surrounded by two spheres of radius \(a_{1}\). This arm has a prescribed periodic movement around a length at rest \(L\), of the form \(L_{0}(t)=L(1+\tilde{\varepsilon}\cos(\omega t))\), where \(\tilde{\varepsilon}\in[0,1)\) is a non-dimensional parameter; \(\tilde{\varepsilon}<1\) ensures that the active arm always has a positive length. We define \(\varepsilon\) as \(\varepsilon=L\tilde{\varepsilon}\). The rest of the swimmer has a total length at rest \(\Lambda\) that does not depend on \(N\). In order to keep a constant length and have an elastic force that does not depend on \(N\), all the other spheres have a radius \(a=\tilde{a}/N\), the springs each have a rest length \(h=\Lambda/N\gg a\), and an elastic constant \(k=\tilde{k}N\), with \(\tilde{k}\) and \(\tilde{a}\) prescribed and independent of \(N\). If the swimmer is able to control the length of the front rod with the prescribed periodic function \(L_{0}(t)\), the lengths of the \(N\) remaining springs are governed by the balance of viscous and elastic forces. At any time \(t\), the length \(L_{j}(t)\) of the \(j\)-th arm, \(j\geq 1\), is written as \(L_{j}(t)=\frac{\ell_{j}(t)}{N}+h\). Let us then denote by \(\mu\) the fluid viscosity, \(f_{j}^{F}\) and \(f_{j}^{R}\) the hydrodynamic and elastic forces on the \(j\)-th sphere. We also call \(x_{j}\) the coordinate of its center, so that \(V_{j}=\dot{x}_{j}\) is the velocity of the \(j\)-th sphere. The geometry of the system entails \(\dot{L}_{j}=V_{j+2}-V_{j+1}\) for all \(j=0\),..., \(N\).
In order to swim effectively, our \(N\)-spring swimmer undergoes periodic harmonic but non-reversible deformations, just like the original swimmers of Najafi and Golestanian [6] and of Montino and DeSimone [14]. However, due to the geometry, we expect a wave to propagate along the tail. It is the behaviour of this wave that we aim to describe in the remainder of the paper.

Figure 1: Low Reynolds number swimmer with \(N\) elastic arms.

### First approximations

As a first approximation, we consider the case where the hydrodynamic force on the \(j\)-th sphere only depends on the speed of that same sphere, and we neglect hydrodynamic interactions between spheres. This leads to the following set of equations on (fluid) forces and velocities: \[\left\{\begin{array}{l}f_{j}^{F}=-6\pi\mu aV_{j}\mbox{ for }j\geq 3,\\ f_{j}^{F}=-6\pi\mu a_{1}V_{j}\mbox{ for }j=1,2.\end{array}\right. \tag{1}\] The elastic forces on each sphere can be written as: \[\left\{\begin{array}{ll}f_{2}^{R}&=k(L_{1}-h)=k\frac{\ell_{1}}{N}\\ f_{j}^{R}&=k\big{(}(L_{j-1}-h)-(L_{j-2}-h)\big{)},\\ &=k\frac{\ell_{j-1}-\ell_{j-2}}{N}\ \ \ \ \mbox{ for }3\leq j\leq N+1\\ f_{N+2}^{R}&=-k(L_{N}-h)=-k\frac{\ell_{N}}{N}.\end{array}\right. \tag{2}\] At low Reynolds number, inertial forces are negligible. This, together with the fact that the artificial swimmer is self-propelled, gives: \[\left\{\begin{array}{l}f_{1}^{F}+\cdots+f_{N+2}^{F}=0,\\ f_{j}^{R}+f_{j}^{F}=0\mbox{ for }j\geq 3.\end{array}\right. \tag{3}\] Using (1), (2) and (3), we obtain the expression of the fluid forces on each sphere in terms of the lengths of the adjacent arms. In particular, for the first two spheres: \[\left\{\begin{array}{l}f_{1}^{F}-f_{2}^{F}=6\pi\mu a_{1}(V_{2}-V_{1})=6\pi \mu a_{1}\dot{L}_{0},\\ f_{1}^{F}+f_{2}^{F}=f_{3}^{R}+\cdots+f_{N+2}^{R}=-k\ell_{1}/N,\end{array}\right.
\tag{4}\] which finally leads to: \[\left\{\begin{array}{l}f_{1}^{F}=\frac{1}{2}(+6\pi\mu a_{1}\dot{L}_{0}- \tilde{k}\ell_{1}),\\ f_{2}^{F}=\frac{1}{2}(-6\pi\mu a_{1}\dot{L}_{0}-\tilde{k}\ell_{1}).\end{array}\right. \tag{5}\]

### Movement of the spheres

In order to write the equations governing the system, we use equations (1-5) to find ODEs for the elongation \(\ell_{j}(t)\) of the \(j\)-th arm, \(j\geq 1\). We first consider the case \(j\geq 2\). Writing \(\dot{L}_{j}=V_{j+2}-V_{j+1}=\frac{1}{6\pi\mu a}(f_{j+2}^{R}-f_{j+1}^{R})\), one deduces \[\dot{\ell}_{j}=\Lambda^{2}K\frac{\ell_{j-1}-2\ell_{j}+\ell_{j+1}}{h^{2}},\,2 \leq j\leq N, \tag{6}\] where we have added a fictitious variable \[\ell_{N+1}=0\,, \tag{7}\] and with \(K=\frac{\tilde{k}}{6\pi\mu\tilde{a}}\). To determine the equation for the first elastic arm, we use the fact that \(\dot{L}_{1}=V_{3}-V_{2}=-\frac{1}{6\pi\mu a}f_{3}^{F}+\frac{1}{6\pi\mu a_{1}}f_ {2}^{F}\) to obtain, using equations (2) and (5): \[h\dot{\ell}_{1}=\Lambda^{2}K\frac{\ell_{2}-\ell_{1}}{h}-\frac{\Lambda K\tilde{ a}}{2a_{1}}\ell_{1}-\frac{\Lambda}{2}\dot{L}_{0}. \tag{8}\] One easily verifies, using the Cauchy-Lipschitz theorem, that the ODE problem (6,7,8) is well-posed and provides a unique solution \((\ell_{j}(t))_{1\leq j\leq N+1}\) for any initial configuration. Seeking periodic (complex) solutions to equation (6) leads to \[\ell_{j}(t)=(\alpha_{d}\gamma_{+}^{j-1}+\beta_{d}\gamma_{-}^{j-1})e^{i\omega t}, \tag{9}\] where \(\alpha_{d},\beta_{d}\in\mathbb{C}\) and \[\gamma_{\pm}=\frac{i/(K_{\omega}N^{2})+2\pm\sqrt{\Delta}}{2} \tag{10}\] and \(\Delta=\frac{-1}{K_{\omega}^{2}N^{4}}+\frac{4i}{K_{\omega}N^{2}}\), where \(K_{\omega}=\frac{K}{\omega}=\frac{\tilde{k}}{6\pi\mu\tilde{a}\omega}\) is a dimensionless number. Notice that \(|\gamma_{+}|>1\) while \(|\gamma_{-}|<1\). The constants \(\alpha_{d}\) and \(\beta_{d}\) may be determined through the boundary conditions.
Namely, assuming from the linearity of the problem that \(\ell_{1}=b_{d}e^{i\omega t}\), with \(b_{d}\in\mathbb{C}\), and recalling \(\ell_{N+1}=0\), we can write \[\left\{\begin{array}{l}\ell_{1}(t)=b_{d}e^{i\omega t}=e^{i\omega t}(\alpha_ {d}+\beta_{d}),\\ \ell_{N+1}(t)=e^{i\omega t}(\alpha_{d}\gamma_{+}^{N}+\beta_{d}\gamma_{-}^{N}) =0\,,\end{array}\right. \tag{11}\] to finally obtain \[\alpha_{d}=\frac{-\gamma_{-}^{N}b_{d}}{(\gamma_{+}^{N}-\gamma_{-}^{N})},\quad \beta_{d}=\frac{\gamma_{+}^{N}b_{d}}{(\gamma_{+}^{N}-\gamma_{-}^{N})}. \tag{12}\] Then, we use (8) to determine \(b_{d}\): \[b_{d}=-\frac{\varepsilon i/2}{i/N+NK_{\omega}(1-z_{d})+K_{\omega}\frac{\tilde {a}}{2a_{1}}}, \tag{13}\] where \(z_{d}=\frac{\gamma_{+}^{N}\gamma_{-}-\gamma_{-}^{N}\gamma_{+}}{\gamma_{+}^{N} -\gamma_{-}^{N}}\).

### Limit model with an infinite number of springs

As we increase the number of springs in our swimmer, a limit model arises, with an elastic-like tail, as shown in figure 2. This elastic tail compresses and dilates in the same way that the springs do, following the active arm, in order to create a global displacement of our swimmer. Equations (6) and (8) can be viewed as a finite element discretization of a PDE, which describes the continuous version of our swimmer. Limit expressions for this PDE model are formally derived throughout this section, while the convergence of the \(N\)-spring model to the continuous model will be proven in Sect. 3. First, as \(h\to 0\) (\(N\rightarrow\infty\)), \(\frac{\ell_{j-1}-2\ell_{j}+\ell_{j+1}}{h^{2}}\) formally converges to a second-order derivative. More precisely, we introduce a new space variable \(y_{j}=(j-1)h\) for \(1\leq j\leq N+1\). The points \(y_{j}\) are equally spaced and thus different from the previous \(x_{j}\).

Figure 2: Continuous model of the low-Reynolds-number elastic swimmer. Color variations in the tail indicate compression and expansion of the swimmer.
Since \(y_{1}=0\), the \(y\) variable can be seen as a local space coordinate attached to the second sphere, and we assume \(\ell(y_{j})=\ell_{j}\) for a smooth enough function \(\ell\). Passing to the formal limit in (6) leads to a heat equation: \[\partial_{t}\ell(y,t)=K\Lambda^{2}\partial_{yy}\ell(y,t),\quad\forall(y,t)\in[0,\Lambda]\times\mathbb{R}_{+}^{\star}. \tag{14}\] Concerning the boundary conditions, we first notice that \(\ell_{N+1}=0\) leads to \(\ell(\Lambda,t)=0\) for all \(t>0\). As \(h\to 0\), equation (8) for \(\ell_{1}\) formally becomes a Fourier-type boundary condition: \[\Lambda^{2}K\partial_{y}\ell(0,t)-\Lambda K\frac{\tilde{a}}{2a_{1}}\ell(0,t)= \frac{\Lambda}{2}\dot{L}_{0}(t),\quad\forall t>0.\] Therefore, we finally obtain the following continuous problem: Find \(\ell\in\mathcal{C}^{2}([0,\Lambda]\times\mathbb{R}_{+}^{\star})\) such that \(\forall(y,t)\in(0,\Lambda)\times\mathbb{R}_{+}^{\star}\), \[\left\{\begin{array}{l}\partial_{t}\ell(y,t)-\Lambda^{2}K\partial_{yy}\ell (y,t)=0,\\ \\ \Lambda^{2}K\partial_{y}\ell(0,t)-\Lambda K\frac{\tilde{a}}{2a_{1}}\ell(0,t) =\frac{\Lambda}{2}\dot{L}_{0}(t),\\ \\ \ell(\Lambda,t)=0.\end{array}\right. \tag{15}\]

### Well-posedness of the problem

Equation (15) belongs to the class of problems to which the classical theory of parabolic equations applies.
Namely, calling \[\mathcal{V}=\left\{u\in H^{1}((0,\Lambda))|u(\Lambda)=0\right\}\,,\] which is a Hilbert space with the scalar product \((u,v)_{\mathcal{V}}=\int_{0}^{\Lambda}\partial_{y}u\,\partial_{y}v\,dy\), the variational formulation reads: Let \(T>0\), find \(\ell(y,t)\in L_{t}^{\infty}(0,T;L_{y}^{2}((0,\Lambda)))\cap L_{t}^{2}(0,T; \mathcal{V})\) such that for all \(t\in(0,T)\) and for all \(v\in\mathcal{V}\) \[\frac{d}{dt}\int_{0}^{\Lambda}\ell v\,dy+\Lambda^{2}K\int_{0}^{ \Lambda}\partial_{y}\ell\,\partial_{y}v\,dy \tag{16}\] \[+\frac{\Lambda K\tilde{a}}{2a_{1}}\ell(0,t)v(0)=-\frac{\Lambda}{ 2}\dot{L}_{0}(t)v(0)\] with \(\ell(y,0)=\ell_{0}(y)\in L^{2}((0,\Lambda))\) a given initial data. Defining the bilinear form \(\kappa\) in \(\mathcal{V}\times\mathcal{V}\) as: \[\kappa:(u,v)\mapsto\Lambda^{2}K\int_{0}^{\Lambda}\partial_{y}u(y)\partial_{y}v (y)\,dy+\frac{\Lambda K\tilde{a}}{2a_{1}}u(0)v(0), \tag{17}\] which is symmetric and coercive on \(\mathcal{V}\), well-posedness of the problem (16) follows from standard results on parabolic equations (see e.g. [16]). Moreover, it is well known that the solution \(\ell(\cdot,t)\) is of class \(\mathcal{C}^{\infty}([0,\Lambda])\) for any time \(t>0\). ### Analytical periodic solutions Let us now solve the system (15) using the following ansatz \(\ell(y,t)=\underline{\ell}(y)e^{i\omega t}\). From (14) we deduce the following equation: \[i\underline{\ell}=\Lambda^{2}K_{\omega}\partial_{yy}\underline{\ell}. \tag{18}\] The characteristic polynomial associated to (18) has two roots, \(r:=\frac{1+i}{\Lambda\sqrt{2K_{\omega}}}\) and \(-r\), which leads to the following solutions: \[\underline{\ell}(y)=\alpha e^{ry}+\beta e^{-ry}, \tag{19}\] with \(\alpha\), \(\beta\in\mathbb{C}\). 
We then determine \(\alpha\) and \(\beta\) using the boundary conditions: \[\left\{\begin{array}{l}-(\alpha+\beta)\frac{\tilde{a}}{2a_{1}}+\Lambda r(\alpha -\beta)=\frac{i\varepsilon}{2K_{\omega}},\\ \alpha e^{r\Lambda}+\beta e^{-r\Lambda}=0,\end{array}\right.\] i.e., \[\left\{\begin{array}{l}\alpha=\frac{i\varepsilon}{2K_{\omega}\big{(}\frac{ \tilde{a}}{2a_{1}}(e^{2r\Lambda}-1)+\Lambda r(e^{2r\Lambda}+1)\big{)}},\\ \beta=-e^{2r\Lambda}\alpha.\end{array}\right. \tag{20}\] We notice that \(r\Lambda=\frac{1+i}{\sqrt{2K_{\omega}}}\) only depends on \(K_{\omega}\).

## 3 Convergence of the discrete model towards the continuous one

We first notice that the discrete problem (6) is a kind of _non-conventional_ mass-lumped version of a finite element discretization of the continuous one (15). In order to clarify this statement, we introduce the finite element setting. Let \(\mathcal{V}_{h}\subset\mathcal{V}\) be the space of continuous, piecewise linear functions \(g\) on the one-dimensional partition \(T_{h}=\{y_{1},\,\cdots,\,y_{N+1}\}\) of \((0,\Lambda)\) that satisfy the Dirichlet boundary condition \(g(\Lambda)=0\). Let \(\{\Phi_{j}\}_{j=1,\,N}\) be the standard basis for \(\mathcal{V}_{h}\) consisting of the hat functions defined by \(\Phi_{j}(y_{k})=\delta_{j,k}\) for \(1\leq j,k\leq N\). Let \(\ell_{h}\in\mathcal{V}_{h}\) be the continuous, piecewise linear function such that for \(1\leq j\leq N+1\), \(t>0\), \(\ell_{h}(y_{j},t)=\ell_{j}(t)\). Using the basic semi-discrete Galerkin method would lead to the discretization of (16) in the matrix form: \[\frac{d(M_{h}L_{h})}{dt}+K_{h}L_{h}=\tilde{f}(t), \tag{21}\] with \(L_{h}(t)=(\ell_{1}(t),\,\cdots,\,\ell_{N}(t))^{T}\), \(\tilde{f}=(-\frac{\Lambda}{2}\dot{L}_{0},\,0,\,\cdots,\,0)\), \((M_{h})_{i,j}=\int \limits_{0}^{\Lambda}\Phi_{i}(y)\Phi_{j}(y)dy\) and \((K_{h})_{i,j}=\kappa(\Phi_{i},\Phi_{j})\), where \(\kappa\) is defined in equation (17).
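The "non-conventional mass lumping" statement can be made concrete: assembling the \(P^{1}\) stiffness matrix \(K_{h}\) and replacing the mass matrix by the diagonal \(hI\) reproduces exactly the stencils of (6) and (8). A minimal sketch of this check (Python with NumPy, not the paper's Matlab code; the parameter values are illustrative):

```python
import numpy as np

# Illustrative parameters; only the combinations Lam^2*K/h and Lam*K*at/(2*a1) matter.
Lam, K, at, a1, N = 4e-4, 0.06, 1e-5, 1e-5, 10
h = Lam / N

# P1 stiffness matrix K_h of the bilinear form kappa (size N x N):
# the Dirichlet node y_{N+1} is eliminated, the Fourier-type term sits at y_1 = 0.
Kh = np.zeros((N, N))
for i in range(N):
    Kh[i, i] = 2 * Lam**2 * K / h
    if i > 0:
        Kh[i, i - 1] = Kh[i - 1, i] = -Lam**2 * K / h
Kh[0, 0] = Lam**2 * K / h + Lam * K * at / (2 * a1)

# Lumped system  h * dL/dt + K_h L = f,  i.e.  dL/dt = A L + f / h, with:
A = -Kh / h
```

Each interior row of \(A\) is the finite-difference stencil \(\Lambda^{2}K(\ell_{j-1}-2\ell_{j}+\ell_{j+1})/h^{2}\) of (6), and the first row reproduces (8) after division by \(h\).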
Computing explicitly the coefficients of the matrices \(K_{h}\) and \(M_{h}\) gives \[(K_{h})_{ij}=\left\{\begin{array}{ll}-\Lambda^{2}K/h&\mbox{for }|i-j|=1,\\ 2\Lambda^{2}K/h&\mbox{for }i=j\geq 2\,,\\ \Lambda^{2}K/h+\Lambda K\tilde{a}/(2a_{1})&\mbox{for }i=j=1\,,\end{array}\right.\] and \[(M_{h})_{ij}=\left\{\begin{array}{ll}h/6&\mbox{for }|i-j|=1,\\ 2h/3&\mbox{for }i=j\geq 2\,,\\ h/3&\mbox{for }i=j=1\,.\end{array}\right.\] The key observation is that Eqs. (6) and (8) are nothing but a mass-lumped discretization of (15) where the mass matrix \(M_{h}\) has been replaced by the diagonal version \[\widetilde{M}_{h}=\begin{pmatrix}h&&0\\ &\ddots&\\ 0&&h\end{pmatrix}\,.\] Hence, \(\ell_{h}\) actually solves \[\frac{d(\widetilde{M}_{h}L_{h})}{dt}+K_{h}L_{h}=\tilde{f}(t)\,, \tag{22}\] together with the initial condition \[\ell_{h}(0)=\ell_{0,h}\in\mathcal{V}_{h}\,. \tag{23}\] The classical mass-lumped method, on the other hand, would have consisted in replacing the tridiagonal mass matrix \(M_{h}\) by a diagonal matrix \(\bar{M}_{h}\) using an integration formula on the vertices of the partition. Namely, using the trapezoidal formula \(\int\limits_{0}^{\Lambda}g\sim\big{(}\frac{1}{2}g(y_{1})+\sum\limits_{j=2}^{N} g(y_{j})+\frac{1}{2}g(y_{N+1})\big{)}h=\big{(}\frac{1}{2}g(y_{1})+\sum\limits_{j=2}^{N} g(y_{j})\big{)}h\), for a function \(g\) satisfying \(g(\Lambda)=0\) leads to the mass-lumped matrix \[\bar{M}_{h}=\begin{pmatrix}h/2&&&0\\ &h&&\\ &&\ddots&\\ 0&&&h\end{pmatrix} \tag{24}\] which differs from \(\widetilde{M}_{h}\). We shall then study the ODE (22), (23) using the method presented in [17] which provides us with a convergence result for the mass-lumped method with \(\bar{M}_{h}\). We introduce the two following inner products on \(\mathcal{V}_{h}\) associated with \(\bar{M}_{h}\) and \(\widetilde{M}_{h}\) respectively. 
Namely, for \((u_{h},v_{h})\in\mathcal{V}_{h}\) \[\langle u_{h},v_{h}\rangle_{h}=\frac{h}{2}u_{h}(y_{1})v_{h}(y_{1})+h\sum \limits_{j=2}^{N}u_{h}(y_{j})v_{h}(y_{j})\] and \[(u_{h},v_{h})_{h}=h\sum\limits_{j=1}^{N}u_{h}(y_{j})v_{h}(y_{j})\,.\] We also call \(\|\cdot\|_{h}\) the norm associated to \((\cdot,\cdot)_{h}\), while the \(L^{2}\) norm and inner products are denoted by \(\|\cdot\|\) and \((\cdot,\cdot)\) respectively. Gerschgorin Theorem applied to \(M_{h}\) shows the equivalence of the norms \(\|\cdot\|\) and \(\|\cdot\|_{h}\) on \(\mathcal{V}_{h}\)_uniformly_ in \(h\), and, more precisely, we have the estimate, valid for all \(v_{h}\in\mathcal{V}_{h}\) \[\frac{1}{6}(v_{h},v_{h})_{h}\leq(v_{h},v_{h})\leq(v_{h},v_{h})_{h}\,,\] from which we also deduce \[hv_{h}(y_{1})^{2}\leq\|v_{h}\|_{h}^{2}\leq 6\|v_{h}\|^{2}\,. \tag{25}\] Finally, we introduce, for \(u_{h},v_{h}\in\mathcal{V}_{h}\), \(\delta_{h}(u_{h},v_{h})=(u_{h},v_{h})_{h}-(u_{h},v_{h})\) the quadrature error. _Lemma 3.1_.: Let \(u_{h},v_{h}\in\mathcal{V}_{h}\). We have, for \(h\) sufficiently small: \[|\delta_{h}(u_{h},v_{h})| \leq Ch\|\partial_{y}u_{h}\|\|\partial_{y}v_{h}\|, \tag{26}\] \[|\delta_{h}(u_{h},v_{h})| \leq C\sqrt{h}\|\partial_{y}u_{h}\|\|v_{h}\| \tag{27}\] for a constant \(C\) that does not depend on \(u_{h}\), \(v_{h}\) or \(h\). Proof.: In all what follows, \(C\) denotes a constant that may vary from line to line, being always independent of \(h\). Let \(u_{h},v_{h}\in\mathcal{V}_{h}\). We write \(|\delta_{h}(u_{h},v_{h})|\leq|(u_{h},v_{h})_{h}-\langle u_{h},v_{h}\rangle_{h }|+|\langle u_{h},v_{h}\rangle_{h}-(u_{h},v_{h})|\). 
Thomee [17] provides us with an estimate of the error between \(\langle u_{h},v_{h}\rangle_{h}\) and \((u_{h},v_{h})\), namely, \[|\langle u_{h},v_{h}\rangle_{h}-(u_{h},v_{h})|\leq Ch^{2}\|\partial_{y}u_{h}\| \|\partial_{y}v_{h}\|\] and \[|\langle u_{h},v_{h}\rangle_{h}-(u_{h},v_{h})|\leq Ch\|\partial_{y}u_{h}\|\|v_ {h}\|\] for some constant \(C>0\) that does not depend on \(u_{h}\), \(v_{h}\) or \(h\). The latter estimate is obtained by an inverse inequality. It remains to estimate the term \(\tilde{\delta}_{h}(u_{h},v_{h})=(u_{h},v_{h})_{h}-\langle u_{h},v_{h}\rangle_{h}\). We notice that: \[|\tilde{\delta}_{h}(u_{h},v_{h})| =\frac{h}{2}|u_{h}(y_{1})v_{h}(y_{1})| \tag{28}\] \[=\frac{h}{2}\left|\int_{0}^{\Lambda}\partial_{y}u_{h}(y)\,dy\right| \,\left|\int_{0}^{\Lambda}\partial_{y}v_{h}(y)\,dy\right|\] \[\leq\frac{h\Lambda}{2}\|\partial_{y}u_{h}\|\|\partial_{y}v_{h}\|\,. \tag{29}\] Similarly, (28) together with (25) gives: \[|\tilde{\delta}_{h}(u_{h},v_{h})|\leq C\sqrt{h}\|\partial_{y}u_{h}\|\|v_{h}\|. \tag{30}\] This yields (26) and (27). _Theorem 3.1_.: If \(\ell\) and \(\ell_{h}\) are solution to (16) and (22), (23) respectively, and \(\ell_{0}\in H^{2}((0,\Lambda))\), we have, for all \(t\geq 0\), \[\|\ell_{h}(t)-\ell(t)\| \leq C\|\ell_{0,h}-\ell_{0}\|+Ch^{2}(\|\partial_{yy}\ell_{0}\|+ \|\partial_{yy}\ell(t)\|)\] \[\qquad\quad+\,Ch\left(\int_{0}^{t}\|\partial_{yt}\ell\|^{2}ds \right)^{1/2}.\] Proof.: Let \(R_{h}\) be the Ritz projector from \(\mathcal{V}\) on \(\mathcal{V}_{h}\), associated with \(\kappa(\cdot,\cdot)\). Namely, for \(g\in\mathcal{V}\), \(R_{h}g\) is defined by \[\kappa(R_{h}g,v_{h})=\kappa(g,v_{h})\] for all \(v_{h}\in\mathcal{V}_{h}\). We write \(\ell_{h}-\ell=(\ell_{h}-R_{h}\ell)+(R_{h}\ell-\ell)=\theta_{h}+\rho\) (Notice that \(\theta_{h}\in\mathcal{V}_{h}\)). Standard estimations show that \(\rho(t)\) satisfies \(\|R_{h}\ell-\ell\|\leq Ch^{2}\|\partial_{yy}\ell\|\). 
In order to estimate \(\theta_{h}\), we write, for all \(\chi_{h}\in\mathcal{V}\) \[(\partial_{t}\theta_{h},\chi_{h})_{h}+\kappa(\theta_{h},\chi_{h}) = (\partial_{t}\ell_{h},\chi_{h})_{h}+\kappa(\ell_{h},\chi_{h}) \tag{31}\] \[-(\partial_{t}R_{h}\ell,\chi_{h})_{h}-\kappa(R_{h}\ell,\chi_{h})\] \[= (f,\chi_{h})\] \[-(\partial_{t}R_{h}\ell,\chi_{h})_{h}-\kappa(\ell,\chi_{h})\] \[= (\partial_{t}\ell,\chi_{h})-(\partial_{t}R_{h}\ell,\chi_{h})_{h}\] \[= -(\partial_{t}\rho,\chi_{h})\] \[\qquad\quad-\delta_{h}(\partial_{t}R_{h}\ell,\chi_{h}).\] Setting \(\chi_{h}=\theta_{h}\), we obtain \[\frac{1}{2}\frac{d}{dt}\|\theta_{h}\|_{h}^{2}+\kappa(\theta_{h},\theta_{h})=- \left(\partial_{t}\rho,\theta_{h}\right)-\delta_{h}\left(\partial_{t}R_{h} \ell,\theta_{h}\right).\] Here, we have at once, using Cauchy-Schwarz and Poincare inequalities: \[|(\partial_{t}\rho,\theta_{h})| \leq \|\partial_{t}(\ell-R_{h}\ell)\|\,\|\theta_{h}\|\] \[\leq Ch\,\|\partial_{yt}\ell\|\,\|\theta_{h}\|\] \[\leq Ch\,\|\partial_{yt}\ell\|\,\|\partial_{y}\theta_{h}\|.\] Using the first equation of Lemma 3.1, and the fact that \(\|\partial_{y}R_{h}u\|\leq C\|\partial_{y}u\|\), we also obtain \[|\delta_{h}\left(\partial_{t}R_{h}\ell,\theta_{h}\right)| \leq Ch\,\|\partial_{yt}R_{h}\ell\|\,\|\partial_{y}\theta_{h}\|\] \[\leq Ch\,\|\partial_{yt}\ell\|\,\|\partial_{y}\theta_{h}\|\,,\] from which we deduce that \[\frac{1}{2}\frac{d}{dt}\|\theta_{h}\|_{h}^{2}+\kappa(\theta_{h}, \theta_{h}) \leq Ch\left\|\partial_{yt}\ell\right\|\|\partial_{y}\theta_{h}\|\] \[\leq \kappa(\theta_{h},\theta_{h})+Ch^{2}\left\|\partial_{yt}\ell \right\|^{2}\,,\] using the coercivity of \(\kappa(\cdot,\cdot)\) on \(\mathcal{V}\). 
We therefore infer \[\|\theta_{h}(t)\|_{h}^{2}\leq\|\theta_{h}(0)\|_{h}^{2}+Ch^{2}\int_{0}^{t}\left\| \partial_{yt}\ell\right\|^{2}\,ds\,.\] We now recall that \(\|\cdot\|_{h}\) and \(\|\cdot\|\) are equivalent norms on \(\mathcal{V}_{h}\), uniformly in \(h\), and hence \[\|\theta_{h}(t)\|\leq C\|\theta_{h}(0)\|+Ch\left(\int_{0}^{t}\left\|\partial_{yt }\ell\right\|^{2}ds\right)^{1/2}.\] Here \(\|\theta_{h}(0)\|=\|\ell_{0,h}-R_{h}\ell_{0}\|\) and \[\|\ell_{0,h}-R_{h}\ell_{0}\| \leq \|\ell_{0,h}-\ell_{0}\|+\|\ell_{0}-R_{h}\ell_{0}\|\] \[\leq \|\ell_{0,h}-\ell_{0}\|+Ch^{2}\|\partial_{yy}\ell_{0}\|,\] whence \(\theta_{h}(t)\) is bounded as desired. _Theorem 3.2_.: If \(\ell\) and \(\ell_{h}\) are solution to (16) and (22), (23) respectively we have, for \(t\geq 0\), \[\|\partial_{y}(\ell_{h}-\ell)(t)\| \leq Ch(\|\partial_{yy}\ell_{0}\|+\|\partial_{yy}\ell(t)\|)\] \[+C\|\partial_{y}(\ell_{0,h}-\ell_{0})\|+C\sqrt{h}\left(\int_{0}^{t }\|\partial_{yt}\ell\|^{2}ds\right)^{1/2}.\] Proof.: We now set \(\chi_{h}=\partial_{t}\theta_{h}\) in equation (31) for \(\theta_{h}\) to obtain: \[\|\partial_{t}\theta_{h}\|_{h}^{2}+\frac{1}{2}\frac{d}{dt}\kappa(\theta_{h}, \theta_{h})=-(\partial_{t}\rho,\partial_{t}\theta_{h})-\delta_{h}(R_{h} \partial_{t}\ell,\partial_{t}\theta_{h}).\] Here, as in the proof of Theorem 3.1, \[|(\partial_{t}\rho,\partial_{t}\theta_{h})|\leq\|\partial_{t}(\ell-R_{h}\ell) \|\|\partial_{t}\theta_{h}\|\leq C\sqrt{h}\|\partial_{yt}\ell\|\|\partial_{t} \theta_{h}\|.\] Further, by the second line of Lemma 3.1, \[|\delta_{h}(\partial_{t}R_{h}\ell,\partial_{t}\theta_{h})| \leq C\sqrt{h}\|\partial_{yt}R_{h}\ell\|\|\partial_{t}\theta_{h}\|\] \[\leq C\sqrt{h}\|\partial_{yt}\ell\|\|\partial_{t}\theta_{h}\|.\] Using again the equivalence between the norms \(\|\cdot\|_{h}\) and \(\|\cdot\|\) on \(\mathcal{V}_{h}\), we conclude: \[\|\partial_{t}\theta_{h}\|_{h}^{2}+\frac{1}{2}\frac{d}{dt}\kappa( \theta_{h},\theta_{h}) \leq 
C\sqrt{h}\|\partial_{yt}\ell\|\|\partial_{t}\theta_{h}\|_{h}\] \[\leq \|\partial_{t}\theta_{h}\|_{h}^{2}+Ch\|\partial_{yt}\ell\|^{2}\,,\] so that, after integration, and using the coercivity of \(\kappa(\cdot,\cdot)\) on \(\mathcal{V}\), \[\|\partial_{y}\theta_{h}(t)\| \leq C\|\partial_{y}\theta_{h}(0)\|+C\sqrt{h}\left(\int_{0}^{t}\| \partial_{yt}\ell\|^{2}ds\right)^{1/2}\] \[\leq \|\partial_{y}(\ell_{0,h}-\ell_{0})\|+Ch\|\partial_{yy}\ell_{0}\|\] \[\qquad+C\sqrt{h}\left(\int_{0}^{t}\|\partial_{yt}\ell\|^{2}ds \right)^{1/2}.\] This, together with the standard estimate for \(\partial_{y}\rho(t)\), completes the proof. We have thus proven the convergence of the discrete \(N\)-spring swimmer to the continuous model formally derived in the previous section. Note that we obtain only a first-order (resp. half-order) convergence in the \(L^{2}\) norm (resp. \(H^{1}\) norm), while the standard estimates for the mass-lumping method lead to a second-order (resp. first-order) convergence. This is due to the Fourier-type boundary condition at \(0\), which differs from the classical Dirichlet boundary condition used in [17].

## 4 Mathematical expression of the displacement

### Net displacement of the \(N\)-spring swimmer

We measure the swimmer's displacement through that of the first of the two large spheres: we only compute \(V_{1}=\dot{x}_{1}\) and integrate it over a period \((0,2\pi/\omega)\).
Taking into account the hydrodynamic interactions of the \(i\)-th sphere, \(i\in\{2,\cdots,N+2\}\), with the first sphere, we have \[V_{1}=\frac{1}{6\pi\mu a_{1}}f_{1}^{F}+\frac{1}{4\pi\mu L_{0}}f_{2}^{F}+\frac{ 1}{4\pi\mu}\sum\limits_{i=3}^{N+2}\frac{f_{i}^{F}}{L_{0}+L_{1}+\cdots+L_{i-2}}\] Using expressions (2) and (5), we obtain \[\begin{array}{rl}V_{1}&=\frac{1}{2}\dot{L}_{0}-\frac{\tilde{a}}{2a_{1}}K \ell_{1}-\frac{3a_{1}\dot{L}_{0}}{4L_{0}}\\ &\qquad\qquad-\frac{3K\tilde{a}\ell_{1}}{4L_{0}}+\frac{3\tilde{a}K}{2}\sum \limits_{j=1}^{N}\frac{\ell_{j}-\ell_{j+1}}{\sum\limits_{i=0}^{j}L_{i}}\,, \end{array} \tag{32}\] where we recall that, by convention, \(\ell_{N+1}=0\). Finally, by integrating over one period, and noticing that both \(\ell_{1}\) and \(\dot{L}_{0}/L_{0}\) have a vanishing time-average, we obtain, for any value of \(h=\Lambda/N\), the displacement of the corresponding \(N\)-spring swimmer: \[\Delta_{h}x_{1}=\int\limits_{0}^{2\pi/\omega}\biggl{[}-\frac{3K\tilde{a}\ell _{1}}{4L_{0}}+\frac{3\tilde{a}K}{2}\sum\limits_{j=1}^{N}\frac{\ell_{j}-\ell_{ j+1}}{\sum\limits_{i=0}^{j}L_{i}}\biggr{]}\,dt \tag{33}\]

### Net displacement of the limit model

We may find an expression for the displacement of the limit model as \(h\) tends to \(0\), by passing to the limit in the preceding expression. Indeed, for given \(h\) and \(y\), we define \(j_{h}(y)\) as the unique integer such that \(j_{h}(y)h\leq y\leq(j_{h}(y)+1)h\). Then, defining \(\chi_{h}\) as the function \[\chi_{h}(y,t)=\frac{1}{L_{0}(t)+\cdots+L_{j_{h}(y)+1}(t)},\] we may write \[\int_{0}^{2\pi/\omega}\sum\limits_{j=0}^{N-1}\frac{\ell_{h}(jh,t) -\ell_{h}((j+1)h,t)}{\sum\limits_{i=0}^{j+1}L_{i}}\,dt=\] \[-\int_{0}^{2\pi/\omega}\int_{0}^{\Lambda}\partial_{y}\ell_{h}(y,t )\chi_{h}(y,t)\,dy\,dt\,,\] where \(\ell_{h}\) is the piecewise linear function defined in the previous section.
Finally, the displacement \(\Delta_{h}x_{1}\) of the \(N\)-spring swimmer during one time period can be rewritten as \[\Delta_{h}x_{1} =\int\limits_{0}^{2\pi/\omega}\Bigg{[}-\frac{3K\tilde{a}\ell_{h}(0, t)}{4L_{0}(t)}\] \[\quad-\frac{3\tilde{a}K}{2}\int_{0}^{\Lambda}\partial_{y}\ell_{h}( y,t)\chi_{h}(y,t)\,dy\,\Bigg{]}\,dt\,.\] Now, using the fact that \(j_{h}(y)h\to y\) when \(h\to 0\), together with the \(L^{2}\) and \(H^{1}\) convergence of \(\ell_{h}\) to \(\ell\), we obtain that, for any \(y\) and \(t\), \[\chi_{h}(y,t) =\frac{1}{L_{0}(t)+(j_{h}(y)+1)h+\frac{h}{\Lambda}\sum_{i=0}^{j_{h}(y)} \ell_{h}(ih,t)}\] \[\underset{h\to 0}{\longrightarrow}\frac{1}{L_{0}(t)+y+\int \limits_{0}^{y}\frac{\ell(s,t)}{\Lambda}\,ds}=:\chi(y,t)\] Moreover, the bound \(0\leq\chi_{h}(y,t)\leq\max_{t}\frac{1}{L_{0}(t)}=\frac{1}{L(1-\tilde{\varepsilon })}\) shows that \(\chi_{h}\) is uniformly bounded. Therefore, using the dominated convergence theorem, we deduce that \(\chi_{h}\) converges to \(\chi\) in \(L^{2}(0,2\pi/\omega;(0,\Lambda))\) as \(h\) tends to \(0\). Using the convergence theorems proven in the preceding section, we may pass to the limit \(h\to 0\) in \(\Delta_{h}x_{1}\) and obtain the following expression for the displacement of the limit model during one period: \[\Delta x_{1} =\int\limits_{0}^{2\pi/\omega}\int\limits_{0}^{\Lambda}-\frac{3K \tilde{a}}{2}\partial_{y}\ell(y,t)\bigg{(}L_{0}(t)+y+\int\limits_{0}^{y}\frac {\ell}{\Lambda}\bigg{)}^{-1}\,dy\,dt \tag{34}\] \[\quad\quad\quad\quad\quad-\int\limits_{0}^{2\pi/\omega}\frac{3K \tilde{a}\ell(0,t)}{4L_{0}}dt\,.\]

## 5 Numerical experiments

In this section, we numerically study the discrete model's convergence towards the continuous one. Then, we investigate the influence of the two parameters \(\omega\) and \(\tilde{\varepsilon}\) on the system and on its displacement, while the rest of the swimmer is determined by the values in table 1. All simulations are performed in Matlab.
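Although the simulations below are run in Matlab, the closed-form periodic solution (9)-(13) is easy to evaluate in any environment. A minimal sketch in Python with NumPy (parameters from Table 1; the values of \(N\) and \(\omega\) are illustrative choices):

```python
import numpy as np

# Parameters of Table 1; N and omega are illustrative choices.
at, a1 = 1e-5, 1e-5            # \tilde{a}, a_1 [m]
Lam, L = 4e-4, 3e-5            # \Lambda, L [m]
kt, mu = 1e-8, 8.9e-4          # \tilde{k} [N/m], viscosity [Pa s]
N, omega, eps_t = 50, 1.0, 0.7
eps = L * eps_t                # epsilon = L * tilde(epsilon)

K = kt / (6 * np.pi * mu * at)
K_w = K / omega                # dimensionless parameter K_omega

# Roots gamma_+/- of the recurrence (6), eq. (10); note gamma_+ gamma_- = 1.
s = 1j / (K_w * N**2)
g_p = (s + 2 + np.sqrt(s**2 + 4 * s)) / 2
g_m = (s + 2 - np.sqrt(s**2 + 4 * s)) / 2
if abs(g_p) < abs(g_m):        # enforce |gamma_+| > 1 > |gamma_-|
    g_p, g_m = g_m, g_p

# Amplitude of the first spring, eq. (13), then the coefficients (12).
z_d = (g_p**N * g_m - g_m**N * g_p) / (g_p**N - g_m**N)
b_d = -(eps * 1j / 2) / (1j / N + N * K_w * (1 - z_d) + K_w * at / (2 * a1))
alpha_d = -g_m**N * b_d / (g_p**N - g_m**N)
beta_d = g_p**N * b_d / (g_p**N - g_m**N)

def ell_amp(j):
    """Complex amplitude of ell_j(t) = ell_amp(j) * exp(i omega t), eq. (9)."""
    return alpha_d * g_p**(j - 1) + beta_d * g_m**(j - 1)
```

One can check that these amplitudes satisfy the interior recurrence deduced from (6) as well as the two boundary conditions \(\ell_{1}=b_{d}e^{i\omega t}\) and \(\ell_{N+1}=0\).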
We consider here that the rest length \(L\) of the active arm is small compared to the rest of the swimmer. The first sphere thus acts like the head of a sperm cell, and the active arm like a link between the head and the flagellum, which provides the signal that makes the rest of the system oscillate.

### Convergence of the discrete models to the continuous one

We investigate numerically the convergence estimates obtained in section 3. We recall that the continuous solution \(\ell\) solves the heat equation PDE with the Fourier-type boundary conditions (15). We consider, in this section, periodic forcing, for which explicit solutions are given by (19, 20).

#### 5.1.1 Convergence of the \(N\)-spring discrete model

We recall that the discrete solution \(\ell_{h}\) is the \(P^{1}\) discrete function built from the solution \((\ell_{i})_{i}\) of the \(N\)-spring ODE system (6,7,8). This discrete system corresponds to a semi-discretization in space of the continuous model, based on a non-conventional mass-lumping method. The solution \((\ell_{i})_{i}\) of the discrete problem in the periodic setting is given by equations (9,10,12,13). The space step \(h\) (or equivalently the number of springs \(N\)) being given, the discrete error is defined as the error between \(\ell_{h}\) and the \(P^{1}\) interpolation of \(\ell\). We plot in figure 3 the \(L^{2}\) (resp. \(H^{1}\)) error, denoted by \(e_{h,L^{2}}\) (resp. \(e_{h,H^{1}}\)). We observe that the \(L^{2}\) error converges with order one, as expected from theorem 3.1. Concerning the \(H^{1}\) error, we observe a superconvergence phenomenon: like the \(L^{2}\) error, it converges at order \(1\), while theorem 3.2 predicts a convergence at order \(1/2\). This can be explained by the regularity of the considered periodic solution.
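The order-one convergence observed in figure 3 can be reproduced by comparing the discrete amplitudes (9)-(13) with the continuous ones (19)-(20) for increasing \(N\). A sketch of such a check (Python with NumPy rather than Matlab; parameters from Table 1, \(\omega=1\,rad\cdot s^{-1}\), and a plain nodal \(L^{2}\) norm in place of the exact \(P^{1}\) one):

```python
import numpy as np

# Parameters of Table 1, periodic regime at omega = 1 rad/s (assumed values).
at, a1, Lam, L = 1e-5, 1e-5, 4e-4, 3e-5
kt, mu = 1e-8, 8.9e-4
omega, eps_t = 1.0, 0.7
K = kt / (6 * np.pi * mu * at)
K_w = K / omega
eps = L * eps_t

# Continuous periodic amplitude, eqs. (19)-(20).
r = (1 + 1j) / (Lam * np.sqrt(2 * K_w))
alpha = (eps * 1j / 2) / (K_w * (at / (2 * a1) * (np.exp(2 * r * Lam) - 1)
                                 + Lam * r * (np.exp(2 * r * Lam) + 1)))
beta = -np.exp(2 * r * Lam) * alpha

def ell_cont(y):
    return alpha * np.exp(r * y) + beta * np.exp(-r * y)

def l2_error(N):
    """Nodal L2 distance between the discrete amplitudes (9)-(13) and ell_cont."""
    h = Lam / N
    s = 1j / (K_w * N**2)
    rt = np.sqrt(s**2 + 4 * s)
    g_p, g_m = (s + 2 + rt) / 2, (s + 2 - rt) / 2
    if abs(g_p) < abs(g_m):
        g_p, g_m = g_m, g_p
    z_d = (g_p**N * g_m - g_m**N * g_p) / (g_p**N - g_m**N)
    b_d = -(eps * 1j / 2) / (1j / N + N * K_w * (1 - z_d) + K_w * at / (2 * a1))
    j = np.arange(1, N + 2)    # nodes y_j = (j - 1) h, j = 1..N+1
    amp = (-g_m**N * g_p**(j - 1) + g_p**N * g_m**(j - 1)) * b_d / (g_p**N - g_m**N)
    y = (j - 1) * h
    return np.sqrt(h * np.sum(np.abs(amp - ell_cont(y))**2))
```

Doubling \(N\) should roughly halve this error, in agreement with the first-order rate of theorem 3.1.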
#### 5.1.2 Influence of mass-lumping As mentioned earlier, the \(N\)-spring model turns out to be a discretization in space of the continuous problem (15), based on an unconventional mass-lumping method. The convergence proof that we proposed in section 3 is based on the results of Thomee [17]. He shows that, for a standard mass-lumping discretization, the usual order of convergence for finite elements is obtained: convergence of order \(2\) for the \(L^{2}\) error and \(1\) for the \(H^{1}\) error. We investigate here the influence of the space discretization, by comparing the \(N\)-spring model (22), solved numerically this time, to the classical mass-lumping method (24) and the standard Galerkin finite element method (21). Again we consider the periodic framework for which the exact solution is available. \begin{table} \begin{tabular}{c c} \hline \(\tilde{a}\) & \(1\cdot 10^{-5}\,m\) \\ \(a_{1}\) & \(1\cdot 10^{-5}\,m\) \\ \(\Lambda\) & \(4\cdot 10^{-4}\,m\) \\ \(L\) & \(3\cdot 10^{-5}\,m\) \\ \(\tilde{k}\) & \(1\cdot 10^{-8}\,Nm^{-1}\) \\ \(\mu\) & \(8.9\cdot 10^{-4}\,Pa\,s\) \\ \hline \end{tabular} \end{table} Table 1: Values of the parameters used in the numerical simulations, matching those of [14]. We have taken for \(\mu\) the dynamic viscosity of water at \(25^{\circ}C\). Figure 3: \(L^{2}\) and \(H^{1}\) errors between the \(N\)-spring discrete model and the continuous one as a function of the number of springs in log scale, in the \((2\pi/\omega)\)-periodic case, for \(\tilde{\varepsilon}=0.7\) and \(\omega=1\,rad\cdot s^{-1}\). The time discretization of the three ODE systems is achieved using a Crank-Nicolson scheme for which the time step is chosen to be small enough so that the error due to the time discretization is negligible. The corresponding \(L^{2}\) (resp. \(H^{1}\)) error is given on figure 4 (resp. figure 5). 
We can see that, as expected, the \(L^{2}\) error converges at order 1 for the \(N\)-spring model, while it converges at order 2 for both the classical mass-lumping method and the standard Galerkin discretization. Again, due to the regularity of the solutions, a superconvergence phenomenon of the \(H^{1}\) error is observed for all three methods: like the \(L^{2}\) error, it converges at order 1 for the \(N\)-spring model and at order 2 for the other two discretizations.

Figure 4: \(L^{2}\) error between the continuous model and our mass-lumping method, as a function of the number of springs, in log scale.

Figure 5: \(H^{1}\) error between the continuous model and our mass-lumping method, as a function of the number of springs, in log scale.

### Swimming strokes

In this section, we investigate the swimming ability of the \(N\)-spring swimmer. The stroke being periodic, we use the explicit solutions given in section 2.2. The computations are carried out with \(N=2\,000\) springs.

#### 5.2.1 Deformation of the swimmer

Figure 6 shows a full stroke of the swimmer, in which we notice that a wave propagates along its tail. Remember that this wave is a contraction wave along the horizontal tail. The tail oscillates fairly efficiently on the side close to the head, while the amplitude of the contraction decays considerably over the second half of the tail. The movement shown corresponds to the elongation \(\ell_{j}\), and not to the actual deformation, which would be \(\ell_{j}/N\), for all \(1\leq j\leq N\). This deformation is thus relatively small compared to the size of the artificial swimmer, which is consistent with the small-deformation approximation made in the first place.

#### 5.2.2 Displacement

In this section, we study the influence of the parameters \(\tilde{\varepsilon}\) and \(K_{\omega}\) on the swimmer's displacement (33), in order to maximize its absolute value.
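Combining formula (33) with the periodic solution (9)-(13) reduces this computation to a plain time quadrature over one period, and the quadratic scaling in \(\tilde{\varepsilon}\) discussed below can then be verified directly. A sketch (Python with NumPy instead of the paper's Matlab code; the modest \(N\) and the time resolution are illustrative choices, the other parameters are those of Table 1):

```python
import numpy as np

# Parameters of Table 1, with an illustrative N and omega = 1 rad/s.
at, a1, Lam, L = 1e-5, 1e-5, 4e-4, 3e-5
kt, mu = 1e-8, 8.9e-4
N, omega = 50, 1.0
K = kt / (6 * np.pi * mu * at)
K_w = K / omega
h = Lam / N

def displacement(eps_t, n_t=2000):
    """Net displacement Delta_h x_1 over one period, eq. (33)."""
    eps = L * eps_t
    # Periodic solution, eqs. (9)-(13).
    s = 1j / (K_w * N**2)
    rt = np.sqrt(s**2 + 4 * s)
    g_p, g_m = (s + 2 + rt) / 2, (s + 2 - rt) / 2
    if abs(g_p) < abs(g_m):
        g_p, g_m = g_m, g_p
    z_d = (g_p**N * g_m - g_m**N * g_p) / (g_p**N - g_m**N)
    b_d = -(eps * 1j / 2) / (1j / N + N * K_w * (1 - z_d) + K_w * at / (2 * a1))
    j = np.arange(1, N + 2)                 # j = 1..N+1, with ell_{N+1} = 0
    amp = (-g_m**N * g_p**(j - 1) + g_p**N * g_m**(j - 1)) * b_d / (g_p**N - g_m**N)

    dx, dt = 0.0, 2 * np.pi / (omega * n_t)
    for k in range(n_t):
        tk = k * dt
        ell = np.real(amp * np.exp(1j * omega * tk))    # ell_j(tk), j = 1..N+1
        L0 = L * (1 + eps_t * np.cos(omega * tk))
        Ls = np.concatenate(([L0], ell[:N] / N + h))    # L_0, L_1, ..., L_N
        cum = np.cumsum(Ls)                             # partial sums of the L_i
        dx += dt * (-3 * K * at * ell[0] / (4 * L0)
                    + 1.5 * at * K * np.sum((ell[:N] - ell[1:]) / cum[1:]))
    return dx
```

Since the periodic solution is linear in \(\varepsilon\), the nonlinearity of (33) only enters through the denominators, and the ratio of displacements computed for \(\tilde{\varepsilon}\) and \(\tilde{\varepsilon}/2\) should be close to \(4\).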
In figure 7, we plot the displacement of the swimmer as a function of time, for different values of \(\tilde{\varepsilon}\). The displacement is computed through numerical integration of equation (33). The graph shows that the swimmer globally swims backwards, and we recognize the back-and-forth motion characteristic of low Reynolds number artificial swimmers. A larger amplitude \(\tilde{\varepsilon}\) of the forcing leads to a larger displacement, and we observe (see figure 8) that \(\Delta x_{1}\) is proportional to \(\tilde{\varepsilon}^{2}\), which is expected from theory (similar behaviors are observed, e.g., in [1, 2, 7] and explained as the area of loops in the space of shapes [11]). As we want to maximize \(\Delta x_{1}\) while keeping \(\tilde{\varepsilon}<1\), we choose a fixed value \(\tilde{\varepsilon}=0.7\) which, although arbitrary, allows for an easier comparison with Montino and DeSimone's results [14], as they made a similar parameter choice. Figure 9 shows \(\Delta x_{1}\) as a function of \(K_{\omega}\), for different values of \(\tilde{\varepsilon}\). At any fixed \(K_{\omega}\), we observe once again that a larger \(\tilde{\varepsilon}\) leads to a larger \(\Delta x_{1}\). We first observe that, if \(K_{\omega}\to\infty\), the net displacement of the swimmer vanishes. According to the expression of \(K_{\omega}\), this is the case for example when \(\omega\to 0\): the oscillation disappears, immobilizing the artificial swimmer. This can also happen when \(\tilde{k}\to\infty\): the springs become so rigid that the tail of the swimmer can no longer deform. In that case, the swimmer has only one degree of freedom left to deform and faces the obstruction of Purcell's scallop theorem. Similarly, letting \(K_{\omega}\to 0\) immobilizes the swimmer.

Figure 6: Movement of the whole 2000-spring swimmer during a full stroke, at different time stamps \(T\), for \(\omega=1\,rad\cdot s^{-1}\) and \(\tilde{\varepsilon}=0.7\).
An optimal value \(K_{\omega}^{\rm opt}\) of the non-dimensional parameter, maximizing the displacement over one time period, is reached between these two limiting cases. According to the figure, \(K_{\omega}^{\rm opt}\simeq 0.3765\). A closed-form expression for \(K_{\omega}^{\rm opt}\) does not seem available, due to the largely nonlinear nature of the problem, in contrast with the explicit expression obtained in [14]. A pair of optimal values of \(\omega\) and \(\tilde{k}\) achieving this \(K_{\omega}^{\rm opt}\) is \(\omega=1\,rad\cdot s^{-1}\) and \(\tilde{k}\simeq 6.207\cdot 10^{-8}\,N\cdot m^{-1}\). Moreover, the expression of \(K_{\omega}\) requires that \(\omega\) vary proportionally to \(\tilde{k}\) for the pair \((\tilde{k},\omega)\) to remain at the optimum. Indeed, the softer the spring, the slower the first arm needs to oscillate in order to generate a large movement. Looking at the other parameters separately, we can also see clearly from equation (33) that the displacement depends linearly on \(\tilde{a}\), which is predictable. However, this parameter directly affects the size of the artificial swimmer and must stay in a reasonable range (in our case no more than \(10^{-5}\,m\)) so that the swimmer stays at microscopic scale. Finally, we notice that the value of \(\Lambda\) and the ratio \(a_{1}/\tilde{a}\) have little to no influence on the previous analysis. We therefore keep for those parameters values that seem coherent with the scale we are working at, and that match the numerical experiments provided in [14].

## 6 Conclusion

We analyzed the dynamics of two low-Reynolds-number swimmers. The first one, which is an extension of [14], is made of \(N\) passive springs, and the second one is the corresponding limit model with an elastic tail. Both are activated by an active arm that elongates and retracts periodically with amplitude \(\varepsilon\) and angular frequency \(\omega\).
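The numerical estimation of \(K_{\omega}^{\rm opt}\) amounts to a one-dimensional scan of the displacement magnitude over \(K_{\omega}\). The sketch below uses a placeholder profile with the qualitative features described above (vanishing at both limits, a single interior maximum, placed by construction at the reported value \(0.3765\)); it only illustrates the scanning procedure, since the true curve must be evaluated from equation (33).

```python
def displacement_magnitude(K, K_opt=0.3765):
    """Placeholder profile: vanishes as K -> 0 and K -> infinity, with a
    single interior maximum at K_opt (set by hand to the reported value).
    The actual curve would be computed from equation (33)."""
    return K / (1.0 + (K / K_opt) ** 2)

grid = [j * 1e-4 for j in range(1, 20001)]       # K in (0, 2]
K_best = max(grid, key=displacement_magnitude)   # grid maximiser
```

A finer grid, or a bounded scalar optimizer, refines the estimate; the grid resolution here bounds the error on \(K_{\omega}^{\rm opt}\) by \(10^{-4}\).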
Noting that the \(N\)-spring swimmer is a non-conventional mass-lumping discretization of the limit model, we proved its convergence, as \(N\) tends to infinity, to the continuous model, by extending the results of Thomee [17] to the case of a Fourier-type boundary condition. For both swimmers, a phase difference between the oscillations of the active arm and the tail is created by the interaction between elastic and hydrodynamic forces. Both models then undergo non-reciprocal shape changes and thus circumvent the Scallop theorem's obstruction [3]. Numerical simulations indeed show a wave propagating along the swimmers' tails. Similarly to what was shown in [14], our models are able to swim but there is no control over the swimming direction.

We then focused on computing the net displacement of the swimmer over a time period in both cases, in view of its optimization. We obtain explicit formulae for this displacement as a function of the local elongation during the stroke. We numerically recover the classical back-and-forth swimming and the second-order scaling of the displacement as a function of the maximum elongation of the forcing active arm. Moreover, we highlight a dimensionless parameter \(K_{\omega}\) driving the movement of the swimmer when its geometry (\(\Lambda\), \(a\), \(a_{1}\)) is given. Some optimal values for this parameter can be estimated by numerical experiments. Lastly, we noticed that, similarly to the behavior of Machin's swimming rod [18], the deformations of both our swimmers attenuate rapidly along their passive parts, which suggests that some form of activation is needed in order to mimic the type of behavior observed in nature.

Figure 7: Displacement of the 2000-spring swimmer against time \(t\), for different values of \(\tilde{\varepsilon}\).
2305.02826
Unifilar Machines and the Adjoint Structure of Bayesian Filtering
We elucidate the mathematical structure of Bayesian filtering, and Bayesian inference more broadly, by applying recent work on category theoretical probability, specifically the concept of a strongly representable Markov category. We show that filtering, along with related concepts such as conjugate priors, arise from an adjunction: the process of taking a hidden Markov process is right adjoint to a forgetful functor. This has an interesting consequence. In practice, filtering is usually implemented using parametrised families of distributions. The Kalman filter is a particularly important example, which uses Gaussians. Rather than calculating a new posterior each time, the implementation only needs to update the parameters. This structure arises naturally from our adjunction; the correctness of such a model is witnessed by a map from the model into the system being modelled. Conjugate priors arise from this construction as a special case. In showing this we define a notion of unifilar machine, which has its origins in the literature on epsilon-machines. Unifilar machines are useful as models of the "observable behaviour" of stochastic systems; we show additionally that in the Kleisli category of the distribution monad there is a terminal unifilar machine, and its elements are controlled stochastic processes, mapping sequences of the input alphabet probabilistically to sequences of the output alphabet.
Nathaniel Virgo
2023-05-04T13:44:02Z
http://arxiv.org/abs/2305.02826v2
# Unifilar Machines and the Adjoint Structure of Bayesian Models

###### Abstract

We apply recent work on category theoretical probability to the idea of _Bayesian filtering_, making use of the concept of a strongly representable Markov category. We show that there is an adjunction between 'dynamical' and 'epistemic' models of a hidden Markov process. Concepts such as Bayesian filtering and conjugate priors arise as natural consequences of this adjunction. Along the way we define a notion of _unifilar machine_, which is a kind of stochastic Moore machine in which the output is chosen stochastically, but the update function is deterministic given the output. Unifilar machines are useful as models of the behaviour of stochastic systems; we show that in the Kleisli category of the distribution monad there is a terminal unifilar machine, and its elements are controlled stochastic processes, mapping sequences of the input alphabet probabilistically to sequences of the output alphabet.

## 1 Introduction

This paper is concerned with the mathematical structure of _Bayesian filtering_, which is a common problem in applications of Bayesian inference. The idea is that there is some system with known dynamics (which in general are stochastic) but an unknown hidden state. The goal is to keep track of a Bayesian prior over the states of the system, updating it to a posterior whenever a new observation is made. This is useful if we want to be able to control the hidden state, as in solving a partially observable Markov decision process (POMDP), for example.

To reveal the underlying mathematical structure we make use of recent results in synthetic probability, which allows us to write proofs at the category theoretic level without using measure theory directly. We work in the framework of Markov categories [4], and in particular we make use of the concept of _strongly representable Markov category_ as defined in [6].
Strongly representable Markov categories include **BorelStoch** (whose objects are standard Borel spaces and whose morphisms are Markov kernels) and the Kleisli category of the (real-valued) distribution monad, which we call **Dist**. Therefore most of our results apply to both measure-theoretic probability and finitely supported probability. We model a system with a hidden state as a certain kind of stochastic Moore machine (essentially a hidden Markov model); we call this a _dynamical model_ of the system. There is then a functor \(B\) that takes such a dynamical model and maps it to an _epistemic model_. This lives in a different category of machines that we call _unifilar machines_, whose outputs are stochastic but whose state updates are deterministic. Its state space consists of probability distributions over the hidden states of the system, and its dynamics are given by Bayesian updating. Our main technical result is theorem 2.6 in section 2, which states that this functor is right adjoint to a forgetful functor in the opposite direction. This adjunction has an interesting consequence. The functor \(B\) maps a dynamical model \(\kappa\) to what could be called its _universal epistemic model_, \(B(\kappa)\). If we consider a unifilar machine \(\alpha\) equipped with a morphism \(\alpha\to B(\kappa)\), we can also consider this an epistemic model of \(\kappa\), in the following sense. In applications, one doesn't necessarily want to keep track of the Bayesian distribution directly. Instead, one uses a parametrised family of distributions, chosen such that the update step only needs to update the parameters to produce the posterior distribution. For this to work, the Bayesian posterior must always be in the same family of distributions as the prior. An example of enormous practical importance is the Kalman filter. In the setting of the Kalman filter the prior is a multivariate Gaussian and the posterior is always also a multivariate Gaussian. 
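As a minimal illustration of this kind of parametrised filtering, here is a scalar Kalman step (our sketch, not code from the paper): the filter's state is the pair \((m,p)\) parametrising a Gaussian belief, and the update step touches only these parameters, never an explicit distribution.

```python
def kalman_step(m, p, y, a=1.0, c=1.0, q=0.0, r=1.0):
    """One predict-then-update step of a scalar Kalman filter.

    (m, p) parametrise the Gaussian prior over the hidden state; y is the
    new observation. Dynamics: x' = a*x + N(0, q); observation:
    y = c*x' + N(0, r). Only the parameters are updated.
    """
    # Predict: push the Gaussian through the linear dynamics.
    m_pred = a * m
    p_pred = a * a * p + q
    # Update: condition the predicted Gaussian on the observation.
    gain = p_pred * c / (c * c * p_pred + r)
    m_post = m_pred + gain * (y - c * m_pred)
    p_post = (1.0 - gain * c) * p_pred
    return m_post, p_post

m, p = kalman_step(0.0, 1.0, y=2.0)  # posterior parameters: (1.0, 0.5)
```

The posterior is again a Gaussian, so repeating the step keeps the belief inside the same parametrised family, which is exactly the property discussed here.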
The filter's state space consists of the mean and covariance matrix that parametrise such a Gaussian, and the update step simply maps them to their new values. In our framework this kind of structure arises simply from considering a morphism \(\alpha\to B(\kappa)\). The state space of \(B(\kappa)\) consists of probability distributions, and the state space of \(\alpha\) consists of values that parametrise them in a consistent way. This idea is closely related to the notion of conjugate prior, which was previously studied in a category-theoretic context in [7]. The definition in that paper is essentially our eq. (20), which arises from our framework in a very natural way. The connection between Bayesian filtering and Bayesian inference is explored in section 2.1, where we also briefly touch on connections to recent work on de Finetti's theorem within a category-theoretic context [9, 5]. A secondary contribution of our paper is an exploration of the possible generalisations of Moore machines to the stochastic case. Our result involves two different generalisations of Moore machine, which we term _comb machine_ (definition 2.2) and _unifilar machine_ (definition 2.4). Unifilar machines in particular are of independent interest. They are based on an idea from the literature on \(\epsilon\)-machines [1]. They are defined such that their output map is stochastic but their update map is (almost surely) deterministic given their input and their output. This means that their states map more directly to 'behaviours' than the states of a more general stochastic machine. Indeed, we show in section 3 that in **Dist** the category of unifilar machines has a terminal object (for every choice of input and output space), which consists of the collection of 'controlled stochastic processes,' also known as 'stochastic streams' [3].
In general, if a category of unifilar machines has a terminal object then it can be seen as an "object of behaviours" of stochastic systems, including hidden Markov models as well as other unifilar machines. Our Bayesian filtering machines have a strong resemblance to Bayesian lenses [11], but they seem to lack the backwards component and don't appear to compose like lenses. Understanding the relationship is an open problem. Bayesian filtering and its connection to conjugate priors was previously considered in a Markov category context by the author and colleagues in [13]. The novel contribution of the present paper is to reveal more of the abstract categorical structure underlying the idea, including the definitions of comb machine, unifilar machine and the adjoint structure involving the functor \(B\), as well as the discussion of terminal unifilar machines in section 3. ### Background on Representable Markov categories We will use the machinery of representable Markov categories and in particular, strongly representable Markov categories, both defined in [6]. For general background on Markov categories, including the notion of conditional, which we will use extensively, we refer to [4]. Recall that given an object \(X\) in a Markov category \(\mathcal{C}\), a _distribution object_ is an object \(PX\) equipped with a map \(\operatorname{\mathrm{samp}}_{X}\colon PX\to X\) such that for every morphism \(f\colon A\to X\) there is a unique deterministic morphism \(f^{\mathfrak{o}}\colon A\to PX\) such that \(f^{\mathfrak{o}}\mathbin{\sharp}\operatorname{\mathrm{samp}}_{X}=f\). A Markov category is then called _representable_ if every object has a distribution object. Representable Markov categories often arise as the Kleisli categories of monads obeying conditions spelt out in [6]. 
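For intuition, in **Dist** these structures are easy to spell out concretely: a finitely supported distribution is a dictionary of probabilities, the unit \(\delta\) is the point mass, and Kleisli composition \(\mathbin{\sharp}\) marginalises over the intermediate variable. A minimal Python sketch (ours, with hypothetical names, not part of the paper):

```python
def delta(x):
    """Unit of the distribution monad: the point distribution on x."""
    return {x: 1.0}

def kleisli(f, g):
    """Kleisli composite f # g of stochastic maps f: A -> PB, g: B -> PC,
    marginalising over the intermediate variable in B."""
    def composite(a):
        out = {}
        for b, pb in f(a).items():
            for c, pc in g(b).items():
                out[c] = out.get(c, 0.0) + pb * pc
        return out
    return composite

def coin(_):
    return {"H": 0.7, "T": 0.3}      # a biased coin, as a map 1 -> P{H, T}

def flip(s):
    other = "T" if s == "H" else "H"
    return {s: 0.9, other: 0.1}      # flip the outcome with probability 0.1

d = kleisli(coin, flip)(None)        # distribution after the noisy flip
```

Note that \(\delta\) is a two-sided unit for this composition, mirroring the monad laws stated above.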
The two examples we will use are **BorelStoch** (the Kleisli category of the Giry monad, restricted to standard Borel spaces) and the Kleisli category of the (real-valued) distribution monad, which we will call **Dist**. **BorelStoch** is shown to be strongly representable in example 6.12 of [6], and for completeness we include a proof that **Dist** is strongly representable in appendix A.1. We recall also the following results about representable Markov categories: When every object has a distribution object, \(P\) extends to a functor \(P\colon\mathcal{C}\to\mathcal{C}_{\mathrm{det}}\). Restricting the domain of this functor we obtain a functor \(P_{\mathrm{det}}\colon\mathcal{C}_{\mathrm{det}}\to\mathcal{C}_{\mathrm{det}}\), which will also be written as \(P\), except when we wish to explicitly disambiguate. The functor \(P_{\mathrm{det}}\) can be made into a monad on \(\mathcal{C}_{\mathrm{det}}\), and the Kleisli category of this monad is \(\mathcal{C}\). The unit has components \(\delta_{X}=\operatorname{id}_{X}^{\mathfrak{o}}\colon X\to PX\), and the multiplication has components \(\mu_{X}=P(\operatorname{\mathrm{samp}}_{X})\colon PPX\to PX\). This monad arises from an adjunction: the functor \(P\) is right adjoint to the inclusion functor \(\mathcal{C}_{\mathrm{det}}\hookrightarrow\mathcal{C}\). Its unit has components \(\delta_{X}\) and its counit has components given by the sampling map \(\operatorname{\mathrm{samp}}_{X}\colon PX\to X\). In string diagrams we will draw \(\operatorname{\mathrm{samp}}_{X}\) as a white dot. Additionally, if a morphism is known to be deterministic we indicate this with a black bar at its right-hand edge, so we can write (1) We will need the definition of a strongly representable Markov category. For this we first recall another definition from [6].
**Definition 1.1** (deterministic given \(X\); [6], definition 6.4).: Let \(f\colon A\to X\otimes Y\) be a morphism in a Markov category \(\mathcal{C}\) such that a conditional \(c\colon X\otimes A\to Y\) exists. The morphism \(f\) is said to be _deterministic given \(X\)_ if the conditional is almost surely deterministic, in the sense that (2) If \(f\) is known to be deterministic given \(X\) we write it as (3) In [6] it is shown that if eq. (2) holds for one conditional of \(f\) then it holds for all conditionals, so that this definition is independent of the choice of conditional \(c\). The precise sense in which eq. (2) describes an almost-surely condition is spelt out in detail in [6], but we can note that in the case of **Dist** it means that \(c\) behaves deterministically as long as its \(X\) input is in the support of the marginal \(\sum_{y}f(x,y\mid a)\). For both **BorelStoch** and **Dist**, if the conditional \(c\) is almost-surely deterministic then it is almost surely equal to a deterministic morphism ([6], example 6.12), so for most purposes it will not hurt to think of such conditionals as genuinely deterministic, though only almost-surely defined. **Definition 1.2** (Strongly representable Markov category; [6], definition 6.7).: A strongly representable Markov category is a representable Markov category in which for every morphism \(f\colon A\to X\otimes Y\) there is a unique morphism \(f^{\circ}\colon A\to X\otimes PY\) such that (i) \(f^{\circ}\) is deterministic given \(X\), and (ii) (4) (This definition is less efficient than the one given in [6], which doesn't include an assumption that the category is representable, since this can be proven from weaker assumptions.)
A strongly representable Markov category necessarily has conditionals, because \(f^{\circ}\) has a conditional by the definition of deterministic given \(X\), and if \(c\colon X\times A\to PY\) is such a conditional then \(c\,{\raise 1.0pt\hbox{$\,\lower 1.0pt\hbox{$\,\circ\,$}}}\,{\rm samp}_{Y}\) is a conditional of \(f\). This concludes our review of needed concepts from [6]. ## 2 Machines and Bayesian Filtering The following definitions are all relative to a Markov category \(\mathcal{C}\) and a choice of objects called the input space \(I\) and output space \(O\), which we will assume to be fixed throughout this section. We assume \(\mathcal{C}\) has conditionals and later we will also assume it is strongly representable. For most of the following we will work with what we call "comb machines," which are a generalisation of Moore machines. However, many of the results also carry over to the case of Mealy machines, which we define first because they are simpler. The following definition is standard: **Definition 2.1** (Stochastic Mealy machine).: A _stochastic Mealy machine_ is an object \(S\) of \(\mathcal{C}\) called the _state space_, together with a morphism \(\alpha\colon I\otimes S\to O\otimes S\) in \(\mathcal{C}\). A morphism of Mealy machines \((S,\alpha)\to(T,\beta)\) is a morphism \(f\colon S\to T\) in \(\mathcal{C}\) such that \(I\otimes S\xrightarrow{\alpha}O\otimes S\xrightarrow{\operatorname{id}_{O} \otimes f}O\otimes T=I\otimes S\xrightarrow{\operatorname{id}_{I}\otimes f}I \otimes T\xrightarrow{\beta}O\otimes T\). The category of Mealy machines will be written \(\mathbf{Mealy}(I,O)\). The idea is that a Mealy machine starts in some state in \(S\), receives an input in \(I\), and then produces an output in \(O\) while simultaneously transitioning to a new state. The output may depend on the input and may be correlated with the new state. We don't require morphisms of Mealy machines to be deterministic. 
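In **Dist** a stochastic Mealy machine can be represented directly as a table of transition distributions, and running it pushes a distribution over states through repeated Kleisli composition. A small illustrative sketch (the two-state machine below is hypothetical):

```python
def run_mealy(alpha, state_dist, inputs):
    """Push a distribution over states through a stochastic Mealy machine.

    alpha[(i, s)] is the distribution {(o, s2): prob} over output and next
    state; state_dist is a distribution over initial states. Returns the
    joint distribution over (output sequence, final state).
    """
    dist = {((), s): p for s, p in state_dist.items()}
    for i in inputs:
        new = {}
        for (outs, s), p in dist.items():
            for (o, s2), q in alpha[(i, s)].items():
                key = (outs + (o,), s2)
                new[key] = new.get(key, 0.0) + p * q
        dist = new
    return dist

# A hypothetical machine with states {0, 1}, input "go", outputs {"a", "b"}:
alpha = {
    ("go", 0): {("a", 0): 0.5, ("b", 1): 0.5},
    ("go", 1): {("b", 1): 1.0},
}
final = run_mealy(alpha, {0: 1.0}, ["go", "go"])
```

Marginalising `final` over the final state gives the machine's distribution over output sequences, which is the "externally observable behaviour" discussed later.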
We now briefly discuss Moore machines and their generalisation to the stochastic context. In a Cartesian category, a Moore machine consists of a state space \(S\) and two maps: a _readout map_ \(S\to O\) and an _update map_ \(I\times S\to S\). An obvious way to generalise this to the stochastic case is to let both maps be stochastic, i.e. to take them to be arbitrary morphisms \(S\to O\) and \(I\otimes S\to S\) in \(\mathcal{C}\). However, machines with this definition tend not to be very well behaved, and in practice other definitions tend to be used. One way to make stochastic Moore machines well behaved is to make the readout map deterministic. Machines of this kind can be expressed in terms of generalised lenses [12]; this is the approach taken in [10], for example. Intuitively, requiring a deterministic readout map allows the update map to "know" what the machine's last output was, since this can be inferred from the current value of \(S\). However, for the present work it is important to allow the readout map to be stochastic. For this reason we take a different approach, starting with the following definition.

**Definition 2.2** (Comb machine).: A _comb machine_ is an object \(S\) of \(\mathcal{C}\) (the state space), together with a morphism \(\alpha\colon I\otimes S\to O\otimes S\) in \(\mathcal{C}\) and a morphism \(\alpha^{\bullet}\colon S\to O\) such that

\[\alpha\mathbin{\sharp}(\operatorname{id}_{O}\otimes\operatorname{del}_{S})\;=\;\operatorname{del}_{I}\otimes\alpha^{\bullet}\colon I\otimes S\to O. \tag{5}\]

Equation (5) expresses the idea that the output of a comb machine cannot directly depend on the input. Consequently a comb machine \(\alpha\) could be seen as a Mealy machine that obeys an extra condition, namely the existence of \(\alpha^{\bullet}\) such that eq. (5) holds. However, we will often think of them differently. If \(\mathcal{C}\) has conditionals then a comb machine \(\alpha\) can always be factored as (6) where \(u\) is a conditional of \(\alpha\).
We refer to \(\alpha^{\bullet}\) as the _readout map_ and \(u\) as an _update map_ of the comb machine \((S,\alpha)\), analogously to the maps that define a Moore machine. Update maps are almost-surely unique and almost-surely deterministic. The readout map has the type we would expect for a Moore machine, \(S\to O\), but the update map has type \(I\otimes O\otimes S\to S\), and moreover it is only defined up to almost-sure equality. It might seem odd that the update map takes the output as an input. An intuition is that this allows the update map to "know" what the output was. Thus the next state can be correlated with the output given the previous state and the input, even though the output alone is independent of the input. It might also seem odd that the update map is only defined up to almost sure equality. An intuition for this is that it shouldn't matter how the machine behaves on measure-zero subsets of the output. In the case of **Dist** this means that if a given output \(o\in O\) cannot occur at all when the machine is in some state \(s\in S\) then we don't care about the result of the update map when it is given \(o\) and \(s\) as inputs. Consequently it makes sense to define comb machines in a way that makes two machines equal if their update maps only differ in such cases. We think of comb machines as giving their output first and then receiving their input, in contrast to Mealy machines, which first receive an input and then give an output.1 The picture to have in mind for a comb machine is this: Footnote 1: This raises the question of whether we can interpose some other morphism in between \(\alpha^{\bullet}\) and \(u\), so that the machine receives an input that can depend on its output, and perhaps also on the outputs of other machines. Answering this in the most general case is rather involved and we will not address it in this paper. 
However, in the case where \(\mathcal{C}\) is **FinStoch**, [8] provides a way to compose 2-combs, of which comb machines are a special case. (7)

We now introduce the concept of a _unifilar_ machine. A unifilar machine has a stochastic readout map but a deterministic update map. (Or at least, an almost-surely deterministic one.) The term "unifilar" comes from the literature on computational mechanics and \(\epsilon\)-machines. In particular it appears in a machine-like context in [1], proposition 5. The formal context is different since we don't make a stationarity assumption, but our definition achieves the same idea. We define unifilar machines in Mealy machine and comb machine flavours: **Definition 2.3** (unifilar Mealy machine).: A _unifilar Mealy machine_ is a Mealy machine \((S,\alpha)\) with the condition that \(\alpha\) is deterministic given \(O\). Additionally, we require morphisms of unifilar Mealy machines to be deterministic. The category of unifilar Mealy machines will be written as \(\textbf{UnifilarMealy}(I,O)\). **Definition 2.4** (unifilar comb machine).: A _unifilar comb machine_ is a comb machine \((S,\alpha,\alpha^{\bullet})\) with the condition that \(\alpha\) is deterministic given \(O\). As with unifilar Mealy machines, we require morphisms of unifilar comb machines to be deterministic. The category of unifilar comb machines will be written as \(\textbf{UnifilarComb}(I,O)\). When we say "unifilar machine" without qualification we mean a unifilar comb machine. The idea of a unifilar machine (of either type) is that all of the randomness comes from the choice of output. If \(\mathcal{C}\) has conditionals then a unifilar comb machine factors according to eq. (6), with the additional feature that the conditional \(u\) is almost-surely deterministic, in the sense of eq. (2).
We interpret this as follows: first the output \(O\) is chosen stochastically (via \(\alpha^{\bullet}\colon S\to O\)), and then the state updates almost-surely deterministically as a function of the output and the input. As with comb machines in general, the reason for the almost-surely condition is that we don't care how the machine behaves on measure-zero subsets of the output space. So instead of specifying a deterministic function \(I\otimes O\otimes S\to S\) we just require the existence of an almost-surely deterministic conditional. If \(\mathcal{C}\) is Cartesian then Mealy machines and unifilar Mealy machines coincide, as do comb machines and unifilar comb machines, both of which coincide with Moore machines. So both comb machines and unifilar comb machines can claim to be a generalisation of Moore machines to the stochastic case. It is worth saying something about the meaning of morphisms in these categories. The following can be made formal using the machinery we introduce in section 3, but for now we state it informally. We can think of a non-unifilar machine (of either flavour) as providing a stochastic map from infinite sequences of inputs to infinite sequences of outputs, subject to a _causality condition_ that each output can only depend on inputs that were received at earlier points in time. (Recall that for Mealy machines we consider the input to be received before the output, and vice versa for comb machines.) For Mealy machines and comb machines, a morphism \((S,\alpha)\rightarrow(T,\beta)\) is a stochastic map \(S\to T\), which we think of (informally) as being such that the following give the same distribution over output sequences: (i) feed a given infinite sequence of inputs to \(\alpha\), or (ii) sample from the stochastic map to get a random state in \(T\), then feed the same infinite sequence of inputs to \(\beta\). 
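Unifilarity has a concrete computational payoff in **Dist**: since the update is deterministic given the output, each output sequence determines a unique state path, so its probability is a plain product of readout probabilities along that path. A toy sketch (ours; the machine and names are hypothetical, and the input is ignored for simplicity):

```python
def seq_prob(readout, update, s0, inputs, outputs):
    """Probability that a unifilar comb machine emits `outputs` on `inputs`.

    readout[s] is a distribution over outputs; update(i, o, s) is the
    deterministic next state. Unifilarity means the output sequence fixes
    the whole state path, so the probability is a simple product.
    """
    p, s = 1.0, s0
    for i, o in zip(inputs, outputs):
        p *= readout[s].get(o, 0.0)  # emit o from state s ...
        s = update(i, o, s)          # ... then update deterministically
    return p

# Hypothetical machine whose state remembers the last output:
readout = {0: {"a": 0.9, "b": 0.1}, 1: {"a": 0.2, "b": 0.8}}
update = lambda i, o, s: 0 if o == "a" else 1
```

Summing `seq_prob` over all output sequences of a fixed length gives 1, so a unifilar machine assigns a genuine probability distribution to output streams, which is the viewpoint behind the terminal unifilar machine of section 3.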
The existence of a morphism thus witnesses that, from the point of view of an external observer who cannot observe the machine's state, the machine \(\beta\) is capable of exhibiting all of the externally observable behaviours that \(\alpha\) can exhibit. Using a stochastic map makes sense because the states are unobserved and change randomly; we consider distributions over states to exhibit behaviours, as well as states themselves. The interpretation of morphisms between unifilar machines is similar, but we require the morphisms to be deterministic, echoing the almost-sure determinism condition on their update maps. A morphism of unifilar machines witnesses not only that their externally observable behaviour is the same, but also that there is a mapping between their internal states that preserves this behaviour. This makes sense conceptually because we will generally consider the state of a unifilar machine to be observable. In fact we will often consider a unifilar machine to _be_ an observer, with its state space representing the observer's possible states of knowledge. This will become clearer after proposition 2.5. Our first result concerns the existence of an adjunction between the categories \(\mathbf{UnifilarComb}(I,O)\) and \(\mathbf{CombMachine}(I,O)\). An analogous result holds for \(\mathbf{UnifilarMealy}(I,O)\) and \(\mathbf{Mealy}(I,O)\), which we will state at the end. Its proof is largely the same. We first note that there is a forgetful functor \(F\colon\mathbf{UnifilarComb}(I,O)\to\mathbf{CombMachine}(I,O)\) that embeds unifilar comb machines into comb machines. On objects it forgets that the machine obeys the deterministic-given-\(O\) condition, and it also forgets that morphisms are deterministic. We next construct a functor \[B\colon\mathbf{CombMachine}(I,O)\to\mathbf{UnifilarComb}(I,O)\] using the defining property of a strongly representable Markov category. 
First note that for a comb machine \((S,\alpha)\) we can construct the morphism \(I\otimes PS\xrightarrow{\mathrm{id}_{I}\otimes\mathrm{samp}_{S}}I\otimes S\xrightarrow{\alpha}O\otimes S\). This is of the form \(A\to X\otimes Y\), where \(A=I\otimes PS\). Consequently there is a unique morphism \(B\alpha=((\mathrm{id}_{I}\otimes\mathrm{samp}_{S})\mathbin{\sharp}\alpha)^{\circ}\colon I\otimes PS\to O\otimes PS\) such that \(B\alpha\) is deterministic given \(O\) and (8) We have that \((PS,B\alpha)\) is a unifilar comb machine. It obeys the required comb property because (9) which follows straightforwardly after substituting \(\mathrm{del}_{PS}=\mathrm{samp}_{S}\mathbin{\sharp}\mathrm{del}_{S}\). This defines the action of \(B\) on objects. On morphisms we define it to map a morphism of comb machines with underlying map \(f\colon S\to T\) to a morphism of unifilar machines with underlying deterministic map \(Pf\colon PS\to PT\).

**Proposition 2.5**.: \(B\) _is a functor._

Proof.: Such a mapping respects composition and identities by functoriality of \(P\), but to prove \(B\) is a functor we have to show that \(Pf\) is indeed a morphism of unifilar machines. This also uses the defining property of a strongly representable Markov category; we spell it out in appendix A.2.

We think of the functor \(B\) as taking a dynamical model (in the form of a comb machine) and converting it into an epistemic model in the form of a unifilar machine. To see this, consider a comb machine \((H,\kappa)\), where we think of \(H\) as a set of hidden states and \(\kappa\) as a dynamical process that emits outputs and stochastically changes the hidden state as a function of the input. Then \(B((H,\kappa))\) is a unifilar machine, which can be written (using eq. (9)) as (10) where the conditional \(u\) is almost surely uniquely defined and almost surely deterministic.
The state space of this unifilar machine consists of probability measures over \(H\). We will see that we can think of these as "beliefs" about the hidden state of \(\kappa\), held by an idealised Bayesian reasoner, whose prior at any given time is an element of \(PH\). This Bayesian reasoner does not interact with the machine \(\kappa\), it only observes the inputs that \(\kappa\) receives and the outputs it emits in response, updating its prior to a posterior at each time step. The output map \(PH\xrightarrow{\text{\scriptsize samp}_{H}}H\xrightarrow{\kappa^{\bullet}}O=PH\xrightarrow{P\kappa^{\bullet}}PO\xrightarrow{\text{\scriptsize samp}_{O}}O\) "simulates" the output of \(\kappa\). The map \(P\kappa^{\bullet}\) maps the reasoner's prior beliefs about the hidden state to its beliefs about the next output it will observe. The update map \(u\) is a bit more interesting. It takes as input a probability measure over the hidden states along with an input and an output, and it returns a new probability measure over hidden states. This map is performing a kind of Bayesian updating, but some care is needed over its interpretation, because not only is the reasoner obtaining new information about the state (via \(O\)) but the state itself is also changing. The update map \(u\) combines Bayesian updating with "simulating" the stochastic change in \(H\). This process is known as Bayesian filtering. As we might expect, the posterior is only defined up to almost-sure equivalence. In the case where \(O\) is finite this is because for a given output \(o\in O\) and a given belief \(b\in PH\) we might have \((b\,\sharp\,P\kappa^{\bullet})(o)=0\), i.e. the output \(o\) is "subjectively impossible" according to the agent's current epistemic state. In this case calculating the Bayesian posterior in the usual way would lead to a division by zero, so there is no consistent value that the posterior distribution could take.
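In **Dist** the update map of \(B((H,\kappa))\) can be written out explicitly: combine the prior with the kernel, restrict to the observed output, and renormalise. The zero-mass branch is the "subjectively impossible" case described above, where no consistent posterior exists. A sketch (ours; the toy kernel keeps the hidden state static purely for readability, but the same code handles moving states):

```python
def filter_step(kappa, prior, i, o):
    """One step of the universal epistemic model B(kappa), in Dist.

    kappa[(i, h)] is the comb machine's distribution {(o, h2): prob} over
    output and next hidden state; prior is a distribution over hidden
    states. Returns the posterior over the new hidden state given (i, o).
    """
    joint = {}
    for h, ph in prior.items():
        for (o2, h2), q in kappa[(i, h)].items():
            if o2 == o:
                joint[h2] = joint.get(h2, 0.0) + ph * q
    z = sum(joint.values())
    if z == 0.0:
        # the observation was subjectively impossible: no posterior exists
        raise ValueError("observation has prior probability zero")
    return {h2: w / z for h2, w in joint.items()}

# Toy kernel: two hidden states, a single trivial input, noisy observation.
kappa = {
    (None, 0): {("x", 0): 0.9, ("y", 0): 0.1},
    (None, 1): {("x", 1): 0.2, ("y", 1): 0.8},
}
posterior = filter_step(kappa, {0: 0.5, 1: 0.5}, None, "x")
```

Iterating `filter_step` over an input/output stream traces out the state path of the unifilar machine \(B((H,\kappa))\), i.e. the reasoner's evolving belief.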
Since the update map \(u\) is only defined up to almost sure equality its output only matters in those cases where this doesn't happen. We can thus regard the functor \(B\) as taking a dynamical model as input and turning it into an epistemic model. We remark that a similar operation is performed in the process of solving a partially observable Markov decision process (POMDP). A POMDP consists of some kind of machine -- for simplicity let us say a comb machine \((H,\kappa)\) -- together with a reward function. This machine is a dynamical model of some environment, and the goal is to find a "policy" that maximises the expected amount of reward that is accumulated over time, usually with an exponential discounting factor. (We will not consider reward functions in the present work.) A common solution technique involves converting the POMDP into a Markov decision process (MDP), which is a simpler class of problem. In an MDP the state space is assumed to be fully observed, so that there is no need to consider outputs. In an MDP the machine only takes inputs, and changes state stochastically as a function of its input, so it can be seen as an object of \(\mathbf{CombMachine}(I,1)\). Again there is an associated reward function, which we will not consider in detail. To turn a POMDP into an MDP one forms the so-called "belief MDP", whose state space is given by probability distributions over \(H\). In our framework it is given by (11) Note that this is a stochastic map in general. For an approach to POMDPs that is closely related to the present work, see [2]. The following is our main technical result. 
**Theorem 2.6**.: _The functor \(B\) is right adjoint to the forgetful functor \(F\)._ Proof.: We show that if \(f\colon S\to H\) is the map in \(\mathcal{C}\) underlying a morphism \(F((S,\alpha))\to(H,\kappa)\) in \(\mathbf{CombMachine}(I,O)\) then \(f^{\sharp}\colon S\to PH\) is the deterministic map underlying a morphism \((S,\alpha)\to B((H,\kappa))\) in \(\mathbf{UnifilarComb}(I,O)\), and vice versa. This will form the natural isomorphism of hom-sets needed for an adjunction. Suppose \(f\colon S\to H\) is the map underlying a morphism \(F((S,\alpha))\to(H,\kappa)\) in \(\mathbf{CombMachine}(I,O)\). Then we have the following (where, as always, all diagrams are in \(\mathcal{C}\)): (12) Both sides of the last equation consist of a morphism \(I\otimes S\to O\otimes PH\) that is deterministic given \(O\), composed with \(\operatorname{id}_{I}\otimes\operatorname{samp}_{H}\). Using the defining property of a strongly representable Markov category we can conclude that (13) so that \(f^{\sharp}\) underlies a morphism \((S,\alpha)\to B((H,\kappa))\) in \(\mathbf{UnifilarComb}(I,O)\). Each of these steps can be reversed, so this gives a bijection
\[\mathbf{CombMachine}(I,O)(F(-),=)\cong\mathbf{UnifilarComb}(I,O)(-,B(=)).\]
Naturality follows from functoriality of \(F\) and the naturality of the sampling map.

This adjunction can be seen as an extension of the one between \(P\colon\mathcal{C}_{\mathrm{det}}\to\mathcal{C}\) and \(\mathcal{C}_{\mathrm{det}}\hookrightarrow\mathcal{C}\) in a representable Markov category, and it shares the same unit and counit. The unit has components \(\delta_{X}\colon X\to PX\) and the counit has components \(\mathrm{samp}_{X}\colon PX\to X\), where \(PX=BX\) on objects. The existence of this adjunction has some interesting consequences. We have already established that the unifilar machine \(B((H,\kappa))\) can be seen as an epistemic model of the comb machine \((H,\kappa)\), seen as a dynamical model.
But now consider a morphism \((S,\alpha)\to B((H,\kappa))\) in \(\mathbf{UnifilarComb}(I,O)\) from some other unifilar machine into \(B((H,\kappa))\). We argue that when equipped with such a morphism, \((S,\alpha)\) _also_ deserves to be seen as modelling \((H,\kappa)\). To see this we consider its adjoint map \(F((S,\alpha))\to(H,\kappa)\), which is given by an underlying map \(\psi\colon S\to H\) in \(\mathcal{C}\) such that (14) or (15) where \(u\) is an update map for \(\alpha\). By marginalising both sides (i.e. post-composing with \(\mathrm{id}_{O}\otimes\mathrm{del}_{H}\)) we have \(\alpha^{\bullet}=\psi\,\sharp\,\kappa^{\bullet}\), so this equation becomes (16) where \(u\) is almost-surely deterministic. This is a Bayesian filtering version of Jacobs' [7] definition of conjugate priors. It is not quite the same as the one in [13] because in that paper \(u\) is not assumed to be almost-surely deterministic, so a stronger equation is needed. However, it is conceptually the same. The morphism \(\psi\) can be regarded as what the author and colleagues called an _interpretation map_ in [13].

We think of the update map \(u\) as a physical machine whose job is to keep track of an epistemic model of \(\kappa\). At each time step it receives both the input that was given to \(\kappa\) and the output that \(\kappa\) emitted in response. The machine's physical state (\(S\)) then updates in an (almost surely) deterministic way. Equation (16) expresses the idea that when the machine receives a new piece of information in the form of an \((i,o)\) pair it should update its beliefs in a consistent way. The left-hand side can be seen as the agent's current beliefs about the _next_ output and the _next_ value of the hidden state, as a function of the next input. The equation says that after receiving an input and output pair, its new beliefs about the _current_ hidden state should equal a conditional of its prior beliefs, conditioned on \(i\) and \(o\).
The adjoint map \(\psi^{\sharp}\colon S\to PH\) can then be seen as mapping the unifilar machine's physical state to a probability measure over \(H\) that we think of as "the machine's beliefs about \(H\)," i.e. its current Bayesian prior. Since \(\psi^{\sharp}\) underlies a morphism \((S,\alpha)\to B((H,\kappa))\) it means that \(\alpha\)'s updates have to be able to 'simulate' the idealised Bayesian filtering that \(B((H,\kappa))\) performs.

An important practical example of this is the Kalman filter, in which the state space \(S\) consists of a vector of means and a covariance matrix, which parametrise a Gaussian distribution. The map \(\psi^{\sharp}\) maps these parameters to the distribution they parametrise. The machine \((H,\kappa)\) has to be chosen carefully in order for a morphism \(\alpha\to B((H,\kappa))\) to exist; its existence means that when the prior is a Gaussian the posterior will also be a Gaussian. The update map of \(\alpha\) then only has to map the parameters of the prior to a new set of means and variances parametrising the posterior. Although we don't spell out the details of this example we note a similarity to the category **Gauss** defined in [4], section 6.

We now state the corresponding result for Mealy machines: as for comb machines there are functors \(\textbf{Mealy}(I,O)\stackrel{B}{\underset{F}{\rightleftarrows}}\textbf{UnifilarMealy}(I,O)\) such that \(F\) is left adjoint to \(B\). The definitions and proofs are the same as for comb machines and unifilar comb machines, except that we don't need to care about the comb condition. These functors can be thought of in the same terms, with \(B\) mapping a dynamical model to a corresponding epistemic model. The Mealy machine version of eq. (16) is (17) which expresses the same kind of consistency relation.
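A one-dimensional instance of the Kalman filter example can be written down in a few lines. This is a sketch with hypothetical model parameters, not the paper's construction: the physical state \(S\) is a pair (mean, variance) parametrising a Gaussian belief \(\psi^{\sharp}(S)\) over the hidden state \(H=\mathbb{R}\), and the update map on \(S\) is deterministic.

```python
# Model (hypothetical parameters): first observe y = x + N(0, R_NOISE), then
# the hidden state evolves as x' = A*x + u + N(0, Q_NOISE) for a control input u.
A, Q_NOISE, R_NOISE = 0.9, 0.05, 0.2

def kalman_update(state, u, y):
    """Deterministic update map on (mean, var): condition on y, then predict."""
    m, v = state
    k = v / (v + R_NOISE)              # Kalman gain
    m_post = m + k * (y - m)           # conditioned mean
    v_post = (1 - k) * v               # conditioned variance
    return (A * m_post + u,            # predicted mean after the dynamics
            A * A * v_post + Q_NOISE)  # predicted variance after the dynamics

state = (0.0, 1.0)                     # a broad Gaussian prior
for u, y in [(0.1, 0.8), (0.0, 0.7), (-0.2, 0.4)]:
    state = kalman_update(state, u, y)
```

Because Gaussians are closed under this conditioning-then-prediction step, the update never has to leave the parameter space \(S\); this closure is exactly what the existence of the morphism \(\alpha\to B((H,\kappa))\) encodes.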
### Bayesian Inference and Conjugate Priors

Up to now we have considered a version of Bayesian filtering in which the systems being modelled have the form of a comb machine. In this section we consider an important special case of this, in which the system being modelled simply emits independent and identically distributed outputs. This corresponds to the standard setting of Bayesian inference, where we receive independent samples from a known distribution with an unknown (but fixed) value for its parameters, and wish to use this data to make inferences about the parameters.

In this section we primarily consider machines whose input space is the terminal object in \(\mathcal{C}\). In this case the distinction between comb machines and Mealy machines isn't relevant, and we refer to such machines as generators, defining \(\textbf{Generator}(X)=\textbf{Mealy}(1,X)\cong\textbf{CombMachine}(1,X)\) and \(\textbf{UnifilarGenerator}(X)=\textbf{UnifilarMealy}(1,X)\cong\textbf{UnifilarComb}(1,X)\).

To model Bayesian inference in our setup we consider objects of \(\textbf{Generator}(X)\) represented by morphisms in \(\mathcal{C}\) of the following special form: (18) In this setting we call \(X\) the _sample space_ and \(\Theta\) the _parameter space_, and we think of \(f\) as a statistical model, that is, a family of distributions over \(X\) parametrised by \(\Theta\). Applying the functor \(B\) we get (19) where we have called the conditional Bayes\({}_{f}\) because that is what it does: it takes in a prior over the parameters together with some data \(x\in X\), and returns the Bayesian posterior over the parameters, according to the model \(f\). If we consider a map \(\psi^{\sharp}\) into this machine from some other unifilar machine \((S,\alpha)\), we obtain exactly the notion of a conjugate prior. Its adjoint map of comb machines, \(\psi\colon F((S,\alpha))\to f^{\circ}\), obeys (20) which is the equation given in [7] as a definition of conjugate prior.
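The classic instance of this pattern is the Beta-Bernoulli pair; a minimal sketch of it, phrased in the machine language above: the model \(f\) sends \(\theta\in[0,1]\) to a Bernoulli(\(\theta\)) distribution over \(X=\{0,1\}\), the hyperparameter space is \(S=\{(a,b)\}\), \(\psi^{\sharp}\) maps \((a,b)\) to the Beta\((a,b)\) prior, and the update map on \(S\) is deterministic.

```python
from math import isclose

def beta_bernoulli_update(a, b, x):
    """Posterior hyperparameters after observing x in {0, 1}."""
    return (a + x, b + (1 - x))

def beta_mean(a, b):
    """Mean of the Beta(a, b) distribution, i.e. the posterior predictive P(x = 1)."""
    return a / (a + b)

# Running the machine on some data: the updates never leave the parameter
# space S, which is what makes Beta a conjugate prior for Bernoulli.
a, b = 2.0, 2.0
for x in [1, 1, 0, 1]:
    a, b = beta_bernoulli_update(a, b, x)
```

Here the updated hyperparameters \((5, 3)\) parametrise the exact Bayesian posterior, so the square expressing eq. (20) commutes by construction.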
The only minor difference is that here the update map is only almost surely defined instead of being a specified deterministic function. We think of \(\psi\colon S\to\Theta\) as a statistical model and say that it is a conjugate prior for \(f\). Its parameter space \(S\) is referred to as the space of hyperparameters.

It is worth briefly mentioning the further special case in which \(f\) is the sampling map, although we will not make use of it. (21) Here Bayes\({}_{X}\) also performs Bayesian updating, corresponding to inference about an unknown distribution. It takes a distribution over distributions over \(X\), representing a prior, along with a sample from the unknown distribution. Its output is the Bayesian posterior over distributions, conditioned on the sample.

We note that all the generators in this section obey the property of _exchangeability_, specifically the version of that concept defined in [9] in the context of de Finetti's theorem. That is, they are all machines \((S,\alpha)\) such that (22) In our context, one of the results of [9] is that in **Stoch** (and hence also in **BorelStoch**) the category **Generator**({0,1}) has a terminal object, which is part of their category-theoretic treatment of de Finetti's theorem. (A much more general version of de Finetti's theorem is proved for **BorelStoch** in [5], though in a less machine-like context.) In the context of the machines in eq. (19) and eq. (21), exchangeability amounts to the idea that a Bayesian reasoner should reach the same posterior from the same data, regardless of the order in which the data are presented. (Except that here this is subject to the usual almost-surely condition.) There is much more that can be said about exchangeability and its relationship to Bayesian inference within the framework of unifilar machines, but we will leave the topic here and return to the more general case of non-exchangeable machines in the next section.
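For the Beta-Bernoulli generator, the order-independence of the posterior can be checked mechanically, since the update depends only on the counts in the data. This is a sketch of the eq. (22)-style property specialised to that one machine, not a proof of exchangeability in general:

```python
from itertools import permutations

def posterior(a, b, data):
    """Fold the deterministic Beta-Bernoulli update over a data sequence."""
    for x in data:
        a, b = a + x, b + (1 - x)
    return (a, b)

# Every reordering of the same data yields the same posterior hyperparameters.
data = (1, 0, 1, 1, 0)
results = {posterior(1.0, 1.0, p) for p in permutations(data)}
```

The set `results` collapses to a single pair of hyperparameters, reflecting the fact that a Bayesian reasoner reaches the same posterior from the same data in any order.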
## 3 Terminal objects as "objects of behaviours"

If \(\mathbf{UnifilarComb}(I,O)\) has a terminal object then it can be seen as an "object of behaviours," in much the same manner as a final coalgebra. If such a terminal object exists we call the elements of its state space _transducers_ from \(I\) to \(O\). They can be thought of as stochastic maps from infinite sequences of inputs to infinite sequences of outputs, subject to the causality condition described above, that each output can only depend on inputs that were received prior to it. To illustrate this idea we will prove that transducers always exist in \(\mathbf{Dist}\), and that they indeed have the form of stochastic maps between sequences. For this we need the definition of a controlled stochastic process. This is a classical idea, but the category theoretic definition we give is similar to definition 9.12 of [3]. For further generalisations with a slightly different flavour, see section 7 of [4].

**Definition 3.1** (controlled stochastic process).: In a Markov category \(\mathcal{C}\), an _output-first controlled stochastic process_ with input space \(I\) and output space \(O\) is defined as a family of morphisms \(p_{n}\colon I^{n-1}\to O^{n}\) for \(n\geq 1\), subject to the condition that (23) where the labels on the wires represent the indexes of the inputs and outputs. An _input-first controlled stochastic process_ is defined similarly, but with the outputs indexed starting from \(1\) instead of \(0\), so that \(p_{n}\) has type \(I^{n-1}\to O^{n-1}\). When we say "controlled stochastic process" without qualification, we mean an output-first controlled stochastic process. The condition says both that the distributions in the family have to be consistent with each other, and that each output can only depend on inputs that were received prior to it.
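The consistency part of condition (23) can be made concrete for the first two levels of a finite family. The sketch below (with hypothetical numbers) builds \(p_{2}\) from \(p_{1}\) via a conditional, so that marginalising the last output of \(p_{2}\) recovers \(p_{1}\) for every choice of the last input, and then checks exactly that:

```python
import numpy as np

# A finite output-first controlled stochastic process up to n = 2 (sketch).
# p1 is a distribution over O; p2[o0, o1, i1] = P(o0, o1 | i1).
rng = np.random.default_rng(2)
O, I = 3, 2
p1 = rng.dirichlet(np.ones(O))
cond = rng.dirichlet(np.ones(O), size=(O, I))  # cond[o0, i1] = P(o1 | o0, i1)
p2 = np.einsum('a,aib->abi', p1, cond)         # consistent by construction

def consistent(p1, p2):
    """Marginalising the last output of p2 must give p1, for every last input."""
    marg = p2.sum(axis=1)                      # shape (O, I): sum over o1
    return all(np.allclose(marg[:, i], p1) for i in range(marg.shape[1]))
```

Causality is visible here too: the first output \(o_{0}\) has distribution \(p_{1}\) no matter what \(i_{1}\) is, because \(i_{1}\) arrives after \(o_{0}\) is emitted.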
**Proposition 3.2** (**Dist** has all transducers).: _In \(\mathbf{Dist}\), the terminal object \((T,\omega)\) of \(\mathbf{UnifilarComb}(I,O)\) exists and is as follows. \(T\) is the set of all output-first controlled stochastic processes (in \(\mathbf{Dist}\)). \(\omega\) is composed of the following readout and update maps: the readout map sends a controlled stochastic process \(p\) to the distribution \(p_{1}\), which is a distribution over \(O\) with no input. Given \(i\in I\), \(o\in O\) and a controlled stochastic process \(p\), the update map sends \((i,o,p)\) to a delta distribution concentrated on a new controlled stochastic process \(p^{i,o}\) given by_
\[p_{n}^{i,o}(o_{0},\ldots,o_{n-1}\mid i_{1},\ldots,i_{n-1})=\frac{1}{p_{1}(o)}\,p_{n+1}(o,o_{0},\ldots,o_{n-1}\mid i,i_{1},\ldots,i_{n-1}) \tag{24}\]
_if \(p_{1}(o)>0\), and to some arbitrary distribution over controlled stochastic processes otherwise. (As such it is defined up to the appropriate almost-surely condition.)_

Proof.: Given a unifilar machine \((S,\alpha)\) and a state \(s\in S\), one can show inductively that under any morphism of unifilar machines \((S,\alpha)\to(T,\omega)\), the state \(s\) must map to the controlled stochastic process given by (25) We give the details in appendix A.3.

The update map of \((T,\omega)\) performs Bayesian conditioning: it returns a new map from input sequences to output sequences, formed by fixing the first input and conditioning on the first output. Unifilar machines in **Dist** can be expressed as coalgebras of a suitable polynomial functor, and the existence of a terminal object can also be deduced from that. However this is not the case in a general strongly representable Markov category. A similar result holds for unifilar Mealy machines. The main differences are that we use input-first controlled stochastic processes instead of output-first ones, and the comb condition is not needed.
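The update map of Proposition 3.2 can be exercised on a finite-horizon fragment of a transducer. In this sketch (hypothetical numbers) we hold the prefix distributions \(p_{1}\) over \(O\) and \(p_{2}\) over \(O^{2}\) given one input, and conditioning on the first exchange \((i,o)\) yields the new one-step distribution:

```python
import numpy as np

# Finite-horizon fragment: p1 over O, and p2[o0, o1, i1] = P(o0, o1 | i1),
# built from a conditional so the pair is consistent (hypothetical numbers).
rng = np.random.default_rng(3)
O, I = 3, 2
p1 = rng.dirichlet(np.ones(O))
cond = rng.dirichlet(np.ones(O), size=(O, I))
p2 = np.einsum('a,aib->abi', p1, cond)

def update_prefix(p1, p2, i, o):
    """Fix the first input i and condition on the first output o."""
    assert p1[o] > 0, "o is subjectively impossible; the update is arbitrary here"
    return p2[o, :, i] / p1[o]

p1_new = update_prefix(p1, p2, i=0, o=1)
```

By construction `p1_new` equals the conditional distribution of the second output given the first exchange, which is the one-step shadow of the full conditioning that \(\omega\) performs on infinite behaviours.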
One advantage of formulating transducers internally in this way is that we can consider probability distributions over them. In particular, since \((T,\omega)\) is a terminal object it is equipped with an algebra of the monad \(F\,\sharp\,B\) arising from the adjunction in theorem 2.6. This means that we can think of the unique map \(B(F((T,\omega)))\to(T,\omega)\) as taking a probability distribution over transducers and returning a new transducer that represents its 'average' or 'expected' behaviour. This will work in any suitable Markov category, whenever the terminal object of \(\textbf{UnifilarComb}(I,O)\) exists.

On the other hand, **BorelStoch** doesn't have all transducers: the category \(\textbf{UnifilarMealy}(\mathbb{R},\{0,1\})\) does not have a terminal object. To see this, consider those machines with trivial state spaces, whose output depends only on the current input. Specifying the behaviour of such a machine amounts to specifying a measurable map \(\mathbb{R}\to[0,1]\). But there is no measurable space of all such functions, so there is no measurable space that includes the behaviours of all such machines. However, we conjecture that **BorelStoch** has terminal objects for \(\textbf{UnifilarComb}(I,O)\) and \(\textbf{UnifilarMealy}(I,O)\) when \(I\) is a finite set.

## Acknowledgements

The author thanks Martin Biehl for insightful comments on the manuscript, and Martin Biehl, Simon McGregor, Timorl, Matteo Capucci and Toby Smithe for discussions that stimulated the work. This paper was made possible through the support of Grant 62229 from the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation.
2303.00401
Structure Determination in a new Type of Amorphous Molecular Solids with Different Nonlinear Optical Properties: A Comparative Structural Analysis
The microscopic structure of two amorphous materials with extreme nonlinear optical properties has been studied. One of these materials exhibits second harmonic generation, while another material of similar molecular structure emits brilliant white light if being irradiated with a simple IR laser diode. Structural differences were investigated using X-ray scattering and EXAFS combined with molecular RMC. Transmission electron microscopy and scanning precession electron diffraction were used to understand specific structural differences on all length scales, from mesoscopic down to mutual molecular arrangements. Characteristic differences were found at all scales. Close core-core spacing between {SnS} clusters as well as characteristic cluster distortions appear to be characteristic features of the white light emitting material. In the other material, cores are undistorted and core distances are larger. There, the formation of nanocrystalline structures in the amorphous matrix could also be identified as reason for the WLG suppression.
Jonathan Link Vasco, Jens Ruediger Stellhorn, Benjamin Danilo Klee, Benedict Paulus, Juergen Belz, Johannes Haust, Shinya Hosokawa, Shinjiro Hayakawa, Kerstin Volz, Iran Rojas León, Jan Christmann, Stefanie Dehnen, Wolf-Christian Pilgrim
2023-03-01T10:44:28Z
http://arxiv.org/abs/2303.00401v1
Structure Determination in a new Type of Amorphous Molecular Solids with Different Nonlinear Optical Properties: A Comparative Structural Analysis

###### Abstract

The microscopic structure of two amorphous materials with extreme nonlinear optical properties has been studied. One of these materials exhibits second harmonic generation, while another material of similar molecular structure emits brilliant white light if being irradiated with a simple IR laser diode. Structural differences were investigated using X-ray scattering and EXAFS combined with molecular RMC. Transmission electron microscopy and scanning precession electron diffraction were used to understand specific structural differences on all length scales, from mesoscopic down to mutual molecular arrangements. Characteristic differences were found at all scales. Close core-core spacing between {SnS} clusters as well as characteristic cluster distortions appear to be characteristic features of the white light emitting material. In the other material, cores are undistorted and core distances are larger. There, the formation of nanocrystalline structures in the amorphous matrix could also be identified as reason for the WLG suppression.

## 1 Introduction

The search for improved materials to generate light has been and still is an active research field. About 25 years ago, these efforts culminated in the development of the light emitting diode (LED), which has meanwhile become omnipresent in our daily lives[1]. Typically, LEDs emit a strong line in the dark blue, near-UV region resulting from a direct band gap transition. It is then spectroscopically converted to longer wavelengths by dyes and phosphors which cover just the visible range of the electromagnetic spectrum between \(\sim\)350 and \(\sim\)800 nm. LEDs are extremely energy efficient and can be tuned to provide any desired color temperature.
Another important property of LEDs is that their radiation pattern is almost perfectly Lambertian, i.e., they emit in all directions with virtually the same intensity, which is of great advantage if bright illumination of rooms is desired or if LEDs are used as pixels in flat panel displays where large viewing angles are preferred. For other applications, however, a rather point-shaped, laser-like radiation characteristic is often wanted. Such light sources also exist and were already developed in the nineteen-seventies as so-called Supercontinuum Emitters (SCEs)[2, 3]. They are based on strongly non-linear optical (NLO) materials such as YAG, sapphire or CaF\({}_{2}\) crystals, optical fibers or other waveguide-based sources. However, to invoke the NLO effects, high electrical field strengths are needed, which are provided by pulsed high-power lasers. These SCEs are therefore heavy and bulky devices with high energy consumption, and their use is basically restricted to purely scientific and medical applications.

A few years ago, a group of inorganic-organic hybrid cluster molecules was identified that already exhibit extreme NLO properties when irradiated with just a simple low-energy-density continuous wave near-infrared (CW-NIR) laser diode [4, 5]. These compounds consist of heteroadamantane-shaped units of general formula [(RT)\({}_{4}\)E\({}_{6}\)], where R is an organic ligand, T is a group-14 element, bound to the organic ligand R, and E represents a chalcogen. The heteroatomic composition combined with the wide variability of the organic ligands provides large synthetic variety, and meanwhile a huge number of different derivatives exist [6, 7]. All compounds precipitate as solids, some of which are crystalline, while others show a completely amorphous morphology.
Comparing the non-linear optical responses of the different materials, one finds that all crystalline representatives respond as second harmonics generators (SHG) upon IR irradiation, while most amorphous materials respond as white light generators (WLG). The latter emit warm white light just covering the visible region of the electromagnetic spectrum. Moreover, the emission characteristics of these materials is highly brilliant, retaining the directionality of the driving laser [4, 6]. The difference between these two groups is visualized in Figures 1 (a) and (b) for the two systems [(PhSi)\({}_{4}\)S\({}_{6}\)] and [(PhSn)\({}_{4}\)S\({}_{6}\)], in which the molecular units differ only by the exchange of Si by Sn, while phenyl groups (Ph) are the organic ligands in both cases. DFT calculations of these cluster molecules have revealed almost identical molecular structures for both [8], which are displayed in the respective figures. However, the {SiS} cluster solidifies in crystalline form, as indicated by the corresponding diffractogram in the second row of the Figure, and emits a second harmonic with wavelength 489 nm (2.53 eV) if being irradiated with a CW-NIR line at 979 nm (1.265 eV). On the other hand, the {SnS} cluster precipitates in a clearly amorphous form, as can be inferred from the characteristic shape of its X-ray structure factor \(S(Q)\) in Figure 1 (b). If irradiated with the same 979 nm laser line, it responds, however, with a brilliant white light emission centered between 400 and 800 nm. Yet, the underlying process for white light generation is still unclear, as is the reason why some cluster systems crystallize while others exclusively solidify in amorphous form. However, the observation that WLG is never observed in crystalline materials indicates that the effect must be related to specific structural correlations or degrees of freedom that are only attainable in a sufficiently disordered state.
This raises the question of how the disordered state is characterized in these systems. This question can only be answered if both the mutual arrangement of the molecules on a microscopic scale in the sub-nano range is known, as well as morphological variations on mesoscopic length scales. This is the only way to understand the relationships between order and disorder in these materials, which is essential for an understanding of the structure-property relationships in view of the nonlinear optical behavior.

Figure 1: NLO responses from the crystalline material [(PhSi)\({}_{4}\)S\({}_{6}\)] **(a)** and the amorphous material [(PhSn)\({}_{4}\)S\({}_{6}\)] **(b)** (top). The driving excitation is visible at 979 nm (1.265 eV) in each spectrum. The 2\({}^{\mathrm{nd}}\) harmonic of **(a)** is clearly seen at 489.5 nm (2.53 eV), while **(b)** depicts a broad white spectrum. The respective X-ray patterns are also shown below, indicating that the SHG material is clearly crystalline while the WLG material shows the typical structure factor \(S(Q)\) of an amorphous solid. **(c)** shows the NLO response from [(NpSn)\({}_{4}\)S\({}_{6}\)], indicating SHG, although the X-ray structure factor clearly designates an amorphous solid.

Among all the materials synthesized so far, there are very few which appear to be amorphous but nevertheless react as SHGs upon irradiation, as is shown in Fig. 1 (c) for the [(NpSn)\({}_{4}\)S\({}_{6}\)] cluster with naphthyl (Np) ligands as the organic component. Its X-ray scattering pattern clearly identifies it as a non-crystalline material, which is evident from the shape of the measured structure factor \(S(Q)\) in the figure. The fact that a few SHG materials are amorphous opens up the possibility of detecting precisely the microscopic structural differences without them being masked by the different morphology.
We therefore performed structural studies on the amorphous SHG [(NpSn)\({}_{4}\)S\({}_{6}\)] and the WLG [(PhSn)\({}_{4}\)S\({}_{6}\)] to search for microscopic spatial differences in order to identify the specific structural features of the WLG materials. For this, we performed measurements of the EXAFS (Extended X-ray Absorption Fine Structure) function \(\chi(k)\) and the static X-ray structure factor \(S(Q)\), and analyzed the resulting data by means of molecular Reverse Monte Carlo simulations (m-RMC). Scanning transmission electron microscopy ((S)TEM) measurements, combined with scanning precession electron diffraction ((S)PED), were also performed to unravel the relationships between morphology and microscopic structure and to understand why some materials do not emit white light despite their amorphous appearance. Here, we give a comprehensive overview of the results from the selected systems [(PhSn)\({}_{4}\)S\({}_{6}\)] and [(NpSn)\({}_{4}\)S\({}_{6}\)], which are shown in Figs. 1 (b) and (c).

## 2 Methods

### Sample preparation

The molecular {SnS}-cluster materials [(PhSn)\({}_{4}\)S\({}_{6}\)] and [(NpSn)\({}_{4}\)S\({}_{6}\)] were prepared according to the literature,[9, 10] by reacting organotin trichlorides (RSnCl\({}_{3}\)) with sodium sulphides, where R was either Ph (-C\({}_{6}\)H\({}_{5}\)) or Np (-C\({}_{10}\)H\({}_{7}\)). All synthesis steps were performed under argon atmosphere, and the substances were obtained as stable, non-hygroscopic white amorphous powders. Final product analysis was performed by NMR and mass spectrometry, and preliminary morphology studies were performed by X-ray diffractometry to clarify their crystalline or amorphous nature. The molecular structures of the substances were further elucidated by density functional theory (DFT) calculations[8] supporting an inversion-free heteroadamantane-type molecular structure with (idealized) T\({}_{4}\) symmetry.
The density of the solid samples was measured with an AccuPyc II 1340 Gas Displacement Pycnometry System (Micromeritics) using helium gas. The measurements consisted of 30 purge steps, followed by 50 measurements per sample, which were averaged to give a density accuracy of up to two decimal places. For [(PhSn)\({}_{4}\)S\({}_{6}\)] a value of 2.00 g/cm\({}^{3}\) was found, while the density of [(NpSn)\({}_{4}\)S\({}_{6}\)] was determined to be 1.79 g/cm\({}^{3}\).

### X-ray scattering

High precision X-ray scattering data of the [(PhSn)\({}_{4}\)S\({}_{6}\)] material were measured in transmission geometry at beamline P02.1[11] of the PETRA III synchrotron at DESY, Hamburg, using a primary energy of 59.87 keV. Scattered X-rays were collected using a two-dimensional position sensitive detector with 2048\(\times\)2048 pixels of size 200\(\times\)200 \(\upmu\)m\({}^{2}\). The DAWN software package was employed to convert the 2D image into a scattering pattern.[12] The distance between sample and detector was set to 240.2 mm, and the sample was positioned in front of a detector corner to obtain the maximum angular range. The [(NpSn)\({}_{4}\)S\({}_{6}\)] sample was explored on an in-house Bruker D5000 diffractometer equipped with a Goebel mirror to optimize the primary beam (Mo \(K_{\alpha}\), 17.44 keV). Both samples were confined in borosilicate X-ray capillaries of 1 mm outer diameter and 0.01 mm wall thickness. Measured scattering intensities were corrected for background and air scattering, self-absorption, polarization and Compton contributions, and then normalized to \(S(Q)\).

### EXAFS experiments

Tin \(K\)-edge EXAFS (29.2 keV) for both samples were obtained at beamline P65,[13] also located at PETRA III, while sulfur \(K\)-edge EXAFS (2.47 keV) were measured at beamline BL-11[14] at the HiSOR facility of the Hiroshima Synchrotron Radiation Center in Japan, which is designed to maximize the beam intensity on the sample within the 2-5 keV region.
At BL-11, the measurements were carried out at room temperature and the sample was measured directly, sandwiched between two sulfur-free polypropylene foils. At P65 the samples were mixed with graphite and pressed into pellets. All scans were performed in transmission mode. The absorption spectra were normalized and the background was calculated using the AUTOBK algorithm. The data were finally analyzed using the Demeter software package (Athena and Artemis)[15].

### Molecular Reverse Monte Carlo simulations

An existing RMC code from the RMC_POT\(++\) program package[16], which already provides the ability to group atoms as rigid molecules and move them along molecular translational and rotational degrees of freedom, was adapted accordingly to our needs. For the two materials, no crystal structures were known that could have been used as starting configurations for the simulations. Therefore, random molecular arrangements had to be generated which, on the one hand, had to correspond to the real particle densities and, on the other hand, should no longer contain any overlapping molecules. In the original script of the RMC_POT\(++\) package, all atom pairs that violated these cut-off conditions were identified, and all molecular moves that resulted in such pairs were prohibited. However, random initial configurations of large molecules inevitably contain atomic overlaps, and any attempt to move entire molecules unavoidably leads to new overlaps. It is therefore impossible to disentangle a random initial configuration of larger molecules under such strict constraints. We therefore used a less stringent procedure in which a quantity S was defined as a measure of cut-off violations, which had to be minimized. Moves resulting in overlaps were accepted as long as no additional overlaps were created, i.e. all molecular motions were allowed as long as the value of S was not increased.
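The relaxed acceptance rule described above can be sketched in a few lines. This is a minimal toy version (single-site "molecules", hypothetical box size and cut-off), not the RMC_POT\(++\) implementation: S counts hard-core cut-off violations, and a random rigid translation of one molecule is accepted whenever it does not increase S, so an overlapping random start can only relax.

```python
import numpy as np

rng = np.random.default_rng(4)
BOX, CUTOFF, N = 10.0, 1.0, 40
pos = rng.random((N, 3)) * BOX          # one site per molecule, for brevity

def violations(p):
    """Number of pairs closer than CUTOFF under periodic boundary conditions."""
    d = p[:, None, :] - p[None, :, :]
    d -= BOX * np.round(d / BOX)        # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))
    return int((r[np.triu_indices(N, k=1)] < CUTOFF).sum())

s0 = violations(pos)
s = s0
for _ in range(2000):
    k = rng.integers(N)                 # pick one molecule
    trial = pos.copy()
    trial[k] = (trial[k] + rng.normal(0.0, 0.3, 3)) % BOX
    s_trial = violations(trial)
    if s_trial <= s:                    # the value of S may never increase
        pos, s = trial, s_trial
```

Because moves that create overlaps are still allowed as long as S does not grow, the configuration can slide past intermediate contacts instead of getting frozen by a strict hard-core rejection.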
The algorithm was further modified in that the complex histogram calculations were restructured and parallelized, and a subdivision of the simulation box was introduced, limiting the checking of cut-off conditions to the immediate vicinity of a moving molecule. Together, these improvements led to an increase in computational speed by almost a factor of 400[17]. In each simulation, 216 copies of the DFT-calculated {SnS} clusters were moved under periodic boundary conditions in cubic simulation boxes whose sizes were chosen to match the respective sample densities. The experimental X-ray \(S(Q)\)s and EXAFS \(\chi(k)\)s were used as experimental boundary conditions to which the corresponding functions calculated from the simulated structures should ideally converge. First, the rigid molecules calculated with DFT[8] were moved along their translational and rotational degrees of freedom until the best possible agreement between simulated and measured data was achieved. Then, after this rigid m-RMC simulation, a second dynamic simulation was performed in which sulfur and tin atoms were additionally allowed to slightly vary their coordinates inside the cluster cores within certain limits.

### Transmission Electron Microscopy and electron scattering studies

The advantage of (S)TEM is that possible mesoscopic crystalline inclusions in an amorphous matrix can be identified locally, which is not possible with scattering methods that can only provide the ensemble average of an irradiated sample. On the other hand, (S)PED allows such areas to be targeted selectively, and meaningful information about them can be collected using local electron diffraction. Thus, as in conventional diffraction experiments, structural properties can be obtained on a molecular basis from sample regions only nanometers apart. The spatial resolution over the sample volume is thus much higher than in conventional scattering experiments with neutrons or X-rays.
The TEM measurements were performed using a conventional JEOL JEM-3010 at 300 kV equipped with a TVIPS X416F-ES camera providing single electron sensitivity. Thanks to this camera it could be operated under low-dose conditions. For the measurements and location images these doses were on the order of \(10^{-4}\) e/A\({}^{2}\) for the low magnification mode and \(\sim\)\(10^{-2}\) e/A\({}^{2}\) for higher magnifications. Measurements were performed at room temperature to obtain data comparable to the above-mentioned X-ray diffraction studies. (S)PED measurements were performed using the NanoMEGAS P2010 beam scanning/precession system installed on a double aberration corrected JEOL JEM-2200FS. This system produces a focused convergent probe with variable precession angle. An angle of approximately 1.0 degree was used. The probe was scanned over the sample, and the diffraction patterns were recorded for each scan point by a camera pointed at the microscope's built-in phosphor screen. Thus, the pixel information corresponds to the slightly convergent (about 0.8 mrad) diffraction pattern at a camera length of about 53 cm and originates from the area under investigation. The spatial resolution is determined either by the resolution of the scan point or by the physical size of the probe, which for this experiment was measured to be about 1.8 nm. Further details on these experiments are given elsewhere [18, 19].

## 3 Results and Discussion

Fig. 2 shows experimental EXAFS results (symbols) for the amorphous SHG and WLG materials [(NpSn)\({}_{4}\)S\({}_{6}\)] and [(PhSn)\({}_{4}\)S\({}_{6}\)], obtained at the Sn \(K\)- and the S \(K\)-edge. All spectra can be fitted reasonably well by the EXAFS function \(\chi(k)\). S-Sn and S-S scattering paths were used for the S \(K\)-edge data, and Sn-S, Sn-Sn and Sn-C paths at the Sn \(K\)-edge. Mean-square displacements were also used as fitting parameters.
Fit windows were defined between 1.2 and 4.0 Å for the Sn \(K\)-data, and between 1.5 and 3.8 Å for the S \(K\)-data. Structural parameters from the DFT-calculated [8] molecules shown in the figures were used as starting parameters for the fits. The obtained results are displayed as red lines. They are close to the curves expected from the single-molecule DFT calculations. However, the fits at the sulfur edges are considerably better for the SHG material with organic naphthyl ligands than for the phenyl-containing WLG material. This is also apparent from the goodness-of-fit values shown in the figures as red numbers. Their difference indicates that the calculated molecular structure of the WLG cluster experiences stronger modifications when transferred into the dense amorphous phase than the SHG cluster. Since this difference is only visible in the sulfur EXAFS, it is reasonable to assume that the structural effect is exclusively related to the sulfur-sulfur correlations. Therefore, an additional S-S fitting path was introduced into the fitting procedure for [(PhSn)\({}_{4}\)S\({}_{6}\)], which indeed resulted in a significant improvement of the fit quality. This is represented in Fig. 2 by the blue line in the spectrum for the phenyl cluster and also reflected by the improved goodness-of-fit value (blue number in the figure), which is now of the same order of magnitude as for the SHG material. The additional fitting path indicates that an additional sulfur atom is situated nearby, either due to a distortion of the molecular structure or due to an additional intermolecular correlation. The additional scattering path yields an S\(\cdots\)S spacing of 3.6 Å. It must, however, be stated that the inclusion of an additional fitting path causes an increased dependence among the fitting parameters. Therefore, additional information was needed to confirm the reliability of this procedure. 
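The path fits described here rest on the standard single-scattering EXAFS equation. The sketch below illustrates its shape only: the amplitude and phase functions are toy placeholders (in practice they come from ab-initio scattering codes), and all parameter values are illustrative, not fit results.

```python
import numpy as np

def chi_path(k, N, R, sigma2, s02=0.9,
             f=lambda k: 1.0 / (1.0 + k),      # toy scattering amplitude
             phi=lambda k: -0.5 * k):          # toy phase shift
    """Single-scattering EXAFS path:
    chi(k) = N*S0^2 * f(k)/(k R^2) * sin(2kR + phi(k)) * exp(-2 sigma2 k^2)."""
    k = np.asarray(k, dtype=float)
    return (N * s02 * f(k) / (k * R ** 2)
            * np.sin(2.0 * k * R + phi(k))
            * np.exp(-2.0 * sigma2 * k ** 2))

# e.g. a Sn-S-like path: three S neighbors at the 2.44 Å bond length
k = np.linspace(2.0, 12.0, 200)                # photoelectron wavenumber (Å^-1)
chi = chi_path(k, N=3, R=2.44, sigma2=0.005)
```

A fit of the kind described above sums such terms over several paths (S-Sn, S-S, and for the WLG the additional S···S path) and varies \(N\), \(R\) and the displacement parameter of each.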
Figure 2: Experimental real space EXAFS data (symbols) obtained at the sulfur \(K\)-edge (top) and the tin \(K\)-edge (bottom) of the [(NpSn)\({}_{4}\)S\({}_{6}\)] (left) and the [(PhSn)\({}_{4}\)S\({}_{6}\)] (right), respectively. Solid red lines represent fits to the data using the respective DFT-calculated models as shown in the figures. The blue line in the spectrum for the phenyl cluster denotes an extended fit using an additional S-S scattering path. Fig. 3 shows the results of the m-RMC simulations for the amorphous [(PhSn)\({}_{4}\)S\({}_{6}\)] and [(NpSn)\({}_{4}\)S\({}_{6}\)] materials, using the X-ray scattering data already displayed in Figure 1 and the EXAFS data from the Sn \(K\)-edges as constraints for the simulation. The thinner blue line in the results for the phenyl cluster represents a simulation attempt using rigid molecular clusters, based on the DFT-calculated structural model [8]. It can be seen that the essential features of \(S(Q)\) are reasonably well reproduced, but the agreement with the data is only moderate. For example, a phase shift can be observed for \(Q\)-values above about 8 Å\({}^{-1}\), indicating discrepancies between the DFT-calculated structure model and the real molecular shape in the amorphous matrix, which is consistent with the EXAFS interpretations above. The observed difference is even more pronounced in the Sn \(K\)-edge EXAFS data, where the corresponding blue curve deviates considerably from the experimental findings (symbols). Hence, another simulation was performed in which sulfur and tin atoms could vary their coordinates in the cluster cores within given limits: the Sn-S bond was allowed to vary more or less freely between 2.05 and 2.65 Å, since its contribution to the X-ray and EXAFS data is large, yielding sufficient information density for a reliable simulation. The Sn-C bond was constrained more strongly, to 2.05-2.25 Å, owing to its smaller weighting and thus the smaller experimental information density. 
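Conceptually, such a dynamic m-RMC refinement proposes random, constrained moves and accepts them only when the agreement with the experimental data does not deteriorate. The following is a minimal, generic sketch of that accept/reject step; the \(\chi^2\) form, the move generator and `compute_sq` are placeholders, not the actual parallelized m-RMC code of Ref. [17]:

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(model, data, sigma=0.02):
    """Goodness of fit between a model curve and the experimental one."""
    return float(np.sum((model - data) ** 2) / sigma ** 2)

def rmc_step(positions, s_exp, compute_sq, box, max_shift=0.2):
    """One Metropolis-style RMC move: translate one rigid molecule by a
    small random vector and accept/reject against the experimental S(Q).
    (Rotations and the constrained intramolecular moves of the dynamic
    m-RMC would be handled analogously.)"""
    old = positions.copy()
    i = rng.integers(len(positions))
    positions[i] = (positions[i] + rng.uniform(-max_shift, max_shift, 3)) % box
    d_chi2 = chi2(compute_sq(positions), s_exp) - chi2(compute_sq(old), s_exp)
    if d_chi2 > 0 and rng.random() >= np.exp(-d_chi2 / 2.0):
        positions[:] = old        # reject: fit got worse, restore configuration
        return False
    return True                   # accept: fit improved (or worsening tolerated)
```

In the real refinement the same step runs against \(S(Q)\) and \(\chi(k)\) simultaneously, with the bond-length windows quoted above acting as hard constraints on the proposed moves.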
The C atoms were not allowed to move intramolecularly, except that the organic groups were allowed to rotate around the Sn-C bonds. The Sn atoms thus always remained close to their original coordinates, which ensured an intact overall molecular structure during the simulation process. The results of this simulation are shown in Fig. 3 by the solid red lines. Both \(S(Q)\) and \(\chi(k)\) calculated from the atomic coordinates are now in nearly quantitative agreement with experiment. This latter simulation procedure was then applied to the [(NpSn)\({}_{4}\)S\({}_{6}\)] system, yielding similarly good results for \(S(Q)\) and \(\chi(k)\), as is also shown in Fig. 3. The influence of the dynamic m-RMC on the structure of the cluster cores is illustrated in Fig. 4 by the intramolecular partial pair distribution functions (PPDF) as obtained from the simulation boxes. The blue dashed vertical lines indicate the Sn\(\cdots\)Sn, S\(\cdots\)S and Sn\(\cdots\)S spacings expected from the undistorted DFT-calculated clusters [8]. The curves represent the results from the dynamic m-RMC simulations. The red solid lines, belonging to the right-hand scales, are the so-called running coordination numbers, defined as the integral from 0 up to a given value of \(r\) over the respective radial distribution functions (RDF), \(4\pi\, n_{k}\, g(r)\, r^{2}\), with \(n_{k}\) being the particle density of element \(k\). The running coordination number determines the number of neighboring atoms hidden under a PPDF peak. It can be seen from Fig. 4 (a) that the {Sn\({}_{4}\)S\({}_{6}\)} cores of the amorphous [(PhSn)\({}_{4}\)S\({}_{6}\)] material are considerably distorted. The Sn\(\cdots\)Sn correlation peak in \(g_{\rm Sn-Sn}(r)\) is asymmetrically broadened, revealing a deformation of the originally tetrahedral Sn\({}_{4}\) frame. 
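The running coordination numbers shown as red lines in Fig. 4 are cumulative integrals of the RDF, \(N(r)=\int_0^r 4\pi\, n_k\, g(r')\, r'^2\, dr'\). A small numerical sketch (trapezoidal integration; the density and grid are synthetic, for illustration only):

```python
import numpy as np

def running_coordination(r, g, n_k):
    """N(r) = integral_0^r 4*pi*n_k*g(r')*r'^2 dr' -- the cumulative number
    of neighbors of species k within radius r, via trapezoidal integration."""
    rdf = 4.0 * np.pi * n_k * g * r ** 2      # radial distribution function
    increments = 0.5 * (rdf[1:] + rdf[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(increments)))

# sanity check against an ideal gas, where g(r) = 1 and N(r) = (4/3)*pi*n*r^3
r = np.linspace(0.0, 5.0, 501)
N = running_coordination(r, np.ones_like(r), n_k=0.05)
```

Reading \(N(r)\) at the far edge of a PPDF peak gives the number of atoms "hidden" under that peak, which is how the neighbor counts quoted in the text are obtained.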
Figure 3: Comparison between experimental and m-RMC simulated \(S(Q)\) and Sn \(K\)-EXAFS functions for the WLG [(PhSn)\({}_{4}\)S\({}_{6}\)] and the SHG [(NpSn)\({}_{4}\)S\({}_{6}\)]. Symbols are experimental data. The blue lines in the graphs for the phenyl cluster represent simulation results using the rigid DFT-calculated molecule [8]. Red lines in all graphs are results of the dynamic simulation, where Sn and S atoms were allowed to slightly vary their positions inside the cluster cores. The first peak in \(g_{\rm Sn-S}(r)\) (Fig. 4 (a), center) represents the three sulfur atoms at 2.44 Å to which each Sn atom is chemically bonded. In the undistorted model cluster, three further S atoms are situated at 4.63 Å as second-next neighbors (blue dashed vertical line). However, in the amorphous solid this correlation is split into three components between 3.5 and 5.5 Å, containing these three neighbors. The structure model also suggests four next neighbors in \(g_{\rm S-S}(r)\) at 4.01 Å (1\({}^{st}\) vertical dashed line in \(g_{\rm S-S}(r)\)), which is also split into two distances below and above this value. The running coordination number reveals that two of the four S neighbors are shifted to smaller distances (3.68 Å) over a narrow correlation range, while the two other atoms are situated farther away (4.18 Å), distributed over a wider \(r\)-range. The second S\(\cdots\)S spacing, originally located at 5.7 Å in the rigid DFT-calculated cluster, is considerably broadened and shifted to smaller distances (5.51 Å). The PPDFs of the amorphous SHG system [(NpSn)\({}_{4}\)S\({}_{6}\)] as obtained from the m-RMC are displayed in Fig. 4 (b). Here, however, no molecular distortions are observed. All correlation peaks are close to the values predicted by the DFT-calculated model, and no splitting or asymmetric broadening of the correlation peaks is found. 
Hence, the cluster cores of this SHG material are largely undistorted and closely resemble the DFT-calculated model [8], which is consistent with the above EXAFS analysis. The mutual spatial arrangement of the phenyl and naphthyl clusters in the simulation boxes is represented in the upper insets of Figs. 5 (a) and (b), where the positions of the molecular centers of mass are displayed, respectively. The graphs below are the pair distribution functions (PDF), \(g_{\rm m}(r)\), of these centers. The [(PhSn)\({}_{4}\)S\({}_{6}\)] PDF indicates that molecular centers do not approach closer than 6 Å. Above this value a steep correlation rise occurs, forming a pronounced peak centered between 6 and \(\sim\)9 Å. Further increased correlation exists between \(\sim\)11 and \(\sim\)14 Å. For distances between 6 and 7 Å, exclusively dimeric structures are found in the simulation box. They represent about 20 % of all molecules and are indicated by the red bonds in Fig. 5 (a). A typical dimer from the RMC ensemble is shown in Fig. 6 (a). The mutual molecular arrangement is an alternating staggered configuration in which the ligands of one molecule are located in the voids between those of the other molecule, allowing the closest possible approach of the cores. A preference for this conformation in {SnS} and {SiS} clusters with phenyl ligands was found in DFT-based binding-energy calculations, where cluster dimers were studied as minimal models of the amorphous state [6, 20], and where intra-dimer distances between 6.0 and 6.5 Å were proposed for [(PhSn)\({}_{4}\)S\({}_{6}\)]. 
Figure 4: Comparison between the intramolecular partial correlation functions for the WLG [(PhSn)\({}_{4}\)S\({}_{6}\)] **(a)** and the SHG [(NpSn)\({}_{4}\)S\({}_{6}\)] **(b)**. The insets in the two upper diagrams in **(a)** show enlargements of the respective peaks in the graphs. The blue dashed lines in the graphs represent internal distances in the cluster cores as expected from the DFT-calculated structures [8]. **(c)** shows the intermolecular partial correlation functions for the [(PhSn)\({}_{4}\)S\({}_{6}\)] system. 
Here, we find an average dimer spacing of 6.75 Å, which is slightly larger and can be attributed to the fact that the dimer interaction in a real solid is also shared with other molecules. A staggered alternating arrangement between dimers is also found in the crystal structure of [(PhSi)\({}_{4}\)S\({}_{6}\)] [8]. The RDF integral (red lines, right scales) shows that each molecule is on average surrounded by two neighbors at 8.0 Å, indicating that the first maximum in \(g_{\text{m}}(r)\) may mainly result from chainlike structures. Indeed, such structures dominate the mutual alignments in the simulation box up to this correlation length. A linear tetramer chosen from the RMC ensemble is shown as an example in Fig. 6 (b). The respective intermolecular distances are given by red numbers. Again, we find alternating staggered mutual alignments of the organic ligands. At larger distances, up to 9 Å, the running coordination number indicates three neighbors on average. This value lies between the first two maxima in \(g_{\text{m}}(r)\). Here, the chains begin to branch, as indicated by the grey bonds between the molecular centers in the simulation box of Fig. 5 (a). From Fig. 6 (a) and (b), the distortion of the cluster cores is already visible, which obviously results from modified sulfur positions. Some inter- and intramolecular sulfur spacings are given in the figure, indicating that the overall difference between intra- and intermolecular S-S spacings seems to vanish. It appears as if the sulfur atoms tend to distribute themselves uniformly. This is also apparent from the intermolecular PPDFs shown in Fig. 4 (c). A pronounced correlation peak centered at 3.7 Å is clearly visible in \(g_{\rm S-S}(r)_{\rm inter}\). 
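The dimer statistics quoted above follow from minimum-image distances between the molecular centers of mass. A compact sketch of that bookkeeping (box size, coordinates and the 7 Å cutoff are illustrative placeholders, not the actual simulation data):

```python
import numpy as np

def min_image_distances(centers, box):
    """All pairwise minimum-image distances between molecular centers of
    mass in a cubic box of edge length `box` (periodic boundaries)."""
    diff = centers[:, None, :] - centers[None, :, :]
    diff -= box * np.round(diff / box)        # minimum-image convention
    return np.sqrt((diff ** 2).sum(axis=-1))

def dimer_fraction(centers, box, r_max=7.0):
    """Fraction of molecules whose nearest neighbor lies closer than r_max,
    the criterion used above to flag dimer-like pairs (cutoff illustrative)."""
    d = min_image_distances(centers, box)
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    return np.mean(d.min(axis=1) < r_max)

# four molecules in a 20 Å box: one pair 6 Å apart, two isolated molecules
centers = np.array([[0.0, 0.0, 0.0], [6.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0], [10.0, 10.0, 10.0]])
frac = dimer_fraction(centers, box=20.0)      # → 0.5 (two of four molecules)
```

Histogramming the same distances and normalizing by the ideal-gas RDF yields \(g_{\rm m}(r)\) of the molecular centers as plotted in Fig. 5.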
The integral over the RDF indicates that every cluster core of the RMC ensemble is on average surrounded by one S atom from another cluster core between 3 and 4.18 Å, 75% of which are situated in the segment under the peak at 3.7 Å, in good agreement with the spacing found for the additional scattering path in the sulfur \(K\)-edge EXAFS analysis. This value is well inside the range of the intramolecular S\(\cdots\)S spacings. A similarly strong correlation is found in \(g_{\rm Sn-Sn}(r)_{\rm inter}\), which however extends to considerably larger values and involves only about half an atom on a comparable length scale. Much weaker intermolecular correlations exist between Sn and S atoms. The mutual spatial situation of the centers of mass for the SHG material [(NpSn)\({}_{4}\)S\({}_{6}\)], as obtained from the dynamic m-RMC simulation, is displayed in Figure 5 (b). 
Figure 5: \(g_{\text{m}}(r)\) of the molecular centers as obtained from m-RMC simulations on [(PhSn)\({}_{4}\)S\({}_{6}\)] **(a)** and [(NpSn)\({}_{4}\)S\({}_{6}\)] **(b)**. Red full lines represent the integrals over the RDFs (\(4\pi\, n\, r^{2}\, g(r)\)) to give average numbers of surrounding molecules. Insets show the distribution of the molecular centers in the simulation box. In **(a)** dimer bonds are drawn in red for \(r\) values up to 7 Å; other bonds are drawn up to 8.5 Å. In **(b)** the red dimer bonds are drawn for spacings up to 8 Å; grey bonds show correlations up to 11 Å. 
Here, a first maximum in \(g_{\text{m}}(r)\) is centered around 8.6 Å, which is at a larger distance than for the WLG material, indicating that the cluster cores are farther apart in the SHG case. Also, the increase in correlation above 6 Å seems to be shallower than in the WLG case, suggesting that repulsive forces are weaker and nearest-neighbor distances are spread over wider ranges. A second maximum is located nearby, between about 11 and 14 Å. 
Only 6.5% of the molecules form dimer pairs in a correlation range between 6.5 and 7.5 Å (red bonds in Fig. 5 (b)), which is considerably less than for the WLG material. At higher correlation lengths the number of neighbors increases rapidly, and at \(\sim\)10 Å the integral over the RDF indicates two next neighbors on average. In fact, longer chains are found in the simulation box up to this length scale, which are however already strongly branched, and at 11 Å the running coordination number already indicates more than three next neighbors. Here, in contrast to the WLG material, such values lie in the range of a distinct broad peak in \(g_{\mathrm{m}}(r)\), indicating that the interconnection between the molecular centers on this length scale has now formed a dense network, as shown by the grey bonds displayed in the simulation box of Fig. 5 (b). Fig. 6 (c) shows a typical example of the mutual arrangement of five [(NpSn)\({}_{4}\)S\({}_{6}\)] molecules arbitrarily selected from the m-RMC simulation box. The central molecule is shaded blue for better distinction. It is surrounded by two other molecules about 8.5 Å apart and by two further molecules at about 11 Å. Both EXAFS analysis and m-RMC simulations indicate that the adamantane-like molecular cores in the WLG material are distorted, while in the SHG material the molecules are undistorted. To reproduce the X-ray and EXAFS patterns, more short S-S distances are required for the WLG than can be provided by the undistorted adamantane cluster. Therefore, in the simulation, the sulfur atoms move out of their original positions to form shorter intermolecular S-S distances. This is indicated by the intense intermolecular S-S correlation peak in Fig. 4 (c). In the real amorphous WLG material, the sulfur atoms seem to strive for a uniformly distributed sulfur network, but since the atoms are tightly bound in their molecular framework, this results in a distortion of the molecular cores. 
Since no chemical bonds exist between the sulfur atoms, it is tempting to identify the formation of such a sulfur mesh as a vibrational network. Such a network could be the source of an enhanced density of vibrational states over a broad range of \(k\)-values, which could explain the observed high receptivity of the WLG materials for infrared radiation. 
Figure 6: Arbitrarily chosen mutual molecular arrangements taken from the m-RMC simulation boxes. Yellow: positions of S atoms, purple: positions of Sn atoms, grey: organic ligands. **(a)** [(PhSn)\({}_{4}\)S\({}_{6}\)] dimer from the m-RMC simulation box. The average spacing between molecular centers is displayed by red numbers. **(b)** [(PhSn)\({}_{4}\)S\({}_{6}\)] tetramer from the m-RMC simulation box. Red numbers indicate spacings between molecular centers; blue numbers and arrows are intermolecular S-S distances. Some intramolecular S-S spacings are given in black. **(c)** Molecular crosslinking between five [(NpSn)\({}_{4}\)S\({}_{6}\)] molecules. The central molecule (shaded blue for better distinction) is surrounded by two other molecules about 8.5 Å apart and by two more at about 11 Å. Blue and red numbers denote spacings between molecular centers and S-S atoms, respectively. 
All cluster cores are distorted differently, leading to strong non-uniform spatial fluctuations in the interaction forces that suppress crystallization. Strong isotropic core-core interactions had previously been suspected to hinder crystal formation [20]. Our results and interpretations presented so far cannot, however, answer the important question of why the [(NpSn)\({}_{4}\)S\({}_{6}\)] system does not act as a WLG although it appears to be amorphous. Also, the scattering law of [(NpSn)\({}_{4}\)S\({}_{6}\)] does not show any distinct Bragg peaks and resembles the typical \(S(Q)\) of a disordered condensed phase. 
Nevertheless, it should be noted that it also exhibits peculiar fluctuations between one and three Å\({}^{-1}\) that are untypical for fully disordered systems like liquids and glasses, where \(S(Q)\) is a rather smooth function. Therefore, a comprehensive study was also carried out to explore the structural properties covering the range from mesoscopic down to microscopic scales using (S)TEM combined with (S)PED [18]. The latter allows electron diffraction experiments to be performed at different sample positions with a spatial resolution down to about 1.5 nm. Figure 7 (a) shows a high-angle annular dark field overview image of a [(NpSn)\({}_{4}\)S\({}_{6}\)] sample obtained by (S)TEM. The tin sulfide compound appears as the bright areas in the image. Two different modifications of the compound can be identified: large, \(\upmu\)m-sized round particles and significantly smaller, rod-like units. Fig. 7 (b) shows the virtual bright field image reconstructed from the diffraction pattern intensities obtained across the scanned area. The dashed box indicates the final (S)PED acquisition data set region that corresponds to the data shown in Figures 8 (d-f). Diffraction patterns (DP) were recorded and stored for all scan points in this region. From selected pixels of the DPs for each scan point (see Figure 8 (a-c)), virtual dark field (VDF) images were generated whose contrast is proportional to the selected intensity (Figure 8 (d-f)). Most of the scan points (a, d) confirm the amorphous structure also inferred from the X-ray experiments. However, (S)PED clearly reveals nanoscale regions exhibiting distinct diffraction spots (Fig. 8 (b-c) and (e-f)). These crystallites, indicated by cyan and orange arrows, are about 50 to 150 nm in size, and are mostly found around the rod-like particles (cyan) as well as on the edges of the round particles (orange). 
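The VDF construction amounts to summing, for every scan position of the 4D (S)PED data set, the diffraction intensity inside a chosen reciprocal-space mask (e.g. one placed on a crystalline diffraction spot). A minimal sketch; the array shapes and the spot position are assumptions for illustration:

```python
import numpy as np

def virtual_dark_field(data4d, mask):
    """Virtual dark-field image from a 4D (S)PED data set.

    data4d : array of shape (scan_y, scan_x, q_y, q_x) -- one diffraction
             pattern per scan position.
    mask   : boolean array of shape (q_y, q_x) selecting diffraction pixels
             (e.g. a region on a crystalline diffraction spot).
    """
    # sum the masked diffraction intensity for every scan position
    return np.tensordot(data4d, mask.astype(float), axes=([2, 3], [0, 1]))

# synthetic example: a bright "spot" only where a crystallite sits
scans = np.zeros((4, 4, 8, 8))
scans[:2, :2, 2, 3] = 5.0                    # crystallite in the upper-left area
spot_mask = np.zeros((8, 8), dtype=bool)
spot_mask[2, 3] = True
vdf = virtual_dark_field(scans, spot_mask)   # bright where the spot appears
```

Choosing an arbitrary amorphous-halo pixel instead of a spot produces a map like Fig. 8 (d), while a spot mask highlights the crystallites as in Figs. 8 (e-f).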
The (S)PED analysis of the WLG [(PhSn)\({}_{4}\)S\({}_{6}\)] does not show any crystalline inclusions, and a fully amorphous morphology is confirmed. Electron scattering patterns of the amorphous areas from low-dose TEM measurements could be reduced to the absolute \(S(Q)\) level, from which PDFs were also obtained by Fourier transform [18, 19]. Within the limits of the slightly different relative scattering lengths, they are in good agreement with the findings from direct X-ray scattering [21]. Figure 7: (a) High angle annular dark field (HAADF) (S)TEM image of differently sized [(NpSn)\({}_{4}\)S\({}_{6}\)] particles (bright regions) embedded in an epoxy matrix (black) on lacey carbon support (dark grey). (b) Virtual bright field image generated from the (S)PED dataset showing an inverted contrast compared to the HAADF; the {SnS} cluster region is darker than the background. From this image a small subset was generated for the indicated region of interest (ROI). Diffraction patterns from different regions of that data set are shown in Fig. 8 [18]. Apparently, on mesoscopic length scales, [(NpSn)\({}_{4}\)S\({}_{6}\)] consists of crystalline and amorphous regions, but the amorphous regions predominate by far. The tendency for crystallization may stem from the fact that the molecular cluster cores are undistorted: identical directed interactions thus exist between all clusters. On the other hand, crystallization is also sterically hindered by the bulky organic ligands, which are free to rotate about the Sn-C axis. As a result, the cluster cores cannot approach close enough for effective crystallization. Also, nanometer-sized crystalline spots are found to exist in the amorphous matrix. The unusual intensity oscillations observed in \(S(Q)\) of [(NpSn)\({}_{4}\)S\({}_{6}\)] between 0.1 and \(\sim\)3 Å\({}^{-1}\) could be remnants of Bragg peaks originating from such spots. 
Due to the extremely small crystallite sizes in the nm range, such peaks are strongly Scherrer-broadened and can therefore not be resolved in a conventional scattering experiment. ## Conclusions The structural properties of the WLG material [(PhSn)\({}_{4}\)S\({}_{6}\)] and the SHG material [(NpSn)\({}_{4}\)S\({}_{6}\)] were investigated in an extensive structural study, addressing correlations from the micrometer range down to inter- and intramolecular scales. Clear structural differences exist between the two materials on all scales, which on the one hand explains why they do not condense into crystalline form, and on the other hand also provides indications of their different optical behavior. On molecular scales, EXAFS and X-ray scattering reveal pronounced molecular distortions for the WLG material, which can be attributed to variations of the sulfur positions in the cluster core. Inter- and intramolecular sulfur distances are similar, suggesting a sulfur network. Since there are no chemical bonds between these atoms, one may speculate about a purely vibrational network, which could contribute to an increased vibrational density of states and thereby explain the high IR receptivity of the WLG. No molecular distortions are observed in the SHG material. Here, larger distances between the {SnS} cluster cores are found as a result of steric hindrance by the more voluminous organic naphthyl ligands. These dominate the intermolecular interaction, but also suppress crystallization. While (S)TEM studies of the [(PhSn)\({}_{4}\)S\({}_{6}\)] material show a homogeneous amorphous matrix on the micrometer scale, different morphologies are found for the [(NpSn)\({}_{4}\)S\({}_{6}\)] on this length scale: larger round and rod-shaped particles can be distinguished. 
Electron diffraction with spatial resolution in the nanometer range on these different domains demonstrates the existence of nanocrystalline domains in the otherwise amorphous matrix, suggesting that the crystallization suppression by the organic ligands is weaker in this system than in the [(PhSn)\({}_{4}\)S\({}_{6}\)] material, where the strong distortion of the cluster cores was held responsible. Figure 8: The diffraction pattern in (a) originates from amorphous regions of the [(NpSn)\({}_{4}\)S\({}_{6}\)] material, whereas (b) and (c) originate from the regions highlighted in (e) and (f), respectively. By selecting specific regions in the diffraction patterns of the data set, virtual dark field images can be generated. An arbitrary pixel taken from (a) yields a dark field map of the scanned area, as depicted in (d). In contrast, a region chosen on a diffraction spot, as indicated in (b) and (c), highlights the regions that generate these diffraction spots. It is apparent that the diffraction spots stem from small crystalline regions [18]. **Acknowledgments** We acknowledge funding by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), Grant No. 398143140, related to the Research Unit FOR 2824. The authors also acknowledge the great working conditions and support of the following large-scale facilities: German Electron Synchrotron (Deutsche Elektronen Synchrotron, DESY, a member of the Helmholtz Association HGF), beamlines P65 (proposal ID I-20190122), P02.1 (proposal ID RAt-20010143), and the HiSOR facility of the Hiroshima Synchrotron Radiation Center (BL-11, proposal No. 20AG034).
2310.18376
SQLformer: Deep Auto-Regressive Query Graph Generation for Text-to-SQL Translation
In recent years, the task of text-to-SQL translation, which converts natural language questions into executable SQL queries, has gained significant attention for its potential to democratize data access. Despite its promise, challenges such as adapting to unseen databases and aligning natural language with SQL syntax have hindered widespread adoption. To overcome these issues, we introduce SQLformer, a novel Transformer architecture specifically crafted to perform text-to-SQL translation tasks. Our model predicts SQL queries as abstract syntax trees (ASTs) in an autoregressive way, incorporating structural inductive bias in the encoder and decoder layers. This bias, guided by database table and column selection, aids the decoder in generating SQL query ASTs represented as graphs in a Breadth-First Search canonical order. Our experiments demonstrate that SQLformer achieves state-of-the-art performance across six prominent text-to-SQL benchmarks.
Adrián Bazaga, Pietro Liò, Gos Micklem
2023-10-27T00:13:59Z
http://arxiv.org/abs/2310.18376v4
# SQLformer: Deep Auto-Regressive Query Graph Generation for Text-to-SQL Translation ###### Abstract In recent years, there has been growing interest in text-to-SQL translation, which is the task of converting natural language questions into executable SQL queries. This technology is important for its potential to democratize data extraction from databases. However, some of its key hurdles include domain generalisation, which is the ability to adapt to previously unseen databases, and alignment of natural language questions with the corresponding SQL queries. To overcome these challenges, we introduce SQLformer, a novel Transformer architecture specifically crafted to perform text-to-SQL translation tasks. Our model predicts SQL queries as abstract syntax trees (ASTs) in an autoregressive way, incorporating structural inductive bias in the encoder and decoder layers. This bias, guided by database table and column selection, aids the decoder in generating SQL query ASTs represented as graphs in a Breadth-First Search canonical order. Comprehensive experiments illustrate the state-of-the-art performance of SQLformer in the challenging text-to-SQL Spider benchmark. Our implementation is available at [https://github.com/AdrianBZG/SQLformer](https://github.com/AdrianBZG/SQLformer). ## 1 Introduction Relational databases are essential tools within various critical sectors, such as healthcare and industry, among others. For those with technical expertise, accessing data from these databases using some form of structured query language, such as SQL, can be efficient. However, the intricate nature of SQL can make it daunting for non-technical users to learn, creating significant barriers to use. Consequently, there has been a surge in interest in the field of text-to-SQL (Cai et al., 2018; Zelle and Mooney, 1996; Xu et al., 2017; Yu et al., 2018; Yaghmazadeh et al., 2017), which aims to convert natural language questions (NLQs) directly into SQL queries. 
This has the potential to dramatically reduce the obstacles faced by non-expert users when interacting with relational databases (DBs). Early work in the field primarily focused on developing and evaluating semantic parsers for individual databases (Hemphill et al., 1990; Dahl et al., 1994; Zelle and Mooney, 1996; Zettlemoyer and Collins, 2012; Dong and Lapata, 2016). However, given the widespread use of DBs, an approach based on creating a separate semantic parser for each database does not scale. One of the key hurdles in achieving domain generalisation (Wang et al., 2021; Cao et al., 2021; Wang et al., 2022; Cai et al., 2022; Hui et al., 2022) is the need for complex reasoning to generate SQL queries rich in structure. This involves the ability to accurately contextualise a user query against a specific DB by considering both explicit relations (like the table-column relations defined by the DB schema) and implicit relations (like determining if a phrase corresponds or applies to a specific column or table). Recently, large-scale datasets (Yu et al., 2019; Zhong et al., 2017) comprising hundreds of DBs and their associated question-SQL pairs have been released. This has opened up the possibility of developing semantic parsers capable of functioning effectively across different DBs (Guo et al., 2019; Bogin et al., 2019; Zhang et al., 2019; Wang et al., 2021; Suhr et al., 2020; Choi et al., 2020; Bazaga et al., 2021). However, this demands that the model interpret queries in the context of relational DBs unseen during training, and precisely convey the query intent through SQL logic. As a result, cross-DB text-to-SQL semantic parsers cannot simply rely on memorising observed SQL patterns. Instead, they must accurately model the natural language query, the underlying DB structures, and the context of both. Current strategies for cross-DB text-to-SQL semantic parsers generally follow a set of design principles to navigate these challenges. 
First, the question and schema representations are mutually contextualised by learning an embedding function conditioned on the schema Hwang et al. (2019); Guo et al. (2019); Wang et al. (2021). Second, pre-trained language models (LMs), such as BERT Devlin et al. (2019) or RoBERTa Liu et al. (2019), have been shown to greatly improve parsing accuracy by enhancing generalisation over language variations and capturing long-range dependencies. Related approaches Yin et al. (2020); Yu et al. (2021) have adopted pre-training on a BERT architecture with the inclusion of grammar-augmented synthetic examples, which, when combined with robust base semantic parsers, have achieved state-of-the-art results. In this paper, we present SQLformer, which integrates the above design principles into a novel Transformer variant for text-to-SQL translation. We conceptualize each NLQ as a graph with multiple relationships, including syntactic dependencies and part-of-speech tags. The database schema is depicted as a graph, described by the metadata for the tables, columns, and their relations. Drawing inspiration from the image domain Dosovitskiy et al. (2021), we incorporate two learnable token embeddings for table and column representations into the encoder. These are used to select a set of \(k_{1}\) and \(k_{2}\) tables and columns over the target database. Our model learns embeddings for the suggested tables and columns, enriching the decoder input with database information. This guides the decoder by contextualizing the input with the tables and columns most relevant to the given NLQ. Finally, we propose an autoregressive decoder that predicts the SQL query as an AST. Experimental results on the Spider benchmark show that SQLformer achieves 78.2% exact match (EM) accuracy, surpassing multiple state-of-the-art baselines. 
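As a rough illustration of the table/column selection step, one can think of the two learnable tokens as queries that score every table and column embedding, after which the top-\(k_1\) tables and top-\(k_2\) columns are kept. The sketch below uses random vectors in place of the trained encoder outputs; all dimensions and counts are arbitrary assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                        # hidden size (arbitrary)

# stand-ins for the encoded table/column selector tokens and the encoder
# representations of the database's tables and columns
tab_token, col_token = rng.normal(size=(2, d))
table_embs = rng.normal(size=(10, d))         # 10 tables
column_embs = rng.normal(size=(40, d))        # 40 columns

def top_k(query, keys, k):
    """Score every key against the query token; keep the k best indices."""
    scores = keys @ query                      # dot-product relevance scores
    return np.argsort(-scores)[:k]

k1, k2 = 3, 5                                  # tables/columns to keep
selected_tables = top_k(tab_token, table_embs, k1)
selected_columns = top_k(col_token, column_embs, k2)
```

The embeddings of the selected tables and columns are then what conditions the decoder, as described above.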
## 2 Related Work In earlier research, a sketch-based slot filling approach was commonly used, which employs different modules to predict distinct parts of the generated SQL query. This approach breaks down the task of SQL generation into several independent sketches and utilises different classifiers to predict the separate parts, as shown in methods such as SQLNet Xu et al. (2017), TypeSQL Yu et al. (2018), SQLOVA Hwang et al. (2019), X-SQL He et al. (2019) or RYANSQL Choi et al. (2020). However, most of these methods only address simple queries and struggle to generate accurate queries in the more complex scenarios found in the Spider dataset Yu et al. (2019). The main challenge lies in the multi-table relations in the Spider dataset queries. There have been multiple approaches to address the challenges brought by these complex SQL tasks. A common approach has been the use of attention-based architectures for question-schema encoding, and rule-based structural architectures for query decoding. For instance, IRNet Guo et al. (2019) separately encodes the question and schema using a LSTM and a self-attention mechanism respectively. Schema linking is accomplished by enhancing the question-schema encoding with custom type embeddings. The rule-based decoder from Yin and Neubig (2017) was then used in order to decode a query into an intermediate representation, attaining a high-level abstraction for SQL. On the other hand, multiple works make use of graph structures to encapsulate a range of complex relationships. For instance, Global-GNN Bogin et al. (2019) models the database as a graph, while RAT-SQL Wang et al. (2021) introduces schema encoding and linking, attributing a relation to every pair of input items. Further developments include LGESQL Cao et al. (2021), which distinguishes between local and non-local relations using a line graph enhanced hidden module; SADGA Cai et al. 
(2022) which utilises contextual and dependency structure to jointly encode the question graph with the database schema graph; \(S^{2}\)SQL Hui et al. (2022) which incorporates syntactic dependency information in a relational graph attention network architecture Wang et al. (2020); and RASAT Qi et al. (2022) which integrates a relation-aware self-attention module into a T5 model Raffel et al. (2020). Recent work has demonstrated the effectiveness of fine-tuning pre-trained models. For instance, Shaw et al. (2021) showed that fine-tuning a pre-trained T5-3B model could yield competitive results. Building on this, Scholak et al. (2021) introduced PICARD, a technique that constrains the auto-regressive decoder by applying incremental parsing at inference time. This approach filters out grammatically incorrect sequences in real time during beam search, improving the quality of the generated SQL. ## 3 Preliminaries ### Problem Formulation Given a natural language question \(Q\) and a schema \(S=\langle T,C\rangle\) for a relational database, our objective is to generate a corresponding SQL query \(Y\). Here, \(Q=q_{1}\ldots q_{|Q|}\) is a sequence of natural language tokens or words, where \(|Q|\) is the length of the question. The database schema is comprised of tables \(T=\{t_{1},\ldots,t_{|T|}\}\) and columns \(C=\{c_{1},\ldots,c_{|C|}\}\), where \(|T|\) and \(|C|\) are the number of tables and columns in the database, respectively. Each column name \(c_{i}\in C\) is comprised of tokens \(c_{i,1},\ldots,c_{i,|c_{i}|}\), where \(|c_{i}|\) is the number of tokens in the column name; similarly, each table name is comprised of tokens \(t_{i,1},\ldots,t_{i,|t_{i}|}\), where \(|t_{i}|\) is the number of tokens in the table name. 
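To make the notation above concrete, here is a toy instance in Python; the table and column names are illustrative and not taken from Spider.

```python
# Toy instance of the problem notation: a question Q, tables T and
# columns C, where each table/column name is itself a tuple of tokens.
Q = ("Find", "the", "number", "of", "scientists")
T = [("scientists",), ("assigned", "to")]        # t_1 ... t_|T|
C = [("name",), ("project", "hours")]            # c_1 ... c_|C|

n_tables, n_cols = len(T), len(C)                # |T| and |C|
col_token_counts = [len(c) for c in C]           # |c_i| for each column
```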
### Query Construction In contrast to previous work, we model the output SQL query \(Y\) as a graph, representing the AST of the query in the context-free grammar of SQL, which our model learns to generate in an autoregressive fashion. The query is an undirected graph \(G=(V,E)\), with vertices \(V\) and edges \(E\). Its nodes \(V=P\cup T\cup C\) are the possible SQL context-free grammar rules, \(P\), such as _UNION_, _SELECT_, _FROM_, _INTERSECTION_, etc., as well as the tables (\(T\)) and the columns (\(C\)) of the database schema. \(P\) is used to represent non-terminal nodes, depicting rules of the grammar, whereas \(T\) and \(C\) are used for terminal nodes, such as when selecting table or column names to be applied within a specific rule. The edge set \(E=\{(v_{i},v_{j})\mid v_{i},v_{j}\in V\}\) defines the connectivity between the different nodes in the graph. We represent the graph using an adjacency matrix, under a Breadth-First-Search (BFS) node ordering scheme \(\pi\) that maps nodes to rows of the adjacency matrix as a sequence (You et al., 2018). This approach permits the modelling of graphs of varying size, such as the ones representing the ASTs of complex SQL queries. Formally, given a mapping \(f_{S}\) from graphs (\(G\)) to sequences (\(S\)), and a graph \(G\) with \(n\) nodes under BFS node ordering \(\pi\), we can formulate \[S^{\pi}=f_{S}(G,\pi)=(S_{1}^{\pi},\ldots,S_{n}^{\pi}) \tag{1}\] where \(S_{i}^{\pi}\in\{0,1\}^{i-1}\), \(i\in\{1,\ldots,n\}\), depicts an adjacency vector between node \(\pi(v_{i})\) and the previous nodes \(\pi(v_{j})\), \(j\in\{1,\ldots,i-1\}\), already existing in the graph, so that: \[S_{i}^{\pi}=(A_{1,i}^{\pi},\ldots,A_{i-1,i}^{\pi})^{T},\quad\forall i\in\{2,\ldots,n\} \tag{2}\] Then, using \(S^{\pi}\), we can uniquely determine the SQL graph \(G\) in a sequential form and learn to predict it autoregressively. 
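As a concrete sketch of Eqs. 1–2, the following pure-Python helper (function names and the toy graph are illustrative, not from the paper's code) derives a BFS ordering \(\pi\) and the adjacency vectors \(S_{i}^{\pi}\) for a small undirected graph:

```python
from collections import deque

def bfs_order(adj, root=0):
    """Breadth-first node ordering pi over an undirected graph.
    adj: dict mapping node -> list of neighbours."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def graph_to_sequence(adj, root=0):
    """S^pi from Eq. 2: for the i-th node in BFS order (i >= 2 in the
    paper's 1-based indexing), a 0/1 vector recording edges to the
    i-1 previously placed nodes."""
    pi = bfs_order(adj, root)
    seq = [[1 if pi[j] in adj[pi[i]] else 0 for j in range(i)]
           for i in range(1, len(pi))]
    return pi, seq

# Toy AST-like graph: 0=SELECT, 1=FROM, 2=a column node, 3=a table node.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
pi, seq = graph_to_sequence(adj)
```

Because each vector only records edges to already-placed nodes, the original graph can be rebuilt exactly from the sequence, which is what lets the decoder emit the AST one node at a time.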
## 4 SQLformer ### Model Overview In light of recent advancements in the field (Shaw et al., 2021; Scholak et al., 2021; Li et al., 2023), we approach the text-to-SQL problem as a translation task by using an encoder-decoder architecture. We extend the original Transformer encoder (see Subsection 4.3) by incorporating learnable table and column tokens in the encoder, used to select the most relevant tables and columns in the database schema given the NLQ. This information is injected as input to the decoder, so that it can be enriched with the representation of the schema-aware question encoding and the most relevant tables and columns in the database schema selected by the model. The SQLformer decoder extends the original Transformer decoder (see Subsection 4.4) in a way that integrates both node adjacency and type embeddings for generating a SQL query autoregressively. The overall architecture of our SQLformer model is described in Fig. 1. ### Model Inputs In this section, we detail how the inputs to our model are constructed; in particular, the construction of both the NLQ and schema graphs is explained. Question Graph Construction. The natural language question can be formulated as a graph \(G_{Q}=\langle Q,R\rangle\), where the node set \(Q\) are the natural language tokens, and \(R=\{r_{1},\ldots,r_{|R|}\}\) refers to one-hop relations between words. In this work, we employ two groups of relations for the question graph. First, we use syntactic dependencies between the words in the question. Second, we use part-of-speech tagging to incorporate grammatical meaning across the words in the question. We create a joint question graph using both types of relations. This graph is then linearized as a Levi graph. Fig. 2 shows an example question graph with some illustrative relationships. 
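A minimal sketch of the Levi-graph linearization mentioned above, assuming the labelled relations are already available as (head, label, dependent) triples; the example triples are illustrative, not actual stanza output:

```python
def to_levi_graph(tokens, relations):
    """Turn a labelled question graph into a Levi graph: each labelled
    edge (head, label, dep) becomes an extra relation node, so that a
    label-agnostic encoder such as a GAT can also consume the labels."""
    nodes = list(tokens)
    edges = []
    for head, label, dep in relations:
        rel = len(nodes)
        nodes.append(label)        # the relation label becomes a node
        edges.append((head, rel))  # head token -> relation node
        edges.append((rel, dep))   # relation node -> dependent token
    return nodes, edges

tokens = ["Find", "the", "number"]
relations = [(0, "OBJECT", 2), (2, "DET", 1)]  # dependency-style triples
nodes, edges = to_levi_graph(tokens, relations)
```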
To encode the question graph we use a GAT (Velickovic et al., 2018), obtaining an embedding for each of the question tokens, \(Z_{i}\in\mathbb{R}^{d}\), with \(i\in\{1,\ldots,|Q|\}\), where \(d\) is the hidden size. Database Schema Graph Construction. Similarly, a database schema graph can be represented by \(G_{S}=\langle S,R\rangle\), where the node set \(S=\langle T,C\rangle\) represents the tables, \(T\), and the columns, \(C\), in the schema. The edge set \(R=\{r_{1},\ldots,r_{|R|}\}\) depicts the structural relationships among tables and columns in the schema. Similarly to previous works, we use the common relational database-specific relations, such as primary/foreign key for column pairs, column types, and whether a column belongs to a specific table. Fig. 3 shows an example database schema graph. We encode the schema graph using a GAT (Velickovic et al., 2018) and use global average pooling to obtain a single embedding to represent each database schema. ### Table and Column Selection Encoder The SQLformer encoder receives as input the previously described 1-D sequence of natural language token embeddings, \(Z\), and we prepend two learnable tokens to the sequence of embeddings: \(Z_{tables}\) and \(Z_{cols}\). The state of these tokens at the output of the Transformer encoder, depicted here as \(\hat{X}^{tables}\) and \(\hat{X}^{columns}\) for tables and columns, respectively, serves as input to two Multi-Layer Perceptron (MLP) blocks that are responsible for, given the NLQ, selecting \(k_{1}\) and \(k_{2}\) tables and columns, respectively. \(k_{1}\) and \(k_{2}\) are both hyperparameters of the model. Sinusoidal vectors are added to the sequence embeddings to retain the original positional information of the question. The Transformer encoder (Vaswani et al., 2017) consists of alternating layers of multi-head self-attention (MHA) and Fully-connected Forward Network (FFN) blocks. Before every block, Layer Normalisation (LN) is applied, and after every block, a residual connection is added. More formally, in the \(\ell^{th}\) encoder layer, the hidden states are represented as \(X_{S}^{\ell}=\{x_{1}^{\ell},\dots,x_{N}^{\ell}\}\), where \(N\) is the maximum length of the inputs. First, a MHA block maps \(X\) into a query matrix \(Q\in\mathbb{R}^{m\times d_{k}}\), key matrix \(K\in\mathbb{R}^{n\times d_{k}}\) and value matrix \(V\in\mathbb{R}^{n\times d_{v}}\), where \(m\) is the number of query vectors, and \(n\) the number of key or value vectors.

Figure 1: Overview of the model: our model inherits the seq2seq nature of the Transformer architecture, consisting of \(L\) layers of encoders and decoders. The SQLformer encoder introduces two learnable embeddings, \(Z_{tables}\) and \(Z_{cols}\), for representing the tables and columns, respectively. These two embeddings are used to select a set of \(k_{1}\) and \(k_{2}\) tables and columns, respectively. Subsequently, the learnable embeddings for these tables and columns are aggregated together with the representation of the question. In this example, the question consists of six tokens (see Fig. 2). This schema-conditioned question representation serves as input to the SQLformer decoder module. The architecture for the decoder module is detailed in Fig. 4.

Figure 2: An illustration of an example Spider question with six tokens as a graph \(G\) with part-of-speech and dependency relations. In this example, the token \(number\) has an OBJECT dependency with \(Find\), and \(Find\) and \(number\) are tagged as verb (VB) and noun (NN), respectively. We do not show all edges and label types to prevent clutter. 
Then, an attention vector is calculated as follows \[\text{Attention(Q, K, V)}=\text{softmax}(A)V \tag{3}\] \[\text{A}=\frac{QK^{\text{T}}}{\sqrt{d_{\text{k}}}} \tag{4}\] In practice, the MHA block calculates the self-attention over \(h\) heads, where each head \(i\) is independently parametrized by \(W_{i}^{Q}\in\mathbb{R}^{d_{m}\times d_{k}}\), \(W_{i}^{K}\in\mathbb{R}^{d_{m}\times d_{k}}\) and \(W_{i}^{V}\in\mathbb{R}^{d_{m}\times d_{v}}\), mapping the input embeddings \(X\) into queries and key-value pairs. Then, the attention for each head is calculated and concatenated, as follows \[\text{H}_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{5}\] \[\text{MHA}(X_{S}^{\ell})=Concat(\text{H}_{1},\dots,\text{H}_{h})W ^{\text{O}}\] (6) \[\overline{\textbf{X}}_{S}^{\ell}=\text{MHA}(X_{S}^{\ell}) \tag{7}\] where \(\text{W}^{\text{O}}\in\mathbb{R}^{d_{m}\times d_{m}}\) is a trainable parameter matrix. Next, to acquire the semantic hidden states of the input, an FFN block is applied, as follows \[\text{FFN}(\overline{\textbf{X}}_{S}^{\ell})=\text{max}(0,\overline{\textbf{ X}}_{S}^{\ell}W_{1}+b_{1})W_{2}+b_{2} \tag{8}\] where \(\text{W}_{1}\in\mathbb{R}^{d_{m}\times d_{ff}}\) and \(\text{W}_{2}\in\mathbb{R}^{d_{ff}\times d_{m}}\) are linear weight matrices. Finally, layer normalisation and residual connection are applied as follows \[\hat{\textbf{X}}_{S}^{\ell}=\text{LayerNorm}(\overline{\textbf{X}}_{S}^{\ell} +\text{FFN}(\overline{\textbf{X}}_{S}^{\ell})) \tag{9}\] Therefore, after \(L\) encoder layers, we obtain the input question embedding as \(\hat{X}\), where the first and second tokens, \(\hat{\textbf{X}}_{0}\) and \(\hat{\textbf{X}}_{1}\), correspond to \(\hat{\textbf{X}}_{\text{tables}}\) and \(\hat{\textbf{X}}_{\text{columns}}\), and the remaining tokens correspond to the natural language question token embeddings, depicted as \(\hat{X}_{Q}\in\mathbb{R}^{d\times|Q|}\). 
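The attention of Eqs. 3–4 can be sketched in a few lines of dependency-free Python (toy matrices; a real implementation would of course use batched tensor operations):

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d_k)
               for kj in K] for qi in Q]
    weights = [softmax(row) for row in scores]
    out = [[sum(w * V[j][c] for j, w in enumerate(row))
            for c in range(len(V[0]))] for row in weights]
    return out, weights

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out, weights = attention(Q, K, V)
```

Each row of `weights` is a probability distribution over the keys, so each output row is a convex combination of the value vectors.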
\(\hat{\textbf{X}}_{\text{tables}}\) and \(\hat{\textbf{X}}_{\text{columns}}\) are the input of two MLP blocks, \(\text{MLP}^{\text{tables}}\) and \(\text{MLP}^{\text{columns}}\), which project the embeddings for the additional tokens into two separate vectors of probabilities over the table and column vocabularies, of sizes \(|T|\) and \(|C|\) respectively, as follows \[\text{P}_{\text{tables}}=\text{softmax}(\text{MLP}^{\text{tables}}(\hat{ \textbf{X}}^{\text{tables}})) \tag{10}\] \[\text{P}_{\text{columns}}=\text{softmax}(\text{MLP}^{\text{columns}}(\hat{ \textbf{X}}^{\text{columns}})) \tag{11}\] Then, the top \(k_{1}\) tables and top \(k_{2}\) columns are selected according to \(P_{tables}\) and \(P_{columns}\). Next, two embedding lookup tables, \(\text{E}_{\text{T}}\in\mathbb{R}^{|T|\times d_{t}}\) and \(\text{E}_{\text{C}}\in\mathbb{R}^{|C|\times d_{c}}\), are used for mapping the selected tables and columns, respectively, into embeddings \(X_{tables}^{k}\in\mathbb{R}^{k_{1}\times d}\) and \(X_{columns}^{k}\in\mathbb{R}^{k_{2}\times d}\), where \(d\) is the size of the learnable embeddings. These are aggregated and concatenated, giving the final representation for the schema, depicted as \(\hat{X}_{schema}\). Finally, \(\hat{X}_{Q}\) and \(\hat{X}_{schema}\) are aggregated to effectively contextualize the natural language question embedding by the embedding of the most likely tables and columns in the schema being mentioned. The result of this aggregation is given as input to the decoder module. Figure 3: An illustration of an example Spider schema for database \(scientist\_1\). In this example, there are a total of 3 tables (\(scientists\), \(projects\), \(assigned\_to\)), with multiple columns for each table and relationships between the tables. 
For instance, table \(scientists\) has 2 columns: \(name\) and \(asn\), where \(asn\) is also a \(foreign\_key\) relationship to table \(assigned\_to\). ### Autoregressive Graph Generation Decoder During the decoding phase, previous works (e.g. Wang et al. (2021); Cao et al. (2021); Hui et al. (2022); Cai et al. (2022)) widely adopt the LSTM-based tree decoder in Yin and Neubig (2017) to generate SQL grammar rules. In contrast, the SQLformer decoder (see Fig. 4) extends the original Transformer decoder to predict the SQL AST autoregressively. This approach has two main advantages. First, it is able to maintain the context of previously generated parts of the query for longer sequences than LSTM-based decoders. This is especially important for long SQL queries, such as those containing sub-queries. Second, it encourages the generation of valid SQL queries by constraining the decoder to directly generate SQL ASTs. Also, the Transformer permutation invariance is desirable for processing the node embeddings of the SQL graph, as the graph is invariant under any permutation of the nodes. In the SQLformer decoder, the node embeddings are represented as a linear transformation of the node adjacency vectors, here called _node adjacency channels_. Formally, given a query graph \(G\), we represent the node adjacencies \(A=\{A_{0},A_{1},\ldots,A_{N}\}\), where \(N\) is the number of nodes and \(A_{i}\in\{0,1\}^{M}\). \(M\) is the maximum frontier of the BFS ordering. The _node adjacency channels_ of \(A\), represented as \(\text{H}_{\text{A}}\), are calculated as follows \[\text{H}_{\text{A}}=AW_{A} \tag{12}\] where \(W_{A}\in\mathbb{R}^{M\times d_{A}}\) is a learnable weight matrix with a hidden size of \(d_{A}\). 
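Since the BFS adjacency vectors \(S_{i}^{\pi}\) have variable length, a natural reading of Eq. 12 (an assumption here; the padding convention is not spelled out in the text) is to zero-pad them to the frontier size \(M\) and apply a shared linear map:

```python
def pad_adjacency(seq, M):
    """Zero-pad each variable-length adjacency vector to length M,
    yielding the rows A_i in {0,1}^M used in Eq. 12."""
    return [(row + [0] * M)[:M] for row in seq]

def linear_channels(A, W):
    """H_A = A W_A: project M-dim adjacency rows to d_A-dim channels."""
    return [[sum(a * w for a, w in zip(row, col)) for col in zip(*W)]
            for row in A]

seq = [[1], [1, 0], [0, 1, 0]]          # e.g. the S^pi of a 4-node graph
A = pad_adjacency(seq, M=4)             # all rows now have length M = 4
W_A = [[1, 0], [0, 1], [1, 1], [0, 0]]  # toy M x d_A weight matrix
H_A = linear_channels(A, W_A)
```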
In addition, we introduce the _node types_, represented as \(V=\{\text{V}_{0}\), \(\text{V}_{1}\), \(\ldots\), \(\text{V}_{\text{N}}\}\), where \(N\) is the number of nodes and \(\text{V}_{i}\) is a one-hot representation of the node type for node \(i\). The objective of \(V\) is to include the information about the query graph node types into the decoding process. Similarly to \(A\), we transform \(V\) by using a linear transformation into the _node type channels_, \(\text{H}_{\text{V}}\), as follows \[\text{H}_{\text{V}}=VW_{V} \tag{13}\] where \(W_{V}\in\mathbb{R}^{|\text{V}|\times d_{V}}\) is a learnable weight matrix with a hidden size of \(d_{V}\), and \(|\text{V}|\) is the number of possible node types. However, the basic Transformer does not have a direct way to incorporate both channels. Consequently, in order to alleviate this issue and incorporate both _node adjacency channels_ and _node type channels_ into the SQLformer decoder, we extend the original Transformer decoder architecture. In particular, inspired by (Ying et al., 2021), we include the _node type channels_ in the multi head self-attention aggregation process as a bias term (see Fig. 4). Formally, we modify Eq. 4 so that \(\text{H}_{\text{V}}\) acts as a bias term in the attention calculation, such that \[\text{A} =\frac{QK^{\text{T}}}{\sqrt{d_{\text{k}}}}+U \tag{14}\] \[U =W_{U}\times H_{V} \tag{15}\] Figure 4: Overview of the SQLformer decoder architecture. The inputs to the decoder are the node adjacencies and node types in the current timestep of the generation process. These are transformed into node adjacency and node type channels by 2 different linear layers. The node type channels are integrated into the aggregation process of the MHA mechanism as a bias term. The adjacencies and type channels are transformed through a series of \(L\) decoding layers with \(H\) heads. The final representation is used to generate the next node in the query graph. 
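The biased score computation of Eq. 14 can be illustrated with toy matrices; for brevity the bias \(U\) is passed in directly rather than computed from the node type channels via Eq. 15:

```python
import math

def biased_scores(Q, K, U):
    """Eq. 14: raw attention scores Q K^T / sqrt(d_k), plus a bias
    matrix U carrying node-type information into the attention."""
    d_k = len(K[0])
    return [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d_k) + U[i][j]
             for j, kj in enumerate(K)] for i, qi in enumerate(Q)]

scores = biased_scores(Q=[[1.0, 0.0]],
                       K=[[1.0, 0.0], [0.0, 1.0]],
                       U=[[0.0, 1.0]])
```

The bias shifts the pre-softmax scores, so node-type information can re-rank which previously generated nodes the decoder attends to without changing the attention mechanism itself.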
where \(W_{U}\in\mathbb{R}^{d\times d_{U}}\) is a learnable weight matrix that updates the input node type embeddings \(\text{H}_{\text{V}}\) into \(U\), and \(d_{U}\) is the node type embeddings dimensionality. In addition to the original Transformer output projection layer, depicted here as \(\text{O}_{\text{A}}\in\mathbb{R}^{h\times h}\), we add an additional output projection layer for updating the residuals of the node type embeddings, defined here as \(\text{O}_{\text{V}}\in\mathbb{R}^{d_{U}\times h}\). Specifically, the update of the embeddings \(H_{A}^{\ell}\) and \(H_{V}^{\ell}\), for node adjacency and node type channels, respectively, at layer \(\ell\), can be formalised as \[\begin{gathered} H_{A}^{\ell}=H_{A}^{\ell-1}+\text{O}_{A}^{\ell} \parallel_{k=1}^{K}\sum_{j=1}^{N}G^{k,\ell}(\mathbf{V}^{k,\ell}h_{A}^{\ell}) \\ H_{V}^{\ell}=H_{V}^{\ell-1}+\text{O}_{V}^{\ell}\parallel_{k=1}^ {K}A^{k,\ell}\\ G^{k,\ell}=\text{softmax}(A^{k,\ell})\end{gathered} \tag{16}\] where \(\parallel\) means concatenation, and \(K\) is the number of attention heads. Finally, the representation of both \(\text{H}_{\text{A}}\) and \(\text{H}_{\text{V}}\) after the last decoder layer is fed to two distinct MLP heads, which emit the predicted (soft) node adjacencies \(a^{t+1}\) and node types \(v^{t+1}\) for timestep \(t+1\), as follows \[a^{t+1}=\sigma(W_{a_{2}}\text{ReLU}(W_{a_{1}}H_{A}^{t})) \tag{17}\] \[v^{t+1}=\text{softmax}(W_{v_{2}}\text{ReLU}(W_{v_{1}}H_{V}^{t})) \tag{18}\] where \(W_{a_{1}}\), \(W_{v_{1}}\in\mathbb{R}^{512\times d}\), \(W_{a_{2}}\in\mathbb{R}^{M\times 512}\), \(W_{v_{2}}\in\mathbb{R}^{|\mathbf{V}|\times 512}\), \(M\) is the maximum size of the BFS-ordering frontier, \(|\mathbf{V}|\) is the size of the node type vocabulary, and \(\sigma\) represents the sigmoid operation. ## 5 Experiments In this section, we show our model performance on the Spider text-to-SQL dataset Yu et al. (2019). 
Also, we present ablation studies to analyse the importance of the different components of the SQLformer architecture. ### Experimental Setup Dataset. Our experiments use the Spider dataset, a large-scale cross-domain text-to-SQL benchmark, which also incorporates multiple earlier text-to-SQL datasets. The Spider dataset contains 8,659 training examples of question and SQL query pairs (along with the corresponding database schemas) and 1,034 development (dev) examples, spanning 200 complex databases across 138 different domains. The test set is not available for examination. Evaluation Metrics. Following Yu et al. (2019), we report results using the same metrics. In particular, we compute Exact Match (EM) accuracy on all examples, as well as grouped by difficulty levels. EM evaluates how closely a predicted SQL query matches the ground-truth query. Similarly to previous work on Spider Wang et al. (2021), these metrics do not take into account the model's performance on generating the constant values in the SQL query. In our ablation study experiments, we also use the EM accuracy metric over the development set. Implementation Details. We implemented SQLformer in PyTorch Paszke et al. (2019). For the graph neural network components, we use PyTorch Geometric (Fey and Lenssen, 2019). The questions, column and table names are tokenized and lemmatized using _stanza_ Qi et al. (2020), which is also used for dependency parsing and part-of-speech tagging. To transform the SQL queries into their corresponding ASTs, we use _sqlplot_. We find the best set of hyperparameters on a randomly sampled subset of 10% of the samples from the dev dataset. For training, we set the maximum input length to 1024, the maximum number of generated AST nodes to 200, the maximum number of previous AST nodes in the BFS ordering to 30, the batch size to 16, and the number of training steps to 20,000. The number of layers for the encoder and decoder are both set to 6, and the number of heads is 8. 
The dimensionality of the encoder and the decoder is set to 512. \(k_{1}\) and \(k_{2}\) are set to 20. The embedding sizes for tables and columns are set to 512. The node adjacency and type embedding sizes are 512. The output MLPs for generating the node adjacencies and types have 2 layers and a dimensionality of 512. Token embeddings are initialized with ELECTRA Clark et al. (2020) using the official weights from the HuggingFace library Wolf et al. (2020). We use teacher forcing in the decoder. Results are on the dev set unless stated otherwise. ### Overall Performance The EM accuracy results on the Spider benchmark are presented in Table 1. As shown in the table, our proposed model SQLformer achieves competitive performance in EM accuracy. On the development set, compared with RAT-SQL Wang et al. (2021), our model's EM increases from 73.7\(\%\) to 75.6\(\%\), achieving 1.9\(\%\) absolute improvement. When compared to approaches that fine-tune a Language Model (LM) with a much larger number of parameters, such as T5-3B (71.5\(\%\)), we achieve a 4.1\(\%\) absolute improvement. This effectively shows the benefit of our proposed architecture for solving text-to-SQL tasks. Furthermore, we provide a breakdown of accuracy by query difficulty level, i.e. easy, medium, hard and extra hard, as defined by Yu et al. (2019). In Table 2 we provide a comparison between our approach and state-of-the-art baselines on the EM accuracy metric, for the four query difficulty subsets. As expected, performance drops significantly with increasing query difficulty, going from 92.7% on \(easy\) queries to 51.2% on \(extra\) queries. Focusing on the most complex types of queries, when compared with RAT-SQL, SQLformer achieves an absolute improvement of 9.7% and 8.3% on \(hard\) and \(extra\) queries, respectively. This consolidates our motivation to employ a Transformer-based SQL decoder, allowing the model to capture longer dependencies. 
Therefore, SQLformer surpasses the baseline methods across all four subsets by a significant margin, giving supporting evidence for the effectiveness of our approach. ### Ablation Study In order to better validate the importance of each component in our architecture, we perform a series of ablation studies on the best performing SQLformer model. In Table 3, we compare 4 different design choices that we believe are critical in our architecture. In particular, we assess the impact of removing the table and column selection component from the encoder, the part-of-speech question encoding, and the dependency graph question encoding. As shown in Table 3, the component that has the biggest impact on the architecture is the table and column selection. Upon removing this component, the EM accuracy drops from 78.2\(\%\) to 72.3\(\%\), a 5.9\(\%\) absolute performance drop. We hypothesise that this mechanism injects the notion of schema-question linking, which has been demonstrated to be critical. Without schema linking, the joint contextualisation of question and schema is missing, significantly increasing the difficulty of the task. On the other hand, removing the dependency graph and part-of-speech question encodings has less impact on performance, leading to absolute performance decreases of 0.7\(\%\) and 0.9\(\%\), respectively. When swapping our decoder with the one in Yin and Neubig (2017), performance decreases by 4%. ## 6 Conclusion In this work, we introduced SQLformer, a new model for text-to-SQL generation, unique compared to previous models due to its autoregressive prediction of the SQL AST. With a specially designed encoder, SQLformer links questions and schema, utilizing pre-trained models for effective representation. A novel decoder layer integrates node adjacency and type information during learning, and is conditioned on top-selected tables, columns, and schema-aware question encoding to generate SQL queries. 
We anticipate that this architecture can generate queries in other languages modelled as graphs, such as SPARQL. Notably, SQLformer outperformed other competitive text-to-SQL baselines, showcasing its state-of-the-art performance. \begin{table} \begin{tabular}{l|c} \hline Method & EM accuracy (\(\%\)) \\ \hline T5-3B Scholak et al. (2021) & 71.5 \\ SADGA + GAP Cai et al. (2022) & 73.1 \\ RAT-SQL + GraPPa Yu et al. (2021) & 73.4 \\ RAT-SQL + GAP + NatSQL Shi et al. (2021) & 73.7 \\ SmBoP + GraPPa Rubin and Berant (2021) & 74.7 \\ DT-Fixup SQL-SP + RoBERTa Xu et al. (2021) & 75.0 \\ LGESQL + ELECTRA Cao et al. (2021) & 75.1 \\ RASAT + PICARD Qi et al. (2022) & 75.3 \\ T5-3B + PICARD Scholak et al. (2021) & 75.5 \\ S\({}^{2}\)SQL + ELECTRA Hui et al. (2022) & 76.4 \\ GRAPHIX-T5-3B + PICARD Li et al. (2023) & 77.1 \\ \hline **SQLformer** (our approach) & **78.2** \\ \hline \end{tabular} \end{table} Table 1: Exact Match (EM) results on Spider’s dev dataset. We compare our approach with some state-of-the-art baseline methods. \begin{table} \begin{tabular}{l|c c c c c} \hline Method & Easy & Medium & Hard & Extra & All \\ \hline RAT-SQL + BERT & 86.4 & 73.6 & 62.1 & 42.9 & 69.7 \\ T5-3B & 89.5 & 78.3 & 58.6 & 40.4 & 71.6 \\ LGESQL & 91.5 & 76.7 & 66.7 & 48.8 & 74.1 \\ GRAPHIX-T5-3B & 91.9 & 81.6 & 61.5 & 50 & 75.6 \\ \hline **SQLformer** & **92.7** & **82.9** & **71.8** & **51.2** & **76.8** \\ \hline \end{tabular} \end{table} Table 2: EM accuracy on the Spider queries across different levels of difficulty as defined by Yu et al. (2019). 
\begin{table} \begin{tabular}{l|c} \hline Method & EM accuracy (\(\%\)) \\ \hline SQLformer & **78.2 \(\pm\) 0.75** \\ SQLformer w/o dependency graph & 77.5 \(\pm\) 0.72 \\ SQLformer w/o Part-of-Speech graph & 77.3 \(\pm\) 0.63 \\ SQLformer encoder + LSTM-based decoder & 74.2 \(\pm\) 0.38 \\ SQLformer w/o table + column selection & 72.3 \(\pm\) 0.38 \\ \hline \end{tabular} \end{table} Table 3: EM accuracy (and \(\pm\) 95% confidence interval) of SQLformer ablation study on the development set. ### Limitations One of the main limitations of our work is its focus on the English language, as it is the language used by most publicly available datasets. A potential way to alleviate this is by using multi-language PLMs for processing the questions. Another relevant drawback is the requirement to be able to transform queries into ASTs, such that model training is possible. However, most popular modern query languages have libraries available for performing such transformations. Finally, it is worth noting the significant GPU resource requirements for training the architecture.
2310.17045
Phase Change Induced Magnetic Switching through Metal-insulator Transition in VO2/TbFeCo Films
The ability to manipulate spins in magnetic materials is essential in designing spintronics devices. One method for magnetic switching is through strain. In VO2 on TiO2 thin films, while VO2 remains rutile across the metal-insulator transition, the in-plane lattice area expands going from low temperature insulating phase to high temperature conducting phase. In a VO2/TbFeCo bilayer, the expansion of the VO2 lattice area exerts tension on the amorphous TbFeCo layer. Through the strain effect, magnetic properties, including the magnetic anisotropy and magnetization, of TbFeCo can be changed. In this work, the changes in magnetic properties of TbFeCo on VO2/TiO2(011) are demonstrated using anomalous Hall effect measurements. Across the metal-insulator transition, TbFeCo loses perpendicular magnetic anisotropy, and the magnetization in TbFeCo turns from out-of-plane to in-plane. Using atomistic simulations, we confirm these tunable magnetic properties originating from the metal-insulator transition of VO2. This study provides the groundwork for controlling magnetic properties through a phase transition.
Chung T. Ma, Salinporn Kittiwatnakul, Apiprach Sittipongpittaya, Yuhan Wang, Md Golam Morshed, Avik W. Ghosh, S. Joseph Poon
2023-10-25T22:50:18Z
http://arxiv.org/abs/2310.17045v1
Phase Change Induced Magnetic Switching through Metal-insulator Transition in VO\({}_{2}\)/TbFeCo Films ###### Abstract The ability to manipulate spins in magnetic materials is essential in designing spintronics devices. One method for magnetic switching is through strain. In VO\({}_{2}\) on TiO\({}_{2}\) thin films, while VO\({}_{2}\) remains rutile across the metal-insulator transition, the in-plane lattice area expands going from the low-temperature insulating phase to the high-temperature conducting phase. In a VO\({}_{2}\)/TbFeCo bilayer, the expansion of the VO\({}_{2}\) lattice area exerts tension on the amorphous TbFeCo layer. Through the strain effect, magnetic properties of TbFeCo, including the magnetic anisotropy and magnetization, can be changed. In this work, the changes in magnetic properties of TbFeCo on VO\({}_{2}\)/TiO\({}_{2}\)(011) are demonstrated using anomalous Hall effect measurements. Across the metal-insulator transition, TbFeCo loses perpendicular magnetic anisotropy, and the magnetization in TbFeCo turns from out-of-plane to in-plane. Using atomistic simulations, we confirm these tunable magnetic properties originating from the metal-insulator transition of VO\({}_{2}\). This study provides the groundwork for controlling magnetic properties through a phase transition. ## I Introduction With the rapid development of automation, the need for fast processing and compact data storage has quickly increased. Spintronic devices have the potential to serve as the building blocks of fast data processors and high-density memory [1, 2, 3, 4, 5, 6]. In spintronics, magnetic moments are the key components for reading and writing data. Being able to control magnetic moments is crucial in designing spintronic devices [3, 4, 5, 6]. Several methods, such as current and laser pulses, can switch magnetic moments in multilayer thin films [7, 8, 9]. 
Investigating other methods to manipulate spins is critical for future developments in spintronics. Among many mechanisms to control magnetism, straintronics, which employs strain-mediated effects for switching, presents an intriguing opportunity. It can serve as a foundation for energy-efficient devices [10, 11, 12]. One possibility is using strain arising from the metal-insulator transition (MIT). For example, MIT in Vanadium dioxide (VO\({}_{2}\)) has drawn interest from both fundamental theories and technological applications [13, 14]. Recent studies have shown possible applications in ultrafast optics and electronic devices for sensing and switching [15, 16, 17, 18, 19]. In bulk VO\({}_{2}\), MIT occurs at \(340\) K [20] and is accompanied by abrupt changes in structural and electronic properties. Across MIT, bulk VO\({}_{2}\) undergoes a structural transition from a low-temperature monoclinic to a high-temperature rutile phase. In VO\({}_{2}\) thin films under uniaxial strain, recent reports reveal a complex mix of structural phases near MIT [21, 22, 23, 24, 25, 26, 27]. When VO\({}_{2}\) films are epitaxially grown on TiO\({}_{2}\) substrates, due to epitaxial bi-axial strains, the transitions are isostructural. In addition, MIT occurs at different temperatures for VO\({}_{2}\) films grown on different orientations of the TiO\({}_{2}\) substrates. In VO\({}_{2}\)/TiO\({}_{2}\), although VO\({}_{2}\) films remain rutile, the lattice parameters change along in-plane and out-of-plane directions [27]. Furthermore, in the similar V\({}_{2}\)O\({}_{3}\) system, this coexistence of nanoscale phases near MIT leads to changes in magnetic properties in V\({}_{2}\)O\({}_{3}\)/Ni bilayers [28, 29]. Moreover, magnetism in paramagnetic centers is found to be affected by MIT in VO\({}_{2}\) due to magnetoelastic anisotropy [30]. In these samples, the changes in lattice parameters of VO\({}_{2}\) serve as the most important mechanism for tuning magnetic properties. 
Because of their high magnetostriction, ferrimagnetic rare-earth (RE) transition-metal (TM) alloys such as TbFeCo are promising materials to study the effect of the MIT on magnetism. Amorphous ferrimagnetic RE-TM thin films have been widely studied for their applications in high-density low-current spintronics devices [31], sub-ps ultrafast magnetic switching [32, 33, 34, 8, 9], and as a host for magnetic skyrmions with tunable Dzyaloshinskii-Moriya interaction [35, 36, 37, 38, 39]. These ferrimagnetic films exhibit strong perpendicular magnetic anisotropy (PMA) and can be synthesized at room temperature without epitaxial growth [40, 41]. Magnetic properties, such as magnetization and coercivity, are greatly influenced by the compensation temperature, which can be tuned by varying composition and thickness [42, 43]. These properties make TbFeCo a good material to reveal the effect of the MIT on magnetism. In this work, the impact of the MIT on magnetic properties is investigated in a VO\({}_{2}\)/TbFeCo bilayer. Amorphous TbFeCo films are grown on epitaxial VO\({}_{2}\) samples and a Si/SiO\({}_{2}\) substrate. Comparison of magnetic properties reveals changes in magnetic anisotropy and magnetization in TbFeCo near the MIT of VO\({}_{2}\). Furthermore, atomistic simulations are employed to incorporate the strain effect induced by VO\({}_{2}\) on TbFeCo near the MIT. These results can serve as a foundation for developing techniques to control magnetic properties through the MIT for device applications. More importantly, since the properties of VO\({}_{2}\)[15, 16, 17, 18, 19] and RE-TM alloys [32, 33, 34, 8] can be controlled through an ultrafast laser, these results open up the possibility of high-speed data processing using RE-TM on VO\({}_{2}\). ## II Materials and methods \(\sim\)100 nm VO\({}_{2}\) thin films were grown on (011) and (100) TiO\({}_{2}\) substrates by reactive biased target ion beam deposition (RBTIBD). 
Details of the growth conditions can be found in a previous publication [44]. 15 nm thick amorphous Tb\({}_{26}\)Fe\({}_{64}\)Co\({}_{10}\) thin films were deposited on the VO\({}_{2}\)/TiO\({}_{2}\) films and thermally oxidized Si substrates by RF magnetron sputtering at room temperature under a base pressure of 5 x 10\({}^{-7}\) torr from co-sputtering of Tb and TbFeCo targets. The TbFeCo layers were deposited on the VO\({}_{2}\)/TiO\({}_{2}\) films and SiO\({}_{2}\)/Si substrates at the same time to eliminate changes in TbFeCo properties due to growth conditions. A 5 nm Ta capping layer was deposited on the samples to prevent oxidation. These samples were patterned into Hall bar devices for magneto-transport measurements, and Hall measurements were obtained for the TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100), (011), and TbFeCo/SiO\({}_{2}\)/Si samples. Structural characterization of the samples was performed by X-ray diffraction (XRD) using a SmartLab system (Rigaku Inc.) in the 2\(\theta\) range between 20 degrees and 80 degrees. Film thickness measurements were performed by the X-ray reflectivity (XRR) technique in the SmartLab. The film surface morphology was characterized via atomic force microscopy (AFM) with a Cypher system (Asylum Research Inc.). The magnetic properties at various temperatures were measured using the vibrating sample magnetometer (VSM) option in a VersaLab system (Quantum Design Inc.). The magneto-transport properties at various temperatures were measured using the electrical transport option in the VersaLab system. Temperatures were varied from 250 K to 350 K and applied magnetic fields were varied from -2 T to 2 T for these measurements. Atomistic simulations were employed to study the change in magnetic hysteresis due to strain. An in-house atomistic code was used for the simulations. Since Fe and Co atoms belong to the same TM sublattice in the RE-TM ferrimagnet, Co atoms are treated as Fe atoms. 
Tb and Fe atoms are distributed in a 1.6 nm x 1.6 nm x 1.6 nm RE\({}_{25}\)TM\({}_{75}\) amorphous structure. We placed replicas of this box next to each other in a 3 x 3 x 9 configuration to expand the simulation size to 4.8 nm x 4.8 nm x 14.4 nm, with 20250 atoms in total. The parameters used in the simulation are listed in Table 1. The anisotropy axis for each atom is distributed randomly within a 30-degree cone, with the axis of the cone pointing along the out-of-plane direction. The exchange interactions are benchmarked based on Ostler _et al._[45] and our experiments to maintain the same Curie temperature and compensation temperature for a given composition. Using the stochastic Landau-Lifshitz-Gilbert (LLG) equation [46], hysteresis loops were simulated and compared to experiments. The strain anisotropy (K\({}_{strain}\)) is given by \[K_{strain}=-\frac{3}{2}\lambda E_{y}\varepsilon \tag{1}\] where \(\lambda\) = 100 ppm is the magnetostriction of amorphous TbFeCo, \(E_{y}\) = 100 GPa is the Young's modulus of TbFeCo and \(\varepsilon\) is the strain exerted on TbFeCo by the MIT of VO\({}_{2}\). In the case of TbFeCo thin films, a positive K\({}_{strain}\) leads to perpendicular magnetic anisotropy, while a negative K\({}_{strain}\) leads to in-plane magnetic anisotropy. The percentage of atoms that experience the strain \(\varepsilon\) varies with the phase distribution in VO\({}_{2}\) as it undergoes the MIT. From Laverock _et al._[26], VO\({}_{2}\)'s transition is not abrupt across the MIT: near the MIT, there is a mixture of a low-temperature insulating phase and a high-temperature metallic phase present in the sample. To model this behavior, we approximated the fraction of atoms experiencing strain from VO\({}_{2}\)'s MIT based on the fraction of metallic phase obtained from the experiments by Laverock _et al._[26] at various temperatures. 
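The strain-anisotropy assignment described above can be sketched numerically. The following is an illustrative sketch (not the in-house simulation code itself), combining Eq. 1 with the metallic-phase fractions for VO\({}_{2}\)/TiO\({}_{2}\)(011) quoted in the text; the random seed and per-atom bookkeeping are assumptions made for illustration:

```python
import random

LAMBDA = 100e-6   # magnetostriction of amorphous TbFeCo (100 ppm)
E_Y = 100e9       # Young's modulus of TbFeCo (100 GPa), in Pa

def k_strain(eps):
    """Eq. (1): K_strain = -(3/2) * lambda * E_y * epsilon, in J/m^3."""
    return -1.5 * LAMBDA * E_Y * eps

# Approximate metallic-phase fraction of VO2/TiO2(011) near the MIT, i.e. the
# fraction of TbFeCo atoms assumed to experience the MIT strain
METALLIC_FRACTION = {250: 0.00, 300: 0.25, 320: 0.75}

EPS_MIT = 2.6e-2  # tensile strain from VO2's in-plane lattice expansion

def assign_strain_anisotropy(n_atoms, temperature, seed=0):
    """Per-atom strain anisotropy (J/m^3): a randomly chosen metallic-phase
    fraction of atoms feels k_strain(EPS_MIT); the rest feel none."""
    rng = random.Random(seed)
    n_strained = round(METALLIC_FRACTION[temperature] * n_atoms)
    strained = set(rng.sample(range(n_atoms), n_strained))
    return [k_strain(EPS_MIT) if i in strained else 0.0 for i in range(n_atoms)]

ks = assign_strain_anisotropy(20250, 320)
print(f"K_strain(MIT) = {k_strain(EPS_MIT):.3g} J/m^3")  # about -3.9e5 J/m^3
print(f"strained fraction at 320 K: {sum(k != 0.0 for k in ks) / len(ks):.3f}")
```

Since the MIT strain is tensile and the magnetostriction positive, every strained atom receives a negative (in-plane-favoring) anisotropy contribution.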
For example, in TbFeCo on VO\({}_{2}\)/TiO\({}_{2}\)(011), no atoms experience strain at 250 K, 25% of atoms experience strain at 300 K and 75% of atoms experience strain at 320 K. ## III Results and discussions VO\({}_{2}\) films were grown on TiO\({}_{2}\) substrates with two different orientations. Fig. 1(a) presents XRD patterns of VO\({}_{2}\)/TiO\({}_{2}\)(011) (green) and VO\({}_{2}\)/TiO\({}_{2}\)(100) (blue) films measured at room temperature. The 2\(\theta\) peaks are indexed using rutile VO\({}_{2}\) (R-VO\({}_{2}\)) and TiO\({}_{2}\). Different orientations of R-VO\({}_{2}\) are found in samples grown on different orientations of the TiO\({}_{2}\) substrates. R-VO\({}_{2}\)(101) and R-VO\({}_{2}\)(200) peaks are observed in VO\({}_{2}\)/TiO\({}_{2}\)(011) and VO\({}_{2}\)/TiO\({}_{2}\)(100), respectively. These correspond to the epitaxial growth of the VO\({}_{2}\) films on each TiO\({}_{2}\) orientation. The rutile phase of VO\({}_{2}\) at room temperature is consistent with the findings in a previous publication by Kittiwantanakul _et al._[27]. In VO\({}_{2}\) thin films epitaxially grown on TiO\({}_{2}\) substrates, due to epitaxial bi-axial strains, VO\({}_{2}\) remains rutile in both the low-temperature insulating phase and the high-temperature conducting phase. Although VO\({}_{2}\) remains rutile, temperature-dependent XRD shows a change in relative lattice spacing across the MIT. Above the MIT, the relative lattice spacing in VO\({}_{2}\) on TiO\({}_{2}\)(100) becomes comparable to that of bulk VO\({}_{2}\)[47]. 
Thus, in VO\({}_{2}\)/TiO\({}_{2}\)(011), the in-plane lattice area, defined as A = a x c, expands from 12.66 A\({}^{2}\), where the lattice constants a and c are equal to 4.41 A and 2.87 A respectively, to 12.99 A\({}^{2}\), where a and c are equal to 4.56 A and 2.85 A respectively, going from the low-temperature insulating phase to the high-temperature conducting phase. On the other hand, in VO\({}_{2}\)/TiO\({}_{2}\)(100), the in-plane lattice area compresses from 13.03 A\({}^{2}\), where a and c are equal to 4.51 A and 2.89 A respectively, to 12.99 A\({}^{2}\) across the MIT. To characterize the MIT of VO\({}_{2}\)/TiO\({}_{2}\), resistance measurements from 240 K to 400 K are shown in Fig. 1 (b). Across the MIT, the VO\({}_{2}\)/TiO\({}_{2}\) films show several orders of magnitude decrease in resistance, confirming the transition from an insulating to a metallic state. The two orientations of VO\({}_{2}\)/TiO\({}_{2}\) have different MIT temperatures between 310 K and 350 K: the MIT is found in VO\({}_{2}\)/TiO\({}_{2}\)(011) at \(\sim\)320 K, followed by VO\({}_{2}\)/TiO\({}_{2}\)(100) at \(\sim\)350 K. \begin{table} \begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline Fe magnetic moment (\(\mu_{Fe}\)) & 2.22 \(\mu_{B}\) \\ \hline Tb magnetic moment (\(\mu_{Tb}\)) & 9.34 \(\mu_{B}\) \\ \hline Fe-Fe exchange interaction (\(J_{Fe-Fe}\)) & 2.83 x 10\({}^{-21}\) J \\ \hline Tb-Tb exchange interaction (\(J_{Tb-Tb}\)) & 0.99 x 10\({}^{-21}\) J \\ \hline Fe-Tb exchange interaction (\(J_{Fe-Tb}\)) & -1.09 x 10\({}^{-21}\) J \\ \hline Anisotropy (\(K_{u}\)) & 1 x 10\({}^{5}\) J/m\({}^{3}\) \\ \hline Damping (\(\alpha\)) & 0.05 \\ \hline \end{tabular} \end{table} Table 1: Values of parameters used in the atomistic simulations of TbFeCo. 
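The in-plane areas and the strains they imply for the TbFeCo layer can be checked directly from the quoted lattice constants (an illustrative check, using only values stated in the text):

```python
# In-plane lattice areas A = a * c (in Angstrom^2) as quoted in the text, from [27]
A_LOW_011 = 12.66   # VO2/TiO2(011), low-T insulating phase (a = 4.41 A, c = 2.87 A)
A_LOW_100 = 13.03   # VO2/TiO2(100), low-T insulating phase (a = 4.51 A, c = 2.89 A)
A_HIGH = 12.99      # high-T conducting phase               (a = 4.56 A, c = 2.85 A)

# Consistency check: the areas follow from the quoted lattice constants
assert abs(4.41 * 2.87 - A_LOW_011) < 0.01
assert abs(4.51 * 2.89 - A_LOW_100) < 0.01
assert abs(4.56 * 2.85 - A_HIGH) < 0.01

# In-plane strains exerted on TbFeCo across the MIT
eps_011 = (A_HIGH - A_LOW_011) / A_LOW_011   # about +2.6e-2 (tensile)
eps_100 = (A_HIGH - A_LOW_100) / A_LOW_100   # about -3e-3 (compressive)

print(f"eps(011) = {eps_011:+.4f}, eps(100) = {eps_100:+.4f}")
```

The sign difference (tensile expansion on the (011) substrate, slight compression on (100)) is what drives the contrasting magnetic behavior discussed below.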
Hysteresis-like behavior is present near the MIT in both orientations, where sharp changes in resistance occur at different temperatures under heating and cooling. The shift in the MIT is due to the different epitaxial bi-axial strains in VO\({}_{2}\)/TiO\({}_{2}\) for different orientations [27]. To study the strain effect on magnetic properties from VO\({}_{2}\) across the MIT, we deposited 15 nm thick TbFeCo with a 5 nm thick Ta capping layer on top of the various VO\({}_{2}\)/TiO\({}_{2}\) films at the same time. Fig. 2 (a) shows a schematic diagram of the heterostructure investigated in this work. We studied the surface morphology and roughness of these films by AFM. Fig. 2 (b)-(e) show the AFM images of TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\) before and after the depositions of the TbFeCo and Ta capping layers. Before the deposition of TbFeCo, the RMS roughnesses of the samples are 1.19 nm in VO\({}_{2}\)/TiO\({}_{2}\)(011) and 0.66 nm in VO\({}_{2}\)/TiO\({}_{2}\)(100). After the deposition of the TbFeCo and Ta capping layers, the RMS roughnesses of the samples are 1.29 nm in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011) and 0.81 nm in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). This means that the changes in roughness after the deposition of TbFeCo are rather small. Furthermore, the AFM images show little change to the samples' surfaces. These indicate that the TbFeCo layers with Ta capping deposited on VO\({}_{2}\)/TiO\({}_{2}\) maintained the same roughness and uniformity for each sample. Figure 1: (a) Room temperature X-ray diffraction (XRD) pattern of VO\({}_{2}\)/TiO\({}_{2}\)(011) (green) and VO\({}_{2}\)/TiO\({}_{2}\)(100) (blue) films. The 2\(\theta\) peaks are indexed with rutile VO\({}_{2}\) (R-VO\({}_{2}\)) and TiO\({}_{2}\). (b) Resistance obtained from 240 K to 400 K in VO\({}_{2}\)/TiO\({}_{2}\)(011) (green) and VO\({}_{2}\)/TiO\({}_{2}\)(100) (blue). The MIT of the different orientations is observed at different temperatures between 310 K and 350 K. 
To investigate if there is any magnetic switching of TbFeCo due to the MIT in VO\({}_{2}\), we fabricated each sample into a Hall bar configuration and performed anomalous Hall effect measurements on the patterned films. The anomalous Hall effect is considered here instead of direct hysteresis loops because the TbFeCo films have a low magnetization of about 1 x 10\({}^{5}\) A/m, resulting in a small magnetic moment signal in M-H loop measurements; the anomalous Hall effect therefore gives clearer results. Fig. 3 (a)-(c) show the normalized Hall resistance as a function of the out-of-plane applied magnetic field for (a) TbFeCo/SiO\({}_{2}\)/Si, (b) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011), and (c) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). For higher temperatures, above 330 K, increases in noise are observed in both Fig. 3 (a) and (c). We suspect this is due to the temperature effect in the patterned films. In Fig. 3 (a), the Hall resistance of TbFeCo/SiO\({}_{2}\)/Si shows very minor changes from 300 K to 350 K, which is expected. Since the SiO\({}_{2}\)/Si substrate has no transitions within this temperature range, the only source of strain acting on TbFeCo arises from the difference in thermal expansion between TbFeCo and the SiO\({}_{2}\)/Si substrate. The thermal expansion coefficient of the SiO\({}_{2}\)/Si substrate is 0.24 ppm/K. In comparison, the thermal expansion coefficient of amorphous TbFeCo near 300 K is about 10 ppm/K, estimated from amorphous TbFe alloy [48]. From 300 K to 350 K, \(\varepsilon\) due to thermal expansion is \(\sim\) 500 ppm, which is 5 x 10\({}^{-4}\). Using Eq. 1, this gives a K\({}_{strain}\) of about -7.5 x 10\({}^{3}\) J/m\({}^{3}\), much smaller than the K\({}_{u}\) of 1 x 10\({}^{5}\) J/m\({}^{3}\) in TbFeCo. As shown in Fig. 
3 (a), an \(\varepsilon\) of 5 x 10\({}^{-4}\) is too small to have any effect on the magnetic anisotropy of TbFeCo, and the magnetic moments of TbFeCo remain pointing in the out-of-plane direction at zero field. These minor changes in the hysteresis loops are likely due to the increase in temperature. The lack of significant changes in TbFeCo's out-of-plane loops shows that the magnetic anisotropy of TbFeCo is nearly constant around room temperature. Next, we focus on the behavior of TbFeCo on VO\({}_{2}\)/TiO\({}_{2}\)(011) near room temperature. From Fig. 3 (b), the normalized Hall resistance as a function of the out-of-plane applied magnetic field of TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011) shows a clear loss of PMA going from 250 K to 320 K. The magnetic hysteresis loops become less square and the magnetic moments of TbFeCo switch from out-of-plane to in-plane. From Fig. 1 (b), the MIT of VO\({}_{2}\)/TiO\({}_{2}\)(011) (green line) occurs near 320 K. This means that the loss of PMA in TbFeCo corresponds to the MIT of VO\({}_{2}\) near 320 K. As the temperature goes up from 250 K to 320 K, VO\({}_{2}\)'s in-plane lattice area expands across the MIT of VO\({}_{2}\)/TiO\({}_{2}\)(011). From Kittiwantanakul _et al._[27], the in-plane lattice area expands from 12.66 A\({}^{2}\) in the low-temperature phase to 12.99 A\({}^{2}\) in the high-temperature phase. This corresponds to an \(\varepsilon\) of 2.6 x 10\({}^{-2}\) and a K\({}_{strain}\) of -3.9 x 10\({}^{5}\) J/m\({}^{3}\) using Eq. 1, greater in magnitude than the K\({}_{u}\) of 1 x 10\({}^{5}\) J/m\({}^{3}\) in TbFeCo. Besides the strain from VO\({}_{2}\)'s in-plane lattice expansion, another source of strain arises from the difference in thermal expansion between TbFeCo and VO\({}_{2}\). As discussed earlier, the thermal expansion coefficient of amorphous TbFeCo near 300 K is about 10 ppm/K. On the other hand, the thermal expansion coefficient of VO\({}_{2}\) near 300 K is about 21.1 ppm/K [49]. 
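The competing strain contributions can be compared numerically; the following illustrative sketch uses Eq. 1 and the material values quoted in this section:

```python
LAMBDA = 100e-6   # magnetostriction of amorphous TbFeCo (100 ppm)
E_Y = 100e9       # Young's modulus of TbFeCo (100 GPa)
K_U = 1e5         # intrinsic uniaxial anisotropy of TbFeCo (J/m^3)

def k_strain(eps):
    """Eq. (1): K_strain = -(3/2) * lambda * E_y * epsilon, in J/m^3."""
    return -1.5 * LAMBDA * E_Y * eps

# Thermal-expansion mismatch between TbFeCo (~10 ppm/K) and VO2 (~21.1 ppm/K),
# accumulated from 250 K to 320 K
eps_thermal = (21.1e-6 - 10e-6) * (320 - 250)   # about 8e-4
# Strain from VO2's in-plane lattice expansion across the MIT
eps_mit = 2.6e-2

print(f"|K_strain(thermal)| / K_u = {abs(k_strain(eps_thermal)) / K_U:.2f}")  # << 1
print(f"|K_strain(MIT)| / K_u = {abs(k_strain(eps_mit)) / K_U:.2f}")          # ~ 3.9
```

The thermal-mismatch contribution stays an order of magnitude below K\({}_{u}\), while the MIT strain overwhelms it, consistent with the observed loss of PMA.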
Thus, \(\varepsilon\) due to thermal expansion going from 250 K to 320 K is \(\sim\) 800 ppm, which is 8 x 10\({}^{-4}\). This is over an order of magnitude smaller than the \(\varepsilon\) of 2.6 x 10\({}^{-2}\) that arises from VO\({}_{2}\)'s MIT. Moreover, from Laverock _et al._[26], VO\({}_{2}\) films are not homogeneous. The MIT of VO\({}_{2}\) films involves a mixture of a low-temperature insulating phase and a high-temperature conducting phase across a temperature range. This means that TbFeCo on VO\({}_{2}\) experiences a gradual change in strain across the MIT. This is supported by the progressive loss of PMA in TbFeCo going from 250 K to 320 K, as seen in Fig. 3 (b). This shows that the switching of TbFeCo from out-of-plane to in-plane is likely due to the tensile strain that arises from VO\({}_{2}\)'s in-plane lattice expansion across the MIT. Fig. 3 (c) shows the normalized Hall resistance as a function of the out-of-plane applied magnetic field of TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). The Hall effect of TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100) reveals the absence of PMA in TbFeCo throughout the measured temperature range. This is probably due to the presence of tensile strain exerted on TbFeCo by VO\({}_{2}\)/TiO\({}_{2}\)(100). The in-plane lattice area of the low-temperature insulating phase in VO\({}_{2}\)/TiO\({}_{2}\)(100) is 13.03 A\({}^{2}\)[27], which is larger compared to the in-plane lattice area in VO\({}_{2}\)/TiO\({}_{2}\)(011) (12.66 A\({}^{2}\)). This means that the underlayer of VO\({}_{2}\)/TiO\({}_{2}\)(100) is most likely applying a tensile interfacial strain on the TbFeCo atoms in these multilayer thin films. Since amorphous TbFeCo has positive magnetostriction, a tensile strain will lead to an additional in-plane anisotropy contribution. 
In contrast, in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011), the smaller in-plane lattice area of the low-temperature insulating phase in VO\({}_{2}\)/TiO\({}_{2}\)(011) creates a compressive interfacial strain on TbFeCo, resulting in PMA in TbFeCo. Furthermore, when the in-plane lattice area of VO\({}_{2}\)/TiO\({}_{2}\)(011) expands to 12.99 A\({}^{2}\) across the MIT, TbFeCo on VO\({}_{2}\)/TiO\({}_{2}\)(011) loses PMA. This shows that an in-plane lattice area of 12.99 A\({}^{2}\) or larger supplies a tensile interfacial strain on TbFeCo. From Fig. 3 (c), as the temperature changes from 300 K to 350 K, there are no changes in the magnetic anisotropy of TbFeCo; the magnetic moments of TbFeCo remain in-plane at zero external field throughout the measured temperatures. From Fig. 1 (b), the MIT of VO\({}_{2}\)/TiO\({}_{2}\)(100) (blue line) occurs near 350 K. This means that across the MIT of VO\({}_{2}\), the magnetic properties of TbFeCo remain unaffected. This can be explained by the change in the in-plane lattice area of VO\({}_{2}\)/TiO\({}_{2}\)(100) across the MIT. In VO\({}_{2}\)/TiO\({}_{2}\)(100), the in-plane lattice area shrinks from 13.03 A\({}^{2}\) in the low-temperature phase to 12.99 A\({}^{2}\) in the high-temperature phase [27]. This corresponds to an \(\varepsilon\) of -3 x 10\({}^{-3}\). Note that the negative sign here corresponds to a compressive strain exerted on TbFeCo, compared to the tensile strain in the other samples. The strain in VO\({}_{2}\)/TiO\({}_{2}\)(100) is almost 10 times smaller than the strain in VO\({}_{2}\)/TiO\({}_{2}\)(011), which is 2.6 %. Therefore, it makes sense that the change in magnetic anisotropy of TbFeCo is only observed in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011), but not in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). Figure 2: (a) An illustration of the TbFeCo/VO\({}_{2}\) heterostructure (not to scale). (b-e) Atomic force microscopy (AFM) images of TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\) (b-c) before and (d-e) after the deposition of the TbFeCo layer with Ta capping layer: (b) VO\({}_{2}\)/TiO\({}_{2}\)(011); (c) VO\({}_{2}\)/TiO\({}_{2}\)(100); (d) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011); (e) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). Figure 3: Anomalous Hall effect of TbFeCo measured at various temperatures under an out-of-plane external field. (a) TbFeCo/SiO\({}_{2}\)/Si; (b) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011); (c) TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(100). To verify that the strain from VO\({}_{2}\)'s MIT is the source of magnetic switching in TbFeCo, an atomistic model is employed. In this model, the strain anisotropy is given by Eq. 1. Fig. 4 (a) and (b) show the comparison of the measured out-of-plane anomalous Hall effect and the simulated hysteresis loops from 300 K to 350 K in TbFeCo on SiO\({}_{2}\)/Si substrate, respectively. In this sample, no strain anisotropy is included in the simulations because the SiO\({}_{2}\)/Si substrate does not undergo a transition across these temperatures. The results indicate that the minor changes in the measured anomalous Hall effect from 300 K to 350 K are due to an increase in temperature. A discrepancy in the coercivity between measurements and simulations is observed in Fig. 4 (a) and (b). We suspect the discrepancy originates from the complex cone-shaped anisotropy in amorphous rare-earth transition-metal films [40]. Next, we investigated hysteresis loops of TbFeCo on VO\({}_{2}\)/TiO\({}_{2}\)(011) using atomistic simulations. Fig. 4 (c) and (d) show the comparison of the measured out-of-plane anomalous Hall effect and the simulated out-of-plane hysteresis loops from 250 K to 320 K in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011), respectively. With the incorporated model of the strain anisotropy, the measured and simulated hysteresis loops are in good agreement. 
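A much simpler macrospin picture already illustrates why a negative net anisotropy destroys the square out-of-plane loop: following the local minimum of a single-spin energy density \(E(\theta)=K_{eff}\sin^{2}\theta - M_{s}B\cos\theta\) along a field sweep gives full remanence for \(K_{eff}>0\) and none for \(K_{eff}<0\). This is not the atomistic LLG model used above; the numbers below are only order-of-magnitude values taken from the text:

```python
import math

MS = 1e5  # magnetization of TbFeCo, A/m (order of magnitude from the text)

def mz_branch(k_eff, fields_tesla, theta=0.01, eta=1e-6, n_iter=2000):
    """Follow the local minimum of the macrospin energy density
    E(theta) = k_eff*sin(theta)^2 - MS*B*cos(theta)  [J/m^3]
    along a field sweep by gradient descent; return m_z = cos(theta) per field."""
    out = []
    for b in fields_tesla:
        for _ in range(n_iter):
            grad = k_eff * math.sin(2.0 * theta) + MS * b * math.sin(theta)
            theta -= eta * grad
        out.append(math.cos(theta))
    return out

# Descending field branch, +2 T -> -2 T (matching the measurement range)
fields = [(40 - i) / 20.0 for i in range(81)]
mz_pma = mz_branch(+1e5, fields)   # net anisotropy out-of-plane (unstrained)
mz_ip = mz_branch(-1e5, fields)    # net anisotropy in-plane (MIT-strained)

i0 = fields.index(0.0)
print(f"remanence, K_eff > 0: {mz_pma[i0]:.2f}")  # about 1: square loop
print(f"remanence, K_eff < 0: {mz_ip[i0]:.2f}")   # about 0: moments lie in-plane
```

The sign flip of the effective anisotropy, K\({}_{u}\) + K\({}_{strain}\), is thus enough to reproduce the qualitative change seen in Fig. 4.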
Both show the gradual loss of PMA in TbFeCo from 250 K to 320 K, and the magnetic moments turn from out-of-plane to in-plane at zero field going from 250 K to 320 K. This confirms that strain from VO\({}_{2}\)'s MIT is the source of magnetic switching in TbFeCo. ## IV Conclusions In summary, 15 nm thick amorphous TbFeCo films were deposited on VO\({}_{2}\)/TiO\({}_{2}\) to study the strain effect of the metal-insulator transition (MIT) on magnetic properties. Using TbFeCo on a thermally oxidized Si substrate as a reference sample, changes in magnetic anisotropy were observed in the TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011) film. Near the MIT of VO\({}_{2}\)/TiO\({}_{2}\)(011), a decrease in magnetic anisotropy was found in TbFeCo and the magnetization of TbFeCo switched from out-of-plane to in-plane at zero external field. This decrease in magnetic anisotropy originated from the tensile strain arising from the transition of VO\({}_{2}\)/TiO\({}_{2}\)(011), where the in-plane lattice area of VO\({}_{2}\) expands. Furthermore, atomistic simulations of TbFeCo with strain anisotropy from VO\({}_{2}\) were in agreement with measurements, confirming that the in-plane lattice expansion in VO\({}_{2}\)/TiO\({}_{2}\)(011) across the MIT is sufficient to switch magnetic moments in TbFeCo. These results offer a platform for using a phase transition to achieve magnetic switching in spintronics devices. Figure 4: Comparison of (a) measured out-of-plane anomalous Hall effect (extracted from Fig. 3 (a)) and (b) simulated out-of-plane hysteresis loops at various temperatures in TbFeCo/SiO\({}_{2}\)/Si by atomistic simulations. Comparison of (c) measured out-of-plane anomalous Hall effect (extracted from Fig. 3 (b)) and (d) simulated out-of-plane hysteresis loops at various temperatures with strain anisotropy in TbFeCo/VO\({}_{2}\)/TiO\({}_{2}\)(011) by atomistic simulations. C.T.M, S.K, A.S, and Y.W.: sample fabrication and measurements. 
C.T.M and M.G.M: modeling, writing-review, and editing. A.W.G and S.J.P.: supervision and writing-review. All authors have read and agreed to the published version of the manuscript. This research received no external funding. The data that support the findings of this study are available from the corresponding author upon reasonable request. The authors declare no conflict of interest. ## Acknowledgments We thank Dr. Jiwei Lu for his thoughtful and stimulating discussions. S.K. acknowledges the support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F650024].
2308.12377
The BNS invariants of the braid groups and pure braid groups of some surfaces
We compute and explicitly describe the Bieri-Neumann-Strebel invariants $\Sigma^1$ for the full and pure braid groups of the sphere $\mathbb{S}^2$, the real projective plane $\mathbb{R}P^2$ and specially the torus $\mathbb{T}$ and the Klein bottle $\mathbb{K}$. In order to do this for $M=\mathbb T$ or $M=\mathbb K$, and $n \geq 2$, we use the $n^{th}$-configuration space of $M$ to show that the action by homeomorphisms of the group $Out(P_n(M))$ on the character sphere $S(P_n(M))$ contains certain permutation of coordinates, under which $\Sigma^1(P_n(\mathbb T))^c$ and $\Sigma^1(P_n(\mathbb K))^c$ are invariant. Furthermore, $\Sigma^1(P_n(\mathbb T))^c$ and $\Sigma^1(P_n(\mathbb{S}^2))^c$ (the latter with $n \geq 5$) are finite unions of pairwise disjoint circles, and $\Sigma^1(P_n(\mathbb K))^c$ is finite. This last fact implies that there is a normal finite index subgroup $H \leq Aut(P_n(\mathbb K))$ such that the Reidemeister number $R(\varphi)$ is infinite for every $\varphi \in H$.
Carolina de Miranda e Pereiro, Wagner Sgobbi
2023-08-23T18:46:13Z
http://arxiv.org/abs/2308.12377v1
# The BNS invariants of the braid groups and pure braid groups of some surfaces ###### Abstract. We compute and explicitly describe the Bieri-Neumann-Strebel invariants \(\Sigma^{1}\) for the full and pure braid groups of the sphere \(\mathbb{S}^{2}\), the real projective plane \(\mathbb{R}P^{2}\) and specially the torus \(\mathbb{T}\) and the Klein bottle \(\mathbb{K}\). In order to do this for \(M=\mathbb{T}\) or \(M=\mathbb{K}\), and \(n\geq 2\), we use the \(n^{th}\)-configuration space of \(M\) to show that the action by homeomorphisms of the group \(Out(P_{n}(M))\) on the character sphere \(S(P_{n}(M))\) contains certain permutation of coordinates, under which \(\Sigma^{1}(P_{n}(\mathbb{T}))^{c}\) and \(\Sigma^{1}(P_{n}(\mathbb{K}))^{c}\) are invariant. Furthermore, \(\Sigma^{1}(P_{n}(\mathbb{T}))^{c}\) and \(\Sigma^{1}(P_{n}(\mathbb{S}^{2}))^{c}\) (the latter with \(n\geq 5\)) are finite unions of pairwise disjoint circles, and \(\Sigma^{1}(P_{n}(\mathbb{K}))^{c}\) is finite. This last fact implies that there is a normal finite index subgroup \(H\leq Aut(P_{n}(\mathbb{K}))\) such that the Reidemeister number \(R(\varphi)\) is infinite for every \(\varphi\in H\). Key words and phrases:BNS invariants, braid groups, \(R_{\infty}\) property ## 1. Introduction The Bieri-Neumann-Strebel invariant \(\Sigma^{1}(G)\) of a finitely generated group \(G\) was first defined in [2], based on a previously defined invariant for metabelian groups [3]. Given such group \(G\) and a finitely generated \(G\)-operator group \(A\), the authors of [2] associate to \(A\) a subset \(\Sigma_{A}=\Sigma_{A}(G)\) of the character sphere \(S(G)\). Many general properties of \(\Sigma_{A}\) are shown in [2], including its openness in \(S(G)\) and a characterization of the finitely generated normal subgroups of \(G\) containing the commutator subgroup \(G^{\prime}\)[35, Theorem A4.1]. 
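For the reader's convenience, the objects just mentioned admit the following standard description (a sketch following the usual conventions of [2, 35]):

```latex
% Character sphere of a finitely generated group G:
S(G) \;=\; \bigl(\operatorname{Hom}(G,\mathbb{R})\setminus\{0\}\bigr)\big/\sim,
\qquad \chi \sim \chi' \;\Longleftrightarrow\; \chi = r\,\chi' \ \text{for some } r > 0.
% Geometric (Cayley graph) description of the BNS invariant: fix a finite
% generating set X of G, and let \Gamma_\chi denote the full subgraph of the
% Cayley graph \Gamma(G, X) spanned by the vertices \{ g \in G : \chi(g) \ge 0 \}.
\Sigma^{1}(G) \;=\; \bigl\{\, [\chi] \in S(G) \;:\; \Gamma_{\chi}\ \text{is connected} \,\bigr\}.
```

Connectedness of \(\Gamma_{\chi}\) does not depend on the choice of the finite generating set \(X\), which is what makes this geometric description a well-defined invariant.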
Nowadays, the invariant \(\Sigma^{1}=\Sigma_{G^{\prime}}\) is well known as an important object in Geometric Group Theory which has many connections to other topics, including closed \(1\)-forms on \(3\)-manifolds, group actions on \(\mathbb{R}\)-trees [6], and the Thurston norm and fibering of manifolds over \(S^{1}\) (see e.g. Theorem E of [2] and [14]). Despite of its importance, the BNS-invariant is in general hard to be effectively computed and there are only a few classes of groups for which \(\Sigma^{1}\) is already known (we refer to [30] and its references for a list of such families). One can try to avoid this problem by using the Cayley graph definition of \(\Sigma^{1}(G)\) in [35], which is the basic literature for this geometric approach. Then, the well known Geometric Criterion for \(\Sigma^{1}\) (Theorem 2.3) provides us with a more elementary approach to \(\Sigma^{1}\) than the original definition of [2]. This is especially useful when dealing with groups \(G\) which are mostly known by a finite presentation, such as the full and pure braid groups of surfaces, which are the object of study of this paper. Another tool that can be useful for the computation of \(\Sigma^{1}(G)\) is the study of the action by homeomorphisms \(Out(G)\curvearrowright S(G)\). Since \(\Sigma^{1}(G)\subset S(G)\) is invariant under this action, knowing how certain ## 1. Introduction Let \(G\) be a finite group. A finite group \(G\) is said to be _\(G\)-invariant_ if there exists a finite group \(G\) such that \(G\) is \(G\)-invariant. A finite group \(G\) is said to be _\(G\)-invariant_ if there exists a finite group \(G\) such that \(G\) is \(G\)-invariant. A finite group \(G\) is said to be _\(G\)-invariant_ if there exists a finite group \(G\) such that \(G\) is \(G\)-invariant. A finite group \(G\) is said to be _\(G\)-invariant_ if there exists a finite group \(G\) such that \(G\) is \(G\)-invariant. 
The theory of (surface) braid groups has been gaining importance over the years. Several algebraic properties of these groups are known, such as their finite presentations, study of the center, study of torsion elements, the lower central and derived series, among others (see [4, 5, 13, 17, 18, 21, 24, 26, 32, 33] and also the survey [25], and its references). 
Furthermore, these groups have proven to be a valuable tool when dealing with problems of low-dimensional algebraic topology, such as knot theory, fixed points and coincidence theory, the Borsuk-Ulam property, multivalued functions and other topics [20, 22, 31]. Regarding the BNS invariants and braid groups, N. Koban, J. McCammond and J. Meier [28] were able to compute \(\Sigma^{1}(P_{n})\) for the Artin pure braid groups, and showed that its complement in the sphere \(S(G)\) is a finite union of certain circles (more details are given in Section 5). Two years later, Zaremsky [36] investigated the higher BNS invariants (_a.k.a._ the BNSR-invariants) \(\Sigma^{m}(P_{n}),\ m\geq 1\), using Morse theory. Motivated by these papers and by the recent studies of braid groups of surfaces, we investigate the BNS invariants for the braid groups and pure braid groups of some surfaces, namely the sphere, the projective plane, the torus and the Klein bottle. As far as we know, there have been no computations of the BNS invariants for braid groups and pure braid groups of surfaces other than the disc \(\mathbb{D}^{2}\) in the literature. We obtain the following. **Theorem 1.1**.: _Let \(n\geq 3\). If \(n=3\), the BNS-invariant \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))\) is empty. If \(n\geq 4\), then \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))\) is the complement of the union of some \(P_{3}\)-circles \(\widetilde{C}_{i,j,k}\), for \(1\leq i<j<k\leq n\), and \(P_{4}\)-circles \(\widetilde{C}_{i,j,k,l}\), for \(1\leq i<j<k<l\leq n\) (Definition 5.3) in its character sphere. There are exactly \(\binom{n}{3}+\binom{n}{4}\) such circles, which are pairwise disjoint._ If \(M\) is either the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\), the fact that \(B_{n}(M)\) has a non-trivial center, as well as the knowledge of a set of generators for \(Z(B_{n}(M))\), was essential in our next results. If \(M\) is an orientable (resp. 
non-orientable) closed surface with genus \(g\geq 2\) (resp. \(g\geq 3\)), the same techniques cannot be used, because \(Z(B_{n}(M))\) is trivial [32]. **Theorem 1.2**.: _Let \(M\) be either the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). Then_ \[\Sigma^{1}(B_{n}(M))=S(B_{n}(M))\simeq\left\{\begin{array}{ll}\mathbb{S}^{ 1},&\mbox{ if }M=\mathbb{T};\\ \mathbb{S}^{0},&\mbox{ if }M=\mathbb{K}.\end{array}\right.\] The next result is the main theorem of our paper. It is interesting to observe that the complement of \(\Sigma^{1}\) for the pure braid groups of the torus resembles the results for the pure braid groups of both the disc and the sphere, being the union of pairwise disjoint circles. On the other hand, in the case of the Klein bottle we obtain a finite set, which differs significantly from the previous ones. **Theorem 1.3**.: _Let \(M\) be either the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\) and \(n\geq 2\). The complement \(\Sigma^{1}(P_{n}(M))^{c}\) inside the sphere \(S(P_{n}(M))\cong\mathbb{S}^{2n-1}\) if \(M=\mathbb{T}\) (respectively \(S(P_{n}(M))\cong\mathbb{S}^{n-1}\) if \(M=\mathbb{K}\)) is given by_ \[\Sigma^{1}(P_{n}(\mathbb{T}))^{c}=\{[\chi_{i,j,p,q}]\ |\ 1\leq i,j\leq n,i\neq j,(p,q) \neq 0\},\] _where \(\chi_{i,j,p,q}(a_{i})=\chi_{i,j,p,q}(a_{j}^{-1})=p\), \(\chi_{i,j,p,q}(b_{i})=\chi_{i,j,p,q}(b_{j}^{-1})=q\) and \(\chi_{i,j,p,q}(a_{k})=\chi_{i,j,p,q}(b_{k})=0\) for \(k\neq i,j\). This is a union of \(\binom{n}{2}\) pairwise disjoint circles. And,_ \[\Sigma^{1}(P_{n}(\mathbb{K}))^{c}=\{[\chi_{i,j}]\ |\ 1\leq i,j\leq n,i\neq j\},\] _where \(\chi_{i,j}(b_{i})=\chi_{i,j}(b_{j}^{-1})=1\) and \(\chi_{i,j}(b_{k})=0\) for \(k\neq i,j\). This is a union of \(2\binom{n}{2}\) points._ Theorem 1.3 implies the following algebraic result (more details in Section 6). 
**Corollary 1.4**.: _For any \(n\geq 2\), there exists a normal subgroup \(H\leq Aut(P_{n}(\mathbb{K}))\) of finite index \(|Aut(P_{n}(\mathbb{K})):H|\leq\big{(}2\binom{n}{2}\big{)}!\) such that the Reidemeister number \(R(\varphi)\) is infinite for every \(\varphi\in H\)._ This manuscript is organised as follows. In Section 2 we briefly recall the BNS invariant and present our main tool to compute \(\Sigma^{1}\). In Section 3 we present the surface braid groups, focusing on the cases where the surface is the torus and the Klein bottle. We derive some presentations and relations which are useful to prove Theorem 1.3. In Section 4 we find certain useful automorphisms of \(P_{n}(M)\), which give us valuable information on \(\Sigma^{1}(P_{n}(M))\) for \(M=\mathbb{T}\) and \(M=\mathbb{K}\) and enable its computation. Section 5 is then devoted to the computation of \(\Sigma^{1}\) for the groups \(B_{n}(\mathbb{R}P^{2})\), \(P_{n}(\mathbb{R}P^{2})\), \(B_{n}(\mathbb{S}^{2})\) and to the proofs of Theorems 1.1, 1.2 and especially 1.3. We finish in Section 6 by showing some applications regarding commutator subgroups (Corollary 6.1) and twisted conjugacy (Corollary 1.4) for the groups in question. ## Acknowledgements We would like to thank Prof. Peter Wong (Bates College - USA) for pointing out this research to us, Prof. Daniel Vendruscolo (UFSCar - Brazil) for the help with some braid visualizations and Prof. Daciberg Goncalves (IME-USP - Brazil) for the help with the braid groups of the sphere. The second author would like to thank Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) for the financial support during the research through process 2022/07198-0. ## 2. Preliminaries on \(\Sigma^{1}\) We begin by introducing the invariant \(\Sigma^{1}(G)\) of a finitely generated group \(G\). There are many equivalent definitions in the literature (see [2] and [35]), but we choose the following Cayley graph approach of [35]. 
Remember that the character sphere of a finitely generated group \(G\) is the quotient space \[S(G)=(Hom(G,\mathbb{R})-\{0\})/\sim=\{[\chi]\ |\ \chi\in Hom(G,\mathbb{R})-\{0\}\}\] of nonzero homomorphisms \(\chi:G\to\mathbb{R}\) (characters), where \(\chi\sim\chi^{\prime}\Leftrightarrow r\chi=\chi^{\prime}\) for some \(r>0\). Let \(\gamma_{n}(G)\), \(n\geq 1\), be the terms of the lower central series of \(G\), that is, \(\gamma_{1}(G)=G\) and \(\gamma_{n+1}(G)=[\gamma_{n}(G),G]\), for any \(n\geq 1\). It is well known that if the free rank of the abelianized group \(G^{Ab}=G/\gamma_{2}(G)\) is \(r\geq 1\) with generators \(x_{1},...,x_{r}\), then \(S(G)\simeq\mathbb{S}^{r-1}\), with homeomorphism \[\mathfrak{h}:S(G) \longrightarrow\mathbb{S}^{r-1}\] \[[\chi] \longmapsto\frac{(\chi(x_{1}),\ldots,\chi(x_{r}))}{\|(\chi(x_{1} ),\ldots,\chi(x_{r}))\|}.\] Each automorphism \(\varphi\in Aut(G)\) gives rise to a sphere homeomorphism \(\varphi^{*}:S(G)\to S(G)\), \([\chi]\mapsto[\chi\circ\varphi]\), and there is a natural left-action by homeomorphisms \(Aut(G)\curvearrowright S(G)\) with \(\varphi\cdot[\chi]=[\chi\circ\varphi^{-1}]\). We say that a subset \(A\subset S(G)\) is invariant under \(\varphi\) if \(\varphi^{*}(A)\subset A\). We say that \(A\) is invariant under automorphisms if \(A\) is invariant under \(\varphi\) for every \(\varphi\in Aut(G)\). Since an inner automorphism acts trivially on \(S(G)\), one can consider the action above as an action \(Out(G)\curvearrowright S(G)\). This action is particularly important for the study of twisted conjugacy on \(G\): it is well known since [16] (see also [34]) that the existence of an invariant finite set of rational points inside an open hemisphere of \(S(G)\) guarantees property \(R_{\infty}\) for \(G\). Now, we recall the definition of \(\Sigma^{1}\) as in [35]. Let \(G\) be a finitely generated group and fix a finite generating set \(X\subset G\). 
Denote by \(\Gamma=\Gamma(G,X)\) the Cayley graph of \(G\) with respect to \(X\). The first \(\Sigma\)-invariant (or BNS invariant) of \(G\) is \[\Sigma^{1}(G)=\{[\chi]\in S(G)\ |\ \Gamma_{\chi}\ \text{is connected}\},\] where \(\Gamma_{\chi}\) is the subgraph of \(\Gamma\) whose vertices are the elements \(g\in G\) such that \(\chi(g)\geq 0\) and whose edges are those of \(\Gamma\) which connect two such vertices. It is well known that isomorphic finitely generated groups possess homeomorphic BNS invariants [35, Section B1.2a]. The complement of \(\Sigma^{1}(G)\) in the sphere \(S(G)\) will be denoted by \(\Sigma^{1}(G)^{c}\) and the center of \(G\) by \(Z(G)\). It is also known [35, Proposition B1.5] that both \(\Sigma^{1}(G)\) and \(\Sigma^{1}(G)^{c}\) are invariant under automorphisms of \(G\). Assume from now on that \(G\) is finitely generated. In this work, we will make use of the following standard properties of the \(\Sigma^{1}\)-invariant, together with many other important ones that can be found in [34, 35]. **Proposition 2.1** ([35], Proposition A2.4).: _If a point \([\chi]\in S(G)\) is such that \(\chi(Z(G))\neq\{0\}\), then \([\chi]\in\Sigma^{1}(G)\)._ **Theorem 2.2** ([35], Proposition A2.7).: _If \(G=G_{1}\times G_{2}\) is the direct product of two finitely generated groups, then_ \[\Sigma^{1}(G)^{c}={\pi_{1}}^{*}(\Sigma^{1}(G_{1})^{c})\cup{\pi_{2}}^{*}( \Sigma^{1}(G_{2})^{c}),\] _where \({\pi_{i}}^{*}:S(G_{i})\longrightarrow S(G)\) with \([\chi]\longmapsto[\chi\circ\pi_{i}]\) are induced from the projections \(\pi_{i}:G\to G_{i}\)._ For the next property, we use the following notation: a path in the Cayley graph \(\Gamma=\Gamma(G,X)\) of \(G\) is denoted by \(p=(g,z_{1}\cdots z_{n})\), with \(z_{i}\in Z=X^{\pm}\). The path \(p\) starts at \(g\), walks through the edge \((g,z_{1})\) until the vertex \(gz_{1}\), walks through \((gz_{1},z_{2})\) until \(gz_{1}z_{2}\) and so on, until it reaches its terminus \(gz_{1}\cdots z_{n}\). 
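This path bookkeeping is easy to make concrete. The sketch below is an illustration under assumed conventions, not code from the paper: it takes \(G=\mathbb{Z}^{2}\) written additively with generating set \(X=\{x,y\}\) (capital letters standing for inverse generators), lists the vertices visited by a path \(p=(g,z_{1}\cdots z_{n})\), and records the minimum value of a sample character \(\chi\) along the path — the evaluation function used in the sequel.

```python
# Illustration (not from the paper): G = Z^2 written additively, X = {x, y},
# Z = X^{+-}; capital letters denote the inverse generators.
GENS = {"x": (1, 0), "y": (0, 1), "X": (-1, 0), "Y": (0, -1)}

def chi(g):
    # A sample character chi: Z^2 -> R with chi(x) = 1, chi(y) = 0.
    return g[0]

def vertices(start, word):
    """Vertex sequence g, g z_1, ..., g z_1...z_n of the path p = (g, z_1...z_n)."""
    vs = [start]
    for letter in word:
        dx, dy = GENS[letter]
        vs.append((vs[-1][0] + dx, vs[-1][1] + dy))
    return vs

def nu(start, word):
    """Minimum of chi over the vertices of the path (the evaluation nu_chi(p))."""
    return min(chi(v) for v in vertices(start, word))

p = ((0, 0), "yxY")  # start at the identity; walk y, then x, then y^{-1}
print(vertices(*p))  # [(0, 0), (0, 1), (1, 1), (1, 0)]
print(nu(*p))        # 0 -- this path stays in the chi-nonnegative subgraph
```

A path with `nu(...) >= 0` lies entirely inside the subgraph \(\Gamma_{\chi}\) defined above.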
Given \(\chi\in Hom(G,\mathbb{R})\), the evaluation function \(\nu_{\chi}\) is given by \[\nu_{\chi}(p)=\min\{\chi(g),\chi(gz_{1}),\ldots,\chi(gz_{1}\cdots z_{n})\}.\] With this notation, it is clear that the path \(p\) is inside \(\Gamma_{\chi}\) if, and only if, \(\nu_{\chi}(p)\geq 0\), in which case we say that \(p\) is a _\(\chi\)-nonnegative_ path. Similarly, we say \(p\) is _\(\chi\)-positive_ if \(\nu_{\chi}(p)>0\). The following theorem is called the "Geometric Criterion for \(\Sigma^{1}\)". **Theorem 2.3** ([35], Theorem A3.1).: _Let \(G\) be a finitely generated group with finite generating set \(X\) and denote \(Z=X^{\pm}\). Let \([\chi]\in S(G)\) and choose \(t\in Z\) such that \(\chi(t)>0\). Then the following statements are equivalent:_ 1. \(\Gamma_{\chi}\) _is connected (or_ \([\chi]\in\Sigma^{1}(G)\)_);_ 2. _For every_ \(z\in Z\)_, there exists a path_ \(p_{z}\) _from_ \(t\) _to_ \(zt\) _in_ \(\Gamma\) _such that_ \(\nu_{\chi}(p_{z})>\nu_{\chi}((1,z))\)_._ **Remark 2.4**.: We will use Theorem 2.3 above in Section 5, in the proof of Theorem 1.3, to guarantee that certain points belong to \(\Sigma^{1}(P_{n}(M))\), for \(M\) the torus or the Klein bottle. To find such paths \(p_{z}\), one needs certain knowledge of the Cayley graph of the group \(G=P_{n}(M)\) in question; in particular, since \(p_{z}\) must go from a vertex \(t\) to \(zt\), it is useful for us to know relations of the form \(tg=zt\) for certain \(g\in G\) or, equivalently, of the form \(t^{-1}zt=g\). Relations of this type are explored in the next section to help us in Section 5. ## 3. Preliminaries on the braid groups of surfaces Let \(M\) be a closed surface. In this section we present some general properties about \(B_{n}(M)\) and \(P_{n}(M)\). Moreover, we will examine in greater detail the particular case in which \(M\) is either the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\), thereby obtaining specific relationships for these groups. 
In order to obtain information about them, we will provide a description of their generators, as well as some presentations of these groups. If \(M=\mathbb{S}^{2}\) it is possible to generate \(B_{n}(\mathbb{S}^{2})\) (resp. \(P_{n}(\mathbb{S}^{2})\)) with the same generators of the Artin braid group \(B_{n}\) (resp. Artin pure braid group \(P_{n}\)) [13, 17], but with some additional relations. If \(M\) is a closed orientable (resp. non-orientable) surface of genus \(g\geq 1\), _i.e._\(M\neq\mathbb{S}^{2}\), we can visualize \(M\) as a polygon whose edges are identified as indicated in Figure 2.

Figure 2. Orientable and non-orientable surface

One possible way to visualize the geometric braids in \(M\) is to take a cylinder \(M\times[0,1]\) and represent them similarly to the braids in the disc, but with the additional property that a string could "cross a wall" of the cylinder, from one side to another. To simplify the drawing, we can also imagine that we are looking at the cylinder from above, and we use arrows to indicate the direction of the string. These two ways of visualization are illustrated in Figure 3, when \(M\) is the torus. There are several presentations for the (pure) braid groups of \(M\) in the literature, with several differences in notation, as well as in generators and relations depending on the way they are chosen to represent the surface. We have chosen the visualization above, which resembles what was done in [24, 26, 33]. Some common properties of these groups are the following. For \(1\leq i\leq n\) and \(1\leq r\leq 2g\), if \(M\) is orientable (resp. \(1\leq r\leq g\), if \(M\) is non-orientable), we can consider \(\rho_{i,r}\) the braid in \(P_{n}(M)\) such that the \(i\)-th string is the only non-trivial one, which crosses the edge \(\epsilon_{r}\). 
For \(1\leq j<k\leq n\), consider \(C_{j,k}\) the braid whose \(k\)-th string is the only non-trivial one, which encircles all the basepoints between the \(j\)-th and \(k\)-th points, counterclockwise. These braids can be visualized in Figure 4 and the elements \(\{\rho_{i,r},C_{j,k}\}\) generate \(P_{n}(M)\). A set of generators of \(B_{n}(M)\) is the union of the generators of \(P_{n}(M)\) with the Artin generators of \(B_{n}\). In this paper we will make use of the following presentation for \(P_{n}(M)\), if \(M\) is either the torus or the Klein bottle, given by [26]. By adapting the notation above with the notation of [26], we have \(a_{i}=\rho_{i,1}\) and \(b_{i}=\rho_{i,2}^{-1}\).

Figure 3. A braid in the torus

Figure 4. Generators of \(P_{n}(M)\)

**Theorem 3.1** ([26], Theorem 2.1).: _Let \(n\geq 1\), and let \(M\) be the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). The following constitutes a presentation of the pure braid group \(P_{n}(M)\) of \(M\): generators: \(\{a_{i},\,b_{i},\,i=1,\ldots,n\}\cup\{C_{i,j},\,1\leq i<j\leq n\}\)._ _relations:_ 1. \(a_{i}a_{j}=a_{j}a_{i}\)_,_ \((1\leq i<j\leq n)\)__ 2. \(a_{i}^{-1}b_{j}a_{i}=b_{j}a_{j}C_{i,j}^{-1}C_{i+1,j}a_{j}^{-1}\)_,_ \((1\leq i<j\leq n)\)__ 3. \(a_{i}^{-1}C_{j,k}a_{i}=\left\{\begin{array}{ll}C_{j,k},\ (1\leq i<j<k\leq n)\ \text{or}\ (1\leq j<k<i\leq n)\\ a_{k}C_{i+1,k}^{-1}C_{i,k}a_{k}^{-1}C_{j,k}C_{i,k}^{-1}C_{i+1,k},\ (1\leq j\leq i<k\leq n) \end{array}\right.\)__ 4. \(C_{i,l}^{-1}C_{j,k}C_{i,l}=\left\{\begin{array}{ll}C_{j,k},\ (1\leq i<l<j<k\leq n)\ \text{or}\ (1\leq j \leq i<l<k\leq n)\\ C_{i,k}C_{l+1,k}^{-1}C_{l,k}C_{i,k}^{-1}C_{j,k}C_{l,k}^{-1}C_{l+1,k},\ (1\leq i<j\leq l<k\leq n) \end{array}\right.\)__ 5. \(\left\{\begin{array}{ll}\prod_{j=i+1}^{n}C_{i,j}^{-1}C_{i+1,j}=a_{i}b_{i}C_{ 1,i}a_{i}^{-1}b_{i}^{-1},\ \ (1\leq i\leq n),\ \text{if}\ M=\mathbb{T}\\ \prod_{j=i+1}^{n}C_{i,j}C_{i+1,j}^{-1}=b_{i}C_{1,i}a_{i}^{-1}b_{i}^{-1}a_{i}^{ -1},\ \ (1\leq i\leq n),\ \text{if}\ M=\mathbb{K}\end{array}\right.\)__ 6. 
\(\left\{\begin{array}{ll}b_{j}b_{i}=b_{i}b_{j},\ (1\leq i<j\leq n),&\text{if}\ M= \mathbb{T}\\ b_{j}b_{i}=b_{i}b_{j}C_{i,j}C_{i+1,j}^{-1},\ (1\leq i<j\leq n),&\text{if}\ M= \mathbb{K}\end{array}\right.\)__ 7. \(\left\{\begin{array}{ll}b_{i}^{-1}a_{j}b_{i}=a_{j}b_{j}C_{i,j}C_{i+1,j}^{-1 }b_{j}^{-1},\ (1\leq i<j\leq n),&\text{if}\ M=\mathbb{T}\\ b_{i}^{-1}a_{j}b_{i}=a_{j}b_{j}(C_{i,j}C_{i+1,j}^{-1})^{-1}b_{j}^{-1},\ (1\leq i<j\leq n),&\text{if}\ M= \mathbb{K}\end{array}\right.\)__ 8. \(\left\{\begin{array}{ll}b_{i}^{-1}C_{j,k}b_{i}=\left\{\begin{array}{ll}C_{ j,k},\ (1\leq i<j<k\leq n)\ \text{or}\ (1\leq j<k<i\leq n)\\ C_{i+1,k}C_{i,k}^{-1}C_{j,k}b_{k}C_{i,k}C_{i+1,k}^{-1}b_{k}^{-1},\ (1\leq j\leq i<k\leq n) \end{array}\right.&\text{if}\ M=\mathbb{T}\\ C_{j,k},\ (1\leq i<j<k\leq n)\ \text{or}\ (1\leq j<k<i\leq n)&\text{if}\ M= \mathbb{K}.\\ C_{i+1,k}C_{i,k}^{-1}C_{j,k}b_{k}(C_{i,k}C_{i+1,k}^{-1})^{-1}b_{k}^{-1},\ (1 \leq j\leq i<k\leq n)&\text{if}\ M=\mathbb{K}.\end{array}\right.\)__ In the following pages, we obtain some particular relations in \(P_{n}(M)\), for \(M\) the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). These relations are important in their own right, but they will also play an essential role in the computation of \(\Sigma^{1}(P_{n}(M))\) on Theorem 1.3. **Proposition 3.2**.: _Let \(M\) be the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). The following relations are valid in \(P_{n}(M)\)._ 1. \(a_{i}b_{j}a_{i}^{-1}=b_{j}C_{i,j}C_{i+1,j}^{-1}\)_,_ \((1\leq i<j\leq n)\)_;_ 2. \(b_{i}a_{j}b_{i}^{-1}=a_{j}C_{i+1,j}C_{i,j}^{-1}\)_,_ \((1\leq i<j\leq n)\)_;_ 3. \(a_{i}C_{j,k}a_{i}^{-1}=C_{i+1,k}C_{i,k}C_{j,k}a_{k}^{-1}C_{i,k}C_{i+1,k}^{-1}a_ {k}\)_,_ \((1\leq j\leq i<k\leq n)\)__ 4. 
\(b_{i}C_{j,k}b_{i}^{-1}=\left\{\begin{array}{ll}b_{i}^{-1}C_{i+1,k}^{-1}C_{i,k }b_{k}C_{j,k}C_{i,k}^{-1}C_{i+1,k},\ (1\leq j\leq i<k\leq n)&\text{if}\ M=\mathbb{T}\\ b_{k}^{-1}(C_{i+1,k}^{-1}C_{i,k})^{-1}b_{k}C_{j,k}C_{i+1,k}^{-1}C_{i+1,k},\ (1 \leq j\leq i<k\leq n)&\text{if}\ M=\mathbb{K}.\\ \end{array}\right.\)__ 5. \(b_{i}b_{j}b_{i}^{-1}=\left\{\begin{array}{ll}b_{j},\ (1\leq i<j\leq n)&\text{if}\ M= \mathbb{T},\\ C_{i+1,j}^{-1}C_{i,j}b_{j},\ (1\leq i<j\leq n)&\text{if}\ M=\mathbb{K};\end{array}\right.\)__ Proof.: To prove item \((S_{1})\), we use relations (2) and (3) from Theorem 3.1, \[b_{j}=a_{i}\Big{(}b_{j}a_{j}C_{i,j}^{-1}C_{i+1,j}a_{j}^{-1}\Big{)}a_{i}^{-1}=a_ {i}\Big{(}b_{j}\cdot a_{i}^{-1}C_{i+1,j}C_{i,j}^{-1}a_{i}\Big{)}a_{i}^{-1}=a_{i} b_{j}a_{i}^{-1}\cdot C_{i+1,j}C_{i,j}^{-1}.\] Analogously, we can prove item \((S_{2})\), by using relations (7) and (8) from Theorems 3.1, \[a_{j}=b_{i}\Big{(}a_{j}b_{j}(C_{i,j}C_{i+1,j}^{-1})^{\pm 1}b_{j}^{-1}\Big{)}b_{i}^{-1}=b_ {i}\Big{(}a_{j}\cdot b_{i}^{-1}C_{i+1,j}^{-1}C_{i,j}b_{i}\Big{)}b_{i}^{-1}=b_{i} a_{j}b_{i}^{-1}\cdot C_{i+1,j}^{-1}C_{i,j}.\] To prove item \((S_{3})\), we start with the case \(i=j\). Using that \(a_{i}\) commutes with both \(C_{i+1,k}\) and \(a_{k}\) by relations (3) and (1), respectively, and using relation (3) for \(i=j\), we obtain \[C_{i,k}=a_{i}\Big{(}a_{k}C_{i+1,k}^{-1}C_{i,k}a_{k}^{-1}C_{i+1,k}\Big{)}a_{i}^{- 1}=a_{k}C_{i+1,k}^{-1}\cdot a_{i}C_{i,k}a_{i}^{-1}\cdot a_{k}^{-1}C_{i+1,k},\] from where \((S_{3})\) follows. For \(j<i\), by using the same relations above and item \((S_{3})\) for \(i=j\), we obtain \[C_{j,k}=a_{i}\Big{(}a_{k}C_{i+1,k}^{-1}C_{i,k}a_{k}^{-1}C_{j,k}C_{i,k}^{-1}C_{i +1,k}\Big{)}a_{i}^{-1}=(C_{i,k}C_{i+1,k}^{-1})\cdot a_{i}C_{j,k}a_{i}^{-1}\cdot( a_{k}^{-1}C_{i+1,k}C_{i,k}^{-1}a_{k}),\] from where \((S_{3})\) follows. The proof of item \((S_{4})\) is analogous to the previous one. 
If \(M=\mathbb{T}\), for \(i=j\), using that \(b_{i}\) commutes with \(C_{i+1,k}\) and \(b_{k}\) by relations (8) and (6), we obtain \[C_{i,k}=b_{i}\Big{(}C_{i+1,k}b_{k}(C_{i,k}C_{i+1,k}^{-1})b_{k}^{-1}\Big{)}b_{i }^{-1}=C_{i+1,k}b_{k}\cdot b_{i}C_{i,k}b_{i}^{-1}\cdot C_{i+1,k}^{-1}b_{k}^{-1},\] from where \((S_{4})\) follows. For \(j<i\), by using the same relations above and \((S_{4})\) for \(i=j\), we obtain \[C_{j,k}=b_{i}\Big{(}C_{i+1,k}C_{i,k}^{-1}C_{j,k}b_{k}(C_{i,k}C_{i+1,k}^{-1})b_ {k}^{-1}\Big{)}b_{i}^{-1}=(b_{k}^{-1}C_{i,k}^{-1}C_{i+1,k}b_{k})\cdot b_{i}C_{ j,k}b_{i}^{-1}\cdot(C_{i+1,k}^{-1}C_{i,k}),\] from where \((S_{4})\) follows. If \(M=\mathbb{K}\), to prove \((S_{4})\) for \(i=j\), we use that \(b_{i}\) commutes with \(C_{i+1,k}\) and that \(b_{k}b_{i}C_{i+1,k}C_{i,k}^{-1}=b_{i}b_{k}\) by relations (8) and (6), to obtain \[C_{i,k}= b_{i}\Big{(}C_{i+1,k}b_{k}(C_{i,k}C_{i+1,k}^{-1})^{-1}b_{k}^{-1} \Big{)}b_{i}^{-1}=C_{i+1,k}\Big{(}b_{k}b_{i}C_{i+1,k}C_{i,k}^{-1}\Big{)}(C_{i,k }C_{i+1,k}^{-1})^{-1}\Big{(}C_{i,k}C_{i+1,k}^{-1}b_{i}^{-1}b_{k}^{-1}\Big{)}\] \[= C_{i+1,k}b_{k}C_{i+1,k}\cdot b_{i}C_{i,k}^{-1}b_{i}^{-1}\cdot b_ {k}^{-1},\] from where \((S_{4})\) follows. For \(j<i\), by using the same relations above and \((S_{4})\) for \(i=j\), we obtain \[C_{j,k}=b_{i}\Big{(}C_{i+1,k}C_{i,k}^{-1}C_{j,k}b_{k}(C_{i,k}C_{i+1,k}^{-1})^{ -1}b_{k}^{-1}\Big{)}b_{i}^{-1}=(b_{k}^{-1}C_{i+1,k}^{-1}C_{i,k}b_{k})\cdot b_{i }C_{j,k}b_{i}^{-1}\cdot(C_{i+1,k}^{-1}C_{i,k}),\] from where \((S_{4})\) follows. Now, item \((S_{5})\) is relation (6) from Theorem 3.1 if \(M=\mathbb{T}\). 
If \(M=\mathbb{K}\), by using relation (6) and relation \((S_{4})\) we get \[b_{j}=b_{i}\Big{(}b_{j}C_{i,j}C_{i+1,j}^{-1}\Big{)}b_{i}^{-1}=b_{i}b_{j}b_{i}^{ -1}\cdot b_{i}\Big{(}C_{i,j}C_{i+1,j}^{-1}\Big{)}b_{i}^{-1}=b_{i}b_{j}b_{i}^{-1 }\cdot b_{j}^{-1}\Big{(}C_{i,j}^{-1}C_{i+1,j}\Big{)}b_{j}\] Now, we will highlight some relations in \(B_{n}(M)\) between the Artin generators and the generators of \(P_{n}(M)\). **Remark 3.3**.: The elements \(C_{j,k}\) can be seen in terms of the Artin generators. If \(j=k\), the braid \(C_{j,j}\) is defined to be the trivial braid. If \(k=j+1\), we have \(C_{j,j+1}=\sigma_{j}^{2}\). And if \(j+1<k\), then \[C_{j,k}=\sigma_{k-1}\cdots\sigma_{j+1}\sigma_{j}^{2}\sigma_{j+1}\cdots\sigma_{k- 1}.\] We also have some relations between the pure braids \(a_{j}\) and \(b_{j}\) and the Artin generators \(\sigma_{i}\). These relations can be found in [26] and can also be seen in Figure 5. There, we choose to present the relations in an arbitrary surface \(M\) with generators \(\rho_{i,r}\) to emphasize that the same relations are valid in any closed surface \(M\neq\mathbb{S}^{2}\). We obtain the following \[\sigma_{i}^{-1}\rho_{j,r}\sigma_{i}=\left\{\begin{array}{ll}\sigma_{i}^{-2} \rho_{i+1,r},&\text{if $j=i$}\\ \rho_{i,r}\sigma_{i}^{2},&\text{if $j=i+1$}\\ \rho_{j,r},&\text{otherwise}\end{array}\right. \tag{3.1}\] Moreover, it is possible to verify that the orientation of the edge \(\epsilon_{r}\) does not modify the relations. In (3.2) we make this relation explicit when \(M\) is the torus or the Klein bottle, using the notation of Theorem 3.1, _i.e._\(a_{i}=\rho_{i,1}\) and \(b_{i}=\rho_{i,2}^{-1}\). 
\[\sigma_{i}^{-1}a_{j}\sigma_{i}=\left\{\begin{array}{ll}\sigma_{i}^{-2}a_{i+ 1},&\text{if $j=i$}\\ a_{i}\sigma_{i}^{2},&\text{if $j=i+1$}\\ a_{j},&\text{otherwise}\end{array}\right.\qquad\sigma_{i}^{-1}b_{j}\sigma_{i}= \left\{\begin{array}{ll}b_{i+1}\sigma_{i}^{2},&\text{if $j=i$}\\ \sigma_{i}^{-2}b_{i},&\text{if $j=i+1$}\\ b_{j},&\text{otherwise}\end{array}\right. \tag{3.2}\] In the following Lemma, we present some more important relations in \(B_{n}(\mathbb{T})\) and \(B_{n}(\mathbb{K})\) that will play a key role in the proof of Theorem 1.3. In the case of the torus, some of these relations are already in Theorem 3.1, but we rewrite them for completeness. Denote by \[\alpha_{j,i}=\prod_{k=i}^{j}a_{j+i-k}\quad\text{and}\quad\beta_{j,i}=\prod_{k =i}^{j}b_{j+i-k}, \tag{3.3}\] for \(1\leq i\leq j\leq n\). If \(i>j\), it will be convenient to define \(\alpha_{j,i}\) and \(\beta_{j,i}\) to be the trivial braid. **Lemma 3.4**.: _Let \(M\) be the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). Using the presentation given in Theorem 3.1, the following relations are valid in \(B_{n}(M)\)._ 1. \(a_{i+1}a_{i}\cdot\sigma_{i}=\sigma_{i}\cdot a_{i+1}a_{i},\ (1\leq i<n)\)_;_ 2. 
\(\alpha_{j,i}\cdot\sigma_{k}=\sigma_{k}\cdot\alpha_{j,i},\ (1\leq i\leq k<j\leq n)\)_;_ 3. \(\alpha_{j,i}\cdot C_{k,t}=C_{k,t}\cdot\alpha_{j,i},\ (1\leq i\leq k<t\leq j\leq n)\)_;_ \[\begin{array}{l}\left(R_{4}\right)\,\left\{\begin{array}{ll}b_{i+1}b_{i}\cdot \sigma_{i}=\sigma_{i}\cdot b_{i+1}b_{i},&(1\leq i<n),\quad\text{if}\,M=\mathbb{T},\\ b_{i+1}b_{i}\cdot\sigma_{i}=\sigma_{i}^{-1}\cdot b_{i+1}b_{i},&(1\leq i<n), \quad\text{if}\ M=\mathbb{K};\end{array}\right.\\ \left(R_{5}\right)\,\left\{\begin{array}{ll}\beta_{j,i}\cdot\sigma_{k}= \sigma_{k}\cdot\beta_{j,i},&(1\leq i\leq k<j\leq n),\quad\text{if}\ M=\mathbb{T}, \\ \beta_{j,i}\cdot\sigma_{k}=\sigma_{k}^{-1}\cdot\beta_{j,i},&(1\leq i\leq k<j \leq n),\quad\text{if}\ M=\mathbb{K};\end{array}\right.\\ \left(R_{6}\right)\,\left\{\begin{array}{ll}\beta_{j,i}\cdot C_{k,t}=C_{k,t} \cdot\beta_{j,i},&(1\leq i\leq k<t\leq j\leq n)\quad\text{if}\,M=\mathbb{T},\\ \beta_{j,i}\cdot C_{k,t}=C_{k,t}^{-1}\cdot\beta_{j,i},&(1\leq i\leq k<t\leq j \leq n)\quad\text{if}\,M=\mathbb{K};\end{array}\right.\\ \left(R_{7}\right)\,\left\{\begin{array}{ll}\beta_{j,i}\cdot b_{j}=b_{j}\cdot \beta_{j,i},&(1\leq i<j\leq n),\quad\text{if}\ M=\mathbb{T},\\ \beta_{j,i}\cdot b_{j}C_{i,j}=b_{j}\cdot\beta_{j,i},&(1\leq i<j\leq n),\quad \text{if}\ M=\mathbb{K}.\end{array}\right.\end{array}\] Proof.: Relation \((R_{1})\) follows from (3.2) and relation (1) from Theorem 3.1: \[\sigma_{i}^{-1}(a_{i+1}a_{i})\sigma_{i}=(a_{i}\sigma_{i}^{2})(\sigma_{i}^{-2}a _{i+1})=a_{i}a_{i+1}=a_{i+1}a_{i},\] and relation \((R_{2})\) (resp. \((R_{3})\)) is an immediate consequence of \((R_{1})\) and (3.2) (resp. \((R_{2})\) and Remark 3.3). 
Similarly, relation \((R_{4})\) follows from (3.2), relation (6) from Theorem 3.1 and the fact that \(\sigma_{i}^{2}=C_{i,i+1}\) by Remark 3.3: \[\left\{\begin{array}{ll}\sigma_{i}^{-1}(b_{i+1}b_{i})\sigma_{i}=\sigma_{i}^{ -1}(b_{i}b_{i+1})\sigma_{i}=(b_{i+1}\sigma_{i}^{2})(\sigma_{i}^{-2}b_{i})=b_{i +1}b_{i},&\text{if}\ M=\mathbb{T}\\ \sigma_{i}^{-1}(b_{i+1}b_{i})\sigma_{i}=\sigma_{i}^{-1}(b_{i}b_{i+1}\sigma_{i} ^{2})\sigma_{i}=(b_{i+1}\sigma_{i}^{2})(\sigma_{i}^{-2}b_{i})\sigma_{i}^{2}=b_ {i+1}b_{i}\sigma_{i}^{2},&\text{if}\ M=\mathbb{K}\end{array}\right.\] and relation \((R_{5})\) (resp. \((R_{6})\)) is an immediate consequence of \((R_{4})\) and (3.2) (resp. \((R_{5})\) and Remark 3.3). If \(M=\mathbb{T}\), relation \((R_{7})\) follows directly from (6) of Theorem 3.1; however, if \(M=\mathbb{K}\) we need a few more steps. Fix \(1<j\leq n\) and let us show \((R_{7})\) by (descending) induction on \(1\leq i<j\). For \(i=j-1\), \((R_{7})\) follows directly from relation (6) of Theorem 3.1. Suppose \((R_{7})\) is true for \(1<i<j\leq n\) and let us show it for \(i-1\). By using the induction hypothesis and relation (8) of Theorem 3.1 we obtain \[\beta_{j,i-1}^{-1}b_{j}\beta_{j,i-1}=b_{i-1}^{-1}\beta_{j,i}^{-1}b_{j}\beta_{j, i}b_{i-1}=b_{i-1}^{-1}b_{j}C_{i,j}b_{i-1}=(b_{j}C_{i-1,j}C_{i,j}^{-1})C_{i,j}=b_{j}C_ {i-1,j},\] so \(b_{j}\beta_{j,i-1}=\beta_{j,i-1}b_{j}C_{i-1,j}\), as desired. This completes the induction step and, therefore, the proof of the lemma. **Lemma 3.5**.: _Let \(M\) be the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\), and \(n\geq 3\). 
The following relations are valid in \(P_{n}(M)\)._ \[\begin{array}{l}\left(P_{1}\right)\,\left\{\begin{array}{ll}b_{n}^{-1}a_{i}b_{n}=\left\{\begin{array}{ll}C_{i,n}C_{i+1,n}^{-1}a_{i},&(1\leq i<n)\\ a_{n}C_{1,n}^{-1},&(i=n)\end{array}\right.&\text{if}\ M=\mathbb{T}\\ b_{n}^{-1}a_{i}b_{n}=\left\{\begin{array}{ll}C_{i,n}C_{i+1,n}^{-1}a_{i},&(1\leq i<n)\\ C_{1,n}a_{n}^{-1},&(i=n)\end{array}\right.&\text{if}\ M=\mathbb{K}\end{array}\right.\\ \left(P_{2}\right)\,\left\{\begin{array}{ll}b_{n}^{-1}C_{i,n}b_{n}=\left\{\begin{array}{ll}\beta_{n-1,i}C_{i,n}\beta_{n-1,i}^{-1},&(1<i<n)\\ \beta_{n-1,3}b_{1}\beta_{n-1,2}C_{2,n}^{-1}C_{3,n}\beta_{n-1,2}^{-1}\delta b_{n}b_{1}^{-1}\beta_{n,3}^{-1},&(i=1)\end{array}\right.&\text{if}\ M=\mathbb{T}\\ b_{n}^{-1}C_{i,n}b_{n}=\left\{\begin{array}{ll}\beta_{n-1,i}C_{i,n}^{-1}\beta_{n-1,i}^{-1},&(1<i<n)\\ \beta_{n-1,3}b_{1}\delta^{-1}\beta_{n-1,2}C_{3,n}C_{2,n}^{-1}\beta_{n-1,2}^{-1}b_{n}\delta b_{1}^{-1}\beta_{n,3}^{-1},&(i=1)\end{array}\right.&\text{if}\ M=\mathbb{K}\end{array}\right.\\ \left(P_{3}\right)\,\text{if}\ M=\mathbb{T}\text{, then}\ a_{n}^{-1}b_{i}a_{n}=\left\{\begin{array}{ll}C_{i,n}^{-1}C_{i+1,n}b_{i},&(1\leq i<n)\\ b_{n}C_{1,n},&(i=n)\end{array}\right.\\ \left(P_{4}\right)\,\text{if}\ M=\mathbb{T}\text{, then}\ a_{n}^{-1}C_{i,n}a_{n}=\left\{\begin{array}{ll}\alpha_{n-1,i}C_{i,n}\alpha_{n-1,i}^{-1},&(1<i<n)\\ \alpha_{n-1,3}a_{1}\bar{\delta}\alpha_{n-1,2}C_{2,n}C_{3,n}^{-1}\alpha_{n-1,2}^{-1}a_{n}a_{1}^{-1}\alpha_{n,3}^{-1},&(i=1)\end{array}\right.\end{array}\] _where \(\delta=C_{1,n}C_{2,n}^{-1}C_{3,n}\) and \(\bar{\delta}=C_{3,n}C_{2,n}^{-1}C_{1,n}\)._ Proof.: If \(1\leq i<n\), relation (\(P_{1}\)) is a consequence of relations (2) and (3) from Theorem 3.1: 
\[a_{i}^{-1}b_{n}a_{i}=b_{n}a_{n}C_{i,n}^{-1}C_{i+1,n}a_{n}^{-1}=b_{n}a_{i}^{-1}C_{i+1,n}C_{i,n}^{-1}a_{i}\] therefore, \(a_{i}b_{n}=b_{n}C_{i,n}C_{i+1,n}^{-1}a_{i}\). The case \(i=n\) comes from relation (5) of Theorem 3.1. To prove relation \((P_{2})\), if \(1<i<n\), by Lemma 3.4 \((R_{6})\) we have \[C_{i,n}b_{n}=C_{i,n}b_{n}\beta_{n-1,i}\beta_{n-1,i}^{-1}=C_{i,n}\beta_{n,i}\beta_{n-1,i}^{-1}=\left\{\begin{array}{ll}b_{n}\beta_{n-1,i}C_{i,n}\beta_{n-1,i}^{-1},&M=\mathbb{T}\\ b_{n}\beta_{n-1,i}C_{i,n}^{-1}\beta_{n-1,i}^{-1},&M=\mathbb{K}.\end{array}\right. \tag{3.4}\] For the case \(i=1\), we need some additional steps. In the following, we use Lemma 3.4, Theorem 3.1 (\(6\)), Proposition 3.2 (\(S_{4}\)) and (\(S_{5}\)), and also (3.4). If \(M=\mathbb{T}\), we have \[C_{1,n}b_{n}\beta_{n,3}b_{1}= C_{1,n}b_{n}\beta_{n,1}(b_{1}^{-1}b_{2}^{-1}b_{1})=\beta_{n,1}C_{1,n}b_{n}b_{2}^{-1}=\beta_{n,3}b_{1}b_{2}C_{1,n}b_{n}b_{2}^{-1}=\beta_{n,3}b_{1}(b_{n}^{-1}C_{3,n}^{-1}C_{2,n}b_{n}\delta)b_{n}\] \[= \beta_{n,3}b_{1}(\beta_{n-1,3}C_{3,n}^{-1}\beta_{n-1,3}^{-1}\beta_{n-1,2}C_{2,n}\beta_{n-1,2}^{-1})\delta b_{n}=\beta_{n,3}b_{1}\beta_{n-1,3}C_{3,n}^{-1}b_{2}C_{2,n}\beta_{n-1,2}^{-1}\delta b_{n}\] \[= b_{n}\cdot\beta_{n-1,3}b_{1}\beta_{n-1,2}C_{3,n}^{-1}C_{2,n}\beta_{n-1,2}^{-1}\delta b_{n}.\] Similarly, if \(M=\mathbb{K}\), we obtain \[C_{1,n}b_{n}\beta_{n,3}b_{1}=C_{1,n}b_{n}\beta_{n,1}(b_{1}^{-1}b_{2}^{-1}b_{1})=\beta_{n,1}C_{1,n}^{-1}b_{n}C_{1,n}(b_{1}^{-1}b_{2}^{-1}b_{1})=\beta_{n,3}(b_{2}b_{1})C_{1,n}^{-1}b_{n}C_{1,n}(b_{1}^{-1}b_{2}^{-1}b_{1})\] \[=\beta_{n,3}(b_{1}b_{2}C_{1,2})C_{1,n}^{-1}b_{n}C_{1,n}(C_{1,2}^{-1}b_{2}^{-1})=\beta_{n,3}b_{1}b_{2}C_{1,n}^{-1}b_{n}C_{1,n}b_{2}^{-1}=\beta_{n,3}b_{1}(\delta^{-1}\cdot b_{n}^{-1}C_{3,n}^{-1}C_{2,n}b_{n}\cdot b_{n}\delta)\] \[=\beta_{n,3}b_{1}\delta^{-1}(\beta_{n-1,3}C_{3,n}\beta_{n-1,3}^{-1}\beta_{n-1,2}C_{2,n}^{-1}\beta_{n-1,2}^{-1})b_{n}\delta=\beta_{n,3}b_{1}\delta
^{-1}\beta_{n-1,3}C_{3,n}b_{2}C_{2,n}^{-1}\beta_{n-1,2}^{-1}b_{n}\delta\] \[=b_{n}\cdot\beta_{n-1,3}b_{1}\delta^{-1}\beta_{n-1,2}C_{3,n}C_{2,n}^{-1}\beta_{n-1,2}^{-1}b_{n}\delta.\] In a similar way as for relation \((P_{1})\), to prove \((P_{3})\) we use relations (7) and (8) of Theorem 3.1 if \(1\leq i<n\), and relation (5) if \(i=n\). To prove \((P_{4})\), the first part is analogous to the proof of \((P_{2})\). Now, if \(i=1\), using Lemma 3.4, Theorem 3.1 (\(1\)), the previous case and also Lemma 3.2 (\(S_{3}\)), we obtain \[C_{1,n}a_{n}\alpha_{n,3}a_{1}= C_{1,n}a_{n}\alpha_{n,1}a_{2}^{-1}=\alpha_{n,1}C_{1,n}a_{n}a_{2}^{-1}=\alpha_{n,3}a_{1}a_{2}C_{1,n}a_{n}a_{2}^{-1}=\alpha_{n,3}a_{1}(\bar{\delta}a_{n}^{-1}C_{2,n}C_{3,n}^{-1}a_{n})a_{n}\] \[= \alpha_{n,3}a_{1}\bar{\delta}(\alpha_{n-1,2}C_{2,n}\alpha_{n-1,2}^{-1}\alpha_{n-1,3}C_{3,n}^{-1}\alpha_{n-1,3}^{-1})a_{n}=\alpha_{n,3}a_{1}\bar{\delta}(\alpha_{n-1,2}C_{2,n}a_{2}^{-1}C_{3,n}^{-1}\alpha_{n-1,3}^{-1})a_{n}\] \[= a_{n}\cdot\alpha_{n-1,3}a_{1}\bar{\delta}\alpha_{n-1,2}C_{2,n}C_{3,n}^{-1}\alpha_{n-1,2}^{-1}a_{n},\] which concludes the proof. To obtain the character sphere of a group, we need its abelianization, which we present in the following proposition. The proof is straightforward from the presentation of those groups, and a precise proof can be found in [23] and its references. **Proposition 3.6**.: _[_23_, Corollary 7 and Lemma 15]_ _Let \(M\) be a compact surface without boundary, and let \(n\in\mathbb{N}\). The group \(P_{n}(M)/\gamma_{2}(P_{n}(M))\) is isomorphic to:_ 1. \(\mathbb{Z}_{2}\oplus\mathbb{Z}^{n(n-3)/2}\) _if_ \(M=\mathbb{S}^{2}\) _and_ \(n\geq 3\)_._ 2. \(\mathbb{Z}^{2gn}\) _if_ \(M\) _is an orientable surface of genus_ \(g\geq 1\) _._ 3. 
\(\mathbb{Z}_{2}^{n}\oplus\mathbb{Z}^{(g-1)n}\) _if_ \(M\) _is a non-orientable surface of genus_ \(g\geq 1\)_._ **Remark 3.7**.: If \(M\) is the torus or the Klein bottle, it is easy to see that for all \(1\leq i<j\leq n\), \(C_{i,j}\in\gamma_{2}(P_{n}(M))\) by Theorem 3.1 (2). In fact, \(C_{i,j}\in\gamma_{2}(P_{n}(M))\) for all closed surfaces \(M\neq\mathbb{S}^{2}\), and the generators \(\rho_{i,r}\) form a basis for \(P_{n}(M)/\gamma_{2}(P_{n}(M))\)[23]. Now, adjusting to the notation of Theorem 3.1, if \(M=\mathbb{T}\) then \(\{a_{i},b_{i}\,:\,1\leq i\leq n\}\) is a basis for the free abelian group \(P_{n}(\mathbb{T})/\gamma_{2}(P_{n}(\mathbb{T}))\). On the other hand, if \(M=\mathbb{K}\), then by Theorem 3.1 (5), we have that \(\{a_{i}\,:\,1\leq i\leq n\}\) is a basis for the torsion part of \(P_{n}(\mathbb{K})/\gamma_{2}(P_{n}(\mathbb{K}))\) and \(\{b_{i}\,:\,1\leq i\leq n\}\) is a basis for the free part of \(P_{n}(\mathbb{K})/\gamma_{2}(P_{n}(\mathbb{K}))\). It is also straightforward to obtain the abelianization of \(B_{n}(M)\), by looking at the presentation of each group. This can be found in [5, 18, 21, 26], and we state the result below. **Proposition 3.8**.: _Let \(M\) be a compact surface without boundary, and let \(n\in\mathbb{N}\). The group \(B_{n}(M)/\gamma_{2}(B_{n}(M))\) is isomorphic to:_ 1. \(\mathbb{Z}_{2(n-1)}\) _if_ \(M=\mathbb{S}^{2}\) _and_ \(n\geq 2\)_._ 2. \(\mathbb{Z}_{2}\oplus\mathbb{Z}^{2g}\) _if_ \(M\) _is an orientable surface of genus_ \(g\geq 1\)_._ 3. \(\mathbb{Z}_{2}^{2}\oplus\mathbb{Z}^{g-1}\) _if_ \(M\) _is a non-orientable surface of genus_ \(g\geq 1\)_._ **Remark 3.9**.: It follows from the Artin relations that the \(\gamma_{2}(B_{n}(M))\)-cosets of \(\sigma_{1},\ldots,\sigma_{n-1}\) in \(B_{n}(M)/\gamma_{2}(B_{n}(M))\) are all identified with a single element, which we denote by \(\sigma\). 
In particular, if \(M\neq\mathbb{S}^{2}\), the element \(\sigma\) is a torsion element of order \(2\) of \(B_{n}(M)/\gamma_{2}(B_{n}(M))\), by Remarks 3.3 and 3.7. **Remark 3.10**.: Let \(M\) be the torus \(\mathbb{T}\) or the Klein bottle \(\mathbb{K}\). If \(2\leq i\leq n\), by (3.2) we have \[a_{i}=(\sigma_{i-1}\cdots\sigma_{1})\cdot a_{1}\cdot(\sigma_{1}\cdots\sigma_{i-1})\quad\text{and}\quad b_{i}=(\sigma_{i-1}^{-1}\cdots\sigma_{1}^{-1})\cdot b_{1}\cdot(\sigma_{1}^{-1}\cdots\sigma_{i-1}^{-1}), \tag{3.5}\] therefore, by Remark 3.9, it follows that the \(\gamma_{2}(B_{n}(M))\)-cosets of \(a_{1},\ldots,a_{n}\) (resp. \(b_{1},\ldots,b_{n}\)) in \(B_{n}(M)/\gamma_{2}(B_{n}(M))\) are all identified with a single element, which we denote by \(a\) (resp. \(b\)). If \(M=\mathbb{T}\), then \(a\) and \(b\) are torsion free, and if \(M=\mathbb{K}\), then \(a\) has order \(2\) and \(b\) is torsion free. ### Pure braid groups of \(\mathbb{T}\) and \(\mathbb{K}\) as iterated semi-direct products In this section, we give another presentation for some pure braid groups of the torus and the Klein bottle. If \(M\) is a surface without boundary, due to the work of E. Fadell and L. Neuwirth [12], we have the following short exact sequence of pure braid groups: \[1\longrightarrow\pi_{1}(M\setminus\{x_{1},\ldots,x_{n-1}\})\stackrel{\iota}{\longrightarrow}P_{n}(M)\stackrel{p_{j\ast}}{\longrightarrow}P_{n-1}(M)\longrightarrow 1 \tag{3.6}\] where \(n>3\) if \(M\) is the sphere \(\mathbb{S}^{2}\)[11, 13], \(n>2\) if \(M\) is the projective plane \(\mathbb{R}P^{2}\)[13], and \(n\geq 2\) otherwise [12]. The homomorphism \(\iota\) is the inclusion and \(p_{j\ast}\) can be interpreted as the homomorphism that erases the \(j\)-th string. One important result of Fadell and Neuwirth [12] guarantees that for \(M\) either the torus or the Klein bottle, due to the existence of a non-vanishing vector field on \(M\), the short exact sequence (3.6) splits for all \(n\); therefore, \(P_{n}(M)\) may be decomposed as an iterated semi-direct product, which is not true in general (see [19, Theorem 2]). 
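For orientation (a remark we add here, not part of the original sequence of results): the fiber of the map \(F_{n}(M)\to F_{n-1}(M)\) is \(M\) with \(n-1\) points removed, whose fundamental group is free of rank \(n\) when \(M=\mathbb{T}\) or \(M=\mathbb{K}\). Iterating the split sequence (3.6) therefore exhibits these pure braid groups as

\[P_{n}(\mathbb{T})\simeq F_{n}\rtimes\big(F_{n-1}\rtimes(\cdots\rtimes(F_{2}\rtimes\pi_{1}(\mathbb{T}))\cdots)\big)\quad\text{and}\quad P_{n}(\mathbb{K})\simeq F_{n}\rtimes\big(F_{n-1}\rtimes(\cdots\rtimes(F_{2}\rtimes\pi_{1}(\mathbb{K}))\cdots)\big),\]

with \(\pi_{1}(\mathbb{T})\simeq\mathbb{Z}\times\mathbb{Z}\) and \(\pi_{1}(\mathbb{K})\simeq\mathbb{Z}\rtimes\mathbb{Z}\). The explicit descriptions for small \(n\) given below are instances of this pattern.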
Using the explicit algebraic description of the section of \(p_{n\ast}\) (see [26, Proposition 5.1] and [33, Proposition 2.2.1]), we can obtain the desired iterated semi-direct product for all \(n\). In the following, we explicitly describe these structures for \(P_{2}(\mathbb{T})\), \(P_{3}(\mathbb{T})\), \(P_{4}(\mathbb{T})\) and \(P_{2}(\mathbb{K})\). We will make use of these alternative presentations in the proof of Theorem 1.3, through Lemmas 5.4, 5.5, 5.8 and 5.9. **Proposition 3.11**.: _The following assertions hold:_ 1. _The pure braid group_ \(P_{2}(\mathbb{T})\) _is isomorphic to_ \(G_{2}(\mathbb{T})=F_{2}\times\mathbb{Z}\times\mathbb{Z}\)_, the direct product of the free group_ \(F_{2}=\langle x,y\rangle\) _with_ \(\mathbb{Z}\times\mathbb{Z}=\langle a,b\rangle\)_._ 2. _The pure braid group_ \(P_{3}(\mathbb{T})\) _is isomorphic to_ \(G_{3}(\mathbb{T})=F_{3}\rtimes G_{2}(\mathbb{T})\)_, the semi-direct product of the free group_ \(F_{3}=\langle u,v,w\rangle\) _with the group_ \(G_{2}(\mathbb{T})\)_, defined above, equipped with the following action_ 1. \(a^{-1}za=z\)_, if_ \(z=u,v,w\)_;_ 2. \(x^{-1}zx=\left\{\begin{array}{ll}u&\mbox{if $z=u$},\\ u^{-1}vuw^{-1}&\mbox{if $z=v$},\\ w&\mbox{if $z=w$};\end{array}\right.\) _(c)_ \(b^{-1}zb=z\)_, if_ \(z=u,v,w\)_;_ 3. _The pure braid group_ \(P_{4}(\mathbb{T})\) _is isomorphic to_ \(G_{4}(\mathbb{T})=F_{4}\rtimes G_{3}(\mathbb{T})\)_, the semi-direct product of the free group_ \(F_{4}=\langle\bar{u},\bar{v},w_{2},w_{3}\rangle\) _with the group_ \(G_{3}(\mathbb{T})\)_, defined above, equipped with the following action_ 1. \(a^{-1}za=z\)_, if_ \(z=\bar{u},\bar{v},w_{2},w_{3}\)_;_ 2. 
\(x^{-1}zx=\left\{\begin{array}{ll}\bar{u}&(z=\bar{u}),\\ \bar{u}^{-1}\bar{v}\bar{u}w_{2}^{-1}&(z=\bar{v}),\\ w_{i}&(z=w_{i},\,i=2,3);\end{array}\right.\) _(e)_ \(u^{-1}zu=\left\{\begin{array}{ll}\bar{u}&(z=\bar{u}),\\ \bar{v}\bar{u}w_{2}^{-1}\bar{u}^{-1}&(z=\bar{v}),\\ w_{3}\bar{u}^{-1}w_{2}w_{3}^{-1}\bar{u}&(z=w_{2}),\\ w_{3}&(z=w_{3});\end{array}\right.\) _(f)_ \(v^{-1}zv=\left\{\begin{array}{ll}\bar{u}\bar{v}w_{2}\bar{v}^{-1}&(z=\bar{u}),\\ \bar{v}&(z=\bar{v}),\\ \bar{v}^{-1}w_{3}^{-1}w_{2}\bar{v}w_{3}&(z=w_{2}),\\ w_{3}&(z=w_{3});\end{array}\right.\) _(g)_ \(w^{-1}zw=\left\{\begin{array}{ll}w_{3}w_{2}^{-1}zw_{2}w_{3}^{-1}&(z=\bar{u},\bar{v}),\\ w_{3}w_{2}w_{3}^{-1}&(z=w_{2}),\\ w_{3}&(z=w_{3}).\end{array}\right.\)__ Proof.: We use the presentation of \(P_{n}(\mathbb{T})\) given in Theorem 3.1. For (i), consider the homomorphism \(\phi:G_{2}(\mathbb{T})\longrightarrow P_{2}(\mathbb{T})\) defined on the generators as: \[\phi:\left\{\begin{array}{rcl}x&\longmapsto&a_{2}\\ y&\longmapsto&b_{2}\\ a&\longmapsto&a_{1}a_{2}\\ b&\longmapsto&b_{2}b_{1}\end{array}\right. \tag{3.7}\] It is straightforward to prove that \(\phi\) is well defined and that \(\phi\) is bijective, by using that \(Z(P_{2}(\mathbb{T}))=\langle(a_{1}a_{2}),(b_{1}b_{2})\rangle\)[4, 32]. Similarly, for items (ii) and (iii), consider the homomorphisms \(\Phi:G_{3}(\mathbb{T})\longrightarrow P_{3}(\mathbb{T})\) and \(\Psi:G_{4}(\mathbb{T})\longrightarrow P_{4}(\mathbb{T})\) defined as: \[\Phi:\left\{\begin{array}{rcl}u&\longmapsto&a_{3}\\ v&\longmapsto&b_{3}\\ w&\longmapsto&C_{2,3}\end{array}\right.\qquad\qquad\qquad\Phi:\left\{\begin{array}{rcl}x&\longmapsto&a_{2}a_{3}\\ y&\longmapsto&b_{2}b_{3}\\ a&\longmapsto&a_{1}a_{2}a_{3}\\ b&\longmapsto&b_{1}b_{2}b_{3}\end{array}\right. 
\tag{3.8}\] and \[\Psi:\left\{\begin{array}{rcl}\bar{u}&\longmapsto&a_{4}\\ \bar{v}&\longmapsto&b_{4}\\ w_{2}&\longmapsto&C_{2,4}\\ w_{3}&\longmapsto&C_{3,4}\end{array}\right.\qquad\Psi:\left\{\begin{array}{rcl}u&\longmapsto&a_{3}a_{4}\\ v&\longmapsto&b_{3}b_{4}\\ w&\longmapsto&C_{2,3}C_{2,4}C_{3,4}^{-1}\end{array}\right.\qquad\Psi:\left\{\begin{array}{rcl}x&\longmapsto&a_{2}a_{3}a_{4}\\ y&\longmapsto&b_{2}b_{3}b_{4}\\ a&\longmapsto&a_{1}a_{2}a_{3}a_{4}\\ b&\longmapsto&b_{1}b_{2}b_{3}b_{4}\end{array}\right. \tag{3.9}\] It is straightforward to prove that \(\Phi\) and \(\Psi\) are well defined and bijective, by using Theorem 3.1. **Remark 3.12**.: The following relations are valid in \(G_{3}(\mathbb{T})\): 1. \(xvx^{-1}=uvwu^{-1}\); 2. \(yuy^{-1}=vuw^{-1}v^{-1}\). For completeness, we state below a similar result for the Klein bottle. The isomorphism is defined on the generators in the same way as \(\phi\) in (3.7). **Proposition 3.13** ([26], Remark 5.3).: _The pure braid group \(P_{2}(\mathbb{K})\) is isomorphic to \(G_{2}(\mathbb{K})=F_{2}\rtimes(\mathbb{Z}\rtimes\mathbb{Z})\), the semidirect product of the free group \(F_{2}=\langle x,y\rangle\) with \(\mathbb{Z}\rtimes\mathbb{Z}=\langle a,b\,|\,ab=ba^{-1}\rangle\), equipped with the following action_ 1. \(a^{-1}za=\left\{\begin{array}{ll}x&\mbox{if $z=x$},\\ x^{-2}y&\mbox{if $z=y$};\end{array}\right.\qquad\qquad\) _(2)_ \(b^{-1}zb=\left\{\begin{array}{ll}x^{-1}&\mbox{if $z=x$},\\ xyx&\mbox{if $z=y$};\end{array}\right.\)__ ## 4. The action \(Aut(P_{n}(M))\curvearrowright S(P_{n}(M))\) contains certain permutations In this section, we use homeomorphisms of the configuration spaces of \(M\neq\mathbb{S}^{2}\) to obtain automorphisms of \(P_{n}(M)\) which induce certain permutations of coordinates on the character spheres \(S(P_{n}(M))\). This will be useful in the computation of \(\Sigma^{1}(P_{n}(M))\) (Theorem 1.3). **Theorem 4.1**.: _Let \(M\neq\mathbb{S}^{2}\) be a closed surface. 
For any \(\tau\in S_{n}\), there is an automorphism of \(P_{n}(M)\) whose induced automorphism on \(P_{n}(M)^{Ab}\) is of the form \(\rho_{i,r}\mapsto\rho_{\tau(i),r}\), for \(1\leq i\leq n\) and \(1\leq r\leq 2g\), if \(M\) is orientable (resp. \(1\leq r\leq g\), if \(M\) is non-orientable)._ Proof.: First, notice that, since transpositions of the form \(\tau=(i\quad i+1)\), for \(1\leq i<n\), generate \(S_{n}\), it suffices to show the result for such a transposition \(\tau\). Given such \(\tau\), let \(f:F_{n}(M)\to F_{n}(M)\) be the homeomorphism \[f(x_{1},...,x_{i},x_{i+1},...,x_{n})=(x_{1},...,x_{i+1},x_{i},...,x_{n})\] which permutes the \(i\)-th and \((i+1)\)-th coordinates. Fix distinct base points \(q_{1},...,q_{n}\in M\) and denote \(Q=(q_{1},...,q_{n})\in F_{n}(M)\), so that we have by definition \(P_{n}(M)=\pi_{1}(F_{n}(M),Q)\). Since \(f(Q)=(q_{1},...,q_{i+1},q_{i},...,q_{n})=Q^{\prime}\), the map \(f\) induces the group isomorphism \[f_{*}:\pi_{1}(F_{n}(M),Q)\rightarrow\pi_{1}(F_{n}(M),Q^{\prime})\] with \(f_{*}([\gamma])=[f\circ\gamma]\). Furthermore, let \(\gamma*\delta\) denote the usual concatenation of two paths \(\gamma\) and \(\delta\), meaning "\(\gamma\) followed by \(\delta\)". The braid \(\sigma_{i}\in B_{n}(M)\) can be naturally seen as a path \(\gamma:[0,1]\to F_{n}(M)\) from \(Q\) to \(Q^{\prime}\). Denote by \(\hat{\gamma}:[0,1]\to F_{n}(M)\), \(\hat{\gamma}(t)=\gamma(1-t)\) its inverse path. By basic topology, we have the group isomorphism \[\psi:\pi_{1}(F_{n}(M),Q^{\prime})\longrightarrow\pi_{1}(F_{n}(M),Q),\qquad[\delta]\longmapsto[\gamma*\delta*\hat{\gamma}].\] Therefore, we obtain the group automorphism \(\varphi=\psi\circ f_{*}\) of \(\pi_{1}(F_{n}(M),Q)=P_{n}(M)\). Let us visualize the image under \(\varphi\) of \(\rho_{i,r}=[\tilde{\rho}_{i,r}]\), which is \([\gamma*(f\circ\tilde{\rho}_{i,r})*\hat{\gamma}]\). This is the homotopy class of a pure \(n\)-braid in \(M\). See, for example, Figure 5. 
By the definitions of \(\gamma\), \(\tilde{\rho}_{i,r}\) and \(f\), the \(i\)-th coordinate starts at \(q_{i}\), passes in front of the \((i+1)\)-th string via \(\sigma_{i}\) until \(q_{i+1}\), stays constant and then crosses the \((i+1)\)-th string again via \(\sigma_{i}^{-1}\). The \((i+1)\)-th coordinate starts at \(q_{i+1}\), passes behind the \(i\)-th string until \(q_{i}\), crosses the wall \(\epsilon_{i}\), coming back to \(q_{i}\) and then passes behind the \(i\)-th string until \(q_{i+1}\). All other coordinates stay constant in this composition of paths. Since \(\pi_{1}(F_{n}(M),Q)=P_{n}(M)\), this braid is the pure braid \(\sigma_{i}\rho_{i,r}\sigma_{i}^{-1}\) and, therefore, by relation (3.1) and Remark 3.3, it is \[\sigma_{i}\rho_{i,r}\sigma_{i}^{-1}=\sigma_{i}^{2}(\sigma_{i}^{-1}\rho_{i,r} \sigma_{i})\sigma_{i}^{-2}=\sigma_{i}^{2}\sigma_{i}^{-2}\rho_{i+1,r}\sigma_{i }^{-2}=\rho_{i+1,r}\sigma_{i}^{-2}=\rho_{i+1,r}C_{i,i+1}^{-1}.\] Therefore, \(\varphi(\rho_{i,r})=\rho_{i+1,r}C_{i,i+1}^{-1}\). Similarly, one can see that, under the identification \(\pi_{1}(F_{n}(M),Q)=P_{n}(M)\), the braid \(\varphi(\rho_{i+1,r})\) is \(\sigma_{i}\rho_{i+1,r}\sigma_{i}^{-1}\) and, therefore, by relation (3.1) and Remark 3.3, it is \[\sigma_{i}\rho_{i+1,r}\sigma_{i}^{-1}=\sigma_{i}^{2}(\sigma_{i}^{-1}\rho_{i+1, r}\sigma_{i})\sigma_{i}^{-2}=\sigma_{i}^{2}(\rho_{i,r}\sigma_{i}^{2}) \sigma_{i}^{-2}=\sigma_{i}^{2}\rho_{i,r}=C_{i,i+1}\rho_{i,r}.\] Therefore, \(\varphi(\rho_{i+1,r})=C_{i,i+1}\rho_{i,r}\). Also, for \(j\notin\{i,i+1\}\), again by relation (3.1) we have \(\varphi(\rho_{j,r})=\sigma_{i}\rho_{j,r}\sigma_{i}^{-1}=\rho_{j,r}\). Since \(C_{i,i+1}\in\gamma_{2}(P_{n}(M))\) by Remark 3.7, the theorem is proved for \(\tau=(i\quad i+1)\). This finishes our proof. The next corollary is immediate from Theorem 4.1. 
From now on, we will denote the coordinates of \(S(P_{n}(\mathbb{K}))\simeq S^{n-1}\) and \(S(P_{n}(\mathbb{T}))\simeq S^{2n-1}\) (see Remark 3.7) respectively by \((x_{1},...,x_{n})=(\chi(b_{1}),...,\chi(b_{n}))\) and \[(y_{1},...,y_{n})\times(x_{1},...,x_{n})=(\chi(a_{1}),...,\chi(a_{n}))\times(\chi(b_{1}),...,\chi(b_{n})).\] **Corollary 4.2**.: _For any \(\tau\in S_{n}\), the following assertions hold:_ 1. _there is an automorphism of_ \(P_{n}(\mathbb{K})\) _whose induced homeomorphism on_ \(S(P_{n}(\mathbb{K}))\)_, under the identification_ \(S(P_{n}(\mathbb{K}))\simeq S^{n-1}\)_, is the associated permutation of coordinates_ \[(x_{1},x_{2},...,x_{n})\mapsto(x_{\tau(1)},x_{\tau(2)},...,x_{\tau(n)});\] 2. _there is an automorphism of_ \(P_{n}(\mathbb{T})\) _whose induced homeomorphism on_ \(S(P_{n}(\mathbb{T}))\)_, under the identification_ \(S(P_{n}(\mathbb{T}))\simeq S^{2n-1}\)_, is the associated permutation of coordinates_ \[(y_{1},y_{2},...,y_{n})\times(x_{1},x_{2},...,x_{n})\mapsto(y_{\tau(1)},y_{\tau(2)},...,y_{\tau(n)})\times(x_{\tau(1)},x_{\tau(2)},...,x_{\tau(n)}).\] In particular, since \(\Sigma^{1}\) is invariant under the sphere homeomorphisms above, we obtain the following geometric results, which will be useful in the next section for the computation of \(\Sigma^{1}(P_{n}(M))\). **Corollary 4.3**.: _The BNS invariant \(\Sigma^{1}(P_{n}(\mathbb{K}))\) (and its complement \(\Sigma^{1}(P_{n}(\mathbb{K}))^{c}\)) is invariant under all permutations of coordinates in \(S(P_{n}(\mathbb{K}))\)._ **Corollary 4.4**.: _The BNS invariant \(\Sigma^{1}(P_{n}(\mathbb{T}))\) (and its complement \(\Sigma^{1}(P_{n}(\mathbb{T}))^{c}\)) is invariant under all permutations in \(S(P_{n}(\mathbb{T}))\) of the form_ \[(y_{1},y_{2},...,y_{n})\times(x_{1},x_{2},...,x_{n})\mapsto(y_{\tau(1)},y_{\tau(2)},...,y_{\tau(n)})\times(x_{\tau(1)},x_{\tau(2)},...,x_{\tau(n)}).\] ## 5. 
Computation of the BNS invariants of \(B_{n}(M)\) and \(P_{n}(M)\) In this section, we compute the BNS invariants for the total and pure braid groups of some closed surfaces of low genus. In Subsection 5.1, we compute \(\Sigma^{1}\) for the total braid groups of \(\mathbb{S}^{2}\), \(\mathbb{R}P^{2}\), \(\mathbb{T}\) and \(\mathbb{K}\), and in Subsection 5.2 for the pure braid groups of \(\mathbb{S}^{2}\) and \(\mathbb{R}P^{2}\). In Subsection 5.3, we focus on the two cases that turned out to be the most difficult ones: we show Theorem 1.3, which describes \(\Sigma^{1}\) of both \(P_{n}(\mathbb{T})\) and \(P_{n}(\mathbb{K})\). We suppose from now on that \(n\geq 2\), since the BNS invariants of the fundamental groups \(\pi_{1}(M)=P_{1}(M)=B_{1}(M)\) of the surfaces \(M\) are easy to compute. In fact, the trivial group \(\pi_{1}(\mathbb{S}^{2})\) and the finite group \(\pi_{1}(\mathbb{R}P^{2})\simeq\mathbb{Z}_{2}\) have empty character spheres and, therefore, empty invariants \(\Sigma^{1}\). The groups \(\pi_{1}(\mathbb{T})\simeq\mathbb{Z}\times\mathbb{Z}\) and \(\pi_{1}(\mathbb{K})\simeq\mathbb{Z}\rtimes\mathbb{Z}\) have full BNS invariant \(\Sigma^{1}(\pi_{1}(\mathbb{T}))=S(\pi_{1}(\mathbb{T}))\simeq\mathbb{S}^{1}\) and \(\Sigma^{1}(\pi_{1}(\mathbb{K}))=S(\pi_{1}(\mathbb{K}))\simeq\mathbb{S}^{0}\) because they are virtually abelian, and virtually abelian groups have full BNS invariants as a direct consequence of Propositions 2.1 and B1.11 of [35]. ### Computation of \(\Sigma^{1}(B_{n}(M))\) Let us first obtain the BNS invariants for \(B_{n}(M)\) in some cases. For the Artin braid group \(B_{n}\), the computation of \(\Sigma^{1}(B_{n})\) is well known. For the sake of completeness, we provide a short proof of this fact. **Proposition 5.1**.: _Let \(n\geq 2\). 
Then \(\Sigma^{1}(B_{n})=S(B_{n})=\mathbb{S}^{0}\)._ Proof.: Using the Artin relations, note that in the abelianized group \(B_{n}/\gamma_{2}(B_{n})\) we have \(\sigma_{i}=\sigma_{j}\) for all \(1\leq i,j\leq n-1\), so \(B_{n}/\gamma_{2}(B_{n})=\langle\sigma_{1}\rangle\simeq\mathbb{Z}\) and, therefore, \(S(B_{n})=\mathbb{S}^{0}\). If \([\chi]\in S(B_{n})\), then \(\chi(\sigma_{1})\neq 0\), hence \(\chi\) does not vanish on the full twist \(\Delta=(\sigma_{1}\cdots\sigma_{n-1})^{n}\), which generates the center of \(B_{n}\)[8]. Therefore, \(\chi(Z(B_{n}))\neq 0\) and \(\Sigma^{1}(B_{n})=S(B_{n})\simeq\mathbb{S}^{0}\) by Proposition 2.1. For the other surfaces, it is necessary to compute the character sphere of \(B_{n}(M)\) first. Notice that, according to Proposition 3.8, \(S(B_{n}(\mathbb{R}P^{2}))\) and \(S(B_{n}(\mathbb{S}^{2}))\) are empty, and therefore their \(\Sigma^{1}\) are also empty. Therefore, there is nothing to compute in these cases. If \(M\) is the torus or the Klein bottle, we obtain Theorem 1.2. Proof of Theorem 1.2.: By Proposition 3.8, it follows that \(S(B_{n}(\mathbb{T}))\simeq\mathbb{S}^{1}\) (resp. \(S(B_{n}(\mathbb{K}))\simeq\mathbb{S}^{0}\)), with the torsion-free generators of \(B_{n}(M)/\gamma_{2}(B_{n}(M))\) being \(a,b\) if \(M=\mathbb{T}\) (resp. \(b\) if \(M=\mathbb{K}\)) by Remark 3.10. Also, if \([\chi]\in S(B_{n}(M))\), then \(\chi(a_{1})=\chi(a_{i})\) and \(\chi(b_{1})=\chi(b_{i})\), for all \(1\leq i\leq n\). The center of \(B_{n}(M)\) is well known, namely \(Z(B_{n}(\mathbb{T}))=\langle(a_{1}\cdots a_{n}),(b_{1}\cdots b_{n})\rangle\simeq\mathbb{Z}^{2}\)[32, Proposition 4.2] and \(Z(B_{n}(\mathbb{K}))=\langle(b_{n}\cdots b_{1})^{2}\rangle\simeq\mathbb{Z}\)[26, Proposition 5.2]. Now, if \([\chi]\in S(B_{n}(M))\), then \(\chi(a_{1})\neq 0\) or \(\chi(b_{1})\neq 0\) if \(M=\mathbb{T}\) (resp. 
\(\chi(b_{1})\neq 0\) if \(M=\mathbb{K}\)); therefore, \(\chi(Z(B_{n}(M)))\neq 0\) and, by Proposition 2.1, it follows that \([\chi]\in\Sigma^{1}(B_{n}(M))\), as desired. The knowledge of the generators of \(Z(B_{n}(M))\) was essential in the previous proof. Since \(Z(B_{n}(M))\) is trivial if \(M\) is a compact surface without boundary and different from \(\mathbb{S}^{2},\mathbb{T},\mathbb{R}P^{2},\mathbb{K}\)[32, Proposition 1.6], the methods used above do not apply, and we were not able to compute \(\Sigma^{1}(B_{n}(M))\) for other surfaces. We intend to complete this task in the near future, as well as dealing with punctured surfaces. ### Computation of \(\Sigma^{1}(P_{n}(\mathbb{S}^{2}))\) and \(\Sigma^{1}(P_{n}(\mathbb{R}P^{2}))\) Now we focus on pure braid groups. If \(M=\mathbb{R}P^{2}\), then by Proposition 3.6 (3), the character sphere of \(P_{n}(\mathbb{R}P^{2})\) is empty, and so \(\Sigma^{1}(P_{n}(\mathbb{R}P^{2}))\) is empty. Let \(M=\mathbb{S}^{2}\). We know \(P_{n+1}(\mathbb{S}^{2})\) is finite if \(0\leq n\leq 2\)[13]. If \(n\geq 3\), then \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))\) can be obtained from the knowledge of \(\Sigma^{1}(P_{n})\) given in [28], as we shall see in the following. Recall that by [28, Lemma 2.5] the character sphere \(S(P_{n})\) is homeomorphic to \(S^{\binom{n}{2}-1}\), and a point \([\chi]\) is determined by the images \(\chi(A_{i,j})\), \(1\leq i<j\leq n\). The generators \(A_{i,j}\) of \(P_{n}\) (denoted by \(S_{i,j}\) in [28]) are written in terms of the Artin generators as \[A_{i,j}=\sigma_{j-1}\sigma_{j-2}...\sigma_{i+1}\sigma_{i}^{2}\sigma_{i+1}^{-1}...\sigma_{j-2}^{-1}\sigma_{j-1}^{-1}. 
\tag{5.1}\] According to [28, Definition 4.3], a point \([\chi]\) is said to belong to a \(P_{3}\)-circle (say, \(\mathcal{C}_{i,j,k}\)) iff there are \(1\leq i<j<k\leq n\) such that \[\left\{\begin{array}{l}\chi(A_{i,j})+\chi(A_{i,k})+\chi(A_{j,k})=0,\text{ and}\\ \chi(A_{r,s})=0\text{ if }\{r,s\}\not\subset\{i,j,k\}.\end{array}\right.\] A point \([\chi]\) is said to belong to a \(P_{4}\)-circle (say, \(\mathcal{C}_{i,j,k,l}\)) iff there are \(1\leq i<j<k<l\leq n\) such that \[\left\{\begin{array}{l}\chi(A_{i,j})=\chi(A_{k,l}),\\ \chi(A_{i,k})=\chi(A_{j,l}),\\ \chi(A_{i,l})=\chi(A_{j,k}),\\ \chi(A_{i,j})+\chi(A_{i,k})+\chi(A_{i,l})=0,\text{ and}\\ \chi(A_{r,s})=0\text{ if }\{r,s\}\not\subset\{i,j,k,l\}.\end{array}\right.\] **Theorem 5.2**.: _[_28_, Theorem A]_ _The BNS-invariant for the pure braid group \(P_{n}\) is the complement of the union of the \(P_{3}\)-circles and the \(P_{4}\)-circles in its character sphere. There are exactly \(\binom{n}{3}+\binom{n}{4}\) such circles._ We use the result above and the following relation to compute \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))\). As it is observed in [17, Theorem 4 (i)], the natural epimorphism \(P_{n+1}(\mathbb{S}^{2})\to P_{3}(\mathbb{S}^{2})\simeq\mathbb{Z}_{2}\) has a kernel isomorphic to \[H_{n-2}\simeq P_{n-2}(\mathbb{S}^{2}\setminus\{x_{0},x_{1},x_{2}\})\simeq P_{n-2}(\mathbb{D}^{2}\setminus\{x_{1},x_{2}\}).\] Furthermore, this exact sequence gives rise to the isomorphism \(P_{n+1}(\mathbb{S}^{2})\simeq H_{n-2}\times\mathbb{Z}_{2}\). We will therefore use the identifications \(P_{n+1}(\mathbb{S}^{2})=H_{n-2}\times\mathbb{Z}_{2}\) and \(H_{n-2}=P_{n-2}(\mathbb{D}^{2}\setminus\{x_{1},x_{2}\})\). Now, we make use of [17, Theorem 4 (ii)]: the natural epimorphism \(P_{n}\to P_{2}=\langle\Delta\rangle\simeq\mathbb{Z}\) also has a kernel isomorphic to \(H_{n-2}\). 
Therefore, the generators of \(H_{n-2}\) can be identified with the Artin generators of \(P_{n}\) (except \(A_{1,2}\)) and will be denoted by \(\tilde{A}_{i,j}\), for \(1\leq i<j\leq n\), \(\{i,j\}\neq\{1,2\}\). In addition, this exact sequence gives rise to an isomorphism \(\varphi:P_{n}\to H_{n-2}\times\mathbb{Z}\), by identifying \(\Delta\) with the full twist \(\Delta=(\sigma_{1}\cdots\sigma_{n-1})^{n}\in P_{n}\), which is a pure braid and can also be written in terms of the generators of \(P_{n}\) as \(\Delta=A_{1,2}(A_{1,3}A_{2,3})...(A_{1,n}...A_{n-1,n})\), as one may check by using equation (5.1). If \(\omega\in H_{n-2}\) is defined by \(\omega^{-1}=(\tilde{A}_{1,3}\tilde{A}_{2,3})...(\tilde{A}_{1,n}...\tilde{A}_{n-1,n})\), then by using combinatorial notation we have \[\left\{\begin{array}{l}\varphi(A_{1,2})=\Delta\cdot\omega,\text{ and }\\ \varphi(A_{i,j})=\tilde{A}_{i,j}\text{ if }\{i,j\}\neq\{1,2\}.\end{array}\right.\] **Definition 5.3**.: We say a point \([\chi]\in S(H_{n-2})\) belongs to a \(P_{3}\)-circle \(\widetilde{C}_{i,j,k}\) for some \(1\leq i<j<k\leq n\) iff there are two numbers \((p,q)\neq(0,0)\) such that the following equations are valid: \[\widetilde{C}_{i,j,k}:\left\{\begin{array}{l}\text{if }\{i,j\}=\{1,2\}:\left\{\begin{array}{l}\chi(\tilde{A}_{i,k})=p,\\ \chi(\tilde{A}_{j,k})=q,\\ \chi(\tilde{A}_{r,s})=0,\text{ for }\{r,s\}\not\subset\{i,j,k\}\end{array}\right.\\ \text{if }\{i,j\}\neq\{1,2\}:\left\{\begin{array}{l}\chi(\tilde{A}_{i,k})=p,\\ \chi(\tilde{A}_{j,k})=q,\\ \chi(\tilde{A}_{i,j})=-(p+q),\\ \chi(\tilde{A}_{r,s})=0,\text{ for }\{1,2\}\neq\{r,s\}\not\subset\{i,j,k\}\end{array}\right.\end{array}\right.\] Similarly, we say a point \([\chi]\in S(H_{n-2})\) belongs to a \(P_{4}\)-circle \(\widetilde{C}_{i,j,k,l}\) for some \(1\leq i<j<k<l\leq n\) iff there are two numbers \((p,q)\neq(0,0)\) such that the following equations are valid: \[\widetilde{C}_{i,j,k,l}:\left\{\begin{array}{l}\text{if }\{i,j\}=\{1,2\}: 
\left\{\begin{array}{l}\chi(\tilde{A}_{i,k})=\chi(\tilde{A}_{j,l})=p,\\ \chi(\tilde{A}_{i,l})=\chi(\tilde{A}_{j,k})=q,\\ \chi(\tilde{A}_{k,l})=-(p+q),\\ \chi(\tilde{A}_{r,s})=0,\text{ for }\{r,s\}\not\subset\{i,j,k,l\}\end{array}\right.\\ \text{if }\{i,j\}\neq\{1,2\}:\left\{\begin{array}{l}\chi(\tilde{A}_{i,k})=\chi(\tilde{A}_{j,l})=p,\\ \chi(\tilde{A}_{i,l})=\chi(\tilde{A}_{j,k})=q,\\ \chi(\tilde{A}_{i,j})=\chi(\tilde{A}_{k,l})=-(p+q),\\ \chi(\tilde{A}_{r,s})=0,\text{ }\{1,2\}\neq\{r,s\}\not\subset\{i,j,k,l\}\end{array}\right.\end{array}\right.\] Proof of Theorem 1.1.: Since \(\mathbb{Z}_{2}\) is finite, \(S(\mathbb{Z}_{2})\) must be empty and the map \(\pi_{1}^{*}\) of Proposition 2.2 is a homeomorphism. Then \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))^{c}=\pi_{1}^{*}(\Sigma^{1}(H_{n-2})^{c})\cup\pi_{2}^{*}(\Sigma^{1}(\mathbb{Z}_{2})^{c})=\Sigma^{1}(H_{n-2})^{c}\). First, let \(n=3\). Then, \(H_{n-2}=\pi_{1}(\mathbb{D}^{2}\setminus\{x_{1},x_{2}\})\simeq F_{2}\) is free. Since finitely generated free groups have empty \(\Sigma\)-invariant ([35, Section A2.1a, Example 3]), we have \(\Sigma^{1}(P_{n+1}(\mathbb{S}^{2}))=\emptyset\). Now, let \(n\geq 4\). Since a \(\Sigma^{1}\) invariant is mapped bijectively onto a \(\Sigma^{1}\) invariant under the induced map of a group isomorphism (see Section B1.2a of [35]), we have \[\Sigma^{1}(H_{n-2}\times\mathbb{Z})^{c}=(\varphi^{-1})^{*}(\Sigma^{1}(P_{n})^{c})=\{[\chi\circ\varphi^{-1}]\ |\ [\chi]\in\Sigma^{1}(P_{n})^{c}\}.\] Now, since \(\chi\circ\varphi^{-1}(\tilde{A}_{i,j})=\chi(A_{i,j})\) for all generators \(\tilde{A}_{i,j}\) of \(H_{n-2}\), it follows that the image under \((\varphi^{-1})^{*}\) of a \(P_{3}\)-circle \(\mathcal{C}_{i,j,k}\) (respectively, of a \(P_{4}\)-circle \(\mathcal{C}_{i,j,k,l}\)) of \(S(P_{n})\) is simply obtained by deleting the first coordinate \(\chi(A_{1,2})\) of the circle \(\mathcal{C}_{i,j,k}\) (resp. 
\(\mathcal{C}_{i,j,k,l}\)) (whether it is zero or not) and adding a zero last coordinate \(\chi\circ\varphi^{-1}(\Delta)=\chi(\Delta)=0\). So, \(\Sigma^{1}(H_{n-2}\times\mathbb{Z})^{c}\) is the union of these new circles. Finally, again by Proposition 2.2 we have \[\Sigma^{1}(H_{n-2}\times\mathbb{Z})^{c}=\pi_{1}^{*}(\Sigma^{1}(H_{n-2})^{c})\cup\pi_{2}^{*}(\Sigma^{1}(\mathbb{Z})^{c})=\pi_{1}^{*}(\Sigma^{1}(H_{n-2})^{c}),\] which means \(\Sigma^{1}(H_{n-2})^{c}\) is obtained by deleting the last coordinate \(\chi(\Delta)=0\) of all the circles of \(\Sigma^{1}(H_{n-2}\times\mathbb{Z})^{c}\). If we do this, it is easy to see that we obtain exactly the \(P_{3}\)-circles and \(P_{4}\)-circles of Definition 5.3. It is straightforward to check that all these circles are pairwise disjoint. This completes our proof. ### Computation of \(\Sigma^{1}(P_{n}(\mathbb{T}))\) and \(\Sigma^{1}(P_{n}(\mathbb{K}))\) This subsection is dedicated to presenting the BNS invariant for the pure braid groups of the torus and the Klein bottle by proving Theorem 1.3. The proof will be by induction on \(n\), for \(n\geq 2\), and for the sake of brevity, we will simultaneously deal with both cases \(M=\mathbb{T}\) and \(M=\mathbb{K}\), since both inductions turn out to be very similar. In order to start the induction process, we first deal with the case \(n=2\), as one can see in the following. The case of the torus is immediate since \(P_{2}(\mathbb{T})\) is a direct product; nonetheless, the case of the Klein bottle requires some more effort. **Lemma 5.4**.: _For \((p,q)\in\mathbb{R}^{2}\setminus\{(0,0)\}\), define \([\chi_{p,q}]\in S(P_{2}(\mathbb{T}))\) by \(\chi_{p,q}(a_{1})=\chi_{p,q}(a_{2}^{-1})=p\) and \(\chi_{p,q}(b_{1})=\chi_{p,q}(b_{2}^{-1})=q\). Then,_ \[\Sigma^{1}(P_{2}(\mathbb{T}))^{c}=\big{\{}[\chi_{p,q}]\,|\,(p,q)\in\mathbb{R}^{2}\setminus\{(0,0)\}\big{\}}\cong\mathbb{S}^{1}.\] Proof.: The proof follows directly from Proposition 3.11 (i) and Theorem 2.2. 
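In slightly more detail (a sketch we add here; it only uses the facts already stated above, namely \(\Sigma^{1}(F_{2})=\emptyset\), the fullness of \(\Sigma^{1}\) for finitely generated abelian groups, and the product formula for direct products): writing \(P_{2}(\mathbb{T})=F_{2}\times\mathbb{Z}^{2}\) as in Proposition 3.11 (i),

\[\Sigma^{1}(F_{2}\times\mathbb{Z}^{2})^{c}=\pi_{1}^{*}\big(\Sigma^{1}(F_{2})^{c}\big)\cup\pi_{2}^{*}\big(\Sigma^{1}(\mathbb{Z}^{2})^{c}\big)=\pi_{1}^{*}\big(S(F_{2})\big),\]

that is, the classes of characters which vanish on the factor \(\langle a,b\rangle\) and are nonzero on \(\langle x,y\rangle\). Under the identification (3.7), \(\chi(a)=\chi(a_{1})+\chi(a_{2})\) and \(\chi(b)=\chi(b_{1})+\chi(b_{2})\), so these are exactly the characters with \(\chi(a_{1})=-\chi(a_{2})=p\) and \(\chi(b_{1})=-\chi(b_{2})=q\) for some \((p,q)\neq(0,0)\), i.e. the circle \(\{[\chi_{p,q}]\}\).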
**Lemma 5.5**.: _Define \([\chi]\in S(P_{2}(\mathbb{K}))\) by \(\chi(b_{1})=\chi(b_{2}^{-1})=1\). Then,_ \[\Sigma^{1}(P_{2}(\mathbb{K}))^{c}=\{[\chi],[-\chi]\}.\] Proof.: First of all, since an isomorphism of groups induces a natural homeomorphism between the corresponding character spheres [35, Section B1.2a], we now consider \(P_{2}(\mathbb{K})\) as the group \(G_{2}(\mathbb{K})\) with presentation given by Proposition 3.13. We choose the set of generators \(X=\{x,y,a,b\}\) for the Cayley graph of \(P_{2}(\mathbb{K})\). By the isomorphism \(P_{2}(\mathbb{K})\simeq G_{2}(\mathbb{K})\) of Equation (3.7), we must have \(\chi(x)=0=\chi(a)\), \(\chi(y)=-1\) and, since \(b=b_{2}b_{1}\) (under the identification), we have \(\chi(b)=\chi(b_{2}b_{1})=\chi(b_{2})+\chi(b_{1})=-1+1=0\). Hence, for any vertex \(g\) in \(\Gamma(P_{2}(\mathbb{K}),X)\), the number \(-\chi(g)\) is the sum of the powers of \(y\) in \(g\). Furthermore, the points \([\chi],[-\chi]\) are the only ones in \(S(P_{2}(\mathbb{K}))\) that vanish on the center \(Z(P_{2}(\mathbb{K}))=\langle b^{2}\rangle\)[26, Proposition 5.2]. Then, it follows directly from Proposition 2.1 that \(\Sigma^{1}(P_{2}(\mathbb{K}))^{c}\subset\{[\chi],[-\chi]\}\). Let us show \(\{[\chi],[-\chi]\}\subset\Sigma^{1}(P_{2}(\mathbb{K}))^{c}\). Suppose first, by contradiction, that \([\chi]\in\Sigma^{1}(P_{2}(\mathbb{K}))\). Then, in particular, there is a path \(p\) from the vertex \(1\) to \(yxy^{-1}\) inside \(\Gamma_{\chi}\). From now on, we will use \(p\) to construct a path on the Cayley graph of the free group \(F_{2}\) which cannot exist because \(\Sigma^{1}(F_{2})=\emptyset\) ([35], A2.1a, item (3)). We will also use a straightforward normal form for \(P_{2}(\mathbb{K})\), which comes from its semidirect product structures: any element \(g\in P_{2}(\mathbb{K})\) can be uniquely written as \(g=\omega a^{n}b^{m}\), for \(\omega\in F_{2}=F(x,y)\) and \(n,m\in\mathbb{Z}\). Note that \(\chi(g)=\chi(\omega)\). 
For every such \(g\), let us describe the normal form of \(gz\), \(z\in X^{\pm 1}\), on the right-hand side of the equations below. One can straightforwardly check that 1. \(\omega a^{n}b^{m}a=\omega a^{n+(-1)^{m}}b^{m}\); 2. \(\omega a^{n}b^{m}b=\omega a^{n}b^{m+1}\); 3. \(\omega a^{n}b^{m}x=\omega x^{(-1)^{m}}a^{n}b^{m}\); 4. \(\omega a^{n}b^{m}y=\left\{\begin{array}{ll}\omega x^{2n}ya^{n}b^{m},&m\text{ even}\\ \omega x^{2n+1}yxa^{n}b^{m},&m\text{ odd}\end{array}\right.\) 5. \(\omega a^{n}b^{m}a^{-1}=\omega a^{n+(-1)^{m+1}}b^{m}\); 6. \(\omega a^{n}b^{m}b^{-1}=\omega a^{n}b^{m-1}\); 7. \(\omega a^{n}b^{m}x^{-1}=\omega x^{(-1)^{m+1}}a^{n}b^{m}\); 8. \(\omega a^{n}b^{m}y^{-1}=\left\{\begin{array}{ll}\omega y^{-1}x^{-2n}a^{n}b^{m},&m\text{ even}\\ \omega x^{-1}y^{-1}x^{-2n-1}a^{n}b^{m},&m\text{ odd}\end{array}\right.\) The path \(p\) provides us with a sequence \(g_{0},g_{1},...,g_{k}\in P_{2}(\mathbb{K})\) such that \(g_{0}=1\), \(g_{k}=yxy^{-1}\) and \(g_{i+1}=g_{i}z_{i}\) for some \(z_{i}\in X^{\pm 1}\), for every \(0\leq i<k\). Furthermore, since \(p\) lies inside \(\Gamma_{\chi}\), we have \(\chi(g_{i})\geq 0\) for \(0\leq i\leq k\). By writing \(g_{i}=\omega_{i}a^{n_{i}}b^{m_{i}}\) for \(i\geq 1\), we get \(\chi(\omega_{i})=\chi(g_{i})\geq 0\). By uniqueness of the normal form, we have a sequence \(\omega_{0},\omega_{1},...,\omega_{k}\in F_{2}\) with \(\omega_{0}=1\), \(\omega_{k}=yxy^{-1}\) and \(\chi(\omega_{i})\geq 0\). For us to obtain a path on the Cayley subgraph \(\Gamma(F_{2},\{x,y\})_{\chi|_{F_{2}}}\), we will connect, for every \(i\geq 0\), the vertices \(\omega_{i}\) and \(\omega_{i+1}\) inside this subgraph. Denote for a moment \(\omega_{i}=\omega\), \(\omega_{i+1}=\omega^{\prime}\) and \(z_{i}=z\). Since \(g_{i+1}=g_{i}z\), \(z\in X^{\pm 1}\), there are 8 possibilities for \(z\). 
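The normal-form computations in rules 1-8 are purely mechanical, and their bookkeeping can be sketched in code (an illustration of ours, not part of the argument; the encoding of words in \(F(x,y)\) as tuples of letters is an arbitrary choice):

```python
# Sketch of the normal-form bookkeeping in rules 1-8 above (illustrative only).
# An element g = w a^n b^m of P_2(K) is stored as (w, n, m), where w is a word
# in the free group F(x, y) encoded as a tuple such as ('x', 'y^-1').

def xpow(k):
    """The word x^k as a tuple of letters (k may be negative)."""
    return ('x',) * k if k >= 0 else ('x^-1',) * (-k)

def mult(g, z):
    """Right-multiply the normal form g = (w, n, m) by a generator z."""
    w, n, m = g
    s = 1 if m % 2 == 0 else -1                       # s = (-1)^m
    if z == 'a':
        return (w, n + s, m)                          # rule 1
    if z == 'a^-1':
        return (w, n - s, m)                          # rule 5
    if z == 'b':
        return (w, n, m + 1)                          # rule 2
    if z == 'b^-1':
        return (w, n, m - 1)                          # rule 6
    if z == 'x':
        return (w + xpow(s), n, m)                    # rule 3
    if z == 'x^-1':
        return (w + xpow(-s), n, m)                   # rule 7
    if z == 'y':                                      # rule 4
        if m % 2 == 0:
            return (w + xpow(2 * n) + ('y',), n, m)
        return (w + xpow(2 * n + 1) + ('y', 'x'), n, m)
    if z == 'y^-1':                                   # rule 8
        if m % 2 == 0:
            return (w + ('y^-1',) + xpow(-2 * n), n, m)
        return (w + ('x^-1', 'y^-1') + xpow(-2 * n - 1), n, m)
    raise ValueError(z)

def chi(g):
    """chi(g) = chi(w), with chi(x) = 0 and chi(y) = -1."""
    w, _, _ = g
    return w.count('y^-1') - w.count('y')
```

For instance, multiplying \(1\) by \(y\), \(x\), \(y^{-1}\) in turn produces the normal form of \(yxy^{-1}\), and evaluating chi after the first step already gives \(-1<0\), reflecting why the geodesic \((1,yxy^{-1})\) does not lie in \(\Gamma(F_{2})_{\chi}\).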
For each of them, we will use one of the 8 normal forms above to create a path \(q\) from \(\omega\) to \(\omega^{\prime}\) inside \(\Gamma(F_{2})_{\chi}=\Gamma(F_{2},\{x,y\})_{\chi|_{F_{2}}}\). One can then easily check that the equation \(\chi(x)=0\) guarantees that \(\nu_{\chi}(q)\geq 0\), so the paths \(q\) below are inside \(\Gamma(F_{2})_{\chi}\). * Case \(z\in\{a,a^{-1},b,b^{-1}\}\): in this case, \(\omega^{\prime}=\omega\) and \(q\) is the constant (or trivial) path; * Case \(z=x\) (resp. \(z=x^{-1}\)): in this case, \(\omega^{\prime}=\omega x^{(-1)^{m}}\) (resp. \(\omega^{\prime}=\omega x^{(-1)^{m+1}}\)); therefore, the path \(q=(\omega,x^{(-1)^{m}})\) (resp. \(q=(\omega,x^{(-1)^{m+1}})\)) connects \(\omega\) to \(\omega^{\prime}\) inside \(\Gamma(F_{2})_{\chi}\); * Case \(z=y\), \(m\) even (resp. \(m\) odd): in this case, \(\omega^{\prime}=\omega x^{2n}y\) (resp. \(\omega^{\prime}=\omega x^{2n+1}yx\)). If \(n\geq 0\), the path \(q=(\omega,x...xy)\) (resp. \(q=(\omega,x...xyx)\)), with direction \(x\) being travelled \(2n\) (resp. \(2n+1\)) times, connects \(\omega\) to \(\omega^{\prime}\) inside \(\Gamma(F_{2})_{\chi}\). If \(n<0\), \(q\) can be taken similarly as \(q=(\omega,x^{-1}...x^{-1}y)\) (resp. \(q=(\omega,x^{-1}...x^{-1}yx)\)); * Case \(z=y^{-1}\), \(m\) even (resp. \(m\) odd): in this case, \(\omega^{\prime}=\omega y^{-1}x^{-2n}\) (resp. \(\omega^{\prime}=\omega x^{-1}y^{-1}x^{-2n-1}\)). If \(n<0\), the path \(q\) can be taken as \(q=(\omega,y^{-1}x...x)\) (resp. \(q=(\omega,x^{-1}y^{-1}x...x)\)), with direction \(x\) being travelled \(-2n\) (resp. \(-2n-1\)) times. If \(n\geq 0\), \(q\) can be similarly taken as \(q=(\omega,y^{-1}x^{-1}...x^{-1})\) (resp. \(q=(\omega,x^{-1}y^{-1}x^{-1}...x^{-1})\)). Thus, we obtain a path \(p^{\prime}\) from 1 to \(yxy^{-1}\) in \(\Gamma(F_{2})_{\chi}\). By removing possible backtrackings of the form \(zz^{-1}\) in \(p^{\prime}\), we can also assume \(p^{\prime}\) to be a geodesic. 
This is a contradiction, for \(\Gamma(F_{2})\) is known to be a tree and the only geodesic from 1 to \(yxy^{-1}\) is the path \((1,yxy^{-1})\), which is not inside \(\Gamma(F_{2})_{\chi}\), for \(\chi(y)=-1<0\). It follows that \([\chi]\notin\Sigma^{1}(P_{2}(\mathbb{K}))\). In a similar way, by using the element \(y^{-1}xy\), one shows that \([-\chi]\notin\Sigma^{1}(P_{2}(\mathbb{K}))\). **Remark 5.6**.: We could have obtained a much shorter proof of the fact that \(\{[\chi],[-\chi]\}\subset\Sigma^{1}(P_{2}(\mathbb{K}))^{c}\) in Lemma 5.5 by using [26], as follows: from [26, Theorem 5.4] one can conclude that \(\gamma_{2}(P_{2}(\mathbb{K}))=(P_{2}(\mathbb{K}))^{\prime}\) is not finitely generated. By [35, Theorem A4.1], either \([\chi]\) or \([-\chi]\) is outside \(\Sigma^{1}\). Then, by Corollary 4.3, we conclude that \(\{[\chi],[-\chi]\}\subset\Sigma^{1}(P_{2}(\mathbb{K}))^{c}\), as desired. However, with [35, Theorem A4.1] and our proof of Lemma 5.5, we get a new and independent proof of the fact that the commutator subgroup \((P_{2}(\mathbb{K}))^{\prime}\) of \(P_{2}(\mathbb{K})\) is not finitely generated. **Corollary 5.7**.: _Let \(M\) be the torus or the Klein bottle, \(n\geq 3\) and \([\chi]\in S(P_{n}(M))\). If \(\chi(a_{i})=\chi(a_{j}^{-1})=p\) and \(\chi(b_{i})=\chi(b_{j}^{-1})=q\), for \(M=\mathbb{T}\) (resp. \(\chi(b_{i})=\chi(b_{j}^{-1})=1\), for \(M=\mathbb{K}\)) and \(\chi(a_{k})=\chi(b_{k})=0\) for \(1\leq k\leq n\), \(k\neq i,j\), then \([\chi]\notin\Sigma^{1}(P_{n}(M))\)._ Proof.: Consider the projection \(\beta_{i,j}:F_{n}(M)\to F_{2}(M)\), \((x_{1},\dots,x_{i},\dots,x_{j},\dots,x_{n})\mapsto(x_{i},x_{j})\), and the induced homomorphism \(\beta_{i,j_{*}}:P_{n}(M)\to P_{2}(M)\), which is a composition of some of the homomorphisms from the Fadell-Neuwirth short exact sequence (3.6). Geometrically, each \(n\)-braid is sent to a \(2\)-braid by deleting all but the \(i\)-th and \(j\)-th strings. 
The result then follows directly from [35, Corollary B1.8] and Lemmas 5.4 and 5.5. It turns out that, since the sphere \(S(P_{n}(\mathbb{T}))\) has higher dimension than \(S(P_{n}(\mathbb{K}))\), the induction step for the case \(M=\mathbb{T}\) required some additional lemmas about the particular cases \(n=3,4\). We deal with them in what follows. **Lemma 5.8**.: _Let \(p,q>0\) and \([\chi]\in S(P_{3}(\mathbb{T}))\), such that_ \[\chi(b_{i})=\chi(b_{j}^{-1})=q,\quad\chi(a_{\tau(i)})=\chi(a_{\tau(j)}^{-1})=p,\quad\chi(b_{k})=\chi(a_{\tau(k)})=0,\] _with \(\tau\in S_{3}\), \(\tau(k)\neq k\) and \(\{i,j,k\}=\{1,2,3\}\). Then, \([\chi]\) belongs to \(\Sigma^{1}(P_{3}(\mathbb{T}))\)._ Proof.: First of all, by Corollary 4.4, we can put the coordinates \((\chi(a_{1}),\chi(a_{2}),\chi(a_{3}))\) in ascending order, reduce the possibilities, and consider only the cases where \((\chi(a_{1}),\chi(a_{2}),\chi(a_{3}))\times(\chi(b_{1}),\chi(b_{2}),\chi(b_{3}))\) equals one of the following: (a) \((-p,0,p)\times(-q,q,0)\); (b) \((-p,0,p)\times(q,-q,0)\); (c) \((-p,0,p)\times(0,q,-q)\); (d) \((-p,0,p)\times(0,-q,q)\). Also, we will consider the isomorphism \(\Phi\) between the group \(G_{3}(\mathbb{T})\) and \(P_{3}(\mathbb{T})\) given in (3.8), in the proof of Proposition 3.11 (ii), and use the Geometric Criterion given in Theorem 2.3 to show that \([\chi]\in\Sigma^{1}(P_{3}(\mathbb{T}))\), by constructing the paths \(p_{z}\) satisfying \(\nu_{\chi}(p_{z})>\nu_{\chi}(1,z)\), for \(z\in Z=\{a^{\pm},b^{\pm},x^{\pm},y^{\pm},u^{\pm},v^{\pm},w^{\pm}\}\), in each of the cases above. To prove (a) (resp. (b)), notice that the only generators of \(G_{3}(\mathbb{T})\) that have a non-null image are \(x,u,y\), with \[\chi(x)=\chi(u)=p>0\quad\text{and}\quad\chi(y)=q>0\quad(\text{resp.}\,\chi(y)=-q<0).\] Fix \(t=x\). If \(z\in\{a^{\pm},b^{\pm},u^{\pm},w^{\pm}\}\) then \(t\) commutes with \(z\), and we can choose the trivial path \(p_{z}=(t,z)\). 
To construct the paths \(p_{z}\) for \(z\in\{y^{\pm},v^{\pm}\}\), we will use the relations in Proposition 3.11 (ii) and Remark 3.12. It is straightforward to check that the following relations are valid in \(G_{3}(\mathbb{T})\), by writing the elements on the right-hand side in their normal forms, coming from the semidirect product structure. For example, the normal form of \(x(w^{1}v^{-1}yu^{-1}vuvy^{-1})\) is \(vx\). \[x^{-1}vx=w^{1}v^{-1}yu^{-1}vuvy^{-1}, \tag{5.2}\] \[x^{-1}yx=ux^{-1}yw^{-1}v^{-1}u^{-1}vx, \tag{5.3}\] \[x^{-1}vx=vwy^{-1}u^{-1}vuvyw^{-1}v^{-1}w^{-1}, \tag{5.4}\] \[x^{-1}yx=x^{-1}vuyw^{-1}v^{-1}xu^{-1}. \tag{5.5}\] Relations (5.2) and (5.3) (resp. (5.4) and (5.5)) give us the paths \(p_{v}\) and \(p_{y}\) to prove (a) (resp. (b)). By taking the inverse of each relation, we obtain the paths \(p_{v^{-1}}\) and \(p_{y^{-1}}\). To prove (c) (resp. (d)), notice that the only generators that have a non-null image are \(x,u,v\), with \[\chi(x)=\chi(u)=p>0\quad\text{and}\quad\chi(v)=q>0\quad(\text{resp.}\,\chi(v)=-q<0).\] This time we fix \(t=v\) (resp. \(t=v^{-1}\)) to obtain the paths \(p_{z}\). Now, if \(z\in\{a^{\pm},b^{\pm},y^{\pm}\}\) then \(t\) commutes with \(z\) and we can choose the trivial path \(p_{z}=(t,z)\). To construct the paths \(p_{z}\) for \(z\in\{u^{\pm},x^{\pm},w^{\pm}\}\), we will again use the relations in Proposition 3.11 (ii) and Remark 3.12. It is straightforward to check that the following relations are valid in \(G_{3}(\mathbb{T})\), by writing the elements on the right-hand side in their normal forms, coming from the semidirect product structure. \[v^{-1}uv=y^{-1}xv^{-1}ux^{-1}vy, \tag{5.6}\] \[v^{-1}xv=wxu^{-1}y^{-1}uw^{-1}y, \tag{5.7}\] \[v^{-1}wv=uw^{-1}v^{-1}yxu^{-1}wvx^{-1}y^{-1}vuw^{-1}v^{-1}u^{-1}, \tag{5.8}\] \[vuv^{-1}=yxvux^{-1}y^{-1}v^{-1}, \tag{5.9}\] \[vxv^{-1}=xu^{-1}yuy^{-1}, \tag{5.10}\] \[vwv^{-1}=uvwy^{-1}xu^{-1}wv^{-1}x^{-1}yv^{-1}uvwu^{-1}. \tag{5.11}\] Relations (5.6), (5.7) and (5.8) (resp. 
(5.9), (5.10) and (5.11)) give us the paths \(p_{u}\), \(p_{x}\) and \(p_{w}\) to prove (c) (resp. (d)). By taking the inverse of each relation, we similarly obtain the remaining paths \(p_{u^{-1}}\), \(p_{x^{-1}}\) and \(p_{w^{-1}}\). **Lemma 5.9**.: _Let \(p,q>0\) and \([\chi]\in S(P_{4}(\mathbb{T}))\). Then, \([\chi]\) belongs to \(\Sigma^{1}(P_{4}(\mathbb{T}))\) if_ \[(\chi(a_{1}),\chi(a_{2}),\chi(a_{3}),\chi(a_{4}))\times(\chi(b_{1}),\chi(b_{2}),\chi(b_{3}),\chi(b_{4}))\] _is equal to one of the following cases:_ (a) \((-p,0,0,p)\times(0,-q,q,0)\); (b) \((-p,0,0,p)\times(0,q,-q,0)\). Proof.: We will use the Geometric Criterion, given in Theorem 2.3, and the isomorphism \(\Psi\) between \(G_{4}(\mathbb{T})\) and \(P_{4}(\mathbb{T})\) given in (3.9), in the proof of Proposition 3.11 (iii). To show that \([\chi]\in\Sigma^{1}(P_{4}(\mathbb{T}))\) in case (a) (resp. case (b)) we fix \(t=v\) (resp. \(t=v^{-1}\)). Notice that the only generators of \(G_{4}(\mathbb{T})\) that have a non-null image are \(x,u,\bar{u},v\), with \[\chi(x)=\chi(u)=\chi(\bar{u})=p>0\quad\text{and}\quad\chi(v)=q>0\quad(\text{resp.}\,\chi(v)=-q<0),\] so we use the same paths as in case (c) (resp. case (d)) of the proof of Lemma 5.8 if \(z\in\{a^{\pm},b^{\pm},x^{\pm},y^{\pm},u^{\pm},v^{\pm},w^{\pm}\}\). Relation (f) in Proposition 3.11 (iii) gives us the remaining paths \(p_{z}\) for \(z\in\{\bar{u}^{\pm},\bar{v}^{\pm},w_{2}^{\pm},w_{3}^{\pm}\}\), since \(\chi(\bar{v})=\chi(w_{2})=\chi(w_{3})=0\). Now, we are finally able to show Theorem 1.3. Proof of Theorem 1.3.: We proceed by induction on \(n\geq 2\). If \(n=2\), the theorem is valid by Lemmas 5.4 and 5.5. 
Let \(n\geq 3\) and suppose, therefore, that the theorem is true for \(n-1\). Denote by \(A_{n}(M)\) the following set \[A_{n}(M)=\left\{\begin{array}{ll}\{[\chi_{i,j,p,q}]\ |\ 1\leq i,j\leq n,\,i\neq j,\,(p,q)\neq(0,0)\},&M=\mathbb{T};\\ \{[\chi_{i,j}]\ |\ 1\leq i,j\leq n,\,i\neq j\},&M=\mathbb{K}.\end{array}\right.\] It follows from Corollary 5.7 that \(A_{n}(M)\subset\Sigma^{1}(P_{n}(M))^{c}\). We are then left to show that \(\Sigma^{1}(P_{n}(M))^{c}\subset A_{n}(M)\). Consider \([\chi]\notin A_{n}(M)\) and let us show that \([\chi]\in\Sigma^{1}(P_{n}(M))\). Note that, if \([\chi]\in S(P_{n}(M))\) then \(\chi(C_{i,j})=0\) for all \(i,j\), by Remark 3.7. If \(M=\mathbb{K}\), we also have that \(\chi(a_{i})=0\) for \(i=1,\ldots,n\), by Remark 3.7. In the following, we will use that \(Z(P_{n}(M))\) is generated by \((a_{1}\cdots a_{n})\) and \((b_{1}\cdots b_{n})\), if \(M=\mathbb{T}\) [32, Proposition 4.2] (resp. by \((b_{n}\cdots b_{1})^{2}\), if \(M=\mathbb{K}\) [26, Proposition 5.2]). If \([\chi]\in S(P_{n}(M))\) is such that \(\sum_{i=1}^{n}\chi(a_{i})\neq 0\) or \(\sum_{i=1}^{n}\chi(b_{i})\neq 0\), it follows that \([\chi]\in\Sigma^{1}(P_{n}(M))\) by Proposition 2.1. Suppose, from now on, that \[\sum_{i=1}^{n}\chi(a_{i})=0\quad\text{and}\quad\sum_{i=1}^{n}\chi(b_{i})=0. \tag{5.12}\] Thanks to Corollaries 4.3 and 4.4, we can assume without loss of generality that \[\chi(b_{1})\leq\chi(b_{2})\leq\cdots\leq\chi(b_{n}). \tag{5.13}\] We will analyze the following cases (which cover all possibilities) and show that \([\chi]\in\Sigma^{1}(P_{n}(M))\) in each one. Notice that cases (iv) and (v) can only occur if \(M=\mathbb{T}\) and, in cases (ii) and (iii), we have \(\chi(b_{n})>0\) and \(\chi(b_{1})<0\). (i) \(\chi(b_{j})=\chi(a_{j})=0\), for some \(1<j<n\); (ii) \(|\chi(b_{1})|>|\chi(b_{n})|\); (iii) \(|\chi(b_{1})|<|\chi(b_{n})|\); (iv) \(|\chi(b_{n})|=|\chi(b_{1})|\) and there exists \(1\leq j\leq n\) such that \(\chi(a_{j})+\chi(a_{k})<0\), for \(1\leq k\leq n\); (v) 
\(|\chi(b_{n})|=|\chi(b_{1})|\) and there exists \(1\leq j\leq n\) such that \(\chi(a_{j})+\chi(a_{k})>0\), for \(1\leq k\leq n\); (vi) \(|\chi(b_{n})|=|\chi(b_{1})|\) and none of the cases above holds. In case (i), consider the induced homomorphism \(p_{j_{*}}:P_{n}(M)\to P_{n-1}(M)\) of the projection defined in (11) of the Fadell-Neuwirth short exact sequence (3.6). We have \([\chi]=[\tilde{\chi}\circ p_{j_{*}}]\) for some \([\tilde{\chi}]\in S(P_{n-1}(M))\). Since \([\chi]\notin A_{n}(M)\) we have \([\tilde{\chi}]\notin A_{n-1}(M)\), so \([\tilde{\chi}]\in\Sigma^{1}(P_{n-1}(M))\) by the induction hypothesis. Since the kernel \(\pi_{1}(M\setminus\{n-1\ \text{pts}\})\) is finitely generated, it follows from [35, Corollary B1.8] that \([\chi]\in\Sigma^{1}(P_{n}(M))\). In case (ii), first notice that, by (5.13), we have \[\chi(b_{1}^{-1})+\chi(b_{j}^{-1})>0,\ \text{for all}\ 2\leq j\leq n.\] Let us use the Geometric Criterion, given in Theorem 2.3, to show that \([\chi]\in\Sigma^{1}(P_{n}(M))\). Since \(\chi(b_{1}^{-1})>0\), fix \(t=b_{1}^{-1}\) and remember that, by Theorem 3.1, \[Z=\left\{a_{i}^{\pm 1},b_{i}^{\pm 1},C_{j,k}^{\pm 1}\,|\,1\leq i\leq n,\ 1\leq j<k\leq n\right\}\] is a set of generators for \(P_{n}(M)\) together with their inverses. If \(z=a_{j}\) with \(1<j\leq n\), we construct the path \(p_{z}\) satisfying Theorem 2.3 as follows: by Proposition 3.2, relation \((S_{2})\) with \(i=1\), we have \(b_{1}a_{j}b_{1}^{-1}=a_{j}C_{1,j}^{-1}C_{2,j}\). 
Multiplying on the left by \(b_{1}^{-1}\), we obtain \(a_{j}b_{1}^{-1}=b_{1}^{-1}a_{j}C_{1,j}^{-1}C_{2,j}\), which implies that the path \(p_{z}=(b_{1}^{-1},a_{j}C_{1,j}^{-1}C_{2,j})\) satisfies Theorem 2.3, for \[\nu_{\chi}(p_{z})=\chi(b_{1}^{-1})+\chi(a_{j})>\chi(a_{j})=\nu_{\chi}(1,a_{j}).\] Similarly, for \(z=a_{j}^{-1}\), we do the following: again by Proposition 3.2, relation \((S_{2})\) with \(i=1\), we have \(b_{1}a_{j}^{-1}b_{1}^{-1}=C_{2,j}^{-1}C_{1,j}a_{j}^{-1}\), which implies \(a_{j}^{-1}b_{1}^{-1}=b_{1}^{-1}C_{2,j}^{-1}C_{1,j}a_{j}^{-1}\). Then, the path \(p_{z}=(b_{1}^{-1},C_{2,j}^{-1}C_{1,j}a_{j}^{-1})\) satisfies Theorem 2.3, for \(\nu_{\chi}(p_{z})=\chi(b_{1}^{-1})+\chi(a_{j}^{-1})>\chi(a_{j}^{-1})=\nu_{\chi}(1,a_{j}^{-1})\). With the same strategy as above, for \(z=C_{1,j}^{\pm 1}\), \(1<j\leq n\), we can easily obtain the desired path \(p_{z}\) satisfying Theorem 2.3 by using relation \((S_{4})\). The fact that \(\chi(b_{1}^{-1})+\chi(b_{j}^{-1})>0\) gives us that \(\nu_{\chi}(p_{z})>0=\nu_{\chi}(1,z)\). For \(z=b_{j}^{\pm 1}\), \(1<j\leq n\) (the case \(j=1\) is trivial), we obtain \(p_{z}\) satisfying Theorem 2.3 by using relation \((S_{5})\). For \(z=C_{j,k}^{\pm 1}\), \(1<j<k\leq n\), we have that \(z\) and \(b_{1}\) commute by relation (8) from Theorem 3.1, so we easily obtain such a \(p_{z}\). Finally, for \(z=a_{1}^{\pm 1}\), by relation (5) from Theorem 3.1, we have \[b_{1}a_{1}^{-1}b_{1}^{-1}=\begin{cases}a_{1}^{-1}(\prod_{j=2}^{n}C_{1,j}^{-1}C_{2,j}),&\quad M=\mathbb{T},\\ \\ (\prod_{j=2}^{n}C_{1,j}C_{2,j}^{-1})a_{1},&\quad M=\mathbb{K},\end{cases}\] which gives the desired \(\chi\)-positive path \(p_{z}\). Then \([\chi]\in\Sigma^{1}(P_{n}(M))\), as we wanted. In case (iii), we fix \(t=b_{n}\) to use Theorem 2.3. Notice that, by (5.13), the paths \(p_{i}=(b_{n},b_{n-1}\cdots b_{i})=(b_{n},\beta_{n-1,i})\) are \(\chi\)-positive for all \(2\leq i<n\). 
Thus, we obtain a path \(p_{z}\) from \(b_{n}\) to \(zb_{n}\) in the Cayley graph by using relation \((P_{1})\) from Lemma 3.5 if \(z=a_{i}^{\pm 1}\) with \(1\leq i\leq n\), and relation (6) (resp. relation (8)) from Theorem 3.1 if \(z=b_{i}^{\pm 1}\) with \(1\leq i<n\) (resp. \(z=C_{i,j}^{\pm 1}\), \(1\leq i<j<n\)). And if \(z=C_{i,n}^{\pm 1}\), for \(1\leq i<n\), then by Proposition 3.2 \((S_{4})\), the equality \[b_{n}^{-1}C_{i,n}b_{n}=\begin{cases}\prod_{j=1}^{n-i}b_{n}^{-1}C_{n-j+1,n}^{-1}C_{n-j,n}b_{n}=\prod_{j=1}^{n-i}b_{n-j}C_{n-j,n}C_{n-j+1,n}^{-1}b_{n-j}^{-1},&\quad M=\mathbb{T},\\ \prod_{j=1}^{n-i}b_{n}^{-1}C_{n-j+1,n}^{-1}C_{n-j,n}b_{n}=\prod_{j=1}^{n-i}b_{n-j}(C_{n-j,n}C_{n-j+1,n}^{-1})^{-1}b_{n-j}^{-1},&\quad M=\mathbb{K},\end{cases}\] provides us with a \(\chi\)-positive path, since \(\chi(b_{n})+\chi(b_{j})>0\), for all \(1\leq j\leq n-1\). Then \([\chi]\in\Sigma^{1}(P_{n}(M))\), as desired. In cases (iv) (resp. (v)), we can prove that \([\tilde{\chi}]\in\Sigma^{1}(P_{n}(\mathbb{T}))\), where \(\tilde{\chi}(a_{\tau(i)})=\chi(a_{i})\) and \(\tilde{\chi}(b_{\tau(i)})=\chi(b_{i})\), with \(\tau\in S_{n}\) such that \[\tilde{\chi}(a_{1})\leq\tilde{\chi}(a_{2})\leq\cdots\leq\tilde{\chi}(a_{n}),\] and by Corollary 4.4 it will follow that \([\chi]\in\Sigma^{1}(P_{n}(M))\) as well. Notice that \(\tau(j)=1\) (resp. \(\tau(j)=n\)), and \(\tilde{\chi}(a_{1}^{-1})+\tilde{\chi}(a_{k}^{-1})>0\) (resp. \(\tilde{\chi}(a_{n})+\tilde{\chi}(a_{k})>0\)), for all \(1\leq k\leq n\). Now, this case is very similar to case (ii) (resp. (iii)), providing us with paths for \([\tilde{\chi}]\) by using \(t=a_{1}^{-1}\) (resp. \(t=a_{n}\)). In case (iv), the relations (1), (5) and (8) of Theorem 3.1, and also relations \((S_{1})\) and \((S_{3})\) of Proposition 3.2, give us all the paths satisfying Theorem 2.3. 
In case (v), the paths satisfying Theorem 2.3 are given by relations (1) and (8) of Theorem 3.1, by relation \((P_{3})\) of Proposition 3.5, and also by the following relation, which is a consequence of Proposition 3.2 \((S_{3})\). \[a_{n}^{-1}C_{i,n}a_{n}=\prod_{j=i}^{n-1}a_{n}^{-1}C_{j,n}C_{j+1,n}^{-1}a_{n}=\prod_{j=i}^{n-1}a_{j}C_{j+1,n}^{-1}C_{j,n}a_{j}^{-1}.\] In case (vi), we must look more carefully. If \(n=3\), then \(\chi(b_{2})=0\). So, if \(\chi(a_{2})=0\), we are in case (i). Therefore, if \(M=\mathbb{K}\) we have nothing else to analyse and, if \(M=\mathbb{T}\), all the possibilities were analysed in cases (i), (iv), (v) and in Lemma 5.8, which shows that \([\chi]\in\Sigma^{1}(P_{n}(M))\). If \(n\geq 4\), we first have to analyse whether the path \[\tilde{p}=(b_{n},b_{n-1}\cdots b_{3}\cdot b_{1}\cdot b_{n-1}\cdots b_{2})=(b_{n},\beta_{n-1,3}b_{1}\beta_{n-1,2})\] is \(\chi\)-positive or not. The path \(\tilde{p}\) is the concatenation of the paths \(p_{3}=(b_{n},\beta_{n-1,3})\), \(p_{1}=(\beta_{n,3},b_{1})\) and \(p_{2}=(\beta_{n,3}b_{1},\beta_{n-1,2})\). Now, it follows from (5.13) that \(p_{3}\) is \(\chi\)-positive. Also, \(\nu_{\chi}(p_{1})=\min\{\chi(\beta_{n,3}),\chi(\beta_{n,3})+\chi(b_{1})\}\), so \(\nu_{\chi}(p_{1})>0\) iff \(\chi(\beta_{n,3})+\chi(b_{1})>0\). In addition, from (5.12) and the fact that \(\chi(b_{n}b_{1})=0\) (we are in case (vi)) we have that \(\chi(\beta_{n-1,2})=0\), so \(\nu_{\chi}(p_{2})=\chi(\beta_{n,3})+\chi(b_{1})+\nu_{\chi}(1,\beta_{n-1,2})=\chi(\beta_{n,3})+\chi(b_{1})\). Since \(\nu_{\chi}(\tilde{p})=\min\{\nu_{\chi}(p_{3}),\nu_{\chi}(p_{1}),\nu_{\chi}(p_{2})\}\), it follows that \(\nu_{\chi}(\tilde{p})>0\) iff \(\chi(\beta_{n,3})+\chi(b_{1})>0\). In particular, if \(\nu_{\chi}(\tilde{p})\leq 0\) then \(-\chi(b_{2})=\chi(\beta_{n,3})+\chi(b_{1})\leq 0\). Since \(\chi(b_{2})\leq 0\), we conclude that \(\chi(b_{2})=0\) and, consequently, \(\chi(b_{k})=0\) for all \(2\leq k\leq n-1\), by (5.13). 
We will use this information below. If \(\tilde{p}\) is \(\chi\)-positive, then fix \(t=b_{n}\); here \(\tilde{p}=p_{z}\) gives a path for \(z=C_{i,n}^{\pm 1}\) with \(1\leq i<n\), by using Lemma 3.5 \((P_{2})\). For the other paths, we can use the fact that the paths \(p_{i}=(b_{n},\beta_{n-1,i})\) are also \(\chi\)-positive for all \(2\leq i<n\), and we choose the same paths \(p_{z}\) from \(b_{n}\) to \(zb_{n}\) as in case (iii) above, if \(z\in\left\{a_{i}^{\pm 1},b_{i}^{\pm},C_{j,k}^{\pm 1}\,|\,1\leq i\leq n,\;1\leq j<k<n\right\}\). Then \([\chi]\in\Sigma^{1}(P_{n}(M))\), as desired. If \(\tilde{p}\) is not \(\chi\)-positive, then \(\chi(b_{j})=0\), for all \(2\leq j\leq n-1\), as we showed above. There are two possibilities. The first one is \(\chi(a_{j})=0\) for some \(2\leq j\leq n-1\), in which case it follows by (i) that \([\chi]\in\Sigma^{1}(P_{n}(M))\). The second possibility is that \(\chi(a_{j})\neq 0\) for all \(2\leq j\leq n-1\) (this case can only occur if \(M=\mathbb{T}\)). Then, by Corollary 4.4, we can analyse the element \([\tilde{\chi}]\) instead of \([\chi]\), where \(\tilde{\chi}(a_{\tau(j)})=\chi(a_{j})\) and \(\tilde{\chi}(b_{\tau(j)})=\chi(b_{j})\), with \(\tau\in S_{n}\) such that \[\tilde{\chi}(a_{1})\leq\tilde{\chi}(a_{2})\leq\cdots\leq\tilde{\chi}(a_{n}),\] and the result for \([\tilde{\chi}]\) implies the same for \([\chi]\). We have \(|\tilde{\chi}(a_{1})|=|\tilde{\chi}(a_{n})|\); otherwise, we would be either in case (iv) or (v). So, \(\tilde{\chi}(a_{1}^{-1})=\tilde{\chi}(a_{n})>0\). Now, consider the path \[\tilde{q}=(a_{n},a_{n-1}\cdots a_{3}\cdot a_{1}\cdot a_{n-1}\cdots a_{2})=(a_{n},\alpha_{n-1,3}a_{1}\alpha_{n-1,2}).\] This path is analogous to the path \(\tilde{p}\) and we proceed in a similar manner. If \(\tilde{q}\) is \(\tilde{\chi}\)-positive, then fix \(t=a_{n}\); here \(\tilde{q}=p_{z}\) gives a path for \(z=C_{i,n}^{\pm 1}\) with \(1\leq i<n\), by using Lemma 3.5 \((P_{4})\). 
For the other paths, we can use the fact that the paths \(q_{i}=(a_{n},a_{n-1}\cdots a_{i})=(a_{n},\alpha_{n-1,i})\) are also \(\tilde{\chi}\)-positive for all \(2\leq i<n\), and we choose the paths \(p_{z}\) satisfying Theorem 2.3 given by relations (1) for \(z=a_{i}^{\pm}\), \(1\leq i<n\) (resp. relation (8) for \(z=C_{i,j}^{\pm}\), \(1\leq i<j<n\)) of Theorem 3.1, and by relation \((P_{3})\) of Proposition 3.5 for \(z=b_{i}^{\pm}\), \(1\leq i\leq n\); therefore, \([\tilde{\chi}]\in\Sigma^{1}(P_{n}(M))\). If \(\tilde{q}\) is not \(\tilde{\chi}\)-positive, then we can conclude that \(\tilde{\chi}(a_{k})=0\) for all \(2\leq k\leq n-1\) by the same arguments used before for the path \(\tilde{p}\). Thus, there are at most two elements, namely \(a_{1}\) and \(a_{n}\), on which \(\tilde{\chi}\) does not vanish. But, since \(\tilde{\chi}\) is only a permutation of coordinates of \(\chi\) and \(\chi(a_{j})\neq 0\) for all \(2\leq j\leq n-1\), the number \(l\) of nonvanishing coordinates \(\tilde{\chi}(a_{j})\) is at least \(n-2\). So, \(n-2\leq l\leq 2\), which implies \(n\leq 4\). Since we are in the case \(n\geq 4\), we conclude that \(n=4\), and the result follows by Lemma 5.9. Finally, it is easy to see that the circles defining \(\Sigma^{1}(P_{n}(\mathbb{T}))^{c}\) are pairwise disjoint. This completes our proof. ## 6. Applications The computation of the BNS invariant of a group is relevant in its own right, but it may also bring information about the finite generation of normal subgroups with abelian quotient [35, Corollary B1.8] and about twisted conjugacy [16]. We finish with some immediate consequences of our work. First, we see that the commutator subgroups of some pure braid groups are not finitely generated. It is worth noticing that the result below can also be obtained via other methods (see [17, 18, 26]). **Corollary 6.1**.: _Let \(M=\mathbb{T}\), or \(M=\mathbb{K}\), or \(M=\mathbb{S}^{2}\), and \(n\geq 1\). 
Then the commutator subgroup \(\gamma_{2}(P_{n}(M))\) is finitely generated if and only if \((M,n)\in\{(\mathbb{T},1),(\mathbb{K},1),(\mathbb{S}^{2},1),(\mathbb{S}^{2},2),(\mathbb{S}^{2},3)\}\)._ Proof.: As we observed at the beginning of Section 5, the groups \(P_{1}(\mathbb{T})\simeq\mathbb{Z}\times\mathbb{Z}\) and \(P_{1}(\mathbb{K})\simeq\mathbb{Z}\rtimes\mathbb{Z}\) are well known to have full BNS invariants and, therefore, finitely generated commutator subgroups [35, Theorem A4.1]. By [13], the groups \(P_{n}(\mathbb{S}^{2})\) for \(n=1,2,3\) are finite, so they must have finitely generated commutator subgroups. For the other groups, by Theorems 1.3 and 1.1, \(\Sigma^{1}(P_{n}(M))^{c}\) is not empty. The corollary then follows directly from Theorem A4.1 of [35]. Another motivation to study the BNS invariant for surface braid groups is that \(\Sigma\)-theory can be used in the investigation of the \(R_{\infty}\) property for groups [16]. Recall that an automorphism \(\varphi\in Aut(G)\) induces an equivalence relation called _twisted conjugacy_ on \(G\), given by \(x\sim_{\varphi}y\iff\exists\ z\in G:\ zx\varphi(z)^{-1}=y\). The number of equivalence classes (or Reidemeister classes) is denoted by \(R(\varphi)\), and we say \(G\) has property \(R_{\infty}\) if \(R(\varphi)\) is infinite for every \(\varphi\in Aut(G)\). Twisted conjugacy has many connections with other areas of mathematics, including topological fixed point theory [27]. We refer to the introduction of the paper [10] for a discussion of the historical context and development of \(R_{\infty}\). The first proof of \(R_{\infty}\) for the pure Artin braid groups \(P_{n}\), \(n\geq 3\), was published in 2021 [9] and, in 2022, an alternative proof was obtained in [7]. This raises the question of whether pure braid groups of other closed surfaces \(M\) have property \(R_{\infty}\). 
This is an open problem that is currently being investigated and to which we give below a small contribution for the case of the Klein bottle, by using the \(\Sigma^{1}\) invariant. Proof of Corollary 1.4.: This is a straightforward consequence of Theorem 1.3 and the proof of Corollary 3.4 of [16]. In fact, since the complement of \(\Sigma^{1}\) is invariant under automorphisms, there is a natural action by bijections \(Aut(P_{n}(\mathbb{K}))\curvearrowright\Sigma^{1}(P_{n}(\mathbb{K}))^{c}\), which induces a homomorphism \(Aut(P_{n}(\mathbb{K}))\to S_{k}\), where \(k=2\binom{n}{2}\). Let \(H\) be the kernel of this homomorphism. Since the characters of \(\Sigma^{1}(P_{n}(\mathbb{K}))^{c}\) are all discrete, \(R(\varphi)\) is infinite for every \(\varphi\in H\) (see [16, Corollary 3.4]). The fact that \(|Aut(P_{n}(\mathbb{K})):H|\leq|S_{k}|=\big{(}2\binom{n}{2}\big{)}!\) is a direct consequence of the First Isomorphism Theorem. Corollary 1.4 suggests the possibility that property \(R_{\infty}\) holds for the pure braid groups \(P_{n}(\mathbb{K})\). One reason is that a similar situation occurred in 2019 for the Artin pure braid groups \(P_{n}\): the second author, together with the authors of [29], knew of a subgroup \(H\) of index 2 of \(Aut(P_{n})\) such that \(R(\varphi)\) was infinite for every \(\varphi\in H\). One year later, as we said, the first proof of \(R_{\infty}\) for \(P_{n}\) was published in [9].
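The notion of twisted conjugacy recalled above is easy to experiment with by brute force in a small finite group. The following sketch (our illustration; the choice of group, automorphism and function name is ours) computes Reidemeister classes on a cyclic group \(\mathbb{Z}_{n}\), written additively, where \(zx\varphi(z)^{-1}=y\) becomes \(z+x-\varphi(z)=y\):

```python
# Brute-force Reidemeister classes on Z_n (illustrative only, additive notation):
# x ~_phi y iff y = z + x - phi(z) for some z, mirroring z x phi(z)^{-1} = y.

def reidemeister_classes(n, phi):
    """Twisted-conjugacy classes of the automorphism phi on the cyclic group Z_n."""
    remaining = set(range(n))
    classes = []
    while remaining:
        x = min(remaining)
        # the class of x is the full orbit {z + x - phi(z) : z in Z_n}
        orbit = {(z + x - phi(z)) % n for z in range(n)}
        classes.append(sorted(orbit))
        remaining -= orbit
    return classes
```

For \(\varphi(z)=-z\) on \(\mathbb{Z}_{6}\) this yields the two classes \(\{0,2,4\}\) and \(\{1,3,5\}\), so \(R(\varphi)=2\), while for \(\varphi=\mathrm{id}\) on an abelian group every element forms its own class.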
2310.08165
COVID-19 detection using ViT transformer-based approach from Computed Tomography Images
Here, we introduce a novel approach to enhance the accuracy and efficiency of COVID-19 diagnosis using CT images. Leveraging state-of-the-art Transformer models in computer vision, we employed the base ViT Transformer configured for 224x224-sized input images, modifying the output to suit the binary classification task. Notably, input images were resized from the standard CT scan size of 512x512 to match the model's expectations. Our method implements a systematic patient-level prediction strategy, classifying individual CT slices as COVID-19 or non-COVID. To determine the overall diagnosis for each patient, a majority voting approach as well as other thresholding approaches were employed. This method involves evaluating all CT slices for a given patient and assigning the patient the diagnosis that relates to the thresholding for the CT scan. This meticulous patient-level prediction process contributes to the robustness of our solution, as it builds from 2D slices up to the 3D patient level. Throughout the evaluation process, our approach resulted in a 0.7 macro F1 score on the COV19-CT-DB validation set. To ensure the reliability and effectiveness of our model, we rigorously validate it on the extensive COV19-CT dataset, which is meticulously annotated for the task. This dataset, with its comprehensive annotations, reinforces the overall robustness of our solution.
Kenan Morani
2023-10-12T09:37:56Z
http://arxiv.org/abs/2310.08165v2
COVID-19 detection using ViT transformer-based approach from Computed Tomography Images ###### Abstract Here, we introduce a novel approach to enhance the accuracy and efficiency of COVID-19 diagnosis using CT images. Leveraging state-of-the-art Transformer models in computer vision, we employed the base ViT Transformer configured for 224x224-sized input images, modifying the output to suit the binary classification task. Notably, input images were resized from the standard CT scan size of 512x512 to match the model's expectations. Our method implements a systematic patient-level prediction strategy, classifying individual CT slices as COVID-19 or non-COVID. To determine the overall diagnosis for each patient, a majority voting approach as well as other thresholding approaches were employed. This method involves evaluating all CT slices for a given patient and assigning the patient the diagnosis that relates to the thresholding for the CT scan. This meticulous patient-level prediction process contributes to the robustness of our solution, as it builds from 2D slices up to the 3D patient level. Throughout the evaluation process, our approach resulted in a 0.7 macro F1 score on the COV19-CT-DB validation set. To ensure the reliability and effectiveness of our model, we rigorously validate it on the extensive COV19-CT dataset, which is meticulously annotated for the task. This dataset, with its comprehensive annotations, reinforces the overall robustness of our solution. _Keywords-- COVID-19 Diagnosis, ViT Base Transformer, CT Images, Macro F1 Score_ ## 1 Introduction The severity of the COVID-19 pandemic has spurred a global effort to develop innovative and effective solutions for mitigating the spread of the virus and its early detection. In this context, medical imaging, particularly the use of computed tomography (CT) images, has emerged as a valuable tool for aiding in the diagnosis of COVID-19. 
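The macro F1 score reported above is the unweighted mean of the per-class F1 scores, so the COVID and non-COVID classes contribute equally regardless of their sizes. A minimal sketch using only the standard library (our illustration; the function names are ours, not from the paper):

```python
# Macro F1 for the binary COVID/non-COVID task: the unweighted mean of the
# per-class F1 scores (illustrative sketch; labels: 1 = COVID, 0 = non-COVID).

def f1_for_class(y_true, y_pred, cls):
    """F1 score of a single class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Average the F1 of both classes with equal weight."""
    return (f1_for_class(y_true, y_pred, 0) + f1_for_class(y_true, y_pred, 1)) / 2
```

Because the averaging is unweighted, a classifier that simply predicts the majority class scores poorly under macro F1, which makes the metric well suited to the imbalanced COVID/non-COVID split described below.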
The ability to visualize and analyze the impact of the virus on the human respiratory system has played a crucial role in understanding and responding to the pandemic. CT images provide rich visual data that can potentially expedite the identification of affected individuals and inform clinical decision-making, making them an essential asset in the battle against COVID-19 [1]. As the field of medical imaging evolves, so does the demand for state-of-the-art solutions that can effectively analyze and interpret the wealth of visual data available. Vision Transformers (ViTs) have garnered significant attention as a groundbreaking approach in the domain of computer vision and image analysis. Their remarkable success in various vision tasks, including the detection of illnesses and abnormalities, underscores their potential as a transformative technology in the context of COVID-19. By harnessing the power of Vision Transformers, we can enhance the accuracy and efficiency of COVID-19 detection, potentially reducing the spread of the virus and enabling early intervention to improve patient outcomes [2]. In this academic paper, we present a comprehensive study that leverages the advanced capabilities of a pretrained base ViT Transformer model. This state-of-the-art model is tailored to the specific demands of illness detection from CT images and operates with an input image size of 224x224 [3]. Our research endeavors to push the boundaries of COVID-19 diagnosis and management by harnessing the potential of Vision Transformers in a medical context, marking a significant step forward in the fight against the pandemic. ## 2 Methodology ### The Dataset The dataset utilized in this study is an extension of the COV19-CT-DB database, encompassing annotated CT scans from a total of 1,650 COVID and 6,100 Non-COVID cases. The meticulous annotation process was carried out by a panel of experts, each possessing over two decades of experience, with four experts in total. 
Notably, every CT scan comprises a variable number of slices, ranging from 50 to 700. For the purpose of this research, we retained the original training set and a subset of the initial validation set, resulting in a validation set of 368 COVID cases and 469 Non-COVID cases, as shown in Table 1. This partitioning strategy ensures that the dataset maintains its integrity while facilitating rigorous evaluation of the proposed methodology. Access to this dataset is made available through the "ECCV 2022: 2nd COV19D Competition". It is important to highlight that the outcomes presented in this paper were derived after the successful submission of an extended version of the database for the IEEE ICASSP 2023: AI-enabled Medical Image Analysis Workshop and Covid-19 Diagnosis Competition (AI-MIA-COV19D). Interested parties may request access to the COV19-CT database directly from the organizers of the workshop [4-5-6-7-8-9-10]. ### The Model Architecture At the core of the ViT model, the Transformer-based architecture excels in capturing both local and global dependencies within an image, specifically tailored for the 'vit_base_patch16_224' variant. The essential concept of a ViT remains unchanged: the input image is divided into fixed-size non-overlapping patches, treated as tokens, and processed through a series of Transformer layers. This patch-based strategy offers scalability and adaptability to handle images of varying sizes. The introduction of positional embeddings ensures the encoding of spatial information, allowing the model to understand relative patch positions. The ViT architecture, comprising multiple transformer blocks, each housing attention mechanisms and feedforward neural networks, collaboratively processes visual data to extract meaningful features. Moreover, ViTs often integrate a classification head for making predictions based on learned features, enhancing their versatility across various vision tasks. 
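The patch-based tokenization described above can be made concrete with a short sketch. The 224x224 input and 16x16 patches follow the 'vit_base_patch16_224' naming; the NumPy reshaping below illustrates the idea only and is our assumption, not the model's actual implementation:

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into non-overlapping (patch x patch) tokens."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_tokens, p*p*C)
    return (image.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch * patch * c))

# A 224x224 RGB input, the size expected by vit_base_patch16_224
img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patchify(img)
print(tokens.shape)  # (196, 768): 14*14 tokens, each a flattened 16*16*3 patch
```

Each of the 196 resulting tokens is then linearly projected and combined with a positional embedding before entering the Transformer blocks.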
The Transformer, specifically the 'vit_base_patch16_224' variant, stands as a cutting-edge architecture, particularly well-suited for medical image analysis, including the detection of illnesses like COVID-19 from CT images. Classification Head: The final layer, the classification head, produces predictions, configured for binary outcomes in the case of medical image analysis, distinguishing COVID and Non-COVID cases [11-12]. The transformer model's application in medical image analysis represents a state-of-the-art approach, demonstrating potential in enhancing the accuracy and efficiency of illness detection, such as COVID-19, from CT images. In our study, input images underwent critical preprocessing to align seamlessly with the 'vit_base_patch16_224' architecture. Original images from the COV19-CT database were uniformly resized to a resolution of 224x224 pixels. This resizing ensures compatibility with the Transformer's patch-based architecture, facilitating effective visual data processing. It is crucial to note that, apart from resizing, no further modification or extensive data augmentation was applied to the input images. This approach preserves their essential characteristics and information, emphasizing the utilization of the transformer's inherent power to capture valuable patterns and features within the CT images. The output of the Transformer was tailored to address the specific objectives of our study, which involved the binary classification of COVID-19 cases and Non-COVID cases. In alignment with this task, the model's final classification head was configured to make precise binary predictions. \begin{table} \begin{tabular}{|c|c|c|} \hline **Annotation** & **Training** & **Validation** \\ \hline COVID-19 cases & 687 & 215 \\ \hline Non-COVID cases & 867 & 269 \\ \hline \end{tabular} \end{table} Table 1: Distribution of cases in training and validation partitions 
This binary classification setting is instrumental in facilitating the diagnosis of COVID-19, a pivotal objective of our research, as highlighted in our GitHub repository dedicated to this endeavor. The model was trained on the PyTorch platform with the following settings: * Learning Rate (0.001): The learning rate is set to control convergence speed without causing instability. * Number of Epochs (20): Training for 20 epochs allows the model to converge to an optimal solution. * Batch Size (32): A batch size of 32 balances training speed and stability. * Number of Classes (2): The binary classification task involves two classes. * Loss Function (Cross-Entropy Loss): Cross-entropy loss is suitable for classification tasks. * Optimizer (Adam): The Adam optimizer is efficient for training deep models. Experiments were run on a GNU/Linux system with 64 GiB of memory and an Intel(R) Xeon(R) W-2223 CPU @ 3.60GHz. ### Patient Level Predictions In order to make patient-level predictions, a systematic approach was implemented, involving the following steps: * Data Iteration: The code iterates through the image files within the specified folder path, which contains a collection of CT images. Each folder within the main directory corresponds to an individual patient's CT scan. * Prediction at Slice Level: For each CT scan, the code loads and preprocesses the CT images. These images are then passed through the previously trained Transformer model to obtain predictions. The predictions provide a binary classification of each image slice as either COVID-19 or non-COVID, based on the computed class probabilities. * Majority Voting: The code accumulates the predictions for all slices within a patient's CT scan. For each patient, the code tallies the count of predicted COVID-19 and non-COVID slices. * Patient Label Determination: The patient's label is determined based on the majority of predictions within their CT scan. 
If the count of predicted COVID-19 slices is higher than that of non-COVID slices, the patient is categorized as COVID-19 positive. Conversely, if the count of predicted non-COVID slices is greater, the patient is labeled as non-COVID. This majority voting method at the patient level enables a robust diagnosis of the patient's COVID-19 status. By implementing this approach, the code achieves patient-level predictions that take into account the collective information from all slices within a patient's CT scan, providing an effective method for COVID-19 diagnosis based on CT image data. ### Performance Evaluation The proposed model was evaluated on the COV19-CT-DB database using accuracy, macro F1 score and confidence intervals. The accuracy is calculated as in Equation 1: \[Accuracy\ =\ \frac{True\ Positives\,+\,True\ Negatives}{True\ Positives\,+\,False\ Positives\,+\,True\ Negatives\,+\,False\ Negatives} \tag{1}\] Where positive and negative cases refer to COVID and Non-COVID cases, respectively. The macro F1 score was calculated after averaging the precision and recall metrics as in Equation 2: \[Macro\ F1\ =\ \frac{2\times average\,precision\times average\,recall}{average\, precision\,+\,average\,recall} \tag{2}\] Furthermore, to report the confidence intervals of the results obtained, the Binomial proportion confidence interval for the macro F1 score is used. The confidence intervals were used to check the variance of the reported results. The radius of the interval can be calculated as in Equation 3 [24]: \[Radius\ of\ Interval=z\ \times\sqrt{\frac{macro\ F1\times(1-macro\ F1)}{n}} \tag{3}\] where z is the number of standard deviations from the Gaussian distribution and n is the number of samples. ## 3 Results ### Results on the validation partition Table 2 shows the training performance. To calculate the confidence interval for the resulting accuracy, Equation 3 was used. In the equation, z is taken as z=1.96 for a significance level of 95%. 
By that we can calculate the confidence interval for the macro F1 score (approximately 0.75) as in Equation 4: \[interval=1.96\ \times\ \sqrt{\frac{0.75\ (1-0.75)}{106378}}\ \approx\ 0.0026 \tag{4}\] The number of samples (slices) in the validation set is 106,378. The result of the last equation shows sufficient confidence in the resulting accuracy. ### Results on the test partition Our method looks at the slices of each CT scan belonging to one patient and decides whether it is a COVID or Non-COVID case based on different voting methods, called thresholds. Fig. 1 shows the method's performance results for different thresholding methods. To explain one voting method, consider the 0.4 threshold: if the number of COVID slices in one CT scan is greater than 40% of the number of Non-COVID slices in the same CT scan, then the patient is diagnosed with the disease; otherwise, the patient is labeled Non-COVID. A threshold of 40% gave the highest validation accuracy and weighted macro F1 score among the tried thresholds. \begin{table} \begin{tabular}{|l|l|} \hline **Performance metric** & **Score** \\ \hline Average training accuracy & 75.45\% \\ \hline Average recall & 75.84\% \\ \hline Average precision & 75.33\% \\ \hline \end{tabular} \end{table} Table 2: Performance results of the training The following are the results when using the 40% thresholding voting method. The confusion matrix at patient level is shown in Table 3. Precision, recall and accuracy at patient level are shown in Table 4. Table 5 shows the class weights in the validation set. Our approach has achieved a macro F1 score that not only exceeds the established baseline but also surpasses the scores achieved by a multitude of other competitors on the validation set of the data. This remarkable performance underscores the effectiveness and superiority of our method in the context of COVID-19 diagnosis using CT images, making it a promising and competitive solution in the field. 
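The thresholded voting rule can be sketched as follows; the function name is ours, and the convention of comparing the COVID slice count against a fraction of the Non-COVID slice count follows the description above:

```python
def diagnose_patient(slice_preds, threshold=0.4):
    """Patient-level label from per-slice predictions (1 = COVID, 0 = Non-COVID).

    The patient is labeled COVID-19 when the number of COVID slices exceeds
    `threshold` times the number of Non-COVID slices; threshold=1.0 recovers
    plain majority voting.
    """
    covid = sum(slice_preds)
    non_covid = len(slice_preds) - covid
    return "COVID-19" if covid > threshold * non_covid else "Non-COVID"

# 30 of 100 slices flagged: 30 > 0.4 * 70 = 28, so the patient is labeled COVID-19
preds = [1] * 30 + [0] * 70
print(diagnose_patient(preds))        # COVID-19
print(diagnose_patient(preds, 1.0))   # Non-COVID (plain majority voting)
```

Lowering the threshold trades specificity for sensitivity: fewer flagged slices are needed to call a scan positive.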
Table 6 compares our results to some other results presented in the competition [13-14-15]. ## 4 Conclusion In this study, we have presented a robust and effective method for COVID-19 diagnosis utilizing CT images. By harnessing the capabilities of transformer models, a cutting-edge technology in computer vision, we have demonstrated the power of deep learning in medical image analysis. Our systematic approach for patient-level predictions, based on individual CT slice classifications and majority voting, extends our previous work. The results of our study demonstrate the performance attainable using a vision transformer. Future work to improve the performance of the methodology may include processing the images before inputting them to the transformer. Image processing techniques, such as lung area segmentation, may help increase the focus on the regions of interest in the CT slices. In addition, different types of transformers can be tested. Replacing the current ViT transformer with one such as the Swin transformer, or trying a larger or smaller version of the same transformer, may improve the results. ## Acknowledgement We thank the medical staff who annotated the COV19-CT-DB database and the other members who shared the dataset with the IDU-CVLab team. ## Declarations **Funding statement.** No funding was provided for this study. **Conflict of interest.** The author declares no conflict of interest. **Additional information.** The code related to this study can be found on Github at [https://github.com/IDU-CVLab/COV19D_4th](https://github.com/IDU-CVLab/COV19D_4th)
2303.10612
Bangla Grammatical Error Detection Using T5 Transformer Model
This paper presents a method for detecting grammatical errors in Bangla using a Text-to-Text Transfer Transformer (T5) Language Model, using the small variant of BanglaT5, fine-tuned on a corpus of 9385 sentences where errors were bracketed by the dedicated demarcation symbol. The T5 model was primarily designed for translation and is not specifically designed for this task, so extensive post-processing was necessary to adapt it to the task of error detection. Our experiments show that the T5 model can achieve low Levenshtein Distance in detecting grammatical errors in Bangla, but post-processing is essential to achieve optimal performance. The final average Levenshtein Distance after post-processing the output of the fine-tuned model was 1.0394 on a test set of 5000 sentences. This paper also presents a detailed analysis of the errors detected by the model and discusses the challenges of adapting a translation model for grammar. Our approach can be extended to other languages, demonstrating the potential of T5 models for detecting grammatical errors in a wide range of languages.
H. A. Z. Sameen Shahgir, Khondker Salman Sayeed
2023-03-19T09:24:48Z
http://arxiv.org/abs/2303.10612v1
# Bangla Grammatical Error Detection Using T5 Transformer Model ###### Abstract This paper presents a method for detecting grammatical errors in Bangla using a Text-to-Text Transfer Transformer (T5) Language Model [2], using the small variant of BanglaT5 [1], fine-tuned on a corpus of 9385 sentences where errors were bracketed by the dedicated demarcation symbol $ [2]. The T5 model was primarily designed for translation and is not specifically designed for this task, so extensive post-processing was necessary to adapt it to the task of error detection. Our experiments show that the T5 model can achieve low Levenshtein Distance in detecting grammatical errors in Bangla, but post-processing is essential to achieve optimal performance. The final average Levenshtein Distance after post-processing the output of the fine-tuned model was 1.0394 on a test set of 5000 sentences. This paper also presents a detailed analysis of the errors detected by the model and discusses the challenges of adapting a translation model for grammar error detection. Our approach can be extended to other languages, demonstrating the potential of T5 models for detecting grammatical errors in a wide range of languages. Bangla, Grammatical Error Detection, Machine Learning, T5 ## I Introduction In an increasingly digital world, the ability to communicate effectively in written form has become a crucial skill. With the rise of digital communication platforms, such as email, instant messaging, and social media, written communication has become more pervasive than ever before. However, with this increased reliance on written communication comes a new set of challenges, including the need for accurate and effective grammar usage. Grammar errors can impede effective communication and have serious consequences, especially in professional and academic settings where clarity and precision are paramount. Grammar errors can also impact the credibility of the writer and create confusion for the reader. 
In recent years, the development of deep learning models for grammar error detection (GED) and grammar error correction (GEC) has become an increasingly important area of research. One product of this extensive research is Grammarly. It is one of the most ubiquitous grammar correction tools available today, with millions of users around the world. This tool uses the GECToR model [3] for error detection and correction. This model implements a tagging-based approach to error detection using an encoder, and then corrects the detected errors with a generative seq2seq model. This approach achieves state-of-the-art results on canonical GEC evaluation datasets based on F-score results. This makes it a valuable resource for individuals and organizations that rely on written communication. However, it is important to note that Grammarly and other similar tools are currently only available for a limited number of languages, primarily English. Some research work has been done in GED and GEC in Bangla [4][5] but to the best of our knowledge, no work leveraging transformer models has yet been done in Bangla. As mentioned before, GEC in English has already reached a commercially viable stage and notable progress has been achieved using both seq2seq [6][7] and BERT-based models [3]. Both deliver comparable performance [3] but the seq2seq models are easier to train, albeit with much slower inference. We ultimately decided on using the T5 model [8], pre-trained on a large Bangla corpus [1]. We tested both the base (220M parameters) and the small (60M parameters) variants of BanglaT5 and found the smaller model to perform slightly better within our computing budget. T5, or Text-to-Text Transfer Transformer [8], is a Transformer-based architecture that uses a text-to-text approach. It adds a causal decoder to the bidirectional architecture of BERT [9]. 
The difference from the basic encoder-decoder transformer architecture [10] is that T5 uses relative positional embeddings and layer norm at the start of each block and the end of the last block. Other than that, T5 and the basic encoder-decoder transformer share the same architecture. T5 was trained with the goal of unifying all NLP tasks under a single text-to-text model. In line with that goal, BanglaT5 [1] was trained on a massive Bengali pretraining corpus, Bangla2B+ [11], sized 27.5GB. This allows BanglaT5 to achieve state-of-the-art results on most Bengali text generation tasks. Therefore, leveraging transfer learning from an enormous Bengali text corpus, BanglaT5 is an ideal candidate to consider for Bengali GED and GEC tasks. ## II Methodology ### _Model Selection_ Currently, BERT (Bidirectional Encoder Representations from Transformers) and its variants are the best-performing models on tasks such as token classification [12] and sentence classification [13]. BanglaBERT reproduced this finding when trained specifically on a Bangla corpus [11]. Although GED can be formulated as either a token classification problem or a sentence classification problem, both pose several challenges. When presented as a token classification task, punctuation becomes a particular issue since most punctuation marks represent a pause and are hard to distinguish. Another challenge is tokens which are missing altogether. It can be hypothesized that BERT does have the ability to detect the logical inconsistency in a sentence that arises from missing tokens due to its deep encoder architecture, but marking the position of missing tokens is a challenge. On the other hand, when posed as a sentence classification problem, we find that BERT can classify sentences as either error-free or erroneous, but cannot mark the erroneous section itself. 
Recently, sequence-to-sequence (seq2seq) models such as the T5 [2] have achieved state-of-the-art performance on standard Grammatical Error Correction (GEC) benchmarks [14]. Such models [6][7] have been trained specifically on synthetic GEC datasets (as opposed to general translation datasets). But since the model must generate the entire output sequence, including the parts which were correct to begin with, inference is slow. The BERT-based GECToR [3] presents another approach to GEC: a token classification approach where errors are mapped to 5000 error-correcting transformations (one for each token in the vocabulary, plus some token-independent transformations) which correct the errors algorithmically. The resulting model is up to 10 times faster than comparable seq2seq models but, as before, this requires a synthetic pretraining corpus. For Bangla Grammar Error Detection we decided on the small variant of BanglaT5 [1] with 60M parameters. The smaller model allowed for larger batch sizes, faster experimentation and hyper-parameter tuning when compared to the standard BanglaT5 model with 220M parameters, while delivering similar performance on our training set (9385 pairs). Experimentation on the larger T5 models using the full available dataset (19385 pairs) and evaluating a BERT-based approach similar to GECToR is left for future work. ### _Dataset Analysis_ The training set consisted of 19385 sentence pairs in total, containing both error-free sentences and sentences with errors. The major error types are: 1. Single word error 2. Multi-word error 3. Wrong Punctuation 4. Punctuation Omission 5. Merge Error 6. Form/Inflection Error 7. Unwanted space error 8. Hybrid The errors are each bracketed by a designated symbol $ and are not differentiated from each other. We used DataSetFold1 for the fine-tuning of the T5 model and both DataSetFold1 and DataSetFold2 for the crucial post-processing steps. 
### _External Dataset_ We collected a word list of 311 archaic Bangla verb words which were consistently marked as errors in the training dataset. We collected said word list with the aid of Github Copilot. This data was used in our regular-expression-based approach to GED. ### _Pre-processing_ Not wanting to shift the distribution of the train set from the test set, we kept pre-processing to a minimum. The sentences were normalized and tokenized using the normalizer and tokenizer used in pretraining [1]. One notable point is that we omitted newline characters when present inside sentences, since they interfere with the way the T5 model reads in sentences. Fig. 1: Encoder-based BERT architecture (left) vs encoder-decoder based text-to-text transformer architecture (right) (source: [https://jalammar.github.io/illustrated-transformer/](https://jalammar.github.io/illustrated-transformer/)) ### _Training_ Through experimentation on an 80-20 split of DataSetFold1 between training and validation sets and using an effective batch size of 128, we determined 120 epochs to be a good stopping point before the model starts to over-fit. We then used the entirety of DataSetFold1 for 120 epochs of training. Since the task was to predict 5000 test sentences while training only on 9385 training pairs, we determined that keeping any significant segment for validation and early stopping would be detrimental to overall performance. A naive attempt to train the model on the combined DataSetFold1 and DataSetFold2 dataset did not improve our score on the test set. This is likely because introducing new data requires re-tuning the model hyper-parameters. For now, we leave this as future work. ### _Post-processing_ The T5 model was built on the paradigm of unifying all NLP tasks under text-to-text classification and, on that front, T5 achieves state-of-the-art results on many GLUE tasks. However, this paradigm does have its shortcomings. 
Of particular importance in the task of GED, when judged by Levenshtein distance, is the tendency of T5 models to spell words differently or sometimes replace entire words with a close synonym, because reproducing the input sequence exactly is as important as marking the errors. This is a particular problem in Bangla GED since the language is still evolving and multiple spellings of the same word are in use concurrently. Furthermore, there exist several Unicode characters representing the same Bangla alphabet or symbol, further complicating the reconciliation of the T5 output sequence with its input sequence. To transform the raw T5 output to a form as close as possible to the input sequence, we present two algorithms. As an optional post-processing of all outputs, we present a third algorithm that does simple detection of error words that the model might have missed. The first is for respelling and correcting the T5 output by comparing it character by character with the input sequence. Beginning with an empty string as the corrected output, if the next character is a $ symbol, it is appended to the corrected output string. If the next characters of the input and output sentences match, the character is appended. If they do not match, the next character of the output string is looked up in a table and, if present, the value from the lookup table represents the correction and is appended. This lookup table has been constructed manually by observing common T5 errors. Constructing the table automatically is left for future work. If character-level corrections fail, the algorithm attempts to make word-level corrections by replacing entire words in the T5 output, and then character-level correction is attempted again. The second algorithm is a regular-expression-based approach to GED in case the first algorithm fails to correct the T5 output. Certain common errors are learned from the training dataset and identified in the test set using sub-string replacements. 
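The character-level reconciliation (the first algorithm) can be sketched as follows. The control flow mirrors the description above; the function name, the toy Latin-script example, and the contents of the lookup table are illustrative assumptions (the actual table maps Bangla Unicode variants and was built manually from common T5 errors):

```python
def reconcile(src, out, char_fixes):
    """Rebuild the T5 output character by character against the input sentence,
    keeping the $ error markers and undoing spelling drift via a lookup table
    of common substitutions. Returns None when reconciliation fails, signaling
    a fallback to the regex-based algorithm."""
    fixed, i = [], 0  # i indexes into the source sentence
    for ch in out:
        if ch == "$":            # error marker: keep, do not consume source
            fixed.append(ch)
            continue
        if i < len(src) and ch == src[i]:
            fixed.append(ch)     # exact match with the input
        elif ch in char_fixes and i < len(src) and char_fixes[ch] == src[i]:
            fixed.append(char_fixes[ch])  # known T5 spelling variant
        else:
            return None          # character-level correction failed
        i += 1
    if i != len(src):            # output too short to cover the input
        return None
    return "".join(fixed)

# Toy example: "3" is a hypothetical spelling drift corrected back to "e"
print(reconcile("he go home", "he $go$ hom3", {"3": "e"}))  # he $go$ home
```

Word-level replacement before a second character-level pass, as described above, would wrap this routine in an outer retry loop.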
These two algorithms work in tandem to correct the T5 output. However, should a test sentence already be present in the training dataset, the error-marked sentence is directly pulled from the training dataset using another lookup table. In a real-world scenario, having a lookup table of the most commonly mistaken sentences or phrases can significantly speed up GED, since the need for a large deep learning model is bypassed entirely. The pseudo-code for the algorithms is in the Appendix. Fig. 2: Training Loss vs Epoch ## III Results and Discussion Training the BanglaT5-small model with 60M parameters on 9385 sentence pairs for 120 epochs with a batch size of 128 and a learning rate of \(5\times 10^{-4}\), with the AdamW optimizer and a linear learning rate scheduler, yielded a final Levenshtein score of 1.0394 on 5000 test sentences. The effect of the multiple post-processing steps is presented below, serving as a short ablation study of our methodology. Average Levenshtein distance data on the test dataset was collected from submissions to the EEE DAY 2023 Datathon. The private and public scores are based on a 50-50 split of the 5000 test sentences. We calculated the total aggregated distance by averaging the two. After character-level corrections, the T5 output still had a severe mismatch with the original input in 107 sentences. These arise mainly from two causes: entire words being replaced, or sentences that exceed the maximum input token limit (256) of the model. Using only the regex-based algorithm yields a modest score of 1.1906, but using it to handle the 107 sentences that could not be corrected resulted in a significant improvement (1.0866). Finally, the lookup table also modestly improves the Levenshtein score (1.0394) by looking up 253 sentences with exact matches in the training dataset. 
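The reported scores are average Levenshtein distances; the metric itself can be computed with the classic dynamic program over prefixes (a standard implementation, not the paper's evaluation code):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b via the classic DP over prefixes."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution / match
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

The final score is this distance averaged over all predicted/reference sentence pairs.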
Although it is difficult to compare our results to previous work that typically uses the F1 score, our model achieved good performance on the dataset we used. However, we acknowledge that we only used 50% of the dataset, and using the entire dataset may have improved our results. Additionally, using T5-base instead of T5-small may have improved our performance with appropriate hyperparameter tuning. We also noted that preprocessing could have rooted out spelling errors, leaving the more difficult semantic errors for the T5 model to handle. Moreover, we identified that the post-processing step could be automated to improve performance further. Looking forward, we suggest exploring a BERT-based approach like GECToR [3] for Grammatical Error Detection. Overall, our work demonstrates the potential of T5 models for Grammatical Error Detection and provides a foundation for future work in this field.
2301.11375
Neural networks learn to magnify areas near decision boundaries
In machine learning, there is a long history of trying to build neural networks that can learn from fewer example data by baking in strong geometric priors. However, it is not always clear a priori what geometric constraints are appropriate for a given task. Here, we consider the possibility that one can uncover useful geometric inductive biases by studying how training molds the Riemannian geometry induced by unconstrained neural network feature maps. We first show that at infinite width, neural networks with random parameters induce highly symmetric metrics on input space. This symmetry is broken by feature learning: networks trained to perform classification tasks learn to magnify local areas along decision boundaries. This holds in deep networks trained on high-dimensional image classification tasks, and even in self-supervised representation learning. These results begin to elucidate how training shapes the geometry induced by unconstrained neural network feature maps, laying the groundwork for an understanding of this richly nonlinear form of feature learning.
Jacob A. Zavatone-Veth, Sheng Yang, Julian A. Rubinfien, Cengiz Pehlevan
2023-01-26T19:43:16Z
http://arxiv.org/abs/2301.11375v3
# Neural networks learn to magnify areas near decision boundaries ###### Abstract We study how training molds the Riemannian geometry induced by neural network feature maps. At infinite width, neural networks with random parameters induce highly symmetric metrics on input space. Feature learning in networks trained to perform classification tasks magnifies local areas along decision boundaries. These changes are consistent with previously proposed geometric approaches for hand-tuning of kernel methods to improve generalization. ## 1 Introduction In a series of influential papers, Amari and Wu proposed that one could improve the generalization performance of support vector machine (SVM) classifiers through data-dependent transformations of the kernel to expand the Riemannian volume element near decision boundaries (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002). This proposal was based on the idea that this local magnification of areas improves class discriminability (Amari & Wu, 1999; Burges, 1999; Cho & Saul, 2011). Over the past decade, SVMs have largely been eclipsed by neural networks, whose ability to flexibly learn features from data is believed to underlie their superior generalization performance (LeCun et al., 2015; Zhang et al., 2021). Previous works have explored some aspects of the geometry induced by neural network feature maps with random parameters (Amari et al., 2019; Benfenati & Marta, 2023; Cho & Saul, 2009; 2011; Hauser & Ray, 2017; Poole et al., 2016; Zavatone-Veth & Pehlevan, 2022), but have not characterized data-dependent changes in representational geometry over training. In this work, we explore the possibility that neural networks learn to enhance local input discriminability automatically over the course of training. Our primary contributions are: * In §4, we study general properties of the metric induced by shallow fully-connected neural networks. 
Next, in §4.2, we compute the volume element and curvature of the metric induced by infinitely wide shallow networks with Gaussian weights and smooth activation functions, showing that it is spherically symmetric. * In §5, we empirically show that training shallow networks on simple classification tasks expands the volume element along decision boundaries, consistent with the hand-engineered modifications proposed by Amari and Wu. In §6, we provide evidence that deep residual networks trained on more complex tasks behave similarly. In total, our results provide a preliminary picture of how feature learning shapes local input discriminability. ## 2 Preliminaries We begin by introducing the basic idea of the Riemannian geometry of feature space representations. Our setup and notation largely follow Burges (1999), which in turn follows the conventions of Dodson & Poston (1991). ### Feature embeddings as Riemannian manifolds Consider \(d\)-dimensional data living in some submanifold \(\mathcal{D}\subseteq\mathbb{R}^{d}\). Let the _feature map_\(\mathbf{\Phi}:\mathbb{R}^{d}\rightarrow\mathcal{H}\) be a map from \(\mathbb{R}^{d}\) to some separable Hilbert space \(\mathcal{H}\) of possibly infinite dimension \(n\), with \(\mathbf{\Phi}(\mathcal{D})=\mathcal{M}\subseteq\mathcal{H}\). We index input space dimensions by Greek letters \(\mu,\nu,\rho,\ldots\in[d]\) and feature space dimensions by Latin letters \(i,j,k,\ldots\in[n]\). We use the Einstein summation convention; summation over all repeated indices is implied. Assume that \(\mathbf{\Phi}\) is \(\mathcal{C}^{k}\) for \(k\geq 3\), and is everywhere of rank \(r=\min\{d,n\}\). If \(r=d\), then \(\mathcal{M}\) is a \(d\)-dimensional \(\mathcal{C}^{k}\) manifold immersed in \(\mathcal{H}\). If \(k=\infty\), then \(\mathcal{M}\) is a smooth manifold. In contrast, if \(r<d\), then \(\mathcal{M}\) is a \(d\)-dimensional \(\mathcal{C}^{k}\) manifold submersed in \(\mathcal{H}\). 
The flat metric on \(\mathcal{H}\) can then be pulled back to \(\mathcal{M}\), with components \[g_{\mu\nu}=\partial_{\mu}\Phi_{i}\partial_{\nu}\Phi_{i}, \tag{1}\] where we write \(\partial_{\mu}\equiv\partial/\partial x^{\mu}\). If \(r=d\) and the pullback metric \(g_{\mu\nu}\) is full rank, then \((\mathcal{M},g)\) is a \(d\)-dimensional Riemannian manifold (Burges, 1999; Dodson & Poston, 1991). However, if the pullback \(g_{\mu\nu}\) is a degenerate metric, as must be the case if \(r<d\), then \((\mathcal{M},g)\) is a singular semi-Riemannian manifold (Benfenati & Marta, 2023b; Kupeli, 2013). In this case, if we let \(\sim\) be the equivalence relation defined by identifying points with vanishing pseudodistance, the quotient \((\mathcal{M}/\sim,g)\) is a Riemannian manifold (Benfenati & Marta, 2023b). Unless noted otherwise, our results will focus on the non-singular case. We denote the matrix inverse of the metric tensor by \(g^{\mu\nu}\), and we raise and lower input space indices using the metric. If we define the feature kernel \(k(\mathbf{x},\mathbf{y})=\Phi_{i}(\mathbf{x})\Phi_{i}(\mathbf{y})\) for \(\mathbf{x},\mathbf{y}\in\mathcal{D}\), then the resulting metric can be written in terms of the kernel as \(g_{\mu\nu}=(1/2)\partial_{x_{\mu}}\partial_{x_{\nu}}k(\mathbf{x},\mathbf{x})- [\partial_{y_{\mu}}\partial_{y_{\nu}}k(\mathbf{x},\mathbf{y})]_{\mathbf{y}= \mathbf{x}}\). This formula applies even if \(n=\infty\), giving the metric induced by the feature embedding associated to a suitable Mercer kernel (Burges, 1999). ### Volume element and curvature With this setup, \((\mathcal{M},g)\) is a Riemannian manifold, hence we have at our disposal a powerful toolkit with which we may study its geometry. We will focus on two geometric properties of \((\mathcal{M},g)\). 
First, the volume element is given by \[dV=\sqrt{\det g}\,d^{d}x, \tag{2}\] where the factor \(\sqrt{\det g}\) measures how local areas in input space are magnified by the feature map (Amari & Wu, 1999; Burges, 1999; Dodson & Poston, 1991). Second, we consider the intrinsic curvature of the manifold, which is characterized by the Riemann tensor \(R^{\mu}_{\nu\alpha\beta}\) (Dodson & Poston, 1991). If \(R^{\mu}_{\nu\alpha\beta}=0\), then the manifold is intrinsically flat. As a tractable measure, we focus on the Ricci curvature scalar \(R=g^{\beta\nu}R^{\alpha}_{\nu\alpha\beta}\), which measures the deviation of the volume of an infinitesimal geodesic ball in the manifold from that in flat space (Dodson & Poston, 1991). In the singular case, we can compute the volume element on \(\mathcal{M}/\sim\) at a given point by taking the square root of the product of the non-zero eigenvalues of the degenerate metric \(g_{\mu\nu}\) at that point (Benfenati & Marta, 2023b). However, the curvature in this case is generally not straightforward to compute; we will therefore leave this issue for future work. ### Shallow neural network feature maps In this work, we consider a particular class of feature maps: those given by the hidden layer representations of neural networks (Benfenati & Marta, 2023b; Cho & Saul, 2009; 2011; Hauser & Ray, 2017; LeCun et al., 2015; Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997). We will mostly focus on shallow fully-connected neural networks, i.e., those with only a single hidden layer followed by readout. Concretely, such a feature map is of the form \[\Phi_{j}(\mathbf{x})=n^{-1/2}\phi(\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}) \tag{3}\] for weights \(\mathbf{w}_{j}\), biases \(b_{j}\), and an activation function \(\phi\). For convenience, we abbreviate the Euclidean inner product on feature or input space by \(\cdot\), e.g., \(\mathbf{w}\cdot\mathbf{x}=w_{\mu}x_{\mu}\). 
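As a concrete sanity check on definitions (1) and (3), the pullback metric of a small network can be computed two ways: from a finite-difference Jacobian of \(\mathbf{\Phi}\), and from the closed form \(g_{\mu\nu}=\frac{1}{n}\sum_{j}\phi^{\prime}(z_{j})^{2}w_{j\mu}w_{j\nu}\) obtained by differentiating (3) directly. A minimal numpy sketch (the width, dimension, activation \(\phi=\tanh\), and random parameters below are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3                      # width and input dimension (illustrative)
W = rng.standard_normal((n, d))  # weights w_j
b = rng.standard_normal(n)       # biases b_j
phi = np.tanh
dphi = lambda z: 1.0 / np.cosh(z) ** 2

def features(x):
    # Phi_j(x) = n^{-1/2} phi(w_j . x + b_j), eq. (3)
    return phi(W @ x + b) / np.sqrt(n)

def metric(x):
    # g_{mu nu} = (1/n) sum_j phi'(z_j)^2 w_{j mu} w_{j nu}
    z = W @ x + b
    return (W * dphi(z)[:, None] ** 2).T @ W / n

x = rng.standard_normal(d)

# Pullback metric from the Jacobian, g = J^T J (eq. 1), with J
# estimated by central finite differences.
h = 1e-5
J = np.stack([(features(x + h * e) - features(x - h * e)) / (2 * h)
              for e in np.eye(d)], axis=1)
g_fd = J.T @ J
g_exact = metric(x)
err = np.abs(g_fd - g_exact).max()
```

The two routes agree to finite-difference accuracy, and the resulting metric is symmetric positive semi-definite by construction.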
In this case, the feature space dimension \(n\) is equal to the number of hidden units, and is referred to as the _width_ of the hidden layer. In (3), we scale the components of the feature map by \(n^{-1/2}\) such that the associated kernel \(k(\mathbf{x},\mathbf{y})=\Phi_{i}(\mathbf{x})\Phi_{i}(\mathbf{y})\) and metric (4) have the form of averages over hidden units, and therefore should be well-behaved at large widths (Neal, 1996; Williams, 1997). We will assume that \(\phi\) is \(\mathcal{C}^{k}\) for \(k\geq 3\), so that this feature map satisfies the smoothness conditions required in the setup above. We will also assume that the activation function and weight vectors are such that the Jacobian \(\partial_{\mu}\Phi_{j}\) is full-rank, i.e., is of rank \(\min\{d,n\}\). Then, the shallow network feature map satisfies the required conditions for the feature embedding to be a (possibly singular) Riemannian manifold. These conditions extend directly to deep fully-connected networks formed by composing feature maps of the form (3) (Benfenati & Marta, 2023b; Hauser & Ray, 2017). ## 3 Related works Having established the geometric preliminaries of §2, we can give a more complete overview of related works. As introduced above, our hypothesis for how the Riemannian geometry of neural network representations changes during training is directly inspired by the work of Amari & Wu (1999). In that and subsequent works (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002), they proposed to modify the kernel of an SVM as \(\tilde{k}(\mathbf{x},\mathbf{y})=h(\mathbf{x})h(\mathbf{y})k(\mathbf{x},\mathbf{y})\) for some positive scalar function \(h(\mathbf{x})\) chosen such that the magnification factor \(\sqrt{\det g}\) is large near the SVM's decision boundary. 
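A toy numpy illustration of such a conformal modification \(\tilde{k}(\mathbf{x},\mathbf{y})=h(\mathbf{x})h(\mathbf{y})k(\mathbf{x},\mathbf{y})\), using an RBF base kernel and a positive \(h\) built from Gaussian bumps (the bump centers below merely stand in for support vectors; the data, bandwidth, and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 2))          # toy inputs
centers = X[:3]                           # stand-ins for support vectors
tau = 0.5                                 # bandwidth (illustrative)

def k_base(A, B):
    # RBF base kernel k(x, y) = exp(-||x - y||^2 / 2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / 2)

def h(A):
    # positive conformal factor: sum of Gaussian bumps at the centers
    sq = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * tau ** 2)).sum(-1)

K = k_base(X, X)
K_mod = np.outer(h(X), h(X)) * K          # tilde-k(x, y) = h(x) h(y) k(x, y)

# A conformal rescaling is tilde-K = D K D with D = diag(h), so it
# preserves symmetry and positive semi-definiteness of the Gram matrix.
min_eig = np.linalg.eigvalsh(K_mod).min()
```

Since \(\tilde{K}=DKD\) for a diagonal \(D\), the modified Gram matrix remains a valid kernel matrix; only the induced geometry changes.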
Concretely, they proposed to fit an SVM with some base kernel \(k\), choose \(h(\mathbf{x})=\sum_{\mathbf{v}\in\text{SV}(k)}\exp[-\|\mathbf{x}-\mathbf{v}\|^{2}/2\tau^{2}]\) for \(\tau\) a bandwidth parameter and \(\text{SV}(k)\) the set of support vectors for \(k\), and then fit an SVM with the modified kernel \(\tilde{k}\). Here, \(\|\cdot\|\) denotes the Euclidean norm. This process could then be iterated, yielding a sequence of modified kernels. They found that this method can improve generalization performance relative to the original kernel (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002). This approach is a hand-designed form of iterative feature learning. The geometry induced by common kernels was investigated in detail by Burges (1999), who established a broad range of technical results. He showed that translation-invariant kernels of the form \(k(\mathbf{x},\mathbf{y})=k(\|\mathbf{x}-\mathbf{y}\|^{2})\) yield flat, constant metrics,1 and gave a detailed characterization of polynomial kernels \(k(\mathbf{x},\mathbf{y})=(\mathbf{x}\cdot\mathbf{y})^{q}\). Cho and Saul (2011) subsequently analyzed the geometry induced by arc-cosine kernels, i.e., the feature kernels of infinitely-wide shallow neural networks with threshold-power law activation functions \(\phi(x)=\max\{0,x\}^{q}\) and random parameters (Cho and Saul, 2009). Our results on infinitely-wide networks for general smooth activation functions build on these works. Footnote 1: It is interesting to note that this holds for the simplest form of a method for learning data-adaptive kernels recently proposed by Radhakrishnan et al. (2022); see Appendix F. The representational geometry of deep networks with random Gaussian parameters in the limit of large width and depth was studied by Poole et al. (2016), and in later work by Amari et al. (2019). 
These works tie into a broader line of research on infinite-width limits of deep neural networks in which inference and prediction are captured by a kernel machine (Bordelon and Pehlevan, 2022; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997; Yang, 2019; Yang and Hu, 2021; Zavatone-Veth and Pehlevan, 2022; Zavatone-Veth et al., 2021). Our results on the representational geometry of wide shallow networks with smooth activation functions build on these ideas, particularly those relating activation function derivatives to input discriminability (Daniely et al., 2016; Poole et al., 2016; Zavatone-Veth and Pehlevan, 2021, 2022). Particularly closely related to our work are several recent papers that aim to study the curvature of neural network representations. Benfenati and Marta (2023b); Hauser and Ray (2017) discuss formal principles of Riemannian geometry in deep neural networks, but do not characterize how training shapes the geometry. Kaul and Lall (2020) aimed to study the curvature of metrics induced by the outputs of pretrained classifiers. However, their work is limited by the fact that they estimate input-space derivatives using inexact finite differences under the strong assumption that the input data is confined to a _known_ smooth submanifold of \(\mathbb{R}^{d}\). Recent works by Kuhnel et al. (2018); Shao et al. (2018); Wang and Ponce (2021) have studied the Riemannian geometry of the latent representations of deep generative models. Finally, in very recent work Benfenati and Marta (2023a) have used the geometry induced by the full input-output mapping to reconstruct iso-response curves of deep networks. In contrast, our work focuses on hidden representations, and seeks to characterize the representational manifolds themselves. ## 4 Representational geometry of shallow neural network feature maps We begin by studying general properties of the Riemannian metrics induced by shallow neural network feature maps. 
### Finite-width networks We first consider finite-width networks with fixed weights, assuming that \(n\geq d\). Writing \(z_{j}=\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}\) for the preactivation of the \(j\)-th hidden unit, the general formula (1) for the metric yields \[g_{\mu\nu}=\frac{1}{n}\phi^{\prime}(z_{j})^{2}w_{j\mu}w_{j\nu}. \tag{4}\] This metric has the useful property that \(\partial_{\alpha}g_{\mu\nu}\) is symmetric under permutation of its indices, hence the formula for the Riemann tensor simplifies substantially (Appendix A). Then, using the Leibniz formula for determinants, we show in Appendix B that the determinant of the metric can be expanded as a sum over \(d\)-tuples of hidden units: \[\det g=\frac{1}{n^{d}d!}M_{j_{1}\cdots j_{d}}^{2}\phi^{\prime}(z_{j_{1}})^{2} \cdots\phi^{\prime}(z_{j_{d}})^{2}, \tag{5}\] where \[M_{j_{1}\cdots j_{d}}=\det\begin{pmatrix}w_{j_{1}1}&\cdots&w_{j_{1}d}\\ \vdots&\ddots&\vdots\\ w_{j_{d}1}&\cdots&w_{j_{d}d}\end{pmatrix} \tag{6}\] is the minor of the weight matrix obtained by selecting units \(j_{1},\ldots,j_{d}\). For the error function \(\phi(x)=\operatorname{erf}(x/\sqrt{2})\), \(\det g\) expands as a superposition of Gaussian bump functions, one for each tuple of hidden units (B.55). This is reminiscent of Amari and Wu's approach, which yields a Gaussian contribution to \(\sqrt{\det g}\) from each support vector (SS3). We can also derive similar expansions for the Riemann tensor and Ricci scalar. The resulting expressions are rather unwieldy, so we give their general forms only in Appendix B.3. However, in two dimensions the situation simplifies, as the Riemann tensor is completely determined by the Ricci scalar (Dodson and Poston, 1991; Misner et al., 2017) (Appendix B.1). In this case, we have the compact expression \[(\det g)^{2}R=-\frac{3}{n^{3}}M_{jk}^{2}M_{ij}M_{ik}\\ \times\phi^{\prime}(z_{i})^{2}\phi^{\prime}(z_{j})\phi^{\prime}(z_ {k})\phi^{\prime\prime}(z_{j})\phi^{\prime\prime}(z_{k}). 
\tag{7}\] This shows that in \(d=2\) the curvature acquires contributions from each triple of distinct hidden units, hence if \(n=2\) we have \(R=0\). This follows from the fact that the feature map is in this case a change of coordinates on the two-dimensional manifold (Dodson and Poston, 1991). ### Geometry of infinite shallow networks We now characterize the metric induced by infinite-width networks (\(n\to\infty\)) with Gaussian weights and biases \[\mathbf{w}_{j}\sim_{\text{i.i.d}}\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_ {d});\quad b_{j}\sim_{\text{i.i.d.}}\mathcal{N}(0,\zeta^{2}), \tag{8}\] as commonly chosen at initialization (LeCun et al., 2015; Lee et al., 2018; Matthews et al., 2018; Poole et al., 2016; Yang, 2019; Yang & Hu, 2021). For such networks, the hidden layer representation is described by the neural network Gaussian process (NNGP) kernel (Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997): \[k(\mathbf{x},\mathbf{y}) =\lim_{n\rightarrow\infty}n^{-1}\mathbf{\Phi}(\mathbf{x})\cdot \mathbf{\Phi}(\mathbf{y})\] \[=\mathbb{E}_{\mathbf{w},b}[\phi(\mathbf{w}\cdot\mathbf{x}+b)\phi (\mathbf{w}\cdot\mathbf{y}+b)]. \tag{9}\] This kernel also completely describes the representation after training for networks in the lazy regime (Bordelon & Pehlevan, 2022; Yang & Hu, 2021). In Appendix C, we show that the metric associated with the NNGP kernel, \(g_{\mu\nu}=\mathbb{E}_{\mathbf{w},b}[\phi^{\prime}(\mathbf{w}\cdot\mathbf{x} +b)^{2}w_{\mu}w_{\nu}]\), can be written more illuminatingly as \[g_{\mu\nu}=e^{\Omega(\|\mathbf{x}\|^{2})}[\delta_{\mu\nu}+2\Omega^{\prime}(\| \mathbf{x}\|^{2})x_{\mu}x_{\nu}], \tag{10}\] where the function \(\Omega(\|\mathbf{x}\|^{2})\) is defined via \[e^{\Omega(\|\mathbf{x}\|^{2})}=\sigma^{2}\mathbb{E}_{z\sim\mathcal{N}(0, \sigma^{2}\|\mathbf{x}\|^{2}+\zeta^{2})}[\phi^{\prime}(z)^{2}]. 
\tag{11}\] For these results, we must also assume that \(\phi\) and its (weak) derivatives satisfy suitable boundedness assumptions for \(\Omega\) to be twice-differentiable (Daniely et al., 2016). Therefore, like the metrics induced by other dot-product kernels, the NNGP metric has the form of a projection (Burges, 1999). Such metrics have determinant \[\det g=e^{\Omega d}(1+2\|\mathbf{x}\|^{2}\Omega^{\prime}) \tag{12}\] and Ricci scalar \[R=- \frac{3(d-1)e^{-\Omega}(\Omega^{\prime})^{2}\|\mathbf{x}\|^{2}}{ (1+2\|\mathbf{x}\|^{2}\Omega^{\prime})^{2}}\] \[\times\left[d+2+2\|\mathbf{x}\|^{2}\left((d-2)\Omega^{\prime}+2 \frac{\Omega^{\prime\prime}}{\Omega^{\prime}}\right)\right]. \tag{13}\] Thus, all geometric quantities are spherically symmetric, depending only on \(\|\mathbf{x}\|^{2}\). Thanks to the assumption of independent Gaussian weights, the geometric quantities associated to the shallow Neural Tangent Kernel and to the deep NNGP will share this spherical symmetry (Appendix D) (Lee et al., 2018; Matthews et al., 2018; Yang, 2019; Yang & Hu, 2021). This generalizes the results of Cho & Saul (2011) for threshold-power law functions to arbitrary smooth activation functions. The relation between Gaussian norms of \(\phi^{\prime}\) and input discriminability indicated by this result is consistent with previous studies (Daniely et al., 2016; Poole et al., 2016; Zavatone-Veth & Pehlevan, 2021, 2022). In short, unless the task depends only on the input norm, the geometry of infinite-width networks will not be linked to the task structure. ### Examples In Appendix C.2, we evaluate the geometric quantities of the NNGP for certain analytically tractable activation functions. The resulting expressions for \(\sqrt{\det g}\) and \(R\) are rather lengthy, so we discuss only their qualitative behavior here. 
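The determinant formula (12) is a pure linear-algebra consequence of the projection form (10), via the rank-one identity \(\det(\mathbf{I}+c\,\mathbf{x}\mathbf{x}^{\top})=1+c\|\mathbf{x}\|^{2}\). A quick numerical check (the values of \(\Omega\) and \(\Omega^{\prime}\) below are arbitrary stand-ins, not derived from any particular activation function):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
x = rng.standard_normal(d)
s = x @ x                      # ||x||^2
Omega, dOmega = 0.3, 0.1       # arbitrary stand-ins for Omega(s), Omega'(s)

# Projection-form metric, eq. (10): g = e^Omega (I + 2 Omega' x x^T)
g = np.exp(Omega) * (np.eye(d) + 2 * dOmega * np.outer(x, x))

det_direct = np.linalg.det(g)
det_formula = np.exp(Omega * d) * (1 + 2 * s * dOmega)  # eq. (12)
```

The same identity underlies the Ricci scalar (13): because \(g\) differs from a multiple of the identity only by a rank-one term, all invariants reduce to functions of \(\|\mathbf{x}\|^{2}\), \(\Omega\), and its derivatives.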
For the error function \(\phi(x)=\mathrm{erf}(x/\sqrt{2})\), \(R\) is negative for all \(d>1\), and both \(R\) and \(\sqrt{\det g}\) are monotonically decreasing functions of \(\|\mathbf{x}\|\) for all \(\zeta\) and \(\sigma\). For monomials \(\phi(x)\propto x^{q}\) for integer \(q>1\), \(\sqrt{\det g}\) is a monotonically increasing function of \(\|\mathbf{x}\|^{2}\), while \(R\) is again non-positive. However, in this case the behavior of \(R\) depends on whether or not bias terms are present: if \(\zeta=0\), then \(R\) is a non-decreasing function of \(\|\mathbf{x}\|^{2}\) that diverges towards \(-\infty\) as \(\|\mathbf{x}\|^{2}\downarrow 0\), while if \(\zeta>0\), \(R\) may be non-monotonic in \(\|\mathbf{x}\|^{2}\). In Figure 1, we illustrate this behavior, and show convergence of the empirical geometry of finite networks with random Gaussian parameters to the infinite-width results. Figure 1: Convergence of geometric quantities for finite-width networks with Gaussian random parameters to the infinite-width limit. **a**. The magnification factor \(\sqrt{\det g}\) (_left_) and Ricci scalar \(R\) (_right_) as functions of the input norm \(\|\mathbf{x}\|\) for networks with \(\phi(x)=\mathrm{erf}(x/\sqrt{2})\). Empirical results for finite networks, computed using (5) and (B.15) are shown in blue, with solid lines showing the mean and shaded patches the standard deviation over \(25\) realizations of random Gaussian parameters. In all cases, \(\sigma=\zeta=1\). The infinite-width result is shown as a black dashed line. **b**. As in **a**, but for normalized quadratic activation functions \(\phi(x)=x^{2}/\sqrt{3}\). ## 5 Changes in shallow network geometry during training We now consider how the geometry of the pullback metric changes during training. 
Changes in the volume element and curvature during gradient descent training are challenging to study analytically, because models for which the learning dynamics are solvable--deep linear networks (Saxe et al., 2013)--trivially yield flat, constant metrics. More generally, the dependence of the metric on the instantaneous configuration of parameters makes it difficult to gain intuition for its evolution over training, even for two-dimensional inputs. ### Wide Bayesian neural networks We can make slightly more analytical progress for Bayesian neural networks at large but finite width. This setting is convenient because there is a fixed parameter posterior; one does not need to solve kernel or metric dynamics through time (Bordelon and Pehlevan, 2022). In Appendix E, we use recent results on perturbative feature-learning corrections to the NNGP kernel (Roberts et al., 2022; Zavatone-Veth et al., 2021) to compute corrections to the posterior mean of the volume element. In general, it is not possible to evaluate these corrections in closed form (Zavatone-Veth et al., 2021). For networks with monomial activation functions, no bias terms, and linear readout constrained to interpolate a single training example \((\mathbf{x}_{a},\mathbf{y}_{a})\), we can show that the correction to \(\sqrt{\det g}\) is maximal for \(\mathbf{x}\parallel\mathbf{x}_{a}\), and minimal for \(\mathbf{x}\perp\mathbf{x}_{a}\) (E.38). The sign of the correction is positive or negative depending on whether the second moment of the prior predictive is greater than or less than the norm of the output, respectively. For example, if we train on a single point from the XOR task, \((1,1)\mapsto 0\), \(\sqrt{\det g}\) will be contracted maximally along \(x_{1}=x_{2}\). 
This simple case shows how interpolating a single point shapes the network's global representational geometry. Figure 2: Evolution of the volume element over training in a network trained to classify points separated by a sinusoidal boundary \(y=\frac{3}{5}\sin(7x-1)\) (single hidden layer with 5 hidden units (top), 20 hidden units (mid), and 250 hidden units (bottom)). Red lines indicate the decision boundaries of the network. See Appendix G.1 for experimental details. More hidden units offer better approximation to the sinusoid curve. ### Changes in representational geometry for networks trained on two-dimensional toy tasks Thus, given the intractability of studying changes in geometry analytically, we resort to numerical experiments. For details of our numerical methods, see Appendix G. To build intuition, we first consider networks trained on simple two-dimensional tasks, for which we can directly visualize the input space. We first consider a simple two-dimensional binary classification task with sinusoidal boundary, inspired by that considered in the original work of Amari & Wu (1999). We train networks with sigmoidal activation functions of varying widths to perform this task, and visualize the resulting geometry over the course of training in Figure 2. At initialization, the peaks in the volume element lack a clear relation to the structure of the task, with approximate rotational symmetry at large widths as we would expect from §4.2. As the network's decision boundary is gradually molded to conform to the true boundary, the volume element develops peaks in the same vicinity. At all widths, the final volume elements are largest near the peaks of the sinusoidal decision boundary. At small widths, the shape of the sinusoidal curve is not well-resolved, but at large widths there is a clear peak in the close neighborhood of the decision boundary. This result is consistent with the proposal of Amari & Wu (1999). 
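The magnification maps shown in Figure 2 are computed from the finite-width metric (4); its determinant can be cross-checked against the minors expansion (5), which, after collecting the \(d!\) permutations of each index set, is the Cauchy–Binet formula. A numpy sketch with random weights standing in for a trained network:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, d = 6, 2                              # hidden units and input dimension
W = rng.standard_normal((n, d))
b = rng.standard_normal(n)
dphi = lambda z: 1.0 / np.cosh(z) ** 2   # phi = tanh (illustrative)

def det_g(x):
    # Determinant of the metric (4): g = (1/n) sum_j phi'(z_j)^2 w_j w_j^T
    z = W @ x + b
    g = (W * dphi(z)[:, None] ** 2).T @ W / n
    return np.linalg.det(g)

def det_g_minors(x):
    # Minors expansion (5), one term per d-subset S of hidden units:
    # det g = n^{-d} * sum_S M_S^2 * prod_{j in S} phi'(z_j)^2
    z = W @ x + b
    total = 0.0
    for S in combinations(range(n), d):
        M = np.linalg.det(W[list(S), :])
        total += M ** 2 * np.prod(dphi(z[list(S)]) ** 2)
    return total / n ** d

x = rng.standard_normal(d)
gap = abs(det_g(x) - det_g_minors(x))
```

Evaluating `det_g` on a grid of input points is all that is needed to reproduce a magnification map like those in Figure 2 for a given set of trained weights.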
In Appendix G, Figure G.5, we plot the Ricci curvature for these trained networks. Even for these small networks, the curvature computation is computationally expensive and numerically challenging. Over training, it evolves dynamically, with task-adapted structure visible at the end of training. However, the patterns here are harder to interpret than those in the volume element. ### Changes in representational geometry for shallow networks trained to classify MNIST digits We now provide evidence that a similar phenomenon is present in networks trained to classify MNIST images. We give details of these networks in Appendix G.2; note that all reach above 95% train and test accuracy within 200 epochs. In Figure 3, we plot the induced volume element at synthetic images generated by linearly interpolating between two input images (see Appendix G for details). We emphasize that linear interpolation in pixel space of course does not respect the structure of the image data, and results in unrealistic images. However, this approach has the advantage of being straightforward, and also illustrates how small Euclidean perturbations are expanded by the feature map (Novak et al., 2018). At initialization, the volume element varies without clear structure along the interpolated path. However, as training progresses, areas near the center of the path, which roughly aligns with the decision boundary, are expanded, while those near the endpoints defined by true training examples remain relatively small. This is again consistent with the proposal of Amari & Wu (1999). We provide additional visualizations of this behavior in Appendix G.2. To gain an understanding of the structure of the volume element beyond one-dimensional slices, in Figure 3 we also plot its value in the plane spanned by three randomly-selected example images, at points interpolated linearly within their convex hull. 
Here, we only show the end of training; in Appendix G.2 we show how the volume element in this plane changes over the course of training. The edges of the resulting ternary plot are one-dimensional slices like those shown in the top row of Figure 3, and we observe consistent expansion of the volume element along these paths. The volume element becomes large near the centroid of the triangle, where multiple decision boundaries intersect. Because of the computational complexity of estimating the curvature--the Riemann tensor has \(d^{2}(d^{2}-1)/12\) independent components (Dodson & Poston, 1991; Misner et al., 2017)--and its numerical sensitivity (Appendix G.1), we do not attempt to estimate it for this high-dimensional task. ## 6 Extensions to deep networks Thus far, we have focused on the geometry of the feature maps of single-hidden-layer neural networks. However, these analyses can also be applied to deeper networks, by regarding the representation at each hidden layer as defining a feature map and studying how the geometry changes with depth (Benfenati & Marta, 2023b; Hauser & Ray, 2017). As a simple version of this, in Figure G.6 we consider a network with three fully-connected hidden layers trained on the sinusoid task. The metrics induced by the feature maps of all three hidden layers show the same qualitative behavior as we observed in the shallow case in Figure 2: areas near the decision boundary are magnified. As one progresses deeper into the network, the contrast between regions of low and high magnification factor increases. As a more realistic example, we consider deep residual networks (ResNets) (He et al., 2016) trained to classify the CIFAR-10 image dataset (Krizhevsky, 2009). To make the feature map differentiable, we replace the rectified linear unit (ReLU) activation functions used in standard ResNets with Gaussian error linear units (GELUs) (Hendrycks & Gimpel, 2016). 
With this modification, we achieve comparable test accuracy (92%) with a ResNet-34--the largest model we can consider given computational constraints--to that obtained with ReLUs (Appendix G.3). Importantly, the feature map defined by the input-to-final-hidden-layer mapping of a ResNet-34 gives a submersion of CIFAR-10, as the input images have \(32\times 32\times 3=3072\) pixels, while the final hidden layer has 512 units. Empirically, we find that the Jacobian of this mapping is full-rank (Appendix G.3); we therefore consider the volume element on \((\mathcal{M}/\sim,g)\) defined by the square root of the product of the non-zero eigenvalues of the degenerate pullback metric (§2, Appendix G.3). In Figure 4, we visualize the resulting geometry in the same way we did for networks trained on MNIST, along 1-D interpolated slices and in a 2-D interpolated plane (see Appendix G.3 for details and additional figures). In the 1-D slices, we see a clear trend of large volume elements near decision boundaries, as we observed for shallow networks. However, in two dimensions the picture is less clear. The decision boundaries are more complicated than for MNIST, reflecting the more complex structure of the task. This also highlights the deficiency of our approach of linear interpolation in pixel space, which we further discuss and illustrate in Appendix G.3. We observe some magnification of areas in the vicinity of decision boundaries, though here it is harder to interpret all forms of structure that are present. Thus, even in this more realistic setting, we observe shaping of geometry over training that appears consistent with the proposal of Amari and Wu (1999). 
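For a submersion like this one (feature dimension \(n=512\) smaller than input dimension \(d=3072\)), the quantity above can be computed stably by noting that the non-zero eigenvalues of the degenerate metric \(g=J^{\top}J\) (a rank-\(n\), \(d\times d\) matrix) coincide with the eigenvalues of the much smaller \(n\times n\) Gram matrix \(JJ^{\top}\). A small-scale numpy sketch of this identity, with toy dimensions and a random Jacobian standing in for the network's:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 4, 10                     # toy stand-ins for 512 and 3072
J = rng.standard_normal((n, d))  # Jacobian of the feature map at a point

g = J.T @ J                      # degenerate pullback metric, rank n < d

# Pseudo-volume element: square root of the product of the
# non-zero eigenvalues of the degenerate metric.
eigs = np.linalg.eigvalsh(g)
nonzero = eigs[eigs > 1e-10]
vol_from_g = np.sqrt(np.prod(nonzero))

# Equivalent, cheaper route via the n x n Gram matrix J J^T.
vol_from_gram = np.sqrt(np.linalg.det(J @ J.T))
```

The Gram-matrix route avoids ever forming or diagonalizing the full \(d\times d\) metric, which matters at realistic input dimensions.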
## 7 Discussion To conclude, we have shown that training on simple tasks shapes the Riemannian geometry induced by neural network representations by magnifying areas along decision boundaries, consistent with the proposal of Amari and Wu for geometrically-inspired kernel learning (Amari and Wu, 1999; Williams et al., 2007; Wu and Amari, 2002). Our results on the geometry induced by the NNGP kernel provide a preliminary picture of the geometric priors of neural networks, and our experimental results begin to show how representational geometry is shaped over the course of training. These results are relevant to the broad goal of leveraging non-Euclidean geometry in deep learning (Bronstein et al., 2021; Weber et al., 2020). We now discuss the limitations of our work, as well as directions for future study. Figure 3: _Top panel_: \(\log_{10}(\sqrt{\det g})\) induced at interpolated images between 7 and 6 by a single-hidden-layer fully-connected network trained to classify MNIST digits. _Bottom panel_: Digit class predictions and \(\log_{10}(\sqrt{\det g})\) for the plane spanned by MNIST digits 7, 6, and 1 at the final training epoch (200). Sample images are visualized at the endpoints and midpoint for each set. Each line is colored by its prediction at the interpolated region and end points. As training progresses, the volume elements bulge in the middle (near the decision boundary) and taper off when travelling towards endpoints. See Appendix G.2 for experimental details and Figure G.7 for images interpolated between other digits. Perhaps the most important limitation of our work is the fact that we focus either on toy tasks with two-dimensional input domains, or on low-dimensional slices through high-dimensional domains. This is a fundamental limitation of how we have attempted to visualize the geometry. 
We are also restricted by computational constraints (see in particular Appendix G.3); to characterize the geometry of state-of-the-art network architectures, more efficient and numerically stable algorithms for computing these quantities must be developed. An important question that we leave open for future work is whether expanding areas near decision boundaries generically improves generalization in deep neural networks, consistent with Amari & Wu (1999)'s original motivations. Indeed, it is easy to imagine a scenario in which the geometry is overfit, and the trained network becomes too sensitive to small changes in the input. This possibility is consistent with prior work on the sensitivity of deep networks (Novak et al., 2018), and with the related phenomenon of adversarial vulnerability (Goodfellow et al., 2014; Szegedy et al., 2013). Investigating these links will be an important objective for future work. Because of the smoothness conditions required by the definition of the pullback metric and the requirement that \((\mathcal{M},g)\) be a differentiable manifold (Benfenati & Marta, 2023b; Hauser & Ray, 2017), the approach pursued in this work does not apply directly to networks with ReLU activation functions, which are not differentiable. Deep ReLU networks are continuous piecewise-linear maps, with many distinct activation regions (Hanin & Rolnick, 2019a;b). Within each region, the corresponding linear feature map will induce a flat metric on the input space, but the magnification factor will vary from region to region. It will be interesting to investigate the resulting geometry in future work. One possible area of application of our results is to the general problem of how to analyze and compare neural network representations (Kornblith et al., 2019; Williams et al., 2021). 
Importantly, one could compute and plot the volume element induced by a feature map even when one does not have access to explicit class labels. This could allow one to study networks trained with self-supervised learning, or even differentiable approximations to biological neural networks (Wang and Ponce, 2022). Exploring the rich geometry induced by these networks is an exciting avenue for future investigation. Figure 4: _Top panel_: \(\log_{10}(\sqrt{\det g})\) induced at interpolated images between a horse and a frog by a ResNet-34 trained to classify CIFAR-10 images. _Bottom panel_: Class predictions for the plane spanned by a horse, a frog, and a car. The volume element is largest at the intersection of several binary decision boundaries, and smallest within each decision region. The one-dimensional slices along the edges of each ternary plot are consistent with the top panel. See Appendix G.3 for experimental details, and Figure G.12 for linear interpolation and planes spanned by other classes, and for how the plane evolves during training. ## Acknowledgements We thank Alexander Atanasov and Blake Bordelon for helpful comments on our manuscript. JAZV, CP and this research were supported by a Google Faculty Research Award and NSF DMS-2134157. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
2305.03286
Composite Motion Learning with Task Control
We present a deep learning method for composite and task-driven motion control for physically simulated characters. In contrast to existing data-driven approaches using reinforcement learning that imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly by leveraging the use of multiple discriminators in a GAN-like setup. In this process, there is no need of any manual work to produce composite reference motions for learning. Instead, the control policy explores by itself how the composite motions can be combined automatically. We further account for multiple task-specific rewards and train a single, multi-objective control policy. To this end, we propose a novel framework for multi-objective learning that adaptively balances the learning of disparate motions from multiple sources and multiple goal-directed control objectives. In addition, as composite motions are typically augmentations of simpler behaviors, we introduce a sample-efficient method for training composite control policies in an incremental manner, where we reuse a pre-trained policy as the meta policy and train a cooperative policy that adapts the meta one for new composite tasks. We show the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
Pei Xu, Xiumin Shang, Victor Zordan, Ioannis Karamouzas
2023-05-05T05:02:41Z
http://arxiv.org/abs/2305.03286v1
# Composite Motion Learning with Task Control

###### Abstract.

We present a deep learning method for composite and task-driven motion control for physically simulated characters. In contrast to existing data-driven approaches using reinforcement learning that imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly by leveraging the use of multiple discriminators in a GAN-like setup. In this process, there is no need of any manual work to produce composite reference motions for learning. Instead, the control policy explores by itself how the composite motions can be combined automatically. We further account for multiple task-specific rewards and train a single, multi-objective control policy. To this end, we propose a novel framework for multi-objective learning that adaptively balances the learning of disparate motions from multiple sources and multiple goal-directed control objectives. In addition, as composite motions are typically augmentations of simpler behaviors, we introduce a sample-efficient method for training composite control policies in an incremental manner, where we reuse a pre-trained policy as the meta policy and train a cooperative policy that adapts the meta one for new composite tasks. We show the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
Key words and phrases: character animation, physics-based control, motion synthesis, reinforcement learning, multi-objective learning, incremental learning, GAN

[MISSING_PAGE_POST]

... mobile phone. To accomplish this with virtual characters, existing control approaches need to be extended to accommodate the ability to train with multiple objectives as a goal.
Second, with limited exception, most current control frameworks rely on imitation, with the style of a behavior being derived from reference motion examples. Our aim is to combine examples automatically through what we call "composite motion control," avoiding the need to continuously seek new example motions for every new permutation of combined behaviors. We also explore the ability to add multiple task objectives to support our aim of multi-objective control. The core difference of our approach from existing imitation learning approaches is decoupling full-body control during training, turning imitation and goal-directed full-body training into a multi-objective learning framework. To this end, we propose a modification to generative adversarial networks (GANs) to accommodate multiple discriminators (one for each subtask in the desired end behavior) and to incorporate the mixing of the behaviors as a part of the training. In this way, we sidestep the need to dictate weights for combining the subtasks as well as the need to manually shape careful reward functions for each new composite behavior.

In addition, as we expect composite motions to often be augmentations of simpler behaviors, we introduce a method for learning composite motion control policies from existing policies through _incremental learning_. To this end, we train a meta policy, for example for walking, and then train a new policy to _cooperate_ with the meta policy, producing a composite motion control policy significantly faster than learning from scratch. Thus, we can quickly add to walking new activities from reference data, such as punching or waving, even if we do not have examples of these activities being combined previously with the meta policy.

One naive approach to produce the composite motions we target is to blend motion capture clips to produce a single new motion, and perform traditional imitation learning from there.
This suggested technique may be plausible for simple composite behaviors, like waving an arm while walking, as the two behaviors do not use the same joints, nor do they influence each other greatly; the blending can therefore be done by simple splicing in a way that is fixed over time. Even so, there is no guarantee of physical plausibility without subsequent training, and the approach does not scale to more complex behaviors, which may have more complicated tradeoffs between the body parts used, especially over time. In contrast, our approach offloads the need to create this weighting, as it is produced automatically by the policy as a part of the dictated action. Likewise, the output of our system is automatically guaranteed to be physically valid. Finally, our approach also has the capability to add task-directed goals, such as walking to a specified location, which is not possible without significant manual effort being added to the naive approach described.

Overall, this paper makes the following contributions:

* We introduce a novel approach for physics-based character control that decouples full-body control in order to learn imitation and task goals from disparate sources and across distinct body parts.
* To this end, we extend GAN-style reinforcement learning and introduce a multi-objective learning framework to support multiple discriminators and automatic weighting of imitation and goal-driven subtask rewards.
* We propose an incremental learning scheme that uses a meta policy from an existing behavior to augment the behavior with new subtasks, producing a composite motion control policy that can be learned significantly faster than from scratch. Our scheme automatically learns state-dependent weights across the body in order to effectively mix the original behavior with a new subtask in a temporally dynamic fashion.

## 2. Background and Related Work

### Physics-Based Character Control

Developing controllers for physically simulated humanoids has wide applications in computer graphics, robotics, and biomechanics. Over the years, a number of trajectory optimization approaches for physics-based control have been proposed that leverage heuristics or feedback rules (Coros et al., 2010; De Lasa and Hertzmann, 2009; Wampler et al., 2014; Ye and Liu, 2010; Zordan et al., 2014), including open-loop control schemes (Liu et al., 2015, 2010; Mordatch et al., 2012), closed-loop feedback control (da Silva et al., 2017; Mordatch and Todorov, 2014), and model predictive control approaches (Hamalainen et al., 2015; Kwon and Hodgins, 2010; Tassa et al., 2012, 2014). Given the difficulty of controller design, which often involves multiple optimization objectives, data-driven methods using demonstrations from real humans have also drawn a lot of attention (Da Silva et al., 2008; Kwon and Hodgins, 2017; Lee et al., 2010; Liu et al., 2016, 2012; Muico et al., 2009; Sok et al., 2007; Yin et al., 2007; Zordan and Hodgins, 2002). In recent years, with the advancement of machine learning techniques, deep reinforcement learning frameworks have gained a lot of popularity for training physics-based character controllers. While some works (Karpathy and Van De Panne, 2012; Won et al., 2018; Xie et al., 2020; Yu et al., 2018) purely rely on reward functions designed heuristically or using curriculum learning to perform control and encourage the character to act in an expected, human-preferred style, most recent works leverage motion capture data to perform imitation learning in order to generate high-fidelity, life-like motions. DeepLoco (Peng et al., 2017) employs a hierarchical controller to perform walking-style imitation in navigation tasks for a physically simulated character.
DeepMimic (Peng et al., 2018) combines imitation learning with goal-conditioned learning, and enables a physics-based character to learn a motor skill from a reference motion collected by motion capture or handcrafted by artists. Chentanez et al. (2018) explore the training of recovery policies that would prevent the character from deviating significantly from the reference motion. While the aforementioned works rely on a phase variable to synchronize with the reference motion, DReCon (Bergamin et al., 2019) utilizes a motion matching technique to find the target pose from a collection of reference motions dynamically in response to user control input.

Besides direct tracking of reference motions, researchers have offered a number of ways to extend the use of reference data. For example, Park et al. (2019) leverage the kinematic characteristics of unorganized motions to generate target poses for the control policy to imitate. UniCon (Wang et al., 2020) adopts a similar strategy, where a high-level motion scheduler is employed to provide the target pose for the low-level character controller. MotionVAE (Ling et al., 2020) employs data-driven generative models using variational autoencoders to generate target motion poses for a reinforcement learning based controller. A similar model is employed by Won et al. (2022) and tested with various goal-directed downstream tasks. To ensure synthesis of desired motions, these approaches rely on carefully designed reward functions to assess the controlled character motion. Drawing from GAIL (Ho and Ermon, 2016; Merel et al., 2017), AMP (Peng et al., 2021) and ICCGAN (Xu and Karamouzas, 2021) avoid manually designing reward functions by exploiting the idea of generative adversarial networks (GANs) and relying on a discriminator to obtain the imitation reward for training.
Beyond the simple use of full-body motions, many works explore motion generation by combining multiple basic motions with respect to different body parts (Alvarado et al., 2022; Jang et al., 2022, 2008; Liu and Hodgins, 2018; Soga et al., 2016; Starke et al., 2021; Yazaki et al., 2015). However, these works focus on the editing and synthesis of motion animation or rely on inverse kinematics solvers, and do not work well with current frameworks for controlling physically simulated characters using reinforcement learning. To date, existing works for physics-based character control solely focus on the learning of full-body motions. Complementary to such works, in this paper we target composite motion learning from multiple references, without needing to generate any target full-body motion, for tasks involving both goal-directed control and imitation control.

### Training Efficiency

Characters employed in physics-based control are typically highly articulated, with many degrees of freedom defined in continuous action spaces. Given the vast feasible choices of action, controlling so many degrees of freedom is essentially ambiguous, resulting in control problems that are underspecified and high-dimensional. A qualified control policy usually needs millions of samples for training. The training time depends on the algorithms exploited and the motion complexity, varying from tens of hours to several days. While some works, such as (Yang and Yin, 2021), explore approaches to speed up training by improving the reinforcement learning algorithm itself, a lot of attention has recently been drawn to sample-efficient training that reuses pretrained policies or action models for fast new-motion learning.
For example, many recent approaches employ mixture-of-experts (MoE) models (Peng et al., 2019; Won et al., 2020, 2021), where a batch of pre-trained expert policies is exploited to provide primitive actions that are combined by a newly trained policy to generate the final actions. Other approaches explore using pre-trained latent space models such as variational autoencoders (Ling et al., 2020; Won et al., 2022) and GAN-based models (Peng et al., 2022) to facilitate the training of a control policy. In such approaches, the latent space model encapsulates a variety of reference motions and is used by the control policy to generate motions for a specific task. The works in (Merel et al., 2019, 2020) combine MoE with a latent space model and rely on an encoder-decoder architecture to perform distillation for motion learning. Ranganath et al. (2019) utilize principal moment analysis to extract coactivations from reference motions and use them as the atomic actions for motor skill learning. Despite achieving impressive results, exploring the latent space or learning how to combine expert policies is not always easier than performing exploration directly in the original action space. We note that all of these works focus only on reusing models that provide full-body motions. In contrast, we propose an incremental learning approach that allows a newly trained policy to take only partial actions from a pre-trained policy, and build on them to generate composite motions. Our approach can largely reduce the training time for composite and multi-objective tasks involving multiple imitation and goal-directed objectives, as compared to training from scratch.

### Multi-Objective Control

In multi-objective character control, the reward function of the underlying optimization problem is expressed as the weighted sum of multiple, possibly competing, goals.
Depending on the task at hand, we seek objective terms that encourage the character to accomplish behavior goals, follow a reference motion and/or style, adopt certain behavior characteristics such as low-energy movement, attain specified goals, etc., resulting in an extensive list of objective terms (see (Abe et al., 2007; Macchietto et al., 2009; Muico et al., 2009; Peng et al., 2018; Wu and Zordan, 2010; Ye and Liu, 2010; Yu et al., 2010) for some examples). But how we handle all these competing objectives to create coherent, natural, and coordinated control remains an open question. A common solution is to employ a manual weighting scheme based on intuition, experience, and trial and error. However, such approaches require excessive, often tricky, manual effort to obtain desired results. While priority-based schemes have been employed that optimize each term in the reward function based on a given priority (De Lasa and Hertzmann, 2009; De Lasa et al., 2010), such schemes cannot automatically address the problem of multiple competing objectives. This problem becomes worse in a reinforcement learning setting, as small changes in the reward function can have a significant impact on the resulting behavior. It may require laborious work to fine-tune the weight of each objective to ensure that the control policy can effectively balance the learning of multiple objectives in a desired way. For tasks with hierarchical objectives, hierarchical reinforcement learning with multiple controllers can be employed, where a different controller is selected at different task levels (Clegg et al., 2018; Nachum et al., 2019; Peng et al., 2017; Xie et al., 2020). However, such approaches cannot work for nonhierarchical tasks, where different objective terms need to be optimized simultaneously, such as when the character has to perform composite motion imitation and goal-directed control as in our problem domain.
In our approach, we propose the use of a multi-critic optimization scheme, where each objective is regarded as an independent task and is assigned a separate critic. By evaluating each objective independently, the contribution (gradient) of each objective can be normalized to the same scale, and thus the control policy will be updated toward each objective at the same pace. As such, we avoid scalarizing and weighting the rewards or priorities of multiple objectives. In addition, our approach provides a simple solution to adaptively balance the multiple objectives during policy updating without needing to find or estimate the Pareto front.

## 3. Overview

Our approach enables a physically simulated character to perform composite motions by imitating partial-body motions from multiple reference sources directly and simultaneously. This scheme turns the full-body motion imitation task into a multi-objective optimization problem, to which we can further introduce extra objectives for goal-directed control. We refer to Fig. 2 for an overview of our proposed system for composite motion learning with task control. We employ a GAN-like structure combined with reinforcement learning to train the control policy imitating the given reference motions. As such, we do not have to manually design a reward function for imitation learning or explicitly track a target pose from the reference motions. To learn composite motions, we decouple the full-body motion into several partial-body groups, each of which imitates its own references. Based on this GAN-like structure, we propose a multi-objective learning framework that exploits multiple critics at the same time to help the control policy learn from multiple objectives, involving both composite motion imitation and goal-directed task control, in a balanced way (Section 4).
To accelerate training, we further consider an optional incremental learning scheme that reuses a pre-trained policy as the meta policy and allows a cooperative policy to adapt the meta one for new composite tasks (Section 5).

## 4. Composite Motion Learning

Given a physically simulated character, we seek to train a control policy \(\pi(a_{t}|s_{t},g_{t})\) that simultaneously imitates motions from multiple reference motions, each focusing on specific body parts, while possibly completing specific goal tasks. At each time step \(t\), the control policy takes the character state \(s_{t}\) and a dynamic goal state variable \(g_{t}\) as input, and outputs the control signal (action) \(a_{t}\). We let \(g_{t}\) be an empty variable if no goal-directed control is involved. In the following, we detail our proposed approach for training \(\pi\), which decouples full-body motion, allowing imitation performance to be evaluated and improved with respect to specific body parts, and converts the underlying composite motion learning problem into a multi-objective optimization problem.

### Full-Body Motion Decoupling

At each time step \(t\), we represent the character pose as \(\mathcal{P}_{t}:=\{(p_{l},q_{l},\dot{p}_{l},\dot{q}_{l})_{t}\}_{l=1}^{N_{\text{link}}}\), where \(p_{l}\in\mathbb{R}^{3}\) and \(q_{l}\in\mathbb{R}^{4}\) are the position and orientation (a unit quaternion) of each body link, respectively, and \(\dot{p}_{l}\in\mathbb{R}^{3}\) and \(\dot{q}_{l}\in\mathbb{R}^{3}\) are the corresponding linear and angular velocities. Given the geometry model and joint constraints of the simulated character, this representation can be converted into a joint-space one defined by the skeletal joints' local positions and velocities and the root's global position and orientation.
Let \(\mathcal{M}\supset\{\tilde{\mathcal{P}}_{t}\}_{t}\) be the collection of reference motions, which may contain multiple clips of pose trajectories \(\{\tilde{\mathcal{P}}_{t}\}_{t}\) as the reference. To perform imitation learning, existing approaches either use a carefully designed reward function to compute the error between \(\mathcal{P}_{t+1}\) and \(\tilde{\mathcal{P}}_{t+1}\) (Bergamin et al., 2019; Chentanez et al., 2018; Park et al., 2019; Peng et al., 2018; Won et al., 2020), or employ an evaluator to assess the transfer \(\mathcal{P}_{t}\rightarrow\mathcal{P}_{t+1}\) without explicitly comparing to any specific poses in the reference motions (Merel et al., 2017; Peng et al., 2021; Xu and Karamouzas, 2021). The former approaches usually need a motion tracking or generation mechanism to retrieve \(\tilde{\mathcal{P}}_{t+1}\) from the reference motions. The latter typically build on the framework of generative adversarial networks (GANs) and rely on a discriminator to evaluate the transfer. Some approaches take poses from more than one frame during imitation performance evaluation in order to apply more constraints on the pose trajectory.

Figure 2. Overview of the proposed system for composite motion learning with task control. Under the framework of reinforcement learning combined with a GAN-like structure for motion imitation, our approach employs a multi-critic architecture to train a physics-based controller involving multiple objectives. Based on this system, we further propose an optional incremental learning scheme that allows the control policy to quickly learn new composite motions and tasks by reusing a pre-trained, meta policy.

Nevertheless, all these approaches leverage the full-body character pose \(\mathcal{P}_{t}\) and reference pose \(\tilde{\mathcal{P}}_{t}\in\mathcal{M}\) to perform imitation learning, and thus intend to learn the full-body motions in \(\mathcal{M}\).
To learn composite motions, ideally we want the simulated character's partial-body motions to come from different reference sources at a given time step \(t\), i.e., the transfer of pose trajectory \(\mathcal{P}^{i}_{t-n_{i}:t}\rightarrow\mathcal{P}^{i}_{t+1}\) should satisfy \[\{\mathcal{P}^{i}_{t-n_{i}},\cdots,\mathcal{P}^{i}_{t},\mathcal{P}^{i}_{t+1}\}\subset\mathcal{M}^{i}, \tag{1}\] where \(\mathcal{P}^{i}_{t}\subset\mathcal{P}_{t}\) is a partial-body pose of the simulated character, and \(\mathcal{M}^{i}\supset\{\tilde{\mathcal{P}}^{i}_{t}\}_{t}\) is the reference motion collection containing only poses of the partial body group \(i\). The full-body motion is constrained by using multiple \(\mathcal{M}^{i}\) at the same time. Here, we follow Xu and Karamouzas (2021) and use a pose trajectory having \(n_{i}+2\) frames for imitation performance evaluation. The larger \(n_{i}\) is, the stricter the evaluation will be, as an error occurring at an earlier time step would negatively influence the evaluation of the following steps. Typical partial body groups for a humanoid character would be the upper and lower body, arms, and torso. For example, we can let \(\mathcal{M}^{\text{upper}}\) be a collection of greeting motions involving the upper body (arms, hands, torso and head), and \(\mathcal{M}^{\text{lower}}\) be walking motions involving the lower body (pelvis, legs and feet). Then, the full-body motion is expected to be the composite of \(\mathcal{M}^{\text{upper}}\) (greeting) and \(\mathcal{M}^{\text{lower}}\) (walking). To coordinate the motions from multiple body groups, we can let \(\mathcal{P}^{i}_{t}\) and some other partial-body poses \(\mathcal{P}^{j}_{t}\) share some common body link states. For example, let \(\mathcal{P}^{\text{upper}}_{t}\) and \(\mathcal{P}^{\text{lower}}_{t}\) share the state of one leg to avoid ipsilateral walking.
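To make the upper/lower decoupling with a shared leg concrete, the following is a minimal sketch. The link names and the flat placeholder standing in for each link state \((p_{l},q_{l},\dot{p}_{l},\dot{q}_{l})\) are illustrative assumptions for this example, not the paper's actual data layout.

```python
# Illustrative sketch of full-body pose decoupling into body groups.
FULL_BODY = ["head", "torso", "left_arm", "right_arm",
             "pelvis", "left_leg", "right_leg", "left_foot", "right_foot"]

BODY_GROUPS = {
    # The upper group includes one leg shared with the lower group so the
    # two partial motions stay coordinated (e.g., to avoid ipsilateral
    # walking).
    "upper": ["head", "torso", "left_arm", "right_arm", "left_leg"],
    "lower": ["pelvis", "left_leg", "right_leg", "left_foot", "right_foot"],
}

def partial_pose(full_pose, group):
    """Extract the partial-body pose P^i_t from the full-body pose P_t."""
    return {link: full_pose[link] for link in BODY_GROUPS[group]}

# Each link state stands in for (p, q, pdot, qdot): 3 + 4 + 3 + 3 numbers.
pose = {link: (0.0,) * 13 for link in FULL_BODY}
upper, lower = partial_pose(pose, "upper"), partial_pose(pose, "lower")
assert "left_leg" in upper and "left_leg" in lower  # shared link state
```

Each group's reference collection \(\mathcal{M}^{i}\) would then store trajectories of only that group's (plus shared) link states.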
Correspondingly, the leg state should be included in both \(\mathcal{M}^{\text{upper}}\) and \(\mathcal{M}^{\text{lower}}\) for the control policy to learn. We refer to Sections 6 and 7 for the body splitting schemes used in our experiments, including typical upper and lower body decoupling schemes and more tailored ones for specific tasks such as juggling while walking. After decoupling the character's full-body motion into multiple sets of \(\{\mathcal{P}^{i}_{t}\}_{t}\), we perform imitation learning with respect to each body group independently, where the control policy is expected to explore how to combine partial-body motions by itself, without needing any full-body, composite motions to be provided as the reference.

### Imitation Learning

To perform imitation learning, we build our approach on GAN-like frameworks (Ho and Ermon, 2016; Merel et al., 2017), which utilize a discriminator to evaluate imitation performance and generate reward signals for policy optimization using reinforcement learning algorithms. However, instead of using only one discriminator to perform full-body imitation performance evaluation, we employ multiple discriminators simultaneously, each of which deals with a body part group \(i\) associated with a collection of partial-body reference motions \(\mathcal{M}^{i}\). With this framework, we avoid designing reward functions to compute the imitation error for each specific body part group. Furthermore, each discriminator takes only the body link states relevant to its group as input during training. Therefore, the provided \(\mathcal{M}^{i}\) can still be a collection of full-body motions, and there is no need to explicitly generate any partial-body motions during preprocessing. To stabilize the adversarial training process, we introduce a hinge loss (Lim and Ye, 2017), a gradient penalty term (Gulrajani et al., 2017), and an ensemble technique for training the discriminators, as proposed in (Xu and Karamouzas, 2021).
Following the literature, given \(o^{i}_{t}\) as the observation sampled from the simulated character and \(\tilde{o}^{i}_{t}\) as that sampled from the reference motions \(\mathcal{M}^{i}\), the \(i\)-th ensemble of \(N\) discriminators, \(D^{i}=\{D^{i}_{n}|n=1,\cdots,N\}\), is trained using the loss function:

\[\mathcal{L}_{D^{i}}=\frac{1}{N}\sum_{n=1}^{N}\left(\mathds{E}_{t}\left[\max(0,1+D^{i}_{n}(o^{i}_{t}))\right]+\mathds{E}_{t}\left[\max(0,1-D^{i}_{n}(\tilde{o}^{i}_{t}))\right]+\lambda^{\text{GP}}\mathds{E}_{t}\left[(||\nabla_{\hat{o}^{i}_{t}}D^{i}_{n}(\hat{o}^{i}_{t})||_{2}-1)^{2}\right]\right) \tag{2}\]

where \(\hat{o}^{i}_{t}=\alpha o^{i}_{t}+(1-\alpha)\tilde{o}^{i}_{t}\) with \(\alpha\sim\textsc{Uniform}(0,1)\), and \(\lambda^{\text{GP}}\) is the gradient penalty coefficient. According to Eq. 1, we define the observation space of a discriminator as

\[o^{i}_{t}\coloneqq\{\mathcal{P}^{i}_{t-n_{i}},\cdots,\mathcal{P}^{i}_{t},\mathcal{P}^{i}_{t+1}\}. \tag{3}\]

In principle, the discriminator relies on \(o^{i}_{t}\) to evaluate the control policy's performance during the state-action-state transition \((s_{t},a_{t},s_{t+1})\). The observation space theoretically should satisfy \(o^{i}_{t}\subseteq\{s_{t},s_{t+1}\}\). Otherwise, the discriminator may rely on features unknown to the control policy, and thus it cannot effectively evaluate the policy's performance. Given that the control policy \(\pi\) in our formulation is still a full-body control policy, we simply define \(s_{t}\) as a full-body motion state:

\[s_{t}:=\{\mathcal{P}_{t-n},\cdots,\mathcal{P}_{t}\} \tag{4}\]

where \(n\geq n_{i}\) for all \(i\). We refer to the Appendix in the supplementary material for more details about the state and observation representation. The hinge loss function provides a linear evaluation in \([-1,1]\) to measure the similarity of a given pose trajectory sample \(o^{i}_{t}\) to any sample in the reference motions.
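As a concrete illustration, the hinge terms of Eq. 2 and the clipped ensemble score later used as the imitation reward (Eq. 5) amount to a few lines of NumPy. This is a minimal sketch of the loss/reward arithmetic only; the gradient-penalty term and the actual network training require an autodiff framework and are omitted here.

```python
import numpy as np

def hinge_terms(scores_policy, scores_ref):
    """The two hinge terms of Eq. 2 for one discriminator: policy samples
    are pushed below -1, reference samples above +1. (Gradient penalty
    omitted; it needs autodiff.)"""
    return (np.maximum(0.0, 1.0 + scores_policy).mean()
            + np.maximum(0.0, 1.0 - scores_ref).mean())

def imitation_reward(ensemble_scores):
    """Average the N ensemble members' scores on o^i_t, each clipped to
    [-1, 1], to obtain the imitation reward r^{D^i}_t (Eq. 5)."""
    return float(np.clip(ensemble_scores, -1.0, 1.0).mean())

# A discriminator that scores reference samples >= 1 and policy samples
# <= -1 incurs zero hinge loss; rewards always lie in [-1, 1].
assert hinge_terms(np.array([-1.0, -2.0]), np.array([1.0, 3.0])) == 0.0
assert abs(imitation_reward(np.array([5.0, -0.5, 0.5])) - 1.0 / 3.0) < 1e-12
```

Because the scores are clipped before averaging, a single over-confident ensemble member cannot dominate the reward.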
Therefore, we define the reward term that evaluates the policy's imitation performance with respect to \(\mathcal{M}^{i}\) for the body part group \(i\) at time \(t\) as:

\[r^{D^{i}}_{t}(s_{t},a_{t},s_{t+1})=\frac{1}{N}\sum_{n=1}^{N}\textsc{Clip}\left(D^{i}_{n}(o^{i}_{t}),-1,1\right). \tag{5}\]

It must be noted that even though \(o^{i}_{t}\) and \(\tilde{o}^{i}_{t}\) in Eq. 2 have the same subscript \(t\), they are paired only for the gradient penalty computation (last term in Eq. 2). The discriminator ensemble here only evaluates the pose trajectory \(o^{i}_{t}\) independently, rather than comparing it against any specific target trajectory. Therefore, \(\tilde{o}^{i}_{t}\) can be randomly sampled from the reference motions by interpolation. Overall, by employing multiple discriminator ensembles at each time step \(t\), we will have a set of rewards, \(\{r^{D^{i}}_{t}\}_{D^{i}}\), to evaluate the policy's performance in controlling the character to perform composite motions, i.e., simultaneously imitating different sets of reference motions corresponding to specific partial body parts. By doing so, we convert the task of composite motion learning into a multi-objective optimization problem under the framework of reinforcement learning.

### Multi-Objective Learning

We consider policy optimization of a typical on-policy policy gradient algorithm by maximizing

\[\mathcal{L}_{\pi}=\mathrm{E}_{t}[A_{t}\log\pi(\mathrm{a}_{t}|\mathrm{s}_{t},\mathrm{g}_{t})], \tag{6}\]

where \(\mathrm{s}_{t}\) and \(\mathrm{g}_{t}\) are the given character's and goals' state variables respectively, and \(A_{t}\) is the advantage, which is typically estimated from \(\{r_{\tau}\}_{\tau\geq t}\). In the common actor-critic architecture, a separate network (critic) is updated in tandem with the policy network (actor).
The critic is employed to provide state-dependent value estimation, \(V(s_{t})=\mathrm{E}_{\pi}[\sum_{\tau\geq t}\gamma^{\tau-t}r_{\tau}]=\mathrm{E}_{ \pi}[r_{t}+\gamma V(s_{t+1})]\), based on which \(A_{t}\) can be estimated with less variance, where \(\gamma\) is the discount factor regulating the importance of the contribution from future steps. To stabilize the training, standardization is often applied to \(A_{t}\), where the standardized advantage \(\bar{A}_{t}\) is used in place of \(A_{t}\) for policy updating. A typical solution for multi-objective tasks in reinforcement learning is to simply add together all objective-related reward terms, \(r_{t}^{k}\), with some weights \(\omega_{k}\), i.e., \(r_{t}=\sum_{k=1}^{K}\omega_{k}r_{t}^{k}\) for a \(K\)-objective problem. In such a way, we still have a scalar reward that can be used with Eq. 6 for policy updating. In practice, though, given that conflicts may exist among the different reward terms, manually tuning the values of \(\omega_{k}\) to balance the composite objective of the character is not an intuitive task. For example, we may need the policy to put more effort into learning a difficult partial-body motion, even at the cost of a trade-off in learning other motions, rather than only focusing on the easy ones to keep achieving a higher associated reward. In addition, our proposed approach performs reward estimation by employing multiple discriminators simultaneously, which are modeled by neural networks. This scheme introduces considerable uncertainty, as the reward distributions from different discriminators may differ significantly depending on the given reference motions, which is unpredictable before training. Such a problem would worsen if we further introduce a set of goal-directed tasks, each having its own associated reward term that may compete against the imitation reward terms.
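The value recursion above can be illustrated by computing discounted returns backward over a rollout (a minimal sketch; the function name and the Monte-Carlo flavor are ours, not the paper's estimator):

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted returns computed backward over one rollout, matching
    the recursion V(s_t) = r_t + gamma * V(s_{t+1}); a Monte-Carlo
    stand-in for the critic's value estimate."""
    values = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        values[t] = running
    return values

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```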
To balance the contributions of multiple objectives during policy updating, we propose to model the multi-objective learning problem as a multi-task one, where each objective is taken into account as an _independent task_ and has a fixed importance during policy updating. To do so, instead of using \(r_{t}=\sum_{k}\omega_{k}r_{t}^{k}\), we compute the advantage \(A_{t}^{k}\) with respect to \(\{r_{\tau}^{k}\}_{\tau\geq t}\) independently. Then, the optimization process becomes maximizing \[\mathcal{L}_{\pi}=\sum_{k=1}^{K}\mathrm{E}_{t}\left[\omega_{k}\bar{A}_{t}^{k} \log\pi(\mathrm{a}_{t}|\mathrm{s}_{t},\mathrm{g}_{t})\right], \tag{7}\] where \(\sum_{k}\omega_{k}=1\) and \(\bar{A}_{t}^{k}\) is the standardization of \(A_{t}^{k}\), i.e. \[\bar{A}_{t}^{k}=\frac{A_{t}^{k}-\mathrm{E}_{t}[A_{t}^{k}]}{\sqrt{\mathrm{Var }_{t}[A_{t}^{k}]}}. \tag{8}\] This optimization process is equivalent to updating the policy with respect to each objective independently, but always at the same scale proportional to \(\omega_{k}\). The introduction of \(\omega_{k}\) gives us more flexibility to adjust the contributions toward each objective when conflicts occur during policy updating. However, in our testing, a simple choice of \(\omega_{k}=1/K\), which means each objective is equally important, works well for most cases. We refer to the Appendix in the supplementary material for the choice of \(\omega_{k}\) in our tested composite tasks. During implementation, we can rewrite Eq. 7 as \[\mathcal{L}_{\pi}=\mathrm{E}_{t}\left[\left(\sum_{k}\omega_{k}\bar{A}_{t}^{k} \right)\log\pi(\mathrm{a}_{t}|\mathrm{s}_{t},\mathrm{g}_{t})\right] \tag{9}\] such that the policy update can be done through backward propagation in one pass. From this equation, we can see that the nature of our approach is to introduce a dynamic coefficient, constrained by the standard deviation of \(\{A_{t}^{k}\}_{t}\), for each objective \(k\). As such, the policy will be updated with respect to each objective adaptively.
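The standardized, weighted mixing of Eqs. 7-9 can be sketched as follows (a minimal numpy sketch; the small epsilon added to the variance is our assumption for numerical safety):

```python
import numpy as np

def combined_advantage(advantages, weights, eps=1e-8):
    """Eqs. 7-9: standardize each objective's advantages over the batch,
    then mix them with fixed weights that sum to one.

    advantages: (K, T) array, one row of per-step advantages per objective.
    weights:    (K,) array, e.g. the uniform choice 1/K.
    """
    A = np.asarray(advantages, dtype=float)
    std = np.sqrt(A.var(axis=1) + eps)[:, None]
    A_bar = (A - A.mean(axis=1, keepdims=True)) / std
    return np.asarray(weights, dtype=float) @ A_bar  # (T,) mixed advantage

# Two objectives on wildly different reward scales contribute equally
# after standardization:
A = [[100.0, 300.0], [0.001, 0.003]]
print(combined_advantage(A, [0.5, 0.5]))  # approximately [-1, 1]
```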
This separation of objectives leads to a single-policy multi-critic architecture. In Fig. 2, for example, we have two imitation-related reward terms (yellow and green) for upper and lower body imitation respectively, and two goal-directed task reward terms (red and blue). Accordingly, we employ four critics denoted by \(\textsc{Critic}_{k}\) in the figure. Each \(\textsc{Critic}_{k}\) only participates in the estimation of \(A_{t}^{k}\), and takes the reward associated with the objective \(k\), i.e. \(\{r_{t}^{k}\}_{t}\), for training. Though the policy update is balanced through the proposed multi-critic architecture, the state values, which are decided by \(\{r_{t}^{k}\}_{t}\), could still differ drastically with respect to each objective depending on the difficulty of the given reference motions or the reward distributions of the goal-related tasks. To mitigate this issue and stabilize the training of critics, we introduce the value normalization scheme of _PopArt_ (van Hasselt et al., 2016). The value target under this scheme is normalized by the moving average and standard deviation for the critic network training. The output of a critic is unnormalized before joining the process of advantage estimation. Besides maintaining a normalizer for value targets, _PopArt_ is designed to preserve the output precisely. Namely, with _PopArt_, the output of a critic is identical before and after the normalizer updates, given the same input state \(s_{t}\) and \(\mathrm{g}_{t}\). This design prevents the normalization from affecting the state value estimation, thereby stabilizing the policy training. In our implementation, each critic \(\textsc{Critic}_{k}(s_{t},\mathrm{g}_{t})\) has its own normalizer, with a scalar scale and shift estimated independently with respect to its associated objective \(k\).
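The output-preserving property of _PopArt_ can be sketched with a linear value head (a minimal sketch of the idea in van Hasselt et al. (2016); the class name and parameterization are illustrative, not our actual critic implementation):

```python
import numpy as np

class PopArtHead:
    """Minimal sketch of an output-preserving PopArt value head,
    assuming a linear head on top of features h:
    V = sigma * (w @ h + b) + mu."""

    def __init__(self, dim):
        self.w = np.random.randn(dim)   # last-layer weights
        self.b = 0.0                    # last-layer bias
        self.mu, self.sigma = 0.0, 1.0  # running target statistics

    def value(self, h):
        # Unnormalized value used for advantage estimation.
        return self.sigma * (self.w @ h + self.b) + self.mu

    def update_stats(self, mu_new, sigma_new):
        # Rescale the head so that value(h) is preserved exactly
        # when the normalizer statistics change.
        self.w *= self.sigma / sigma_new
        self.b = (self.sigma * self.b + self.mu - mu_new) / sigma_new
        self.mu, self.sigma = mu_new, sigma_new

head = PopArtHead(4)
h = np.array([1.0, -0.5, 2.0, 0.25])
before = head.value(h)
head.update_stats(mu_new=5.0, sigma_new=2.0)  # new target statistics arrive
print(abs(before - head.value(h)) < 1e-9)     # True: output preserved
```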
As we show in Section 6.6, the introduction of _PopArt_ helps improve the policy performance, as also demonstrated by previous works (van Hasselt et al., 2016; Yu et al., 2021). ## 5. Incremental Learning Besides being able to perform a range of composite motions, humans typically learn such motions in an incremental manner. For example, if we know how to walk, we should be able to quickly learn how to hold our phone while walking. There is no need to relearn walking from scratch. Based on this intuition, we propose an incremental learning scheme for fast composite motion learning. Instead of training a policy completely from scratch, we reuse a pre-trained policy as a meta policy \(\pi^{\text{meta}}\) that allows the simulated character to perform a basic set of motions (walking in the previous example). Given \(\pi^{\text{meta}}\), we train a new policy \(\pi\) to cooperate with the meta policy, performing new composite motions by action addition (holding a phone + walking). Formally, let \(\pi(a_{t}|s_{t},g_{t}):=\mathcal{N}(\mathbf{\mu}_{t},\mathbf{\sigma}_{t}^{2})\) denote a Gaussian-based policy. By introducing a meta policy \(\pi^{\text{meta}}\), we define the policy, which is trained to cooperate with \(\pi^{\text{meta}}\) for new composite motions, as \[\begin{split}\pi(a_{t}|s_{t},g_{t},a_{t}^{\text{meta}})& :=\mathcal{N}\left(\mathbf{\mu}_{t},\mathbf{\sigma}_{t}^{2}\right)+\text{w}_{t }\text{Strop}\left(a_{t}^{\text{meta}}\right)\\ &=\mathcal{N}\left(\mathbf{\mu}_{t}+\text{w}_{t}\text{Strop}\left(a_{ t}^{\text{meta}}\right),\mathbf{\sigma}_{t}^{2}\right),\end{split} \tag{10}\] where the weight vector \(\text{w}_{t}\) has the same dimension as \(a_{t}^{\text{meta}}\), and \(a_{t}^{\text{meta}}\sim\pi^{\text{meta}}(\cdot|s_{t}^{\text{meta}},g_{t}^{ \text{meta}})\) is drawn from the meta policy. \(\text{w}_{t}\) is defined as a set of weights, each of which is associated with a DoF in the action space of the meta policy.
In our implementation, \(\text{w}_{t}\), \(\mathbf{\mu}_{t}\) and \(\mathbf{\sigma}_{t}\) are obtained by a neural network taking \(s_{t}\) and \(g_{t}\) as input, and thus are learnable. We put a "gradient stop" operator, \(\text{Strop}(\cdot)\), on \(a_{t}^{\text{meta}}\), which means that the meta policy is fixed and will not be updated with \(\pi\). Using this incremental learning scheme, the new, cooperative policy adds its own action to the meta action \(a_{t}^{\text{meta}}\). The weight vector \(\text{w}_{t}\) decides the reliance of \(\pi\) on the meta policy \(\pi^{\text{meta}}\) with respect to each DoF in the action space. The bigger an element in \(\text{w}_{t}\) is, the more the cooperative policy relies on the meta policy to control the corresponding DoF. As such, \(\pi\) is trained incrementally to learn new composite motions by reusing the meta policy partially. This scheme does not require that \(a_{t}^{\text{meta}}\) and \(a_{t}\) must have exactly the same dimension, as we can assume zero values for the missing dimensions in \(a_{t}^{\text{meta}}\) or ignore the extra, uninteresting dimensions in \(a_{t}^{\text{meta}}\). Compared to a mixture-of-experts (MoE) model, where the action is obtained by a linear combination of the actions from multiple expert policies, our approach focuses on reusing partial-body motions from the meta policy. It would be very difficult for a MoE model to keep, for example, only the lower-body motion of one expert and replace the upper-body motion with that of another expert through a linear combination of the experts' full-body motions. With the introduction of \(\pi^{\text{meta}}\), we can replace \(\pi(a_{t}|s_{t},g_{t})\) in Eq. 7 with \(\pi(a_{t}|s_{t},g_{t},a_{t}^{\text{meta}})\), and perform composite motion learning with goal-directed control under our proposed multi-objective learning framework. We refer to Algorithm 1 for the outline of the proposed multi-objective learning framework with incremental learning. 
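The action addition of Eq. 10 reduces to shifting the Gaussian mean (a minimal numpy sketch; with no autograd involved, the stop-gradient on the meta action is implicit here):

```python
import numpy as np

def cooperative_action_mean(mu, w, a_meta):
    """Mean of the combined Gaussian policy in Eq. 10: the cooperative
    policy's own mean plus the weighted meta action."""
    return np.asarray(mu) + np.asarray(w) * np.asarray(a_meta)

# w near 1: rely on the meta policy for that DoF; w near 0: act alone.
mu     = np.array([0.1, 0.2])
w      = np.array([0.0, 1.0])
a_meta = np.array([0.5, 0.5])
print(cooperative_action_mean(mu, w, a_meta))  # [0.1 0.7]
```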
To train a composite policy completely from scratch without using incremental learning, we can simply ignore \(\pi^{\text{meta}}\) and use \(\pi(a_{t}|s_{t},g_{t})\) solely in Algorithm 1. ## 6. Experiments In this section, we experimentally evaluate our approach on multiple challenging composite motion learning tasks. We show that our approach can effectively let motor control policies learn composite motions from multiple reference motions directly, without manually generating any full-body motion as reference. Besides evaluating the imitation performance, we also apply our approach to several goal-directed control tasks combined with composite motion learning from unstructured reference data. The results demonstrate that our proposed approach can successfully tackle complex tasks, balancing the learning of multiple objectives involving both partial-body motion imitation and goal-directed control. Finally, we perform ablation studies on our proposed multi-objective learning framework and incremental learning scheme. ### Implementation Details We run physics-based simulations using IsaacGym (Makoviychuk et al., 2021), which supports simulation with a large number of instances simultaneously by leveraging the GPU. The simulated humanoid character has 15 body links and 28 DoFs, where the hands are fixed to the forearms and are uncontrollable. In the tasks involving a tennis player, we add 3 DoFs to the right wrist joint such that the character can control the racket more agilely, though the racket is fixed on the right hand. The simulation runs at 120Hz and the control policy at 30Hz. Differing from the previous works that employ a stable PD controller [12] for character control [13, 14, 15, 16, 17, 18, 19], we employ a normal, linear PD servo for faster simulation. We use PPO [11] as the base reinforcement learning algorithm for policy training and the Adam optimizer [10] to perform policy optimization.
To embed the character state \(\mathrm{s}_{t}\) and the discriminator observation \(\mathrm{o}_{t}^{i}\) sequentially, we employ a gated recurrent unit (GRU) [12] with a 256-dimension hidden state to process these temporal inputs. The embedded character state feature is concatenated with the dynamic goal state \(\mathrm{g}_{t}\) if goal-directed control is involved, and then passed through a multilayer perceptron with two fully connected (FC) layers. The control policy is constructed as Gaussian distributions with independent components. The output of the policy network includes the mean \(\mathbf{\mu}_{t}\) and standard deviation \(\mathbf{\sigma}_{t}\) parameters of the policy distribution, as well as a weight vector \(\mathrm{w}_{t}\) when incremental learning is exploited. The multiple critics in our multi-objective learning framework are modeled by a multi-head neural network. Similarly to the critic networks, we model a discriminator ensemble using a multi-head network. The outputs are averaged by Eq. 5 to produce the reward signal. All the network structures are shown in Fig. 3, in which we assume that there are \(K\) objectives in total. We refer to the Appendix in the supplementary material for the representation of \(\mathrm{g}_{t}\) in our designed goal-directed tasks, and all hyperparameters used for policy training. All the tested policies were trained on a machine equipped with an Nvidia V100 GPU. It typically takes about 1.5 hours to train a policy using a fixed budget of 20M samples (environment steps) for a pure composite motion imitation task. For complex tasks involving goal-directed control, it takes about 15 to 30 hours and requires about \(2\times 10^{8}\) to \(4\times 10^{8}\) samples to train a policy from scratch. By exploiting our incremental learning scheme to reuse a pre-trained meta policy, we can shorten the training time to about 30 minutes to 2 hours, depending on the difficulty of the tasks.
### Data Acquisition All the motion data used for training are obtained from the LAFAN1 dataset [11] and other commercial and publicly available motion capture datasets recorded at 30Hz. For single-clip imitation, we synthesize short reference motion clips 1-3 seconds long (cf. Table 1). For tasks with goal-directed control, we extract several collections of motions (cf. Table 2), each of which contains multiple clips of reference motions with lengths varying from about 15 to 70 seconds. The juggling motion involves a single trial of a subject performing juggling while standing on a skateboard, while the collection of tennis swing motions contains four trials of forehand swings captured from different subjects. We retarget the local joint positions from those motion data to our character model without extra manual processing. We demonstrate that policies trained with our approach can perform motion synthesis from unstructured data for goal-directed control, and can explore how to perform composite motions by combining the partial-body motions from the reference motions without needing any manual processing for motion blending. ### Imitation Performance In Fig. 4, we highlight motion pose snapshots captured from some of our trained policies for composite motion learning. Each composite motion is learned based on two reference motion clips, one for the upper body and the other one for the lower body. From top to bottom, the names of the corresponding motions are listed in Table 1. Overall, policies trained with our approach can perform very challenging composite motor skills by using the character's upper and lower body part groups at the same time. For example, in the motion combination of chest open and jumping jack (1st row), the control policy must keep the character's body balanced to perform the chest-open motion while jumping in the air, which is a pretty challenging task even for humans.
Similar challenges arise when doing squats with the chest open (3rd row) and lunges with waist twisting (4th row). Besides simply following the two partial-body reference motions at the same time, the control policies must master how the partial motions could be combined such that the full-body motion is physically plausible. In the 4th row, for example, it is impossible for the character to keep twisting its waist while doing lunges at quite different frequencies. Similarly, in the motion combination of punch and walk (6th row) and that of punch and run (7th row), the character's foot has to contact the ground first in order to perform the punch action with the torso leaning forward. The control policy, thereby, must know when the punch action is doable and arrange the motion combination by itself, rather than strictly following the reference motions. Our approach does not require the given reference motions to be perfectly synchronized. The control policies take the character state as input and perform composite motions accordingly. Furthermore, the proposed dynamic sampling rate (see Appendix) allows the control policy to adjust the motion speed within an acceptable range for better motion combining.

Figure 3. Network structures. \(\oplus\) denotes the concatenation operator and \(\oplus\) denotes the average operator.

To quantitatively evaluate the imitation performance, following previous literature [11, 14, 15], we leverage the technique of fast dynamic time warping (DTW) and measure the imitation error as follows: \[e_{t}=\frac{1}{N_{\text{link}}^{i}}\sum_{l=1}^{N_{\text{link}}^{i}}||p_{l}-\tilde {p}_{l}||, \tag{11}\] where \(N_{\text{link}}^{i}=|\{\tilde{p}_{l}^{i}\}|\) is the number of interesting body links in the \(i\)-th body part group, \(p_{l}\in\mathbb{R}^{3}\) is the position of the body link \(l\) in the world space at the time step \(t\), and \(\tilde{p}_{l}\) is the body link's position in the reference motion.
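Eq. 11 can be sketched as follows (a minimal numpy sketch; the DTW alignment of the two sequences is assumed to have been done beforehand):

```python
import numpy as np

def imitation_error(p, p_ref):
    """Eq. 11: mean Euclidean distance between the simulated and reference
    world-space positions of the body links in one group (N_link x 3)."""
    diff = np.asarray(p, dtype=float) - np.asarray(p_ref, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

p     = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]  # simulated link positions
p_ref = [[0.0, 0.0, 1.0], [1.0, 0.0, 2.0]]  # reference link positions
print(imitation_error(p, p_ref))  # (1 + 2) / 2 = 1.5
```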
The evaluation results are shown in Table 1. Our approach can imitate the reference motions closely and balance the imitation of the two partial-body motions well. As can be seen, there is no big gap between the two imitation errors in a given composite motion combination, which means that policies trained with our approach do not just follow only one reference motion and ignore the other one. In contrast, without using our proposed multi-objective learning framework, the policy could prefer to track only one reference motion that is easy to follow. We refer to Section 6.6 for the related ablation study. ### Goal-Directed Motion Synthesis To test our approach with more complex tasks involving both composite motion learning and goal-directed control, we designed five goal-directed tasks, as shown in Figs. 5 and 6. In the _Target Heading_ and _Target Location_ tasks illustrated in Figs. 5a and 5b, the character is asked to respectively go along a target heading direction and toward a target location at a preferred speed. Besides the goal-directed objective, two motion imitation objectives are employed: one is for the lower-body and the other one is for the upper body. Differing from the examples shown in Fig. 4 where the walking and running motions are just single, short clips containing only one gait cycle, here we use a collection of unstructured walking and running motions as the reference for the lower body, as listed in Table 2. In the three examples shown in Fig. 5a, the upper body motions are learned from single reference motion clips, which are chest open, jumping jack, and punch respectively, as depicted by the small snapshots in the figure. In the examples shown in Fig. 
5b, we use the motion collection of tennis footwork as the reference for the control policy to learn how to hold the racket.

\begin{table} \begin{tabular}{r|c c} \hline \hline **Composite Motion** & **Length** [s] & **Imitation Error** [m] \\ \hline \hline Chest Open & 2.10 & \(0.11\pm 0.02\) \\ Front Jumping Jack (lower) & 1.80 & \(0.16\pm 0.03\) \\ \hline Front Jumping Jack (upper) & 1.80 & \(0.30\pm 0.03\) \\ Walk In-place & 2.10 & \(0.29\pm 0.02\) \\ \hline Chest Open & 2.10 & \(0.10\pm 0.01\) \\ Squat & 1.67 & \(0.09\pm 0.01\) \\ \hline Waist Twist & 3.37 & \(0.15\pm 0.04\) \\ Leg Lunge & 3.67 & \(0.13\pm 0.02\) \\ \hline Hand Waving & 1.80 & \(0.06\pm 0.03\) \\ Walk & 1.10 & \(0.09\pm 0.02\) \\ \hline Punch & 1.30 & \(0.11\pm 0.02\) \\ Walk & 1.10 & \(0.10\pm 0.01\) \\ \hline Punch & 1.30 & \(0.17\pm 0.03\) \\ Run & 0.76 & \(0.14\pm 0.01\) \\ \hline \hline \end{tabular} \end{table} Table 1. Imitation performance when learning composite motions from single clips of reference motions.

Figure 4. Composite motions learned from multiple single-clip reference motions. The two snapshots shown on the left side of each row are the reference motions for the upper and lower body respectively.

This task is relatively harder, as the reference motions for both the upper and lower body are unstructured. While following the reference motions closely, the control policies trained with our approach can effectively coordinate the character's upper and lower body poses to perform the composite motions during goal-steering navigation. In the task of _Tennis Swing_, the character is expected to hit the ball successfully with a forehand. The provided collection of tennis swing motions contains four trials, where the subject performs forehand swings while standing still. The tennis ball in our implementation is generated randomly in a small region near the character.
As such, the control policy has to rely on the lower-body footwork motions to properly adjust the pose and position of the character relative to the tennis ball, while it relies on the upper body swing motions to swing effectively and on time. We note that the goal-directed reward in our design only evaluates the effectiveness of hitting, based on the ball's outgoing speed and destination. The motion otherwise is decided completely by the control policy, which leverages two discriminator ensembles to perform imitation learning for the upper and lower body respectively. The _Tennis Swing_ task is challenging: while it is easy for the controlled character to simply hit the ball, it is asked to do so by combining the motions from the reference collection (tennis swing for the upper body and tennis footwork for the lower body). The policy needs some exploration before finding a way to utilize poses from the reference motions to perform swings. In this process, imitation learning would fail if the policy simply tried to pursue a higher reward by hitting the ball. However, when the policy is trained using our proposed multi-objective learning framework, it can balance the imitation and goal-directed objectives, and perform forehand swings in the style of the reference motions. Additionally, while we provide only a small set of upper and lower body motions as the reference (cf. Table 2), the control policy successfully learns how to combine the motions automatically to finish the task. In contrast, if we just leverage full-body reference motions, extra work is needed to generate various motions for the policy to learn. In addition, there are not enough demonstrations for the policy to perform tennis swings correctly in a human-like style by utilizing, for example, only standing swing motions without footwork.

\begin{table} \begin{tabular}{r|c c} \hline \hline **Motion Collection** & **\# of Clips** & **Length [s]** \\ \hline \hline Crouch & 4 & 88.87 \\ Walk & 8 & 334.07 \\ Run & 4 & 282.87 \\ Tennis Footwork & 2 & 31.67 \\ Tennis Swing & 4 & 13.33 \\ Aiming & 2 & 48.77 \\ Juggling & 1 & 24.63 \\ \hline \hline \end{tabular} \end{table} Table 2: Motion collections used for goal-directed control.

Figure 5: Motion synthesis with composite motion learning and goal-directed control. Pose snapshots shown in the small windows are captured from the reference motions.

Figure 4(b) shows another challenging composite task: _Target Location_ while _Juggling_, where the character needs to juggle three balls while walking to the target location. This composite task involves four objectives: two imitation objectives and two goal-directed tasks of juggling and locomotion. In our experiment, when a ball is relatively close to a hand, it is assumed to be caught by and attached to that hand. The ball is automatically detached from the hand at a fixed interval of 20 frames. In order to perform juggling successfully and successively, after a hand releases its ball, it must catch in time a flying target ball that was thrown by the other hand. This task is very challenging, as the control policy must explore how to perform ball throwing and catching in concert with the location-targeting task. Besides the difficulty of throwing and catching balls, the juggling reference motion involves a subject balancing on a skateboard with the body swaying from side to side 1. This increases the difficulty of composite motion learning to generate normal walking poses. Differing from the other examples that use a lower and upper-body split, here we decouple the body parts into two groups, where one group consists of the character's arms to imitate the juggling motion and the other group includes the rest of the body parts (torso, head, pelvis, and legs), taking the collection of walking motions as reference data.
In such a way, our approach can effectively eliminate the body swings in the juggling reference motion, and generate composite motions with the upper body moving naturally during goal-steering navigation.

Footnote 1: FreeMoCap Project: [https://github.com/freemocap/freemocap](https://github.com/freemocap/freemocap)

The other goal-directed task explored in this study is _Aiming_, in which the character holds a toy weapon in its right hand and is expected to aim it toward a specific direction. In our experiments, this task is designed mainly to demonstrate the effectiveness of our proposed incremental learning scheme, which will be elaborated in the next section. We refer to the Appendix for the details of the setup of all of our goal-directed tasks, and to the supplementary video for related animation results. ### Incremental Learning In Fig. 6, we show the tasks used to test our proposed incremental learning scheme. The first row depicts three meta policies of locomotion, which are trained for the _Target Location_ task completely from scratch using our proposed multi-objective learning framework. In contrast to previous examples, there is only one imitation objective, concerning the full body, during training here, as shown by the snapshots in the top-left corner of the figure. In the 2nd row of the figure, we show the cooperative policies that are trained by incremental learning while reusing the pre-trained meta policies. In addition to the _Target Location_ task, a new goal-directed task of _Aiming_ is introduced during training of the cooperative policies. The controlled character in this task needs to adjust its right forearm and let the toy pistol aim toward a goal direction specified dynamically.
The goal of this experiment is to demonstrate that the cooperative policies can properly exploit the meta policies to perform styled locomotion behaviors while quickly learning upper-body motions from the newly provided aiming reference motions, which also involve a new goal-directed task that is never seen by the meta policies.

Fig. 6. Demonstration of incremental learning tasks, where goal-directed aiming motions are added to various locomotion behaviors from the meta policies.

In Fig. 7, we visualize the weight vector \(\text{w}_{t}\) (cf. Eq. 10) for each DoF by coloring the associated body link. The first three examples show the results obtained when we add the aiming motions to the meta policies of locomotion. The fourth example shows the corresponding result of adding the crouch motion to the meta policy of aiming and walking. As opposed to the previous meta policies, this meta policy has four objectives: two imitation objectives for the upper (aiming) and lower (walking) body respectively, one _Target Location_ task and one _Aiming_ task. As shown in the figure, in the three Aiming+Locomotion tasks where the meta policies are pre-trained for locomotion, the cooperative policies rely more on the meta policy for lower-body actions and control the upper-body parts for aiming primarily by themselves. In contrast, in Crouch+AimingWalk, we want the cooperative policy to replace the walking motions from the meta policy with crouching while keeping the upper-body motion of aiming. Here, as can be seen in the fourth case of the figure, the cooperative policy exploits the meta policy to perform aiming actions but performs crouching mainly on its own. In Fig. 8, we also plot the distribution of weights based on the collection of 5,000 consecutive frames from the Aiming+Crouch and Crouch+AimingWalk tasks. The statistical results are consistent with the above studied cases. As an additional experiment, in Fig.
9, we show that control policies trained with our approach can support the interactive control scheme proposed by Xu and Karamouzas (2021). In this experiment, we let the character perform a variety of locomotion styles by switching between the three trained Aiming+Locomotion policies interactively in response to external control signals provided by the user, and navigate to and aim at the target directions specified by the user dynamically.

Fig. 7. Visualization of the incremental learning weight \(\mathrm{w}_{t}\) (cf. Eq. 10). The azure character shows the behavior from the meta policy. The colored character is controlled by the cooperative policy. The body link color identifies the weight for the associated DoF. The redder color represents higher weights, which means that the cooperative policy relies more on the meta policy to control the corresponding body parts of the character. The bluer color represents lower weights, which means that the cooperative policy mainly relies on itself to control the related body parts.

Fig. 8. Distributions of the incremental learning weights \(\mathrm{w}_{t}\) for the tasks of Aiming+Crouch and Crouch+AimingWalk (cf. Fig. 7). The x-axis depicts the learned weights and the y-axis shows the corresponding distribution density, normalized by the total number of samples per body part grouping. The color saturation binds the weight range for higher distribution density, with brighter colors highlighting weights greater than 0.5. In the first task, the lower body is mainly controlled by the meta Crouch policy (high weights), while in the second task the AimingWalk meta policy mainly influences the upper body.

### Ablation Studies We refer to the previous literature of ICCGAN (Xu and Karamouzas, 2021) for ablation studies with respect to each component in the employed GAN-like structure for motion imitation, and to (Peng et al., 2021; Xu and Karamouzas, 2021) for related analyses on the robustness of control policies trained using GAN-like structures combined with reinforcement learning. Here, we focus on the studies of the proposed multi-objective learning framework and incremental learning scheme. In Fig. 10, we compare the performance of our proposed multi-objective (MO) learning framework to two baselines using three composite motion learning tasks from Section 4.1. The first baseline leverages our MO learning framework but does not make use of _PopArt_ to normalize the value targets of each critic (w/o PopArt). The second baseline simply adds the rewards from the two discriminators together and models the composite motion learning task as a typical reinforcement learning problem (w/o MO). Both baselines are trained with our motion decoupling scheme described in Section 4.1 and simultaneously leverage two discriminators, one for the upper-body motion and one for the lower body. As can be seen from the figure, it is hard for "w/o MO" to balance the learning of the two reference motions. For example, in the ChestOpen+JumpingJack task, as the upper-body (ChestOpen) imitation error goes down, the lower-body (JumpingJack) error increases; in the Punch+Run task, the policy almost gives up on learning how to run, focusing on punching without too much success. In contrast, when leveraging our MO framework either with or without _PopArt_, the imitation errors of the upper and lower body show similar and stable trends, and keep decreasing as the training goes on. Additionally, the introduction of _PopArt_ typically facilitates better training, allowing for faster convergence speed, lower imitation error, and more robust training achieving similar performance across different trials. Figure 11 shows the performance of our MO approach with and without exploiting the proposed incremental learning scheme. We also provide comparisons with the "w/o MO" baseline.
The tested tasks have four objectives, as described in Section 6.5: two imitation objectives for the upper and lower body respectively, one _Target Location_ task for the locomotion and one _Aiming_ task. In the cases using incremental learning, we employed a pre-trained locomotion policy as the meta one. Consistent with the previous ablation study, we can see that the "w/o MO" baseline struggles to balance the different objective terms. Here, the character quickly achieves a high reward for the goal-directed _Aiming_ task (3rd row) but fails to complete other objectives, and in particular to account for the motion style provided by the imitation reward terms. For example, the controlled character holds the toy pistol in an unnatural way compared to the demonstrations in the provided reference motions, as indicated by the high imitation error (1st row). While such issues are successfully resolved by our proposed MO framework, learning in a non-incremental way leads to sample-inefficient training as compared to learning by leveraging a meta policy.

Fig. 9: Interactive control of switching between walking, crouching and running for location targeting while aiming.

Fig. 10: Learning performance on tasks of composite motion learning from two single-clip reference motions, which are illustrated in Fig. 4. "MO" stands for the proposed multi-objective learning framework detailed in Section 4.3. Colored regions denote mean values \(\pm\) one standard deviation based on 10 trials.

Fig. 11: Learning performance on three composite tasks where each task combines learning from two partial motions while accomplishing two goal objectives. Multi-objective learning in an incremental manner leads to sample-efficient training allowing for high-fidelity composite motion synthesis with goal-directed control. Colored regions denote mean values \(\pm\) one standard deviation based on 10 trials.
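The per-DoF incremental learning weights \(\mathrm{w}_{t}\) visualized in Figs. 7 and 8 decide how much each degree of freedom follows the frozen meta policy versus the cooperative policy. The combination can be sketched as a per-DoF blend; note that this linear form and all names below are our illustration, and the exact expression in Eq. 10 may differ:

```python
import numpy as np

def blend_per_dof(w, meta_action, coop_action):
    """Combine the meta policy's action with the cooperative policy's
    action per degree of freedom. w[i] close to 1 means DoF i follows the
    meta policy (e.g., the lower body in Aiming+Crouch); w[i] close to 0
    means DoF i is driven by the cooperative policy itself."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, 1.0)
    return w * meta_action + (1.0 - w) * coop_action
```

Because the weights are per DoF, one body-part grouping can lean on the meta policy while another is learned almost from scratch, matching the bimodal weight distributions reported in Fig. 8.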
Besides slow convergence, non-incremental training can be time-consuming for challenging multi-objective tasks. For example, in the Aiming+Run task, while the case with incremental learning only needs 1.5 hours to finish training using about 20 million samples, the non-incremental cases need about 20 more hours of training and consume about 300 million more samples to achieve similar performance.

## 7. Limitations and Future Work

We present a technique for training composite-motion controllers using a multi-objective learning framework that is capable of combining multiple reference examples and task goals to control a physically simulated character. We demonstrate that our approach can generalize to a large number of examples based on the availability of reference data. Likewise, we show its ability to accomplish simultaneous goal-driven tasks such as aiming at specific targets and moving to a target location with different locomotion styles. Furthermore, we can interactively control such a character's actions, pushing the boundary of what is possible for physics-based characters to date. Of course, there is still more to explore in this space. Our system is currently not well-equipped to handle behaviors that include multiple phases, as the imitation is not phase-locked in any fashion and our discriminators do not distinguish between different stages of an activity. Exploring the potential to add a state machine with state transitions could aid in this capacity (Starke et al., 2019). Another present shortcoming of the approach is that we do not account for variation across the humans that recorded the motion clips. This implies that we are introducing bias in the imitation process that may degrade the final quality of the animation. As is, the system is able to make adjustments automatically as needed based on the physical characteristics of the behavior, but it cannot distinguish errors that are more stylistic.
In its current form, our system cannot create new composite activities without performing additional training. A possible direction for future work is to sidestep this limitation by directly combining preexisting policies, greatly improving the scalability of trained controllers; that is, to train two (or more) policies independently and combine them at runtime to create a composite motion. Finally, in human motion, composite behaviors go beyond an anticipated split, e.g., the lower and upper body, which is one of the modest underlying assumptions in our current implementation. Instead, humans may enlist body parts and release them fluidly. For example, a well-trained martial artist changes the use of appendages quickly in fighting sequences. We wish to explore this direction in future investigations and believe that our proposed multi-objective learning framework can provide the foundation for such future endeavors. Although we employed an upper and lower body split in most of our experiments, nothing is tied to this particular body decoupling scheme except that it is a practical, general choice for deploying the limbs of the whole body. Currently, as long as the subtasks are compatible, our system is capable of combining motions along other body splits. For instance, in the _Juggling+TargetLocation_ example discussed in Section 6.4, the trained policy controls the arms for juggling and the rest of the body for walking. Our approach may fail if, for example, the lower limbs are separated, due to the requirements of physical balance. As an example, in Fig. 12, we show a failure case where the body is bisected into a left/right split and asked to imitate walking and jumping motions respectively. Such a composite motion is not well-defined, even for humans.
Though it does not fall down, the simulated character cannot imitate the two motions accurately, and instead performs an in-between motion in which it neither jumps up nor walks in the expected fashion. In Fig. 12, we also show another failure case where running reference motions with an average speed of around \(3.5m/s\) are provided for the _Juggling+TargetLocation_ task. Given the difficulty of juggling while moving at this higher speed, this example is significantly more challenging than the one shown in Fig. 5. Even though we are able to synthesize the composite motions, the simulated character cannot juggle the balls successfully under these conditions. Currently, our approach cannot identify on its own whether a composite motion is compatible; instead, it relies on a human to combine behaviors with some domain knowledge about the affinity of the mixing and the feasibility of the associated goal-directed tasks. Automating this would be a great direction for future work.

## Acknowledgments

This work was supported by the National Science Foundation under Grant No. IIS-2047632 and by Roblox. We would like to thank Rokoko for providing mocap data for this project.

Figure 12. Failure case study. Top: The character's body is bisected into a left and right group, imitating walking and jumping respectively. Bottom: juggling while running.
2302.08374
Efficiency 360: Efficient Vision Transformers
Transformers are widely used for solving tasks in natural language processing, computer vision, speech, and music domains. In this paper, we talk about the efficiency of transformers in terms of memory (the number of parameters), computation cost (number of floating points operations), and performance of models, including accuracy, the robustness of the model, and fair \& bias-free features. We mainly discuss the vision transformer for the image classification task. Our contribution is to introduce an efficient 360 framework, which includes various aspects of the vision transformer, to make it more efficient for industrial applications. By considering those applications, we categorize them into multiple dimensions such as privacy, robustness, transparency, fairness, inclusiveness, continual learning, probabilistic models, approximation, computational complexity, and spectral complexity. We compare various vision transformer models based on their performance, the number of parameters, and the number of floating point operations (FLOPs) on multiple datasets.
Badri N. Patro, Vijay Srinivas Agneeswaran
2023-02-16T15:43:32Z
http://arxiv.org/abs/2302.08374v3
# Efficiency 360: Efficient Vision Transformers

###### Abstract

Transformers are widely used for solving tasks in natural language processing, computer vision, speech, and music domains. In this paper, we talk about the efficiency of transformers in terms of memory (the number of parameters), computation cost (number of floating-point operations), and performance of models, including accuracy, robustness of the model, and fair & bias-free features. We mainly discuss the vision transformer for the image classification task. Our contribution is to introduce an efficient 360 framework, which includes various aspects of the vision transformer, to make it more efficient for industrial applications. By considering those applications, we categorize them into multiple dimensions such as privacy, robustness, transparency, fairness, inclusiveness, continual learning, probabilistic models, approximation, computational complexity, and spectral complexity. We compare various vision transformer models based on their performance, the number of parameters, and the number of floating-point operations (FLOPs) on multiple datasets.

## 1 Introduction

Transformers such as Attention Is All You Need [23] and Bidirectional Encoder Representations from Transformers (BERT) [17] have recently become popular in the Machine Learning world for Natural Language Processing (NLP) tasks such as machine translation, text summarization, question answering, protein fold prediction, and even image processing tasks. The transformer-based ChatGPT [26] model and other large language models have garnered public attention in the last few weeks. They are used to assist humans in various ways, including answering questions, generating articles, and even as coding assistants, which is helpful for both industrial applications and academia. This work focuses on vision transformers and their application in various industrial dimensions.
A few dimensions that are important from an industry perspective on these advanced models include robustness, privacy, transparency, inclusiveness, and continual and distributed learning. In the literature, Tay et al. [14] have discussed efficient transformers primarily by considering natural-language-processing-based transformers. In this article, we present a survey of efficient transformers in the vision domain, which has a different characterization compared to the existing surveys. We include more recent work on advanced transformers (especially those published in 2021 and 2022) in our current survey. Interesting research directions open up as a result, which we discuss in later sections of this article. As we shall be discussing, the survey opens up research in inclusiveness and privacy. It also suggests that the advanced transformer models work on high-resolution data, opening up research in climate modeling and oceanography (wave-breaking kinds of applications). The other contribution of this paper is the Efficient 360 framework, which helps provide a holistic view of transformers across these dimensions.

Figure 1: Efficient Vision Transformers: Venn diagram of the efficient transformer models (Efficiency-360). This includes the robustness of a model, privacy of the model, bias and fairness, transparency, inclusiveness, efficient learning, probabilistic models, model approximations, the computational complexity of a model, and spectral complexity techniques.

In this work, we start our survey by considering a classification task: the transformer model needs to classify a given image into 1000 pre-defined classes. To benchmark model performance, the research community chooses the ImageNet-1K [4] dataset. We provide a sample example of a given image of a leopard and classify it into its class, as shown in Figure 2. We select DeiT-base [12] as a pre-trained transformer model and obtain its prediction index and probability scores.
We visualize the Grad-CAM-based explanation for the prediction. We start with a discussion of how efficient DeiT [12] is over ViT [13]. Here we define efficiency in terms of model parameters, computational cost (i.e., training and inference time), and performance (Top-1 accuracy). Along with this, we discuss various dimensions of efficient models such as bias-free models, robust features, privacy, transparency, efficient learning, easy deployability (inclusiveness), and efficient architectures (self-attention, MLP-Mixer, and spectral models). In Figure 1, we include the major categories of efficient transformers: computational complexity, spectral complexity, robustness, privacy, approximation, efficient learning, transparency, fairness, and inclusiveness. We review each in turn in the subsequent sections. Due to the page constraint, our high-resolution figures, plots, and detailed comparisons of state-of-the-art models are available on our GitHub page. Footnote 1: [https://github.com/badripatro/efficient360](https://github.com/badripatro/efficient360)

We compare transformer models based on position embedding, token embedding, network architecture, attention network, hierarchical type, and extra labels in Table 1. We check whether position embedding is used in the transformer, and the type of token embedding in the transformer model (overlapping or non-overlapping). We consider various types of core transformer architectures, such as multi-headed self-attention, MLP-Mixer-based architectures, and spectral gating networks (using learnable filter weights). We also consider the type of attention network, such as a linear layer or a convolutional neural network for computing QKV.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Model & Position Embedding & Attention Type & Network Architecture & Attention Network & Structure & Extra Label \\ \hline ViT [13] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ DeiT [12] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ TNT [14] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ T2T [21] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ Cross-ViT [4] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ PVT [15] & ✓ & Global & MSA & Linear & Pyramid & ✗ \\ Swin [11] & ✓ & Local & MSA & Linear & Pyramid & ✗ \\ Twin [16] & ✓ & LG & MSA & Linear & Pyramid & ✗ \\ CiT [17] & ✓ & Global & MSA & Linear & Isotropic & ✗ \\ CSwin [13] & ✓ & LG & MSA & Linear & Pyramid & ✗ \\ LV-ViT [18] & ✓ & Global & MSA & Linear & Isotropic & ✓ \\ WaveViT [19] & ✓ & Global & MSA & Linear & Isotropic & ✓ \\ CMT [12] & ✓ & LG & MSA & Linear & Isotropic & ✗ \\ RegionViT [4] & ✓ & LR & MSA & Linear & Pyramid & ✗ \\ CoaT [11] & ✓ & Local & MSA & Linear & Pyramid & ✗ \\ CvT [12] & ✗ & LG & MSA & Convolution & Pyramid & ✗ \\ UniFormer [10] & ✓ & LG & MHRA & Linear & Pyramid & ✗ \\ MLP-Mixer [10] & ✓ & Global* & Mixer & ✗ & Isotropic & ✗ \\ AS-MLP [14] & ✓ & Local* & Mixer & ✗ & Isotropic & ✗ \\ CycleMLP [4] & ✓ & Global* & Mixer & ✗ & Isotropic & ✗ \\ PoolFormer [11] & ✓ & Global* & Pooler & ✗ & Isotropic & ✗ \\ GFNet [15] & ✓ & Global* & SGN & ✗ & Isotropic & ✗ \\ GFNet-H [16] & ✓ & Global* & SGN & ✗ & Pyramid & ✗ \\ AFNO [1] & ✓ & Global* & SGN & ✗ & Isotropic & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: We compare transformer models based on position embedding, token embedding, network architecture, attention network, hierarchical type, and extra labels. SGN stands for Spectral Gating Network, and MSA stands for Multi-headed Self-Attention network. MHRA stands for Multi-Head Relation Aggregator. LG stands for Local + Global. LR stands for Local + Regional.
"*" indicates that it is not an attention-type network.

Figure 2: Inference on the efficient transformer model (DeiT).

We also note whether a hierarchical architecture is used in the transformer model, and compare with models that use extra data for training, such as Wave-ViT [22] and LV-ViT [14]. SGN stands for Spectral Gating Network, and MSA stands for Multi-headed Self-Attention network. Figure 3 shows the general architecture of transformer models in the vision domain. It contains two main parts: token mixing and channel mixing. The figure shows various efficient token-mixing techniques, such as attention-based token mixing, MLP-Mixer-based token mixing, pooling-based token mixing, and spectral mixing techniques. The channel-mixing techniques are similar for all types of transformers.

## 2 Efficient 360 Framework

The growing size of neural networks results in improved model performance. However, as model size increases, the memory consumption and computational requirements (for storing weights, activations, and gradients) during training also increase. The challenge, then, is: how do we design efficient transformer models for the visual domain?

### Computational complexity

These transformers address the \(O(N^{2})\) computational complexity of transformers in various ways. One of the critical issues in a transformer is its quadratic complexity with respect to the input sequence length, along dimensions relating to both computation and memory. The implication is that one has to compute the \(N\times N\) attention matrix for every layer and attention head. Various approaches have been tried to reduce this \(O(N^{2})\) complexity, including the use of caching architectures. The sparse transformer is one of the popular methods to address this complexity. Each output position computes weights from only a subset of input positions.
If the subset is of size \(\sqrt{N}\), then the complexity of the transformer reduces to \(O(N\sqrt{N})\), allowing it to handle long-range dependencies. We start with the Vision Transformer (ViT) [13] model, which treats an image as 16x16 words and classifies it into predefined categories. In the ViT model, each image is split into a sequence of tokens of fixed length, which is then passed through multiple transformer layers to capture the global relationships across tokens for the classification task. However, the performance of ViT is lower than that of CNNs on the ImageNet dataset when trained from scratch. Yuan et al. [20] identify the reasons for the performance degradation of the ViT model:

1. The importance of local structures, such as lines and edges among neighboring pixels, is not captured by the simple tokenization of the input image. This leads to low training sample efficiency.
2. The redundancy of the attention module of ViT leads to limited feature richness and limited training samples for a fixed computation budget.

To overcome these issues of the ViT model, Yuan et al. [20] proposed a Tokens-to-Token ViT method for training vision transformers from scratch on ImageNet. The proposed method includes a layer-wise tokens-to-token transformation that structures the image into tokens by recursively aggregating neighboring tokens into one token (Tokens-to-token). This captures the local structure represented by the surrounding tokens, and the token length is also reduced. The method also includes a deep-narrow structure as an efficient backbone for the vision transformer.

Figure 3: This figure shows a comparison of various transformer model architectures. In general, the transformer architecture divides into two parts: token mixing and channel mixing. We show various token-mixing model architectures. It is an extended version of the diagram with spectral mixing shown in PoolFormer [23].
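To make the token counts and attention costs above concrete, here is a small sketch we wrote for illustration of ViT-style patch tokenization and the resulting number of scored query-key pairs (function names are ours; the numbers follow the 16x16-patch setup described above):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an H x W x C image into non-overlapping flattened patch
    tokens, the '16x16 words' of ViT. Returns (num_tokens, patch*patch*C)."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    t = image.reshape(h // patch, patch, w // patch, patch, c)
    return t.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

def attention_pairs(n, sparse=False):
    """Scored (query, key) pairs per head per layer: full self-attention
    needs N*N; a sparse pattern in which each query attends to a subset of
    size sqrt(N) needs only N*sqrt(N)."""
    return n * (int(np.sqrt(n)) if sparse else n)
```

For a 224x224x3 image, `patchify` yields 196 tokens of dimension 768; full attention then scores 38,416 pairs per layer and head, versus 2,744 for the \(\sqrt{N}\) sparse pattern.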
Transformer iN Transformer (TNT) [1] identifies an issue with local patches in capturing color information and complex object details in the vision transformer. The authors propose the TNT method, which considers a 16x16 patch as a visual sentence, further divided into small 4x4 patches as visual words. This boosts transformer performance by 1.7% compared to the state-of-the-art method. Touvron et al. [2021] proposed an efficient transformer model based on a distillation technique (DeiT). It uses a teacher-student strategy that relies on a distillation token to ensure that the student learns from the teacher through attention. Bao et al. [2021] have proposed a masked image modeling task for pre-training vision transformers. The authors propose a self-supervised vision representation model, Bidirectional Encoder representation from Image Transformers (BEiT), which follows the BERT [17] method developed for the natural language processing area. In this method, each image is considered from two views: one of image patches of size 16 x 16 pixels, and the other of discrete visual tokens. The original image is tokenized into visual tokens, with some image patches randomly masked, and then fed to the backbone pre-trained transformer. After training, the BEiT model can be fine-tuned for downstream tasks. Touvron et al. [2021] have studied architectural optimization in the image transformer model. The authors build a Class-Attention image Transformer (CaiT) and optimize a deeper transformer network for the image classification task. The CaiT method has two main contributions: one is LayerScale, the multiplication of the output of each residual block by a diagonal matrix, and the second is class attention, a set of layers that compiles a collection of patch embeddings into a class embedding fed to the linear classifier. Chen et al.
have proposed the transformer model Cross-ViT [3], which combines image patches (i.e., tokens in a transformer) of different sizes to produce strong feature representations using dual-branch transformers. One branch handles small patch tokens and the other handles large patch tokens, with the two branches having different computational complexity. A single token from each branch serves as a query to exchange information with the other branch through an effective token-fusing module based on cross-attention. It is efficient in terms of the number of FLOPS and model parameters. Wang et al. (2021) have proposed the Pyramid Vision Transformer (PVT) for dense prediction without convolutions. Vision transformers encounter difficulties when ported to dense prediction tasks, and PVT overcomes this issue. PVT is helpful for pixel-level dense prediction without convolution and non-maximal suppression, such as in object detection methods. It makes transformers easy to port by using progressive shrinking pyramids and spatial-reduction attention. Finally, PVT is evaluated on various tasks such as image classification, object detection, and instance and semantic segmentation. Liu et al. (2021) have discussed the issue of adapting transformers from the language domain to the visual domain, which involves a large variance of visual entities and high-resolution pixels of images compared to words in text. To address this issue, the authors proposed the Swin Transformer [13], a hierarchical transformer whose representation is computed using shifted windows. This technique handles non-overlapping local windows of self-attention more efficiently. Transformer models provide high representation power compared to CNN-based models, but they are not good at dense prediction due to excessive memory and computational cost, and their features can be influenced by irrelevant parts of the image.
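The window-restricted self-attention used by Swin, described above, can be sketched as follows (a simplification we wrote for illustration: single head, no QKV projections, and no window shifting; function names are ours):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def window_self_attention(x, window):
    """Self-attention computed independently within non-overlapping
    windows of `window` tokens, so each score matrix is window x window
    instead of one global N x N matrix. x: (tokens, dim)."""
    n, d = x.shape
    assert n % window == 0, "token count must be divisible by window size"
    out = np.empty_like(x)
    for start in range(0, n, window):
        q = k = v = x[start:start + window]
        scores = softmax(q @ k.T / np.sqrt(d))  # per-window attention
        out[start:start + window] = scores @ v
    return out
```

Shifting the window partition between consecutive layers, as Swin does, restores cross-window information flow that this per-window computation alone would lose.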
The sparse attention methods proposed in the PVT [21] and Swin [13] transformers avoid these issues, but they are data-agnostic and limit long-range relationships. To solve this, a deformable self-attention method is proposed, where the positions of the key and value pairs are selected in a data-dependent way; the technique is called the Deformable Attention Transformer (DAT) [14]. Chu et al. (2021) have discussed the importance of spatial attention for the transformer's performance on various tasks. The authors proposed two simple and efficient architectures, Twins-PCPVT and Twins-SVT. This paper uses a separable depth-wise convolution attention mechanism known as spatially separable self-attention (SSSA). SSSA uses two types of attention operations: locally grouped self-attention (LSA) and globally sub-sampled attention (GSA). LSA deals with fine-grained and short-distance information, while GSA deals with long-distance sequences and global information. The second proposed method, Twins-SVT, uses LSA and GSA with matrix multiplication. The authors compare Twins-PCPVT with the similar architecture PVT [21], and Twins-SVT with the similar architecture Swin [13] transformer. This makes them more efficient in terms of performance.

Figure 4: This figure shows the performance of various models across different model architectures like Tiny (T), Small (S), Base (B), and Large (L). This plot shows the variation of top-1 accuracy based on the various architectures like T, S, B, and L. We plot the number of parameters (M) for each architecture vs. its top-1 performance.

CvT [21] is an efficient transformer model which introduces convolutional token embedding and a convolutional transformer block into the ViT architecture. Convolutional neural networks bring shift, scale, and distortion invariance to the ViT architecture, which maintains the merits of the transformer architecture: dynamic attention, global context, and better generalization.
CvT gets all the benefits of CNNs, like a local receptive field, shared weights, and spatial sub-sampling, while keeping all the advantages of transformer models. It improves performance compared to the CNN-based model ResNet [14] and the transformer-based models ViT [15] and DeiT [20]. CvT is also more efficient regarding the number of FLOPS and parameters. We have analyzed an architecture-wise comparison of the CvT, DeiT, PVT, TNT, and T2T transformer models, as shown in Figure 4. Xu et al. have proposed the transformer model CoaT [23] for image classification tasks, trained with co-scale and conv-attention mechanisms. The co-scale mechanism allows representations learned at different scales to communicate effectively with each other. The conv-attention mechanism realizes relative position embedding with convolutions in the factorized attention module, which is computationally efficient. Guo et al. have proposed the CMT [1] transformer based on a hybrid network, which combines the advantage of the transformer in capturing long-range dependencies with that of the convolutional neural network in extracting local information. It is efficient in the number of FLOPS and the network's performance. The MLP-Mixer [16] model uses multi-layer perceptrons (MLPs) to mix input tokens without using CNNs or a self-attention network. It uses two types of layers: one MLP layer that mixes features across the spatial locations of image patches (token mixing), and another that mixes features per location (channel mixing). The MLP-Mixer models achieve competitive scores on image classification benchmarks. Lian et al. proposed the axially shifted MLP (AS-MLP) network [17] for vision tasks. AS-MLP pays more attention to local feature interaction by axially shifting channels of the feature maps. It captures local dependency through the flow of information in axial directions, as in CNNs. MLP-Mixer [16] and ResMLP [20] flatten the input image patches and feed them into the transformer encoder to mix the input tokens linearly using MLP networks.
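A minimal sketch of the token-/channel-mixing idea behind MLP-Mixer (our simplification: single linear maps stand in for the two-layer MLPs, and LayerNorm and GELU are omitted):

```python
import numpy as np

def mixer_block(x, w_token, w_channel):
    """Simplified MLP-Mixer block. x: (tokens, channels).
    Token mixing applies a weight matrix along the token axis (the same
    mapping for every channel); channel mixing applies one along the
    channel axis (the same mapping for every token). Residual connections
    follow the original architecture."""
    x = x + w_token @ x       # token mixing: (T, T) @ (T, C) -> (T, C)
    x = x + x @ w_channel     # channel mixing: (T, C) @ (C, C) -> (T, C)
    return x
```

With zero weight matrices the block reduces to the identity, which makes the residual structure easy to check; in the real model both mixings are learned two-layer MLPs.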
It is hard to capture spatial information in the image this way. To avoid this issue, Guo et al. have proposed the Hire-MLP [1] network, an MLP architecture with hierarchical rearrangements at two levels: the inner-region rearrangement captures local information inside spatial regions, and the cross-region rearrangement enables communication between regions. It captures global context by circularly shifting all tokens along the spatial direction. The MLP-Mixer [16], ResMLP [20] and gMLP [18] architectures highly depend on the image size. These MLP models have quadratic \(O(N^{2})\) computational complexity, which is infeasible for object detection and semantic segmentation tasks. To solve this issue, Chen et al. [2] have proposed the CycleMLP network, which can cope with variable image sizes and provides complexity linear in the image size using local windows. Recently, MLP-based architectures built on fully connected layers have achieved performance competitive with CNNs and transformers.

Figure 5: This figure shows the performance of various state-of-the-art vision transformer models across the number of parameters. We plot the number of parameters (M) of different transformer models vs. their top-1 performance on the ImageNet-1K dataset. From this figure, we conclude that the models towards the **top-left** are the most efficient. For example, Wave-ViT-L is the most efficient model.

\begin{table} \begin{tabular}{l|l l l l l l l} \hline Image & Method & Network & \#Param & FLOPS & ImageNet & Real top-1 (\%) & \begin{tabular}{l} Imagenet- \\ v2 \\ \end{tabular} \\ \hline \hline \multirow{3}{*}{\(224^{2}\)} & ResNet-50 [16] & C & 25 & 4.1 & 76.2 & 82.5 & 63.3 \\ & ResNet-101 [16] & C & 45 & 7.9 & 77.4 & 83.7 & 65.7 \\ & ResNet-152 [16] & C & 60 & 11 & **78.3** & **84.1** & **67.0** \\ \hline \multirow{3}{*}{\(224^{2}\)} & RedNet-50 [16] & I & 15.5 & - & 78.4 & - & - \\ & RedNet-101 [17] & I & 25.6 & - & 79.1 & - & - \\ & RedNet-152 [17] & I & 34 & - & **79.3** & - & - \\ \hline \multirow{3}{*}{\(224^{2}\)} & DeiT-S [19] & T & 22 & 4.6 & 79.8 & 85.7 & 68.5 \\ & DeiT-B [19] & T & 86 & 17.6 & 81.8 & **86.7** & **71.5** \\ \cline{2-7} & PVT-Small [20] & T & 25 & 3.8 & 79.8 & - & - \\ & PVT-Medium [20] & T & 44 & 6.7 & 81.2 & - & - \\ & PVT-Large [20] & T & 61 & 9.8 & 81.7 & - & - \\ \cline{2-7} & Cross-ViT-S [18] & T & 26.7 & 5.6 & 81.0 & - & - \\ & Cross-ViT-B [18] & T & 104.7 & 21.2 & 82.2 & - & - \\ \cline{2-7} & T2T-ViT-14 [21] & T & 22 & 6.1 & 80.7 & - & - \\ & T2T-ViT-19 [20] & T & 39 & 9.8 & 81.4 & - & - \\ & T2T-ViT-24 [20] & T & 64 & 15.0 & 82.2 & - & - \\ \cline{2-7} & TNT-S [19] & T & 24 & 5.2 & 81.3 & - & - \\ & TNT-B [19] & T & 66 & 14.1 & 82.8 & - & - \\ \cline{2-7} & CiT-Ti [19] & T & 6 & - & 75.3 & - & - \\ & CiT-S [19] & T & 22 & - & 82.0 & - & - \\ \cline{2-7} & Visformer [18] & T & 40.2 & 4.9 & 82.3 & - & - \\ & Swin-S [19] & T & 50 & 8.7 & 83.2 & - & - \\ & Swin-B [19] & T & 88 & 15.4 & 83.5 & - & - \\ \hline LLT-Ti [20] & T & 19 & 3.6 & 81.1 & 86.6 & 70.4 \\ LLT-S [20] & T & 27 & 4.1 & 81.5 & 86.4 & 70.4 \\ LLT-M [20] & T & 48 & 8.6 & 83.0 & 87.3 & 72.0 \\ LLT-B [20] & T & 86 & 15.0 & 83.4 & 87.6 & 72.8 \\ \hline LLTv2-S [21] & T & 28 & 3.7 & 82.0 & - & - \\ LLTv2-M [20] & T & 49 & 7.5 & 83.3 & - & - \\ LLTv2-B [22] & T & 87 & 13.2 & 83.6 & - & - \\ \hline Twins-PCPVT-B [21] & T & 43.8 & 6.7 & 82.7 & - & - \\ Twins-SVT-B [21] & T & 56 & 8.6 & 83.2 & - & - \\ Twins-PCPVT-I [21] & T & 60.9 & 9.8 & 83.1 & - & - \\ Twins-SVT-L [21] & T & 99.2 & 15.1 & 83.7 & - & - \\ \hline ViL-Small [20] & T & 24.6 & 4.9 & 82.4 & - & - \\ ViL-Medium [20] & T & 39.7 & 8.7 & 83.5 & - & - \\ ViL-Base [20] & T & 55.7 & 13.4 & 83.7 & - & - \\ \hline RegionViT-Ti+ [18] & T & 14.3 & 2.7 & 81.5 & - & - \\ RegionViT-M+ [18] & T & 42.0 & 7.9 & 83.4 & - & - \\ RegionViT-B [18] & T & 72.7 & 13.0 & 83.2 & - & - \\ RegionViT-B+ [18] & T & 73.8 & 13.6 & 83.8 & - & - \\ \hline DAT-T [21] & T & 29 & 4.6 & 82.0 & - & - \\ DAT-S [21] & T & 50 & 9.0 & 83.7 & - & - \\ DAT-B [21] & T & 88 & 15.8 & 84.0 & - & - \\ \hline SE-CoTNetD-50 [17] & T & 23.1 & 4.1 & 81.6 & - & - \\ SE-CoTNetD-101 [17] & T & 40.9 & 8.5 & 83.2 & - & - \\ SE-CoTNetD-152 [17] & T & 55.8 & 17.0 & 84.0 & - & - \\ \hline ViL-LS-Medium [20] & T & 39.8 & 8.7 & 83.8 & - & - \\ ViL-LS-Base [20] & T & 55.8 & 13.4 & 84.1 & - & - \\ \hline UniNet-B1 [19] & T & 11.5 & 1.1 & 80.8 & - & - \\ \hline CoaT-Lite-Mini [21] & CT & 10 & 6.8 & 80.8 & - & - \\ CoaT-Lite-Small [21] & CT & 20 & 4.0 & 81.9 & - & - \\ \hline CvT-13 [20] & CT & 20 & 4.5 & 81.6 & 86.7 & 70.4 \\ CvT-21 [21] & CT & 32 & 7.1 & 82.5 & 87.2 & **71.3** \\ \hline LV-ViT-\(S^{*}\)[22] & CT & 26 & 6.6 & 83.3 & 88.1 & - \\ LV-ViT-\(M^{*}\)[20] & CT & 56 & 16.0 & 84.1 & **88.4** & - \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the various models on ImageNet-1K [19], ImageNet Real [2] and ImageNet V2 matched frequency [18] with image size \(224^{2}\). We report these numbers from the CvT [21] paper. C stands for Convolution, I stands for Involution, T stands for Transformer, and CT stands for Convolution Transformers. \(*\) means extra data was used while training the model.
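Several of the models compared in Table 1 (e.g., GFNet and AFNO) replace attention with a Spectral Gating Network. A minimal sketch of that idea (our simplification: a 1D FFT over the token axis with an elementwise complex filter, whereas GFNet operates on the 2D spatial token grid; names are ours):

```python
import numpy as np

def spectral_gating(x, filt):
    """SGN-style token mixing: transform tokens to the frequency domain,
    apply a learnable elementwise (complex) filter, and transform back.
    x: (tokens, channels) real array; filt: same shape, complex.
    Mixing is global (every output token depends on every input token),
    yet the cost is O(N log N) instead of attention's O(N^2)."""
    freq = np.fft.fft(x, axis=0)            # frequency view of the tokens
    return np.fft.ifft(freq * filt, axis=0).real
```

An all-ones filter leaves the input unchanged, so the learnable filter can be read as a frequency-domain reweighting of token interactions.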
These models split an image into multiple tokens (patches) and aggregate them directly with fixed weights, neglecting the varying semantic information of tokens from different images. To handle this issue, Tang et al. have proposed the Wave-MLP [14] method, in which each token has two parts: an amplitude part and a phase part. The amplitude is the original feature, and the phase is a complex value that changes according to the image's semantic content, modulating the relation between tokens and the fixed weights. Recently, the attention-based module of the transformer has been replaced by spatial MLPs, as shown in the figure-3. These MLP-based models perform well compared to attention-based models. Generally, a token-mixing module specific to the transformer architecture is considered necessary for the model's performance. Yu et al. have proposed a simple method, known as PoolFormer, that replaces the attention module of the transformer with a simple spatial pooling operator to conduct only basic token mixing. TopFormer: The computational cost of a vision transformer is very high for dense prediction tasks, such as semantic segmentation, on mobile networks. Zhang et al. [14] have proposed a mobile-friendly architecture, the Token Pyramid vision transformer (TopFormer), for dense prediction. The method consists of a token pyramid module, a semantic extractor, an injection module, and a segmentation head. The token pyramid module takes an image as input and produces a token pyramid, which is fed to the semantic extractor to produce scale-aware semantics; these semantics are injected into the tokens of the corresponding scales to augment their representations via the injection module. Finally, the augmented token pyramid is used to perform segmentation with the segmentation head. Dong et al. have proposed a Cross-Shaped Window transformer model, CSWin [1], which computes self-attention in horizontal and vertical strips in parallel to form cross-shaped windows.
The main difference from vanilla vision transformers are 1.) CSWin transformer replaces multi-headed self-attention with Cross Shaped Window Self Attention. 2.) it introduces local inductive bias using Locally-enhanced Positional Encoding (LePE), which is added parallel to the self-attention module. The CSWin transformer is efficient in terms of accuracy and good representation feature for downstream tasks. RegionViT [1] has proposed a new architecture that adapts a pyramid structure to capture regional-to-local attention rather than global attention for the vision transformer. The method first generates regional tokens and local tokens from images of different patch sizes. Regional \begin{table} \begin{tabular}{l|l l l l l l l} \hline \hline Image Size & Method & Network & \#Param & FLOPS & ImageNet & Real top- 1 (\%) & Imagenet-v2 \\ \hline \hline & CSwin-T [1] & T & 23 & 4.3 & 82.7 & - & - \\ & CSwin-S [1] & T & 35 & 6.9 & 83.6 & - & - \\ & CSwin-B [1] & T & 78 & 15.0 & 84.2 & - & - \\ \cline{2-7} & DaViT-Ti [1] & T & 28.3 & 4.5 & 82.8 & - & - \\ & DaViT-S [1] & T & 49.7 & 8.8 & 84.2 & - & - \\ & DaViT-B [1] & T & 87.9 & 15.5 & 84.6 & - & - \\ \hline \hline CaiT-M36 [15] & T & 271 & 53.7 & 85.1 & **89.3** & - \\ \hline MViT-T2-T [10] & T & 24 & 4.7 & 82.3 & - & - \\ MViTv2-S [1] & T & 35 & 7 & 83.6 & - & - \\ MViTv2-B [1] & T & 52 & 10.2 & 84.4 & - & - \\ MViTv2-L [1] & T & 218 & 42.1 & 85.3 & - & - \\ \hline Wave-ViT-S\({}^{*}\)[1] & T & 22.7 & 4.7 & 83.9 & - & - \\ Wave-ViT-\(B^{*}\)[1] & T & 33.5 & 7.2 & 84.8 & - & - \\ Wave-ViT-\(L^{*}\)[1] & T & 57.5 & 14.8 & **85.5** & - & - \\ \hline CMT-Ti [1] & CT & 9.5 & 0.6 & 79.1 & - & - \\ CMT-XS [1] & CT & 15.2 & 1.5 & 81.8 & - & - \\ CMT-S [1] & CT & 25.1 & 4 & 83.5 & - & - \\ CMT-B [1] & CT & 45.7 & 9.3 & 84.5 & - & - \\ CMT-L [1] & CT & 74.7 & 19.5 & 84.8 & - & - \\ \hline MaxViT-T [1] & CT & 31 & 5.6 & 83.62 & - & - \\ MaxViT-S [1] & CT & 69 & 11.7 & 84.45 & - & - \\ MaxViT-B [1] & CT & 120 & 23.4 & 84.95 & - & 
- \\ MaxViT-L [1] & CT & 212 & 43.9 & 85.17 & - & - \\ \hline UniFormer-S [1] & CT & 22 & 3.6 & 82.9 & - & - \\ UniFormer-B [1] & CT & 50 & 8.3 & 83.9 & - & - \\ UniFormer-L [1] & CT & 100 & 12.6 & **85.6** & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: This is an extension of Table 2, which shows performance of the various models on ImageNet1K [1], ImageNet Real [1] and ImageNet V2 matched frequency [12] with image size \(224^{2}\). We report these numbers from the CvT [13] paper. C stands for Convolution, I stands for Involution, T stands for Transformer, and CT stands for Convolution Transformers. \(*\) means extra data was used while training the model. self-attention captures global information among all regional tokens, while local self-attention exchanges information between a regional token and its associated local tokens. The UniFormer [11] model is a more general model that combines convolution and self-attention for visual recognition. CNN-based models efficiently decrease local redundancy by convolution within a small neighborhood, but the limited receptive field makes it hard to capture global dependency; transformer-based models can effectively capture long-range dependency via self-attention, but computing similarity among all tokens leads to high redundancy. The UniFormer model uses Multi-Head Relation Aggregator (MHRA) blocks, which model local and global token affinity in the shallow and deep layers, respectively, to reduce redundancy and provide efficient feature representations. The main difference between UniFormer and the transformer is that UniFormer uses MHRA, while the transformer uses MSA (multi-headed self-attention). The scalability of self-attention is restricted by the image size; that is, the computational complexity increases as the image size increases. To avoid this issue, Tu et al. proposed the MaxViT [13] method to scale self-attention efficiently.
The technique contains two concepts, one block local and dilated global attention. The model provides local-global spatial interaction on various input image resolutions with linear complexity. MViTv2 [11] has improved the performance of Multi-scale Vision Transformers for Classification and Detection tasks. The MViTv2 method decomposed location distance to inject the position information to the transformer blocks and residual pooling connections to composite the effect of pooling in the attention computation. Pan et al. have proposed an efficient method (LIT) [14], which pays less attention to the self-attention \begin{table} \begin{tabular}{c|l l l l l l l} \hline \hline Image Size & Method & Network & \#Param & FLOPS & ImageNet & Real top- 1 (\%) & Imagenet- \\ \hline \hline & Mixer-B/16 [16] & M & 59 & 12.7 & 76.4 & - & - \\ \cline{2-7} & gMLP-S [11] & M & 20 & 4.5 & 79.6 & - & - \\ & gMLP-B [11] & M & 73 & 15.8 & 81.6 & - & - \\ \cline{2-7} & ResMLP-S12 [15] & M & 15 & 3.0 & 76.6 & - & - \\ & ResMLP-S24 [15] & M & 30 & 6.0 & 79.4 & - & - \\ & ResMLP-B24 [15] & M & 116 & 23.0 & 81.0 & - & - \\ \hline \(S^{2}\)-MLP-wide [15] & M & 71 & 14.0 & 80.0 & - & - \\ \(S^{2}\)-MLP-deep [15] & M & 51 & 10.5 & 80.7 & - & - \\ \hline ViP-Small/7 [16] & M & 25 & 6.9 & 81.5 & - & - \\ ViP-Medium/7 [16] & M & 55 & 16.3 & 82.7 & - & - \\ ViP-Large/7 [16] & M & 88 & 24.4 & 83.2 & - & - \\ \hline CycleMLP-B1 [1] & M & 15 & 2.1 & 78.9 & - & - \\ CycleMLP-B2 [1] & M & 27 & 3.9 & 81.6 & - & - \\ CycleMLP-B3 [1] & M & 38 & 6.9 & 82.4 & - & - \\ CycleMLP-B4 [1] & M & 52 & 10.1 & 83.0 & - & - \\ CycleMLP-B5 [1] & M & 76 & 12.3 & 83.2 & - & - \\ \hline AS-MLP-T [11] & M & 28 & 4.4 & 81.3 & - & - \\ AS-MLP-S [11] & M & 50 & 8.5 & 83.1 & - & - \\ AS-MLP-B [11] & M & 88 & 15.2 & 83.3 & - & - \\ \hline Wave-MLP-T [17] & M & 1 & 2.4 & 80.6 & - & - \\ \(224^{2}\) & Wave-MLP-S [17] & M & 30 & 4.5 & 82.6 & - & - \\ Wave-MLP-M [17] & M & 44 & 7.9 & 83.4 & - & - \\ Wave-MLP-B [17] & M & 63 & 10.2 
& 83.6 & - & - \\ \hline Hire-MLP-Tiny [18] & M & 18 & 2.1 & 79.7 & - & - \\ Hire-MLP-Small [18] & M & 33 & 4.2 & 82.1 & - & - \\ Hire-MLP-Base [18] & M & 58 & 8.1 & 83.2 & - & - \\ Hire-MLP-Large [18] & M & 96 & 13.4 & 83.8 & - & - \\ \hline DynaMixer-S [17] & M & 26 & 7.3 & 82.7 & - & - \\ DynaMixer-M [17] & M & 57 & 17.0 & 83.7 & - & - \\ DynaMixer-L [17] & M & 97 & 27.4 & **84.3** & - & - \\ \hline PoolFormer-S12 [15] & P & 12 & 1.9 & 77.2 & - & - \\ PoolFormer-S24 [15] & P & 21 & 3.5 & 80.3 & - & - \\ PoolFormer-S36 [15] & P & 31 & 5.1 & 81.4 & - & - \\ PoolFormer-M36 [15] & P & 56 & 9.0 & 82.1 & - & - \\ PoolFormer-M48 [16] & P & 73 & 11.8 & **82.5** & - & - \\ \hline \hline \end{tabular} \end{table} Table 4: This table shows the performance of the various MLP-like Transformer models on ImageNet1K [1], ImageNet Real [1] and ImageNet V2 matched frequency [12] with image size \(224^{2}\). We report these numbers from CvT [15] paper. M stands for MLP-Mixer, and P stands for Pooling Network. \(*\) means extra data used while training the model. module in the vision Transformer. The early self-attention layers focus on local patterns and provide minor benefits in hierarchal vision transformers. Pan et al. further improve the LIT model with hilo attention [14] method to make a fast vision transformer. Venkataraman et al. [2] have proposed a skip-attention method to improve vision transformers by paying less attention. Similarly, GC-ViT [1] and Beit [2] proposed various attention mechanisms to improve the performance of the vision transformer on ImageNet-1K[6] dataset. We compare top-1 accuracy (%) of various transformer models for image size \(224^{2}\) on the ImageNet dataset over model parameter in millions(M) as shown in the figure-5. ### Spectral complexity Efficient transformers can be designed to speed up transformer encoder architecture by replacing the self-attention network with linear transformations that mix input tokens. 
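A rough, back-of-the-envelope illustration of why these replacements pay off (our own numbers, not from any cited paper): full self-attention scores every pair of the n tokens, so its per-layer cost grows like n squared, while an FFT-based token mixer touches each token only on the order of log n times.

```python
import math

def full_attention_pairs(n):
    # full self-attention compares every token with every other token: n^2 pairs
    return n * n

def fft_mixing_ops(n):
    # an FFT-based token mixer costs on the order of n * log2(n) operations
    return int(n * math.log2(n))

n = 196  # a 224x224 image with 16x16 patches yields 14 * 14 = 196 tokens
assert full_attention_pairs(n) == 38416
assert fft_mixing_ops(n) < full_attention_pairs(n)
```

At 196 tokens the gap is already more than an order of magnitude, and it widens quickly as image resolution (and hence token count) grows.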
The self-attention layer of the transformer is replaced by a parameterized Fourier transformation (Fnet) [11], which is then followed by a non-linearity and feed-forward network. Compared to BERT, this network is 80 percent faster and can archive 92 to 97 percent of the transformer \begin{table} \begin{tabular}{l|l l l l l l} \hline \hline Image Size & Method & Network & \#Param & FLOPS & ImageNet & Real top- & Imagenet-v2 \\ \hline \hline \(600^{2}\) & EfficientNet-B7 [14] & C & 43 & 19 & **84.3** & - & - \\ \hline \(256^{2}\) & BoNet-S1-128 [15] & T & 79.1 & 19.3 & 84.2 & - & - \\ & UniNet-B2 [13] & CT & 16.2 & 2.2 & 82.5 & - & - \\ & CMT-B [2] & CT & 45.7 & 9.3 & **84.5** & - & - \\ \hline \multirow{4}{*}{\(288^{2}\)} & CMT-L [2] & CT & 74.7 & 19.5 & 84.8 & - & - \\ & UniNet-B3 [13] & CT & 24 & 4.3 & 83.5 & - & - \\ & LV-ViT-\(L^{*}\)[13] & CT & 150 & 59.0 & **85.3** & **89.3** & - \\ \hline \multirow{4}{*}{\(384^{2}\)} & ViT-B16 [17] & T & 86 & 55.5 & 77.9 & 83.6 & \(-\) \\ & ViT-L/16 [17] & T & 307 & 191.1 & 76.5 & 82.2 & \(-\) \\ \cline{2-7} & DeiT-B [13] & T & 86 & 55.5 & 83.1 & - & - \\ & Swin-B [13] & T & 88 & 47.1 & 84.5 & - & - \\ & ViL-LS-Medium [13] & T & 39.9 & 28.7 & 84.4 & - & - \\ & LITv2-B [14] & T & 87 & 39.7 & 84.7 & - & - \\ & DAT-B [14] & T & 88 & 49.8 & 84.8 & - & - \\ & BoNet-S1-128 [15] & T & 79.1 & 45.8 & 84.7 & - & - \\ & CaiT-S36 [17] & T & 68 & 48.0 & **85.4** & **89.8** & - \\ \cline{2-7} & CSwin-T [6] & T & 23 & 14.0 & 84.3 & - & - \\ & CSwin-S [6] & T & 35 & 22.0 & 85.0 & - & - \\ & CSwin-B [6] & T & 78 & 47.0 & **85.4** & - & - \\ \hline CVT-13 [13] & CT & 20 & 16.3 & 83.0 & 87.9 & 71.9 \\ & CV-T-21 [13] & CT & 32 & 24.9 & 83.3 & 87.7 & **71.9** \\ \cline{2-7} & UniNet-B5 [13] & CT & 72.9 & 20.4 & 84.9 & - & - \\ \hline LV-ViT-\(S^{*}\)[13] & CT & 26 & 22.2 & 84.4 & 88.9 & - \\ LV-ViT-\(M^{*}\)[13] & CT & 56 & 42.2 & 85.4 & **89.5** & - \\ \hline UniFormer-S [13] & CT & 22 & 11.9 & 84.6 & - & - \\ UniFormer-B [13] & CT & 50 & 27.2 & 
86.0 & - & - \\ UniFormer-L [13] & CT & 100 & 39.2 & 86.3 & - & - \\ \cline{2-7} & MaxViT-S [13] & CT & 69 & 36.1 & 85.74 & - & - \\ & MaxViT-B [13] & CT & 120 & 74.2 & 86.34 & - & - \\ & MaxViT-L [13] & CT & 212 & 133.1 & **86.40** & - & - \\ \hline \multirow{4}{*}{\(448^{2}\)} & CaiT-M36 [17] & T & 271 & 247.8 & **86.3** & **90.2** & - \\ \cline{2-7} & UniNet-B6 [13] & CT & 117 & 51 & 85.6 & - & - \\ \cline{2-7} & LV-ViT-\(L^{*}\)[13] & CT & 150 & 157.2 & **85.9** & 89.7 & - \\ \hline \multirow{4}{*}{\(512^{2}\)} & LV-ViT-\(L^{*}\)[13] & CT & 151 & 214.8 & 86.4 & **90.1** & - \\ \cline{2-7} & MaxViT-T [13] & CT & 31 & 33.7 & 85.72 & - & - \\ \cline{1-1} & MaxViT-S [13] & CT & 69 & 67.6 & 86.19 & - & - \\ \cline{1-1} & MaxViT-B [13] & CT & 120 & 138.5 & 86.66 & - & - \\ \cline{1-1} & MaxViT-L [13] & CT & 212 & 245.4 & **86.70** & - & - \\ \hline \end{tabular} \end{table} Table 5: Performance of the various models on ImageNet1K [6], ImageNet Real [2] and ImageNet V2 matched frequency [12] for various image sizes. We report these numbers from CvT [13] paper. C stands for Convolution, I stands for Involution, T stands for Transformer, and CT stands for Convolution Transformers. \(*\) means extra data used while training the model. performance. The Global Frequency network (GFnet) [14] proposes a depth-wise global convolution for token mixing. GFnet involves three steps: Spatial token mixing via Fast Fourier Transform (FFT), frequency gating, and inverse FFT for token demixing. GFnet is not involved in channel mixing, is expensive for higher solution images as sequence length increases, and is not adaptive. Guibias et al. [16] formulated the token mixing task as an operator-learning task that learns mapping among continuous functions in infinite dimensional space. Li et al. [11] discuss solving Partial Differential Equations (PDE) using a Fourier Neural Operator (FNO). FNO works well in continuous domains. 
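The three GFNet steps just described can be sketched in a few lines of NumPy; the "learnable" global filter here is randomly initialized and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 196, 64                        # tokens x channels
x = rng.standard_normal((n, d))

# one learnable complex weight per (frequency, channel): the "global filter"
filt = rng.standard_normal((n // 2 + 1, d)) + 1j * rng.standard_normal((n // 2 + 1, d))

x_freq = np.fft.rfft(x, axis=0)            # 1) spatial token mixing via FFT
x_freq = x_freq * filt                     # 2) element-wise frequency gating
x_mix = np.fft.irfft(x_freq, n=n, axis=0)  # 3) inverse FFT for token demixing

assert x_mix.shape == (n, d)               # channels are untouched: no channel mixing
```

Because the gating is element-wise per channel, the operation mixes information only along the token axis, which is exactly why GFNet performs no channel mixing.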
Adapting FNO to the vision domain, with high-resolution image inputs, requires modifications to the FNO architecture designed for PDEs. This is because high-resolution images have discontinuities due to edges and other structures, and because channel mixing in FNO depends on the channel size, which incurs quadratic complexity. A block-diagonal structure is imposed on the channel-mixing weights to handle this cost. The authors also shared weights across the tokens of the MLP layers for parameter efficiency and introduced sparsity in the frequency domain using soft thresholding for generalization. Together, these solutions are known as the Adaptive Fourier Neural Operator (AFNO). In Wave-ViT [23], the author discusses the quadratic complexity of the transformer's self-attention in the number of input patches. In the past, downsampling operations such as global average pooling (GAP) over the keys/values have been used to address this issue. However, downsampling operations such as GAP are non-invertible and lose high-frequency information, such as the texture details of objects. The author proposed a wavelet vision transformer that performs lossless downsampling using a wavelet transform over the keys and values. The model achieves state-of-the-art results on image recognition, object detection, and instance segmentation tasks, and is efficient in terms of the number of FLOPS and accuracy. Compared to GFNet [14] and AFNO [16], Wave-ViT [23] uses an attention network with extra labels for training. We compare all of the spectral networks in table-7. Another recent work in the spectral domain is the Fourier Image Transformer (FIT) [1], which uses Fourier-domain encoding for image completion tasks, predicting high-resolution output given low-resolution input. This method is demonstrated on computed tomography (CT) image reconstruction tasks.
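Wave-ViT's point that wavelet downsampling is lossless, unlike average pooling, can be seen with a single-level 1-D Haar transform along the token axis (a minimal sketch under our own simplifications, not Wave-ViT's actual implementation):

```python
import numpy as np

def haar_down(x):
    # single-level 1-D Haar transform along the first (token) axis
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-frequency (approximation) half
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-frequency (detail) half
    return a, d

def haar_up(a, d):
    # exact inverse: interleave the reconstructed even/odd samples
    x = np.empty((2 * a.shape[0],) + a.shape[1:])
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.random.default_rng(1).standard_normal((196, 64))  # tokens x channels
a, d = haar_down(x)                                      # 196 tokens -> 98 + 98
assert np.allclose(haar_up(a, d), x)                     # invertible: nothing lost
```

Average pooling would keep only the low-frequency half `a` (up to a scale factor) and discard `d`; keeping the `(a, d)` pair makes the downsampling exactly invertible.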
\begin{table} \begin{tabular}{l|l l l l l l l} \hline \hline Image & Method & Network & \#Param & FLOPS & ImageNet & Real top- & Imagenet-v2 \\ \hline \hline \(480^{2}\) & BiT-M [10] & T & 928 & 837 & 85.4 & – & – \\ \hline \multirow{4}{*}{\(384^{2}\)} & ViT-B/16 [17] & T & 86 & 55.5 & 84.0 & 88.4 & – \\ & ViT-L/16 [17] & T & 307 & 191.1 & 85.2 & 88.4 & – \\ & ViT-H/16 [17] & T & 632 & – & 85.1 & 88.7 & – \\ \cline{2-7} & CVT-13 [20] & CT & 20 & 16 & 83.3 & 88.7 & 72.9 \\ & CVT-21 [20] & CT & 32 & 25 & 84.9 & 89.8 & 75.6 \\ & CVT-W24 [20] & CT & 277 & 193.2 & 87.7 & 90.6 & 78 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance of the various transformer models pre-trained on ImageNet21K [4] and fine-tuned on ImageNet1K [4], ImageNet Real [4] and ImageNet V2 matched frequency [11] for image size \(384\times 384\), except BiT-M [10], which is fine-tuned on image size \(480\times 480\). We report these numbers from the CVT [20] paper. T stands for Transformer, CT stands for Convolution Transformers.
\begin{table} \begin{tabular}{l|l l l l l l} \hline \hline Image & Method & \#Param & FLOPS & ImageNet & Attention & Extra \\ Size & & (M) & (G) & top-1 (\%) & Network & Label \\ \hline \hline \multirow{4}{*}{\(224^{2}\)} & FNet [14] & 15 & 2.9 & 71.2 & ✗ & ✗ \\ \cline{2-6} & GFNet-Ti [14] & 7 & 1.3 & 74.6 & ✗ & ✗ \\ & GFNet-XS [14] & 16 & 2.9 & 78.6 & ✗ & ✗ \\ & GFNet-S [14] & 25 & 4.5 & 80.0 & ✗ & ✗ \\ & GFNet-B [14] & 43 & 7.9 & 80.7 & ✗ & ✗ \\ \cline{2-6} & AFNO-S/4 [16] & 16 & 15.3 & **80.9** & ✗ & ✗ \\ \cline{2-6} & Wave-ViT-S [23] & 19.8 & 4.3 & 82.7 & ✓ & ✗ \\ & Wave-ViT-S\({}^{*}\)[23] & 22.7 & 4.7 & 83.9 & ✓ & ✓ \\ & Wave-ViT-B\({}^{*}\)[23] & 33.5 & 7.2 & 84.8 & ✓ & ✓ \\ & Wave-ViT-L\({}^{*}\)[23] & 57.5 & 14.8 & **85.5** & ✓ & ✓ \\ \hline \multirow{4}{*}{\(384^{2}\)} & GFNet-XS [14] & 18 & 8.4 & 80.6 & ✗ & ✗ \\ & GFNet-S [14] & 28 & 13.2 & 81.7 & ✗ & ✗ \\ \cline{1-1} & GFNet-B [14] & 47 & 23.3 & **82.1** & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 7: This table shows the performance analysis of spectral transformer models trained on ImageNet1K [16] for image sizes \(224\times 224\) and \(384\times 384\). It compares the number of parameters, number of FLOPs, and top-1 accuracy of various spectral vision transformer models. The cross mark (✗) in the attention column indicates that the attention network is not used in the transformer model. \(*\) indicates that Wave-ViT [23] uses extra training data. ### Bias and Fairness Features This section discusses inductive bias and how efficiently we can train our transformer models. We start with Inductive Bias (IB) and introduce the ViTAE [23] work and its variants. Vanilla vision transformers lack intrinsic inductive bias (IB) in modeling local visual structure and instead require massive amounts of training data and time to learn it implicitly. The author has proposed the ViTAEv2 [24] method, based on the ViT transformer model, to introduce intrinsic IB from convolutions.
In this method, the author uses reduction modules to downsample the input images into tokens using multiple convolutions with different dilation rates. This model is based on ViTAE [23]. Edelman et al. [1] have presented a theoretical analysis of the inductive biases of the self-attention module, showing that bounded-norm transformer models create sparse variables and that a single multi-head attention layer can represent a sparse function of the input sequence. Ren et al. [1] have discussed the inductive bias of the vision transformer, which does not perform well with insufficient data, and introduced a knowledge distillation (KD) based method to help train the transformer. Existing KD methods use heavy convolutional neural network-based teacher modules; in this work, the author instead uses lightweight modules with different architectural inductive biases (such as a CNN-based teacher and an involution-based teacher) to co-advise the student transformer model. The author claims that the co-advised transformer model is more efficient to train and performs better than existing methods. ### Transparency Transparency in the transformer model means a clear understanding of the model: What is the transformer model? How do we train it? How do we make inferences from it? How can we explain those inferences? What do components such as multi-headed self-attention and positional encoding actually do? Answering these questions will help us design more efficient, transparent transformer models. Some recent research has been carried out on transparency in model architecture and the training process. Park et al.
[1] have analyzed the vision transformer model and addressed the question "How Do Vision Transformers Work?" with concrete examples. They analyzed the loss surface of the vision transformers' multi-headed self-attention (MSA), as well as the behavior of convolutional neural networks (CNNs) and MSA, which appear to be opposites; for example, MSAs act as low-pass filters while CNNs act as high-pass filters. This provides transparency in modeling multi-headed self-attention. Raghu et al. [12] have analyzed how vision transformers solve image classification compared to convolutional networks. Studying the internal structure of transformers and CNNs, they found that ViTs have uniform representations across all layers: self-attention enables early aggregation of global information, and the residual connections in ViT propagate features from lower to higher layers. We also consider efficiency in terms of training time and model convergence on the loss surface. LV-ViT [22] has proposed a different training objective for vision transformers that computes a classification loss with additional trainable class tokens. A location-specific label for each patch token is generated by a machine annotator using the NFNet [1] image recognition model.
The method reformulates the image classification problem into a multi-token-level recognition problem \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Method & \begin{tabular}{c} \#FLOPS \\ (G) \\ \end{tabular} & \begin{tabular}{c} \#Param \\ (M) \\ \end{tabular} & \begin{tabular}{c} CIFAR \\ 10 \\ \end{tabular} & \begin{tabular}{c} CIFAR \\ 100 \\ \end{tabular} & \begin{tabular}{c} Pet \\ \end{tabular} & Flower & Cars \\ \hline \hline ViT-B/16 [11] & 55.4 & 86 & 98.1 & 87.1 & - & 89.5 & - \\ ViT-L/16 [11] & 190.7 & 307 & 97.9 & 86.4 & - & 89.7 & - \\ DeiT-B [12] & 17.6 & 85.8 & 99.1 & **90.8** & - & 98.4 & 92.1 \\ TNT-S[38] [10] & 17.3 & 23.8 & 98.7 & 90.1 & 94.7 & **98.8** & - \\ CaiT-S[38] [11] & 12.9 & 24.2 & 99.1 & **90.8** & 94.9 & 98.6 & 94.1 \\ GFNet-XS[12] & 2.9 & 16 & 98.6 & 89.1 & - & 98.1 & 92.8 \\ GFNet-H-B[12] & 8.6 & 54 & 99.0 & 90.3 & - & **98.8** & 93.2 \\ CMT-S [13] & 4.04 & 25.1 & **99.2** & 91.7 & 95.2 & 98.7 & **94.4** \\ RegionViT-S [1] & - & - & 98.9 & 90.0 & 95.3 & - & 92.8 \\ RegionViT-M [1] & - & - & 99.0 & **90.8** & **95.5** & - & 91.9 \\ \hline BiT-M [14] & - & 928 & 98.91 & 92.17 & 94.46 & 99.30 & - \\ ViT-B/16 [11] & - & 86 & 98.95 & 91.67 & 94.43 & 99.38 & - \\ ViT-L/16 [11] & - & 307 & 99.16 & 93.44 & 94.73 & 99.61 & - \\ ViT-H/16 [11] & - & 632 & 99.27 & 93.82 & **94.82** & 99.51 & - \\ CvT-13 [21] & - & 20 & 98.83 & 91.11 & 93.25 & 99.50 & - \\ CvT-21 [22] & - & 32 & 99.16 & 92.88 & 94.03 & 99.62 & - \\ CvT-W24 [21] & - & 277 & **99.39** & **94.09** & 94.73 & **99.72** & \\ \hline \hline \end{tabular} \end{table} Table 8: Transfer Learning performance on CIFAR10 [11], CIFAR100 [11], Pfewler [23] and Cars [11] dataset. We reported top-1 accuracy, Number of FLOPS, and parameters for various transformer models on these datasets. The top block of the table indicates, models are pre-trained on ImageNet 1k [12] and the bottom block indicates, the models are pre-trained on ImageNet 22k[1]. 
by assigning each patch token a machine-generated, location-specific supervised label. The method considers all image patch tokens to compute the training loss in a dense manner using machine-supervised token labeling. DeiT [14] and AugReg [21] have discussed data augmentation and regularization methods to train vision transformers more efficiently. ### Robustness Robustness in transformers is studied in terms of perturbations, common corruptions, distributional shift, and natural adversarial examples. Shao et al. [20] analyzed the robustness of the transformer model under adversarial perturbation, experimenting with white-box and transfer attack settings. They observe that ViT has better adversarial robustness than convolutional neural networks (CNNs), and find that ViT features contain low-level information that provides superior robustness against adversarial attacks. They note that combining CNNs and transformers leads to better robustness than pure transformer models of increasing size or depth. Additionally, they find that pre-training on larger datasets does not improve adversarial robustness; if anything, the opposite holds. Bhojanapalli et al. [1] investigated various measures of the robustness of ViT and ResNet models against adversarial examples, natural examples, and common corruptions, considering perturbations both to the input and to the model, and observed that transformers are robust to the removal of any single layer. Paul et al. [13] studied various aspects of robust learning in ViT [12], CNNs, and Big Transfer (BiT) [15] methods, and benchmarked the robustness of ViTs on a wide range of ImageNet datasets. Their results are in table-9. Through six experiments, the authors verified that ViT is more robust than CNNs and Big Transfer models.
The results of those experiments include: Experiment 1, attention is crucial for improved robustness; Experiment 2, pre-training plays an important role; Experiment 3, ViT has better robustness to image masking; Experiment 4, Fourier spectrum analysis reveals low sensitivity for ViT; Experiment 5, adversarial perturbations spread more widely across the energy spectrum; and Experiment 6, ViT has a smoother loss landscape with respect to input perturbations. Hendrycks et al. [1] introduced a benchmark of neural network robustness against common, naturally occurring corruptions using the ImageNet-C [1] dataset. The ImageNet-C dataset contains perturbed versions of the original ImageNet images, covering 1000 classes with 50 images each. The performance of various models is shown in table-10. ViT [12] models are less effective at capturing the high-frequency components of images than CNNs, as investigated by Park et al. [13]. HAT [1] was the result of a further investigation into the behavior of existing transformer models from a frequency perspective; HAT perturbs the high-frequency components of the input image with noise using the RandAugment method. Wu et al. [23] investigated the issue that transformer models, like CNNs, are vulnerable to adversarial examples. In CNNs this issue is handled with adversarial training, the most effective known defense, but for transformers adversarial training has a heavy computational cost due to the quadratic complexity of self-attention. The AGAT method uses an efficient attention-guided adversarial mechanism that removes certain patch embeddings in each layer with an attention-guided dropping strategy during adversarial training. Bai et al. [1] have proposed the HAT method (High-frequency components via Adversarial Training), which perturbs high-frequency components during the training stage.
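The mCE values reported in tables 9 and 10 follow Hendrycks et al.'s definition: for each corruption type, the model's error rates summed over the five severity levels are divided by the corresponding sum for a baseline model (AlexNet in the original benchmark), and the resulting ratios are averaged and scaled by 100. A minimal sketch with made-up error rates:

```python
def mce(model_err, baseline_err):
    """Mean corruption error. model_err and baseline_err map each corruption
    type to a list of error rates at severities 1..5 (fractions in [0, 1])."""
    ces = [sum(model_err[c]) / sum(baseline_err[c]) for c in model_err]
    return 100.0 * sum(ces) / len(ces)

# hypothetical error rates for two corruption types (illustrative only)
model = {"gaussian_noise": [0.25] * 5, "motion_blur": [0.125] * 5}
baseline = {"gaussian_noise": [0.5] * 5, "motion_blur": [0.25] * 5}
assert mce(model, baseline) == 50.0  # half the baseline's corruption error
```

Normalizing by a fixed baseline is what makes mCE scores comparable across corruption types of very different intrinsic difficulty.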
The HAT method alters the high-frequency components of the training image by adding adversarial perturbation and then trains the Vision Transformer (ViT) [1] model with the altered image, improving performance and making the model more robust. ### Privacy Today, pre-trained transformer models are deployed on cloud systems. One of the main issues in cloud-based model deployment pertains to data privacy: the exposure of user data such as search history, medical records, and bank accounts. The current \begin{table} \begin{tabular}{|l|c|c|c|} \hline Method & ImageNet & ImageNet-C & mCE (\(\downarrow\)) \\ \hline \hline ViT-B/16 & 81.43 & 58.85 & 51.98 \\ ViT-L/16 & 82.89 & 64.11 & 45.46 \\ Mixer-B/16 & 76.47 & 47.00 & 67.35 \\ Mixer-L/16 & 71.77 & 40.47 & 75.84 \\ RN18 & 69.76 & 32.92 & 84.67 \\ RN50 & 76.13 & 39.17 & 76.70 \\ \hline \end{tabular} \end{table} Table 10: This table reports top-1 scores and mCE scores of various models on the ImageNet [1] and ImageNet-C [1] datasets. \begin{table} \begin{tabular}{l c c c c c c c} \hline Model & mCE (\%) & mFR (\%) & mT5D (\%) & cAcc (\%) & Top-1 (\%) & Top-1 (\%) & Top-1 (\%) \\ \hline \hline ResNet-50 & 76.70 & 58.00 & 82.00 & **22.30** & 25.25 & 2.21 & 16.76 \\ BiT m-r101x3 & 58.27 & 49.99 & 76.71 & 03.78 & 27.25 & 6.41 & 28.19 \\ ViT L-16 & **45.45** & **33.06** & **50.15** & 20.02 & **40.58** & **28.10** & **73.73** \\ \hline \end{tabular} \end{table} Table 9: Robustness [13] of different models and methods: mCE on ImageNet-C (lower is better), mFRs and mT5Ds on the ImageNet-P dataset (lower is better). cAcc stands for challenge accuracy; the cAcc column shows performance on detecting vulnerable image foregrounds from the ImageNet-9 dataset. Columns 6, 7, and 8 show top-1 accuracy scores (as percentages) on the ImageNet-R, A, and O datasets, respectively.
The paper [21] introduced TextHide, a federated learning technique to preserve privacy, but this method is for sentence-based like machine translation, sentiment analysis, and paraphrase generation tasks), rather than for token-based tasks (such as name entity recognition and semantic role labeling). Similarly, the DP-finetune [15] Differential Privacy (DP) method allows us to quantify the degree to which we can protect the sensitivity of data. But, training a DP algorithm degrades the quality of the model, which can be tuned using a public base model on a private dataset. The paper [20] proposed THE-X as a method by series of approximations on the HE [1] based solution in a transformer. THE-X method replaces non-polynomial operations with a series of approximations with the help of these layers such as the SoftMax and the GELU layer, drop the pooler layer, add Layer normalization, use knowledge distillation techniques, and then use HE-supported operations with HE transformer. THE-X method is evaluated using BERT-Tiny Model on GLUE [22] and benchmarked for a CONLL2003 [18] task. ### Approximation In this section, we consider various kinds of differential equations and their approximation methods to make them more efficient in terms of computational cost and number of parameters. The paper [23] was one of the first to provide a theoretical foundation based on Partial Differential Equations (PDEs) for deep neural networks such as ResNets. More specifically, the author showed that residual CNNs could be interpreted as a discretization of a space-time differential equation. Based on the theoretical characterization, Ruthotto also proposes new models such as hyperbolic and parabolic CNNs with special properties. Residual networks have also been interpreted as Euler discretizations of Ordinary Differential Equations (ODEs). However, the Euler method of solving is not precise and has truncation errors as it is a first-order method. 
The authors of ODE Transformers [10] used a classical higher-order method (Runge-Kutta) to build a transformer block. They evaluated the ODE transformer on three sequence-generation tasks, abstractive summarization, machine translation, and grammatical error correction, demonstrating its effectiveness. Another effort in this direction is TransEvolve [23], which provides a Transformer architecture similar to the ODE transformer but modeled on multi-particle dynamical systems. Transformers have also been shown to be equivalent to universal computation engines [12]. The authors have proposed an architecture known as the Frozen Pretrained Transformer (FPT), which can be trained on a single modality (such as text data for language modeling) and identifies abstractions (such as feature representations) that are useful across modalities. They took a GPT pre-trained on only natural language data and fine-tuned its input and output layers along with the layer-normalization parameters and positional embeddings. The resulting FPT performs comparably with transformers trained completely from scratch on a variety of tasks, such as protein fold prediction, numerical computation, and even image classification. ### Probabilistic methods Bayesian probabilistic models play an important role in estimating uncertainty in the data and the model when classifying an image. They bring efficiency in terms of fewer parameters (by sampling models using dropout) and are able to model effectively from a small amount of data. Guo et al. [19] have proposed the Uncertainty-Guided Probabilistic Transformer (UGPT) for complex action recognition. Multi-head self-attention is used to capture the complex and long-term dynamics of complex actions. The authors extend the deterministic transformer mechanism to a probabilistic one to quantify the uncertainty of the prediction.
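Epistemic (model) uncertainty of this kind is commonly approximated with Monte Carlo dropout: dropout is left active at test time, and the spread of predictions across the sampled sub-models is read off as model uncertainty. The following is our own toy NumPy sketch, not the UGPT authors' exact procedure; the weight matrix and dropout rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 3))  # toy classifier weights: 16 features, 3 classes

def stochastic_forward(x, p=0.5):
    # dropout left active at inference: each call samples a different sub-model
    mask = rng.random(x.shape) > p
    h = np.where(mask, x, 0.0) / (1.0 - p)  # inverted-dropout scaling
    logits = h @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax class probabilities

x = rng.standard_normal(16)
probs = np.stack([stochastic_forward(x) for _ in range(100)])
mean_pred = probs.mean(axis=0)  # predictive distribution over classes
epistemic = probs.var(axis=0)   # spread across sub-models ~ model uncertainty
assert mean_pred.shape == (3,) and np.all(epistemic >= 0)
```

A large variance across the sampled passes flags inputs on which the model itself is uncertain, independent of any noise in the data.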
The authors introduce a novel training strategy using majority and minority models for estimating epistemic (model) uncertainty. Yang et al. [18] discuss the difficulty of camouflaged object detection: indistinguishable textures lead to inherent uncertainty. The authors propose uncertainty-guided transformer reasoning (UGTR), which learns a conditional distribution over the model output to estimate uncertainty and reason over it.

### Efficient learning: Continual/Incremental/Lifelong and Federated learning

Another important aspect is how to adapt a trained model to a few new categories of data and to new tasks. In recent years, much work has been done in the fields of continual, incremental, and lifelong learning to handle new classes and new tasks; precise definitions of these terms are beyond our scope. Here we focus on how the transformer model helps to improve these learning processes. DyTox [14] discusses transformer models for continual learning with dynamic token expansion. The authors address the issue that existing deep methods struggle to continually learn new tasks without forgetting the previous ones. The transformer's encoder and decoder modules are shared among all tasks in order to scale to a large number of tokens. The authors of the Lifelong Vision Transformer (LVT) [22] aim to eliminate catastrophic forgetting in continual learning tasks; to achieve this, they introduce an attention-based mechanism that provides better stability for continual learning. Wang et al. [22] introduce a contrastive vision transformer to mitigate catastrophic forgetting. The method uses a contrastive learning strategy with a transformer to obtain a better stability-plasticity trade-off for continual online learning.
### Inclusiveness

The transformer model-based research in this domain focuses on empowering everyone and engaging people with visual, hearing, and other impairments. We define this as inclusiveness. The main challenge is how to deploy transformer models efficiently on embedded systems (such as microcontrollers and microprocessors), both for general-purpose applications and for applications that assist impaired persons. Very little research has been carried out in this field; examples include ViT Cane [15], Flying Guide Dog [16], TransDARC [21], and Trans4Trans [17]. ViT Cane [16] provides visual assistance, such as finding the shortest path to a destination and detecting obstacles from a distance, for visually impaired persons. The ViT Cane model uses a Pi camera module to capture pictures and detects obstacles using the ViT transformer model. The complete system runs on a Raspberry Pi. The authors compare the method with a CNN-based YOLO architecture and show better results. Flying Guide Dog [16] helps discover walkable paths for visually impaired persons. It uses a drone to capture real-time street views and a transformer model to extract the walkable area from the segmentation predictions. Finally, the drone adjusts its movement automatically and guides the person to walk in the walkable areas. Trans4Trans [17] uses an efficient transformer for transparent object detection and semantic scene segmentation in real-world navigation assistance. TransDARC [21] uses a transformer-based model for driver activity recognition with latent-space feature calibration. The authors observe that existing models are slower at understanding driver behavior and have longer response times than the transformer-based model. We account for efficiency by comparing the transformer models with existing CNN-based models, in terms of speed for ViT Cane and in terms of other parameters in the above papers.
We also call this the efficient deployment of transformer models on embedded systems.

## 3 Datasets and Evaluation

In this section, we discuss various datasets for the image classification task and evaluate model performance on those datasets. We cover three important profiling areas for efficient transformers: 1. state-of-the-art comparison on the ImageNet dataset, 2. transfer learning on new datasets, and 3. Long Range Arena (LRA) performance.

### Dataset

We compare various transformer results for image classification tasks on the ImageNet-1K [21] and ImageNet-21K [21] datasets. The ImageNet-1K dataset contains 1.2 million images in the training set and 50K images in the test set over 1000 (1K) classes. ImageNet-21K is a large-scale dataset containing 14 million images in the training set over 21K classes. We show the transfer learning performance from ImageNet-1K to other datasets like CIFAR10 [11], CIFAR100 [12], Oxford-IIIT-Flower [22], and Stanford Cars [16]. We show transfer learning performance from ImageNet-21K to the above datasets in table-8. We also report results of transformer models on the ImageNet-C [1] dataset. We further discuss another benchmark for transformer models, Long Range Arena [17] (LRA), which consists of six challenging corpora focused on long-range sequences.

### State of the art comparison on ImageNet Dataset

We analyzed and compared the performance of various efficient vision transformers on the ImageNet-1k [21] dataset, as shown in table-2. The comparison is based on the number of parameters in millions (M), number of floating point operations (FLOPs), image size, type of network, and top-1 accuracy. In table-2, table-3, and table-4 we compare various transformers' performance for an input size of \(224\times 224\) pixels, whereas in table-5, we compare different image sizes such as \(256^{2}\), \(288^{2}\), \(384^{2}\), \(448^{2}\), \(512^{2}\), and \(600^{2}\).
In table-2, we begin by comparing Convolutional Neural Network (CNN) architectures: ResNets [14] and RegNet [15] provide good performance on the ImageNet [21], ImageNet-Real [22], and ImageNet-v2 [16] datasets. Similarly, the Involution Neural Network (INN) [11] provides good performance on ImageNet data compared to ResNet models. We then compare the transformer base architecture DeiT [15] with both the CNN- and INN-based architectures for image classification and observe that the transformer architecture performs better than the ResNet and RegNet architectures.

\begin{table} \begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline Tasks & Corpus & Length & Class & Metrics & Description \\ \hline \hline Long ListOps & ListOps [17] & 2K & 10 & Accuracy & To check the ability to reason hierarchically while handling long contexts \\ Byte-level Text Classification & IMDB Reviews [18] & 4K & 2 & Accuracy & Can the model classify text associated with real-world applications? \\ Byte-level Document Retrieval & AAN [1] & 8K & 2 & Accuracy & Can the model retrieve the document that matches a given document? \\ \hline Image Classification & CIFAR10 [12] and others [19] & \(N^{2}\) & 10 & Accuracy & Can the model classify an image presented as a long sequence of \(N^{2}\) pixels? \\ Pathfinder & Synthetic [18] & 1K & 2 & Accuracy & Does a connected path exist between point 1 and point 2 in a 32 x 32 image? \\ Path-X & Synthetic [18] & 16K & 2 & Accuracy & Does a connected path exist between point 1 and point 2 in a 128 x 128 image? \\ \hline \end{tabular} \end{table}

Table 11: This table describes the various tasks in the Long Range Arena (LRA) benchmark. Along with the description, it provides corpus details, sequence length, number of classes, and evaluation metrics for each task.
Among all the efficient transformers, the WaveViT [22] model is the most efficient in terms of number of parameters and top-1 accuracy. WaveViT-\(S^{*}\) performs well with few parameters (22.7M) and comparable performance (83.9% top-1 accuracy) to other models on the ImageNet-1k dataset, and WaveViT-\(L^{*}\) provides the best top-1 accuracy (85.5% with 57.5M parameters) among all the transformer-based models. However, the WaveViT model uses supervised extra training data to achieve this performance. The CvT [23] model has been evaluated on the ImageNet, ImageNet-Real [1], and ImageNet-v2 [16] datasets and delivers comparable results on all three with a relatively small number of parameters (32M) for images of size 224x224 pixels. We also analyzed top-1 accuracy for images of size 384x384. We found that CMT [11], with a small number of parameters (25.1M) and few FLOPs (4G), provides an equivalent top-1 score (83.3) on the ImageNet-1k dataset; it has also been evaluated on the ImageNet-Real and ImageNet-v2 [16] datasets. However, Uniformer [12] performs best among all the models for image size \(224^{2}\), with more parameters (100M) and FLOPs (12.6G), as shown in table-3. Similarly, we provide a comparison of various MLP-like transformer models in table-4. We report that the Hire-MLP-Large [11] model provides the best top-1 accuracy (83.8% with 96M parameters) among all MLP-Mixer type transformer models. Similarly, among pooler-type networks, the PoolFormer-M48 [23] model provides the best top-1 accuracy (82.5% with 73M parameters). Table-5 reports the performance of various transformer models at various image sizes. For image size \(256^{2}\), CMT-B [11] performs best, whereas for image size \(288^{2}\), LV-ViT [20] performs best, though it uses extra labeled data to train the model.
Similarly, for image size \(384^{2}\), CSwin-B [12] performs best (85.4% with 78M parameters and 47G FLOPs) among transformer-type networks and Uniformer [12] performs well (86.3% with 100M parameters and 39.2G FLOPs), whereas MaxViT [Tu _et al._, 2022] performs best (86.4% with 212M parameters and 133.1G FLOPs) among convolution transformer-type networks. For image size \(448^{2}\), CaiT [23] performs best (86.3% with 271M parameters and 247.8G FLOPs) among transformer-type networks. For image size \(512^{2}\), LV-ViT [20] performs well (86.4% with 151M parameters and 214.8G FLOPs), whereas MaxViT [Tu _et al._, 2022] performs best (86.7% with 212M parameters and 245.4G FLOPs) among convolution transformer-type networks with extra labeled training data. Table-7 shows the performance of spectral transformer models trained on ImageNet-1K [12] for image sizes \(224\times 224\) and \(384\times 384\). It compares the number of parameters, number of FLOPs, and top-1 accuracy of various spectral vision transformers. The green tick mark for the WaveViT [22] model indicates that attention is used in the transformer model. The green tick mark in the extra-label column indicates that WaveViT [22] uses extra training data. FNet [10], GFNet [11], and AFNO [11] do not use a self-attention network and achieve good performance with fewer parameters and FLOPs compared to WaveViT [22]. Only WaveViT [22] uses extra training data during training; this extra data is observed to help improve performance on the ImageNet-1K dataset. Similarly, we report the performance of various transformers on the ImageNet-21K dataset in table-6, comparing the type of transformer network (transformer vs. convolution transformer), number of parameters, number of FLOPs, image sizes, and top-1 accuracy.
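The attention-free spectral models discussed above (FNet, GFNet, AFNO) replace the self-attention sublayer with mixing in the Fourier domain. A minimal FNet-style sketch (the real models wrap this in full transformer blocks, and GFNet/AFNO add learned frequency-domain filters):

```python
import numpy as np

# Minimal sketch of FNet-style spectral token mixing: replace self-attention
# with a 2D discrete Fourier transform over the sequence and hidden dimensions,
# keeping only the real part. No learned parameters, O(N log N) in sequence
# length instead of the O(N^2) cost of attention.

def fourier_mixing(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, hidden). Returns a mixed representation of the same shape."""
    return np.fft.fft(np.fft.fft(x, axis=-1), axis=0).real

x = np.random.default_rng(0).normal(size=(8, 4))
y = fourier_mixing(x)
```

Because the transform is linear and parameter-free, the mixing sublayer adds no weights at all; the model's capacity lives entirely in the feed-forward sublayers.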
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & ListOps & Text & Retrieval & Image & Pathfinder & Path-X & Avg \\ \hline \hline Chance & 10.00 & 50.00 & 50.00 & 10.00 & 50.00 & 50.00 & 44.00 \\ Transformer & 36.37 & 64.27 & 57.46 & 42.44 & 71.40 & FAIL & 54.39 \\ \hline Local Attention & 15.82 & 52.98 & 53.39 & 41.46 & 66.63 & FAIL & 46.06 \\ Sparse Trans. & 17.07 & 63.58 & **59.59** & 44.24 & 71.71 & FAIL & 51.24 \\ Longformer & 35.63 & 62.85 & 56.89 & 42.22 & 69.71 & FAIL & 53.46 \\ Linformer & 35.70 & 53.94 & 52.27 & 38.56 & 76.34 & FAIL & 51.36 \\ Reformer & **37.27** & 56.10 & 53.40 & 38.07 & 68.50 & FAIL & 50.67 \\ Sinkhorn Trans. & 33.67 & 61.20 & 53.83 & 41.23 & 67.45 & FAIL & 51.39 \\ Synthesizer & 36.99 & 61.68 & 54.67 & 41.61 & 69.45 & FAIL & 52.88 \\ BigBird & 36.05 & 64.02 & 59.29 & 40.83 & 74.87 & FAIL & **55.01** \\ Linear Trans. & 16.13 & **65.90** & 53.09 & 42.34 & 75.30 & FAIL & 50.55 \\ Performer & 18.01 & 65.40 & 53.82 & **42.77** & **77.05** & FAIL & 51.41 \\ \hline Nyströmformer* & 37.34 & 65.75 & 81.29 & - & - & - & 61.46 \\ Transformer-LS* & 38.36 & 68.40 & 81.95 & - & - & - & 62.90 \\ \hline \hline \end{tabular} \end{table}

Table 12: Long Range Arena results on six different tasks for various efficient transformer models. * denotes evaluation as reported in the corresponding transformer paper.

### Transfer learning on New datasets for Image Classification task

Table-8 shows the transfer learning capability of the pre-trained transformer models on the CIFAR10 [11], CIFAR100 [12], Pets [13], Flowers [14], and Cars [15] datasets. We compare the number of parameters, number of FLOPs, and top-1 accuracy of the various models on these datasets. Table 8 contains two blocks: the top block compares transformer models trained on ImageNet-1k [6] (1000 target categories), whereas the bottom block compares transformer models trained on ImageNet-22K [6] (22000 target categories).
In the top block, we observe that CMT-S [11] performs best on the CIFAR10 [12] dataset (accuracy around 99.2%), while RegionViT-M [2], DeiT-B [13], and CaiT-S [13] perform best on the CIFAR100 [12] dataset (accuracy around 90.8%). On the Oxford-IIIT-Pet [13] dataset, RegionViT-M [Chen _et al._, 2022a] performs best, with accuracy around 95.5%. GFNet-H-B [14] and TNT-S [15] perform best on the Oxford-IIIT-Flowers [14] dataset (accuracy around 98.8%). On the Stanford Cars [15] dataset, CMT-S [11] performs best with 94.4% accuracy. In the bottom block, we observe that CvT-W24 [13] performs best on the CIFAR10 [12], CIFAR100 [12], and Oxford-IIIT-Flowers [14] datasets, whereas ViT-H [14] performs well on the Oxford-IIIT-Pet [13] dataset. We also analyzed model architecture across layers for the ViT [14] and CvT [13] models. We observe that ViT-H with patch size 16 performs better than ViT-Base and ViT-Large; similarly, CvT-24 performs better than CvT-13 and CvT-21. So we can conclude that larger model sizes perform better. It is difficult to claim that pre-training on the ImageNet-22K [6] dataset gives better representation features, and hence better transfer learning, than pre-training on ImageNet-1K [6], because the models compared are different: it is not clear whether the gains are due to the model or to the larger dataset with more classes.

### Long Range Arena (LRA) Benchmark

Transformers largely do not perform well on long sequence lengths due to the quadratic complexity of self-attention. Long Range Arena (LRA) [15] is an evaluation benchmark focused on assessing model quality in long-range-context scenarios. The LRA benchmark covers sequence lengths ranging from 1K to 16K tokens. It focuses on a wide range of data types and modalities, such as mathematical reasoning requiring similarity, structural, text, natural and synthetic images, and visual-spatial reasoning.
The LRA benchmark evaluates the efficiency of transformer models on a list of tasks focused on long-range data contexts: the Long ListOps task, byte-level text classification, byte-level document retrieval, image classification on a sequence of pixels, and the Pathfinder and Pathfinder-X tasks, as shown in table-11. The LRA benchmark was created based on generality (the efficient transformer model should apply to a variety of tasks), simplicity (the tasks should be simple to set up), long inputs (the input sequence length should be reasonably long, to capture the model's long-range dependency handling), challenge (the tasks should be difficult enough to leave room for improvement and to encourage future research directions), probing of diverse aspects (the set of tasks should assess different capabilities of the model, such as hierarchical/spatial structure and generalization), and non-resource-intensiveness (the tasks should be lightweight so as to be accessible to researchers). The LRA benchmark has been evaluated on recent transformer models such as Performer [13], Reformer [12], Linformer [15], Linear Transformer [16], Synthesizer [17], Sinkhorn Transformer [18], Sparse Transformer [19], Nyströmformer [10], Transformer-LS [20], Longformer [1], and BigBird [1]. None of the latest vision transformer models (like Swin [17], Twins [21], CvT [13], CSwin [6], RegionViT [2], WaveViT [14]) have been evaluated on this benchmark; it would be challenging and interesting to see their results on it. Quantitative results for the various transformer models are reported in table-12. From the table, we observe that this is a very challenging benchmark: performance in the visual domain is relatively low compared to language tasks, image classification scores are low for long sequences, and most of the time the models fail on the Path-X task.
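The quadratic self-attention cost that motivates LRA is easy to quantify: vanilla attention forms one score per (query, key) pair, so its memory and compute grow with the square of the sequence length (a back-of-the-envelope sketch, ignoring heads and hidden size):

```python
def attention_score_entries(n_tokens: int) -> int:
    # Vanilla self-attention forms an n x n score matrix:
    # one score per (query, key) pair, hence quadratic cost.
    return n_tokens * n_tokens

# Going from a 1K-token LRA task (e.g. Pathfinder) to the 16K-token
# Path-X task multiplies the score-matrix size by 256, not by 16.
ratio = attention_score_entries(16_000) // attention_score_entries(1_000)
```

This 256x blow-up between the 1K and 16K LRA tasks is why models that work on Pathfinder routinely fail on Path-X.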
## 4 Conclusion

We have discussed all ten dimensions of the Efficient 360 framework in the subsections above. The survey has opened up new research areas and directions for transformers. For instance, we ourselves are developing a spectral neural operator-based transformer, which we believe is likely to outperform state-of-the-art transformers with respect to robustness, explainability, and efficiency (with a significantly smaller number of parameters). We also notice from the diagram (Figure 1) that there is room for research in dimensions such as privacy, transparency, fairness, efficient learning, and, most importantly, inclusiveness. Due to the applicability of some of the advanced transformer models to other modalities of data, a lot of work on audio, speech, and video has opened up. Further, AI for science is another open area of research, including the applicability of advanced transformer models to high-resolution data in domains like weather forecasting and oceanography (wave modeling).
2310.11434
Is $K_{1}/K^{*}$ enhancement in heavy ion collisions a signature of chiral symmetry restoration?
We extend the recent study of $K_{1}/K^{*}$ enhancement as a signature of chiral symmetry restoration in heavy ion collisions at the Large Hadron Collider (LHC) via the kinetic approach to include the effects due to non-unity hadron fugacities during the evolution of produced hadronic matter and the temperature-dependent $K_1$ mass. Although the effect of non-unity fugacity only slightly reduces the $K_1/K^*$ enhancement due to chiral symmetry restoration, the inclusion of the temperature-dependent $K_1$ mass leads to a substantial reduction in the $K_1/K^*$ enhancement. However, the final $K_1/K^*$ ratio in peripheral collisions still shows a more than factor of two enhancement compared to the case without chiral symmetry restoration and thus remains a good signature for chiral symmetry restoration in the hot dense matter produced in relativistic heavy ion collisions.
Haesom Sung, Sungtae Cho, Che Ming Ko, Su Houng Lee, Sanghoon Lim
2023-10-17T17:43:07Z
http://arxiv.org/abs/2310.11434v2
# Is \(K_{1}/K^{*}\) enhancement in heavy ion collisions a signature for chiral symmetry restoration?

###### Abstract

We extend the recent study of \(K_{1}/K^{*}\) enhancement as a signature of chiral symmetry restoration in heavy ion collisions at the Large Hadron Collider (LHC) via the kinetic approach to include the effects due to non-unity hadron fugacities during the evolution of produced hadronic matter and the temperature-dependent \(K_{1}\) mass. Although including non-unity pion and kaon fugacities slightly reduces the \(K_{1}/K^{*}\) enhancement found in the previous study due to chiral symmetry restoration, adding the temperature-dependent \(K_{1}\) mass leads to a substantial further reduction of the \(K_{1}/K^{*}\) enhancement. However, the final \(K_{1}/K^{*}\) ratio in peripheral collisions still shows a factor of 2.4 enhancement compared to the case without chiral symmetry restoration, confirming its use as a good signature for chiral symmetry restoration in the hot dense matter produced in relativistic heavy ion collisions.

## I Introduction

According to lattice QCD calculations, the quark-gluon plasma (QGP) to hadronic matter (HM) transition at vanishing baryon chemical potential is a smooth crossover with a critical temperature \(T_{C}\) of about 156 MeV [1]. This temperature coincides with the chemical freeze-out temperature in the statistical model for particle production in relativistic heavy ion collisions at energies available from the Relativistic Heavy Ion Collider (RHIC) and the LHC [2; 3; 4]. Since chiral symmetry is restored above this temperature, masses of chiral partners are expected to become degenerate near \(T_{C}\), as indicated in studies based on the QCD sum rules for the axial vector meson \(K_{1}(1270)\) and vector meson \(K^{*}(890)\) masses [5] as well as the lattice QCD [6] and functional renormalization group [7] calculations for the axial vector meson \(a_{1}(1260)\) and vector meson \(\rho(770)\) masses.
Because the lifetimes of \(K_{1}(1270)\) and \(K^{*}(890)\), which have vacuum decay widths of 90 MeV and 47 MeV, respectively, are shorter than the duration of the hadronic stage of relativistic heavy ion collisions, their yield ratio \(K_{1}/K^{*}\) in these collisions is expected to depend on the degree of chiral symmetry restoration in the produced matter. A recent study by some of the present authors [8] has indeed found this effect in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. Using the \(K_{1}\) number at \(T_{C}\) obtained from the statistical hadronization model by taking the masses of \(K_{1}\) and \(K^{*}\) to be \(m_{K_{1}}=m_{K^{*}}=890\) MeV according to a QCD sum rule calculation [9], and assuming that the \(K_{1}\) mass immediately changes to its vacuum mass in the produced hadronic matter, they studied the effect of hadronic scatterings on the yield ratio \(K_{1}/K^{*}\) via a kinetic approach. Based on a schematic hydrodynamic model for the evolution of the produced hot dense matter, using the lattice equation of state for the QGP and the resonance hadron gas model for the HM [10], the time evolution of the \(K_{1}\) and \(K^{*}\) numbers is studied by taking into account the reactions \(K_{1}\pi\leftrightarrow K\pi\), \(K_{1}\pi\leftrightarrow K^{*}\rho\), \(K_{1}\rho\leftrightarrow K^{*}\pi\), \(K_{1}\rho\leftrightarrow K\rho\), \(K_{1}\leftrightarrow K^{*}\pi\) and \(K_{1}\leftrightarrow K\rho\) that involve the \(K_{1}\) meson, as well as the reactions \(K^{*}\pi\leftrightarrow K\rho\), \(K^{*}\rho\leftrightarrow K\pi\) and \(K^{*}\leftrightarrow K\pi\) that involve the \(K^{*}\) meson. Their results show that the ratio \(K_{1}/K^{*}\) is increased by a factor of 3 in mid-central collisions (40-50% centrality) and by a factor of 6 in peripheral collisions (70-80% centrality) compared to that without the effect of chiral symmetry restoration, although it is not much affected in central collisions (0-5% centrality). The study in Ref.
[8] has, however, neglected two important effects, namely, 1) the constancy of the effective pion, kaon and nucleon numbers during the hadronic evolution after including those from resonance decays, which is supported by the success of the statistical hadronization model in which these effective numbers are fixed at \(T_{C}\) when the chemical freeze-out takes place, and 2) the temperature dependence of the \(K_{1}\) mass in the hadronic matter [5]. As shown in a study based on a multi-phase transport (AMPT) model [11], constant effective pion, kaon and nucleon numbers are accompanied by a constant entropy per particle during the hadronic evolution, indicating non-unity pion, kaon and nucleon fugacities if the hadronic matter is modeled by a thermally equilibrated fireball that cools as it expands. In the present study, we extend the study of Ref. [8] to include this effect and also the temperature-dependent \(K_{1}\) mass given in Ref. [5], using the temperature-dependent quark condensate from Ref. [12]. Including these two effects in the kinetic equations allows us to study more realistically the \(K_{1}/K^{*}\) ratio in relativistic heavy ion collisions. Although results from the present study show a smaller \(K_{1}/K^{*}\) ratio than in Ref. [8], they do not change the conclusion that a \(K_{1}/K^{*}\) ratio enhanced over that predicted by the statistical hadronization model can serve as a good signature for chiral symmetry restoration in the hot dense matter produced in relativistic heavy ion collisions. The present paper is organized as follows. We first review in Sec. II the temperature dependence of the \(K_{1}\) mass in a hadronic matter at finite temperature and then use it in Sec. III to calculate the cross sections for \(K_{1}\) and \(K^{*}\) reactions with pion and rho mesons as well as their thermal averages. In Sec.
IV, we determine the temperature dependence of the pion, kaon, and nucleon fugacities by requiring the effective pion, kaon and nucleon numbers, which include those from resonance decays, as well as the entropy per particle to remain unchanged during the hadronic evolution. The kinetic equations for the time evolution of the \(K_{1}\) and \(K^{*}\) numbers are then given in Sec. V, with the results on the yield ratio \(K_{1}/K^{*}\) in Pb+Pb collisions presented in Sec. VI. Finally, a brief summary is given in Sec. VII.

## II Temperature-dependent \(K_{1}\) meson mass

According to the QCD sum rule study of Ref. [5], the mass difference between the \(K_{1}\) and \(K^{*}\) mesons in a hot hadronic matter depends on the quark condensate \(\langle\bar{q}q\rangle_{T}\) as \[m_{K_{1}}^{2}(T)=m_{K^{*}}^{2}+\frac{\langle\bar{q}q\rangle_{T}}{\langle\bar{ q}q\rangle_{0}}(m_{K_{1}}^{2}-m_{K^{*}}^{2}), \tag{1}\] where \(\langle\bar{q}q\rangle_{0}\) is the quark condensate in the vacuum. Neglecting the small change of the \(K^{*}\) mass with temperature [9] and using \(m_{K_{1}}\)=1.25 GeV, \(m_{K^{*}}\)=0.892 GeV, and the temperature-dependent quark condensate from Ref. [12], the temperature dependence of the \(K_{1}\) mass is shown in Fig. 1. It is seen that the \(K_{1}\) mass at \(T_{C}\) is about 1.1 GeV, instead of the \(K^{*}\) free-space mass of 0.892 GeV assumed in Ref. [8], and then gradually increases to its free-space value of 1.25 GeV.

## III \(K_{1}\) and \(K^{*}\) reaction cross sections

In this Section, we review the \(K_{1}\) and \(K^{*}\) reaction cross sections with the pion and rho meson, whose abundances dominate in the hadronic matter. These reactions include \(K_{1}+\pi\to K+\pi\), \(K_{1}+\pi\to K^{*}+\rho\), \(K_{1}+\rho\to K+\rho\), and \(K_{1}+\rho\to K^{*}+\pi\) for the \(K_{1}\) meson, and their cross sections have been calculated in Ref. [8] using the massive Yang-Mills approach with a Lagrangian involving spin-0 and spin-1 mesons [13]. Shown in Fig.
2 are the center-of-mass energy \(\sqrt{s}\) and temperature dependence of their isospin-averaged cross sections. The most important channel for \(K_{1}\) annihilation is the endothermic reaction \(K_{1}+\pi\to K^{*}+\rho\), except near its threshold, where other reactions dominate because of their exothermic nature. In calculating the pion-exchange \(t\)-channel diagram in the reaction \(K_{1}+\pi\to K^{*}+\rho\), the pion can be on shell at certain reaction energies. In this case, the reaction \(K_{1}+\pi\to K^{*}+\rho\) is the same as the two-step process of \(K_{1}\to K^{*}+\pi\) followed by \(\pi+\pi\to\rho\). Since the process \(K_{1}\to K^{*}+\pi\) is explicitly included in the kinetic equations used in our study, we therefore exclude the contribution of the on-shell pion to the pion-exchange \(t\)-channel diagram of the reaction \(K_{1}+\pi\to K^{*}+\rho\), as in Ref. [8].

Figure 1: Temperature dependence of the \(K_{1}\) mass. The solid line is from the QCD sum rule calculations of Ref. [5], while the dotted line is the one assumed in Ref. [8] with \(T_{C}=156\) MeV.

The above reactions enter the kinetic equations, which are given in Sec. V, through their thermal average over the momentum distributions of the particles in the initial state, i.e., \[\langle\sigma_{ab\to cd}v_{ab}\rangle=\frac{\int d^{3}{\bf p}_{a}d^{3}{\bf p}_{b}f_{a}({\bf p}_{a})f_{b}({\bf p}_{b})\sigma_{ab\to cd}v_{ab}}{\int d^{3}{\bf p}_{a}d^{3}{\bf p}_{b}f_{a}({\bf p}_{a})f_{b}({\bf p}_{b})}. \tag{2}\] In the above, \(f_{i}({\bf p}_{i})\) is the Boltzmann momentum distribution of particle species \(i=a,b\), i.e., \(f_{i}({\bf p}_{i})=e^{-\sqrt{{\bf p}_{i}^{2}+m_{i}^{2}}/T}\) with \(m_{i}\) being the particle mass, which we take to be the vacuum mass for the pion, kaon, rho meson, and \(K^{*}\), and the temperature-dependent mass for \(K_{1}\). The \(v_{ab}\) in the above equation is the relative velocity between the two initial particles \(a\) and \(b\).
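Two of the temperature effects introduced so far can be checked numerically: Eq. (1) interpolates the \(K_{1}\) mass between the \(K^{*}\) mass and its vacuum value, and thermally averaging a decay width with the Boltzmann distributions of Eq. (2) yields the standard time-dilation factor \(K_{1}(m/T)/K_{2}(m/T)\), where \(K_{n}\) are modified Bessel functions of the second kind. A self-contained sketch (masses and temperatures in GeV; the Bessel functions are evaluated with a crude numerical integral, and the condensate ratios used are illustrative end points, not the actual \(r(T)\) of Ref. [12]):

```python
import math

M_K1_VAC, M_KSTAR = 1.25, 0.892  # vacuum masses in GeV

def m_k1(cond_ratio):
    """Eq. (1): m_K1^2(T) = m_K*^2 + r(T) * (m_K1^2 - m_K*^2),
    with r(T) = <qq>_T / <qq>_0 the quark-condensate ratio."""
    return math.sqrt(M_KSTAR**2 + cond_ratio * (M_K1_VAC**2 - M_KSTAR**2))

def bessel_k(n, x):
    """Modified Bessel function K_n(x) from the integral representation
    K_n(x) = integral_0^inf exp(-x cosh t) cosh(n t) dt (crude Riemann sum)."""
    dt = 1e-3
    return sum(math.exp(-x * math.cosh(k * dt)) * math.cosh(n * k * dt) * dt
               for k in range(20_000))

def thermal_width(gamma_rest, mass, temp):
    """Time-dilation factor of the thermal averaged width:
    <Gamma> = Gamma(m) * K_1(m/T) / K_2(m/T), always below Gamma(m)."""
    x = mass / temp
    return gamma_rest * bessel_k(1, x) / bessel_k(2, x)

m_melted = m_k1(0.0)   # condensate fully melted -> degenerate with K*
m_vacuum = m_k1(1.0)   # vacuum condensate -> vacuum K1 mass
g_hot = thermal_width(0.090, M_K1_VAC, 0.156)  # 90 MeV vacuum width at T_C
```

Since \(K_{1}(x)/K_{2}(x)<1\), the thermally averaged width is always smaller than the rest-frame width, with the strongest reduction at high temperature.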
The temperature-dependent thermal averaged cross sections for \(K_{1}\) annihilation by the pion and rho meson are shown in Fig. 3, where it is seen that \(\langle\sigma_{K_{1}\pi\to K^{*}\rho}v\rangle\) dominates over the other thermal averaged cross sections in the temperature range of interest for the present study. Also shown in Fig. 3 are the thermal averaged decay widths of the \(K_{1}\) meson to \(K\rho\) and \(K^{*}\pi\), which are computed according to \(\langle\Gamma_{K_{1}}\rangle=\Gamma_{K_{1}}(m_{K_{1}})K_{1}(m_{K_{1}}/T)/K_{2}(m_{K_{1}}/T)\), with \(\Gamma_{K_{1}}(m_{K_{1}})\) evaluated with the inclusion of the \(\rho\) mass distribution in the final state, where \(K_{1}(x)\) and \(K_{2}(x)\) are modified Bessel functions of the second kind of order one and two, respectively; this takes into account the temperature-dependent \(K_{1}\) mass and the effect of time dilation. The \(\langle\Gamma_{K_{1}\to K\rho}\rangle\) is seen to have a larger value than \(\langle\Gamma_{K_{1}\to K^{*}\pi}\rangle\). The \(K^{*}\) annihilation processes include the reactions \(K^{*}\pi\to K\rho\) and \(K^{*}\rho\to K\pi\) and the decay process \(K^{*}\to K\pi\). Their values and thermal averages have been calculated in Ref. [14] using the free-space \(K^{*}\) mass, which we adopt since we also neglect the small temperature dependence of the \(K^{*}\) mass in the present study.

## IV Fugacities of pion, kaon and nucleon

According to the statistical model for particle production in relativistic heavy ion collisions, particle yields including contributions from resonance decays, i.e., their effective numbers, are determined at the chemical freeze-out temperature, which coincides with the QGP to HM phase transition temperature [2; 3; 4]. To maintain the effective pion, kaon and nucleon numbers, which are the ones relevant to the present study, during the expansion and cooling of the hadronic matter, it is necessary for them to acquire non-unity fugacities, as shown in Ref. [11].
In this case, the pion, kaon and nucleon momentum distributions in the Boltzmann approximation need to be multiplied by their fugacities \(z_{i}\), i.e., \(z_{i}f_{i}({\bf p})\). In terms of the thermally equilibrated density \(n_{i}^{T}=\frac{g_{i}}{(2\pi)^{3}}\int d^{3}{\bf p}f_{i}({\bf p})\) of particle species \(i\), where \(g_{i}\) is its spin and isospin degeneracy, the effective pion, kaon and nucleon densities in a hadronic matter of temperature \(T\) are then given by the sum of the densities of free pions, kaons, and nucleons as well as those from resonance decays, i.e., \[n_{\pi}^{\rm eff}(T)=z_{\pi}n_{\pi}^{T}+z_{\pi}^{2}n_{\rho}^{T}+z_{\pi}z_{K}n_{K^{*}}^{T}+z_{\pi}^{2}z_{K}n_{K_{1}}^{T}+z_{\pi}z_{N}n_{\Delta}^{T}+\cdots, \tag{3}\] \[n_{K}^{\rm eff}(T)=z_{K}n_{K}^{T}+z_{\pi}z_{K}n_{K^{*}}^{T}+z_{\pi}^{2}z_{K}n_{K_{1}}^{T}+z_{K}^{2}n_{\phi}^{T}+\cdots, \tag{4}\] \[n_{N}^{\rm eff}(T)=z_{N}n_{N}^{T}+z_{\pi}z_{N}n_{\Delta}^{T}+\cdots. \tag{5}\] In the above, \(\cdots\) denotes the contribution from strong decays of other resonances, for which we include all particles of masses up to 2 GeV in the particle data book. In obtaining the above equations, we have also used the relations \(z_{\rho}=z_{\pi}^{2}\), \(z_{K^{*}}=z_{\pi}z_{K}\), \(z_{K_{1}}=z_{\pi}^{2}z_{K}\), \(z_{\Delta}=z_{\pi}z_{N}\), etc., which follow from the assumption that all particles are in thermal and chemical equilibrium. In terms of the pion, kaon and nucleon fugacities, the entropy and particle densities of a hadronic matter at temperature \(T\) are then given by \[s(T)=-\sum_{i}g_{i}\int\frac{d^{3}{\bf p}}{(2\pi)^{3}}(z_{i}f_{i})\ln(z_{i}f_{i}), \tag{6}\] \[n(T)=\sum_{i}z_{i}n_{i}^{T}, \tag{7}\] where the summation over \(i\) again includes all particles of masses up to 2 GeV.

Figure 4: Temperature dependence of the pion (solid line), kaon (dashed line) and nucleon (dash-dotted line) fugacities, as well as the volume ratio of the hadronic matter (solid line in the inset).
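The factorization relations above (\(z_{\rho}=z_{\pi}^{2}\), \(z_{K^{*}}=z_{\pi}z_{K}\), \(z_{K_{1}}=z_{\pi}^{2}z_{K}\)) follow from writing each fugacity as \(z_{i}=e^{\mu_{i}/T}\) with chemical potentials additive in chemical equilibrium; a one-line numerical check (the temperature and chemical-potential values below are illustrative, not fitted):

```python
import math

# Check that z_i = exp(mu_i/T) with mu_rho = 2 mu_pi, mu_K* = mu_pi + mu_K,
# and mu_K1 = 2 mu_pi + mu_K reproduces the factorized fugacities used in
# Eqs. (3)-(5). T and the chemical potentials below are illustrative only.

T = 0.130                    # GeV
mu_pi, mu_K = 0.020, 0.050   # GeV

def z(mu):
    return math.exp(mu / T)

z_pi, z_K = z(mu_pi), z(mu_K)
z_rho = z(2 * mu_pi)
z_Kstar = z(mu_pi + mu_K)
z_K1 = z(2 * mu_pi + mu_K)
```

Positive effective chemical potentials give fugacities above unity, consistent with the rise of \(z_{\pi}\), \(z_{K}\) and \(z_{N}\) with decreasing temperature shown in Fig. 4.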
As shown in Eq. (6), the relativistic Boltzmann distribution is used to evaluate the entropy density, as in the calculations of the thermal averaged cross sections and decay widths given by Eq. (2), the effective pion, kaon and nucleon densities in Eqs. (3)-(5), and the total particle density in Eq. (7). Starting with an initial temperature \(T_{C}\) and volume \(V_{C}\) at hadronization of the QGP produced in relativistic heavy ion collisions, when all particles have unity fugacities according to the statistical model for particle production, the volume \(V(T)\) of the hadronic matter and the pion, kaon and nucleon fugacities \(z_{\pi}\), \(z_{K}\) and \(z_{N}\) at a later time, when the temperature has dropped to \(T\), can be obtained from the constancy of the entropy per particle and of the effective pion, kaon and nucleon numbers by solving the four equations \(n_{\pi,K,N}^{\rm eff}(T)V(T)=n_{\pi,K,N}^{\rm eff}(T_{C})V(T_{C})\) and \(s(T)/n(T)=s(T_{C})/n(T_{C})\). In Fig. 4, we show the temperature dependence of \(z_{\pi}\), \(z_{K}\), \(z_{N}\) and \(V(T)/V(T_{C})\). It is seen that their values all increase with decreasing temperature of the hadronic matter, with \(z_{N}\) increasing faster than \(z_{K}\) and \(z_{K}\) increasing faster than \(z_{\pi}\). We note that the constant entropy per particle in the hadronic matter has a value of 6.1. ## V Kinetic equations for \(K_{1}\), \(K^{*}\) and \(K\) Neglecting the creation and annihilation of strange hadrons, such as through the reaction \(\pi\pi\leftrightarrow K\bar{K}\), which has little effect on the results of the present study, the total number \(N_{0}=N_{K_{1}}+N_{K^{*}}+N_{K}\) is a constant during the hadronic evolution. 
In this case, the kinetic equation for the time evolution of the \(K_{1}\) number can be written as \[\frac{dN_{K_{1}}}{dt}=\gamma_{K_{1},K_{1}}N_{K_{1}}+\gamma_{K_{1},K^{*}}N_{K^{*}}+\gamma_{K_{1},K}N_{K}, \tag{8}\] where \[\gamma_{K_{1},K_{1}}=-(\langle\sigma_{K_{1}\pi\to K\pi}v\rangle+\langle\sigma_{K_{1}\pi\to K^{*}\rho}v\rangle)z_{\pi}n_{\pi}^{T}-(\langle\sigma_{K_{1}\rho\to K^{*}\pi}v\rangle+\langle\sigma_{K_{1}\rho\to K\rho}v\rangle)z_{\pi}^{2}n_{\rho}^{T}-\langle\Gamma_{K_{1}\to K^{*}\pi}\rangle-\langle\Gamma_{K_{1}\to K\rho}\rangle, \tag{9}\] \[\gamma_{K_{1},K^{*}}=\langle\sigma_{K^{*}\rho\to K_{1}\pi}v\rangle z_{\pi}^{2}n_{\rho}^{T}+(\langle\sigma_{K^{*}\pi\to K_{1}\rho}v\rangle+\langle\sigma_{K^{*}\pi\to K_{1}}v\rangle)z_{\pi}n_{\pi}^{T}, \tag{10}\] \[\gamma_{K_{1},K}=\langle\sigma_{K\pi\to K_{1}\pi}v\rangle z_{\pi}n_{\pi}^{T}+(\langle\sigma_{K\rho\to K_{1}\rho}v\rangle+\langle\sigma_{K\rho\to K_{1}}v\rangle)z_{\pi}^{2}n_{\rho}^{T}, \tag{11}\] with \(n_{\pi}^{T}\), \(n_{\rho}^{T}\), \(n_{K}^{T}\), \(n_{K^{*}}^{T}\) and \(n_{K_{1}}^{T}\) being, respectively, the thermally equilibrated densities of \(\pi\), \(\rho\), \(K\), \(K^{*}\) and \(K_{1}\) mesons. 
For the thermal averaged cross sections in Eqs. (10) and (11), which describe the regeneration of the \(K_{1}\) meson, they are related to the thermal averaged cross sections and decay widths in Eq. (9), which describe the annihilation of the \(K_{1}\) meson, by \(\langle\sigma_{K^{*}\rho\to K_{1}\pi}v\rangle=\langle\sigma_{K_{1}\pi\to K^{*}\rho}v\rangle\frac{n_{K_{1}}^{T}n_{\pi}^{T}}{n_{K^{*}}^{T}n_{\rho}^{T}}\), \(\langle\sigma_{K^{*}\pi\to K_{1}\rho}v\rangle=\langle\sigma_{K_{1}\rho\to K^{*}\pi}v\rangle\frac{z_{\pi}^{2}n_{K_{1}}^{T}n_{\rho}^{T}}{n_{K^{*}}^{T}n_{\pi}^{T}}\), \(\langle\sigma_{K^{*}\pi\to K_{1}}v\rangle=\langle\Gamma_{K_{1}\to K^{*}\pi}\rangle\frac{n_{K_{1}}^{T}}{n_{K^{*}}^{T}n_{\pi}^{T}}\), \(\langle\sigma_{K\pi\to K_{1}\pi}v\rangle=\langle\sigma_{K_{1}\pi\to K\pi}v\rangle\frac{z_{\pi}^{2}n_{K_{1}}^{T}}{n_{K}^{T}}\), \(\langle\sigma_{K\rho\to K_{1}\rho}v\rangle=\langle\sigma_{K_{1}\rho\to K\rho}v\rangle\frac{z_{\pi}^{2}n_{K_{1}}^{T}}{n_{K}^{T}}\), and \(\langle\sigma_{K\rho\to K_{1}}v\rangle=\langle\Gamma_{K_{1}\to K\rho}\rangle\frac{n_{K_{1}}^{T}}{n_{K}^{T}n_{\rho}^{T}}\). 
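These relations guarantee that each channel's gain and loss rates cancel when the \(K_{1}\) abundance reaches its chemically equilibrated value. A small numerical check for the \(K_{1}\pi\leftrightarrow K\pi\) channel, with all inputs as illustrative placeholder numbers:

```python
# Forward cross section <sigma_{K1 pi -> K pi} v>, fugacities, and
# thermal densities: illustrative values, not the paper's.
sigma_fwd = 2.0
z_pi, z_K = 1.3, 1.2
n_pi_T, n_K1_T, n_K_T = 0.30, 0.005, 0.05

# Reverse cross section from the detailed-balance relation above:
# <sigma_{K pi -> K1 pi} v> = <sigma_{K1 pi -> K pi} v> * z_pi^2 * n_K1^T/n_K^T.
sigma_rev = sigma_fwd * z_pi**2 * n_K1_T / n_K_T

# At chemical equilibrium N_K1 ~ z_pi^2 z_K n_K1^T V and N_K ~ z_K n_K^T V,
# so the K1 loss and gain rates through this channel balance exactly.
loss_rate = sigma_fwd * z_pi * n_pi_T * (z_pi**2 * z_K * n_K1_T)
gain_rate = sigma_rev * z_pi * n_pi_T * (z_K * n_K_T)
```

The same cancellation holds channel by channel for all the relations listed above.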
Similarly, the kinetic equation for the time evolution of the \(K^{*}\) number is given by \[\frac{dN_{K^{*}}}{dt}=\gamma_{K^{*},K_{1}}N_{K_{1}}+\gamma_{K^{*},K^{*}}N_{K^{*}}+\gamma_{K^{*},K}N_{K}, \tag{12}\] where \[\gamma_{K^{*},K_{1}}=\langle\sigma_{K_{1}\pi\to K^{*}\rho}v\rangle z_{\pi}n_{\pi}^{T}+\langle\sigma_{K_{1}\rho\to K^{*}\pi}v\rangle z_{\pi}^{2}n_{\rho}^{T}+\langle\Gamma_{K_{1}\to K^{*}\pi}\rangle, \tag{13}\] \[\gamma_{K^{*},K^{*}}=-(\langle\sigma_{K^{*}\pi\to K_{1}\rho}v\rangle+\langle\sigma_{K^{*}\pi\to K\rho}v\rangle+\langle\sigma_{K^{*}\pi\to K_{1}}v\rangle)z_{\pi}n_{\pi}^{T}-(\langle\sigma_{K^{*}\rho\to K\pi}v\rangle+\langle\sigma_{K^{*}\rho\to K_{1}\pi}v\rangle)z_{\pi}^{2}n_{\rho}^{T}-\langle\Gamma_{K^{*}\to K\pi}\rangle, \tag{14}\] \[\gamma_{K^{*},K}=(\langle\sigma_{K\pi\to K^{*}\rho}v\rangle+\langle\sigma_{K\pi\to K^{*}}v\rangle)z_{\pi}n_{\pi}^{T}+\langle\sigma_{K\rho\to K^{*}\pi}v\rangle z_{\pi}^{2}n_{\rho}^{T}. \tag{15}\] As in the case of the \(K_{1}\) meson, the thermal averaged cross sections \(\langle\sigma_{K\pi\to K^{*}\rho}v\rangle\), \(\langle\sigma_{K\rho\to K^{*}\pi}v\rangle\), and \(\langle\sigma_{K\pi\to K^{*}}v\rangle\) in Eq. (15) are related to the thermal averaged cross sections \(\langle\sigma_{K^{*}\rho\to K\pi}v\rangle\) and \(\langle\sigma_{K^{*}\pi\to K\rho}v\rangle\) and the thermal averaged width \(\langle\Gamma_{K^{*}\to K\pi}\rangle\) in Eq. (14), which we take from Ref. [14], by \(\langle\sigma_{K\pi\to K^{*}\rho}v\rangle=\langle\sigma_{K^{*}\rho\to K\pi}v\rangle\frac{z_{\pi}^{2}n_{K^{*}}^{T}n_{\rho}^{T}}{n_{K}^{T}n_{\pi}^{T}}\), \(\langle\sigma_{K\rho\to K^{*}\pi}v\rangle=\langle\sigma_{K^{*}\pi\to K\rho}v\rangle\frac{n_{K^{*}}^{T}n_{\pi}^{T}}{n_{K}^{T}n_{\rho}^{T}}\), and \(\langle\sigma_{K\pi\to K^{*}}v\rangle=\langle\Gamma_{K^{*}\to K\pi}\rangle\frac{n_{K^{*}}^{T}}{n_{K}^{T}n_{\pi}^{T}}\). ## VI Results We solve the kinetic equations Eqs. (8) and (12) in Sec. 
V using the thermal averaged \(K_{1}\) and \(K^{*}\) reaction cross sections and \(K_{1}\) decay widths given in Sec. III and the thermal averaged \(K^{*}\) and \(K\) reaction cross sections and \(K^{*}\) decay width from Ref. [8]. For the time dependence of the temperature of the hadronic matter after the QGP to HM phase transition in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV, we take it from Ref. [8] based on a schematic ideal hydrodynamics with an equation of state from the LQCD [15]. Although keeping constant entropy per particle as in the present study automatically takes into account the strong viscous effect in the hadronic matter because of the increase of total particle number from the decay of resonances, it has been shown in Ref. [10] that adding viscosity in the expanding hadronic matter does not affect much the time evolution of the temperature of the hadronic matter. With an initial chemical freeze-out temperature \(T_{C}=156\) MeV as in Ref. [8] and the initial volume of 6,076 fm\({}^{3}\), 938 fm\({}^{3}\), and 135 fm\({}^{3}\) from Ref. [8] for the three collision centralities of 0-5%, 40-50% and 70-80%, respectively, the effective pion, kaon, and nucleon numbers, which remain unchanged during the hadronic evolution in our study, agree with those measured by the ALICE Collaboration [16]. For the kinetic freeze-out temperatures, we take their values to be 90 MeV, 108 MeV, and 147 MeV, respectively, for the three centralities 0-5%, 40-50% and 70-80% according to a blast wave model fit to the measured particle transverse momentum spectra by the ALICE Collaboration [16]. In Fig. 5, we show the yield ratio \(K_{1}/K^{*}\) from the solutions of the kinetic equations. 
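Structurally, Eqs. (8) and (12), together with \(N_{K}=N_{0}-N_{K_{1}}-N_{K^{*}}\), form a linear system \(dN/dt=\Gamma\,N\) that can be integrated with any standard ODE stepper. The following is a toy sketch with constant, illustrative rate coefficients, arranged so that each column of the rate matrix sums to zero, which is how the conservation of \(N_{0}\) is encoded:

```python
def rk4_step(rates, N, dt):
    # One classical Runge-Kutta step for dN/dt = rates @ N.
    def f(y):
        return [sum(rates[i][j] * y[j] for j in range(3)) for i in range(3)]
    k1 = f(N)
    k2 = f([N[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = f([N[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = f([N[i] + dt * k3[i] for i in range(3)])
    return [N[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

# Toy constant rate coefficients (fm^-1) for the vector (N_K1, N_K*, N_K);
# every loss from one species is a gain for another, so each column sums
# to zero and N_0 = N_K1 + N_K* + N_K is conserved.
rates = [[-0.50, 0.02, 0.01],
         [0.45, -0.12, 0.05],
         [0.05, 0.10, -0.06]]
N = [1.0, 3.0, 10.0]
for _ in range(2000):  # integrate to t = 20 fm with dt = 0.01 fm
    N = rk4_step(rates, N, 0.01)
```

In the actual calculation the coefficients depend on time through \(T(t)\) and the fugacities, and the integration is stopped at kinetic freeze-out.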
Results including both the effect of non-unity pion and kaon fugacities as well as the temperature-dependent \(K_{1}\) mass are shown by the solid red line, with \(K_{1}/K^{*}\) having values of 0.357 for peripheral collisions, 0.158 for mid-central collisions, and 0.08 for central collisions. Compared to the results of Ref. [8], shown by the gray dashed line, in which both pion and kaon fugacities are taken to be one and the \(K_{1}\) has a mass equal to the \(K^{*}\) mass at \(T_{C}\) and the free-space mass below \(T_{C}\), the final \(K_{1}/K^{*}\) ratio from the present study is a factor of 2.5 smaller for the 70-80% collision centrality, a factor of 1.7 smaller for the 40-50% collision centrality, and a factor of 1.4 larger for the 0-5% collision centrality. Although the collision centrality dependence of the \(K_{1}/K^{*}\) ratio from the present study is thus weaker than that in Ref. [8], it still shows an enhancement in peripheral and mid-central collisions compared to the case without including the chiral symmetry restoration effect, shown by the black line, indicating that an enhanced \(K_{1}/K^{*}\) yield ratio in relativistic heavy ion collisions at these collision centralities remains a good signature for the chiral symmetry restoration. We note that the reduced \(K_{1}/K^{*}\) ratio in peripheral collisions in the present study compared to that in Ref. [8] is mainly due to the use of the more realistic temperature-dependent \(K_{1}\) mass. As shown by the solid blue line, without the latter effect, the non-unity pion and kaon fugacities give a \(K_{1}/K^{*}\) ratio that is only about 13% smaller in peripheral collisions compared to the results from Ref. [8]. Also shown in Fig. 5 by the dashed cyan line is the \(K_{1}/K^{*}\) ratio from the statistical model, which is determined at \(T_{C}\) and has a value of about 0.14, independent of the collision centrality. 
We would like to point out that among the many terms in the kinetic equations for the \(K_{1}\) and \(K^{*}\) numbers during the hadronic evolution, the dominant terms are those involving the \(K_{1}\) and \(K^{*}\) decay widths, i.e., \(\langle\Gamma_{K_{1}\to K^{*}\pi}\rangle\), \(\langle\Gamma_{K_{1}\to K\rho}\rangle\), and \(\langle\Gamma_{K^{*}\to K\pi}\rangle\), and the thermal average of their reverse processes. Including only these terms increases the \(K_{1}/K^{*}\) yield ratio by at most 18% in essentially all considered scenarios and collision centralities. Since the width of \(\rho\) meson at finite temperature is known to be significantly broadened [17], the \(K_{1}\) width would become larger after this effect is taken into account. In the limit of very large \(\rho\) meson width and thus large \(K_{1}\) width, the \(K_{1}/K^{*}\) ratio would approach the thermal limit given by the kinetic freeze-out temperature \(T_{K}\). For the scenario of non-unity fugacities and temperature-dependent \(K_{1}\) mass considered in the present study, the \(K_{1}/K^{*}\) ratio in this limit is 0.182 for peripheral collisions, 0.084 for mid-central collisions, and 0.053 for central collisions, which are all smaller than corresponding values shown in Fig. 5 from solving the kinetic equations as expected. Compared to the case without the chiral symmetry restoration effect, which has the values of 0.151, 0.124, and 0.081 for peripheral, mid-central and central collisions, respectively, the \(K_{1}/K^{*}\) ratio is still enhanced in peripheral collisions in this limit of fast chemical equilibration. The enhanced \(K_{1}/K^{*}\) ratio is thus a robust signature of the chiral symmetry restoration effect in hot dense matter produced in peripheral relativistic heavy ion collisions. ## VII Summary In the present study, we have extended the study of Ref. 
[8] on the use of the enhanced yield ratio \(K_{1}/K^{*}\) in relativistic heavy ion collisions as a probe for chiral symmetry restoration by including non-unity pion and kaon fugacities as well as the temperature-dependent \(K_{1}\) mass in the expanding hadronic matter. Our results show that, although including non-unity pion and kaon fugacities only slightly reduces the \(K_{1}/K^{*}\) enhancement found in Ref. [8] due to chiral symmetry restoration, the inclusion of the temperature-dependent \(K_{1}\) mass leads to a substantial reduction in the \(K_{1}/K^{*}\) enhancement. However, the final \(K_{1}/K^{*}\) ratio in peripheral collisions still shows a factor of 2.4 enhancement compared to the case without chiral symmetry restoration. Figure 5: The yield ratio \(K_{1}/K^{*}\) in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV at three centralities of 0-5%, 40-50% and 70-80% for various scenarios. The present study thus confirms the conclusion of Ref. [8] that the enhanced \(K_{1}/K^{*}\) ratio can be used as a signature for chiral symmetry restoration in the hot dense matter produced in ultra-relativistic heavy-ion collisions. ## Acknowledgements This work was supported by the Korea National Research Foundation under Grant No. RS-2023-00280831 (S.C.), No. 2023R1A2C300302311 (S.H.L.) and Project No. NRF-2008-00458 (S.L.), and the U.S. Department of Energy under Award No. DE-SC0015266 (C.M.K.). S. H. Lee also acknowledges the support from the Samsung Science and Technology Foundation under Project No. SSTF-BA1901-04. H. Sung thanks the Cyclotron Institute of Texas A&M University for its hospitality during her stay as a visiting scholar supported by a graduate fellowship from the National Research Foundation of Korea under Award No. NRF-2022K1A3A1A12097807.
2310.06148
Understanding Transfer Learning and Gradient-Based Meta-Learning Techniques
Deep neural networks can yield good performance on various tasks but often require large amounts of data to train them. Meta-learning received considerable attention as one approach to improve the generalization of these networks from a limited amount of data. Whilst meta-learning techniques have been observed to be successful at this in various scenarios, recent results suggest that when evaluated on tasks from a different data distribution than the one used for training, a baseline that simply finetunes a pre-trained network may be more effective than more complicated meta-learning techniques such as MAML, which is one of the most popular meta-learning techniques. This is surprising as the learning behaviour of MAML mimics that of finetuning: both rely on re-using learned features. We investigate the observed performance differences between finetuning, MAML, and another meta-learning technique called Reptile, and show that MAML and Reptile specialize for fast adaptation in low-data regimes of similar data distribution as the one used for training. Our findings show that both the output layer and the noisy training conditions induced by data scarcity play important roles in facilitating this specialization for MAML. Lastly, we show that the pre-trained features as obtained by the finetuning baseline are more diverse and discriminative than those learned by MAML and Reptile. Due to this lack of diversity and distribution specialization, MAML and Reptile may fail to generalize to out-of-distribution tasks whereas finetuning can fall back on the diversity of the learned features.
Mike Huisman, Aske Plaat, Jan N. van Rijn
2023-10-09T20:51:49Z
http://arxiv.org/abs/2310.06148v1
# Understanding Transfer Learning and Gradient-Based Meta-Learning Techniques+ ###### Abstract Deep neural networks can yield good performance on various tasks but often require large amounts of data to train them. Meta-learning received considerable attention as one approach to improve the generalization of these networks from a limited amount of data. Whilst meta-learning techniques have been observed to be successful at this in various scenarios, recent results suggest that when evaluated on tasks from a different data distribution than the one used for training, a baseline that simply finetunes a pre-trained network may be more effective than more complicated meta-learning techniques such as MAML, which is one of the most popular meta-learning techniques. This is surprising as the learning behaviour of MAML mimics that of finetuning: both rely on re-using learned features. We investigate the observed performance differences between finetuning, MAML, and another meta-learning technique called Reptile, and show that MAML and Reptile specialize for fast adaptation in low-data regimes of similar data distribution as the one used for training. Our findings show that both the output layer and the noisy training conditions induced by data scarcity play important roles in facilitating this specialization for MAML. Lastly, we show that the pre-trained features as obtained by the finetuning baseline are more diverse and discriminative than those learned by MAML and Reptile. Due to this lack of diversity and distribution specialization, MAML and Reptile may fail to generalize to out-of-distribution tasks whereas finetuning can fall back on the diversity of the learned features. ## 1 Introduction Deep learning techniques have enabled breakthroughs in various areas such as game-playing (Silver et al., 2016; Mnih et al., 2015), image recognition (Krizhevsky et al., 2012; He et al., 2015), and machine translation Wu et al. (2016). 
However, deep neural networks are notoriously _data-hungry_ (LeCun et al., 2015), limiting their successes to domains where sufficient data and computing resources are available (Hospedales et al., 2021; Huisman et al., 2021). _Meta-learning_ (Schaul and Schmidhuber, 2010; Schmidhuber, 1987; Thrun, 1998; Brazdil et al., 2022) is one approach to reduce these limitations by learning efficient deep learning algorithms across different tasks. By presenting the learning algorithm with different tasks that presumably share similarities with the task of interest, it is expected to learn the task of interest more efficiently than when learning it from scratch. This approach involves two different time scales of learning: at the _inner-level_, a given task is learned, and at the _outer-level_ the learning algorithm is improved over tasks by adjusting the hyperparameters. Seminal approaches for this are MAML and Reptile. While the field attracted much attention, recent results (Chen et al., 2019; Tian et al., 2020; Mangla et al., 2020) suggest that simply pre-training a network on a large dataset and _finetuning_ only the final layer of the network may be more effective at learning new image classification tasks quickly than more complicated meta-learning techniques such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018) when the data distribution is different from the one used for training. In contrast, MAML and Reptile often outperform finetuning when the data distribution is similar to the one used during training. These phenomena are not well understood and are surprising, as Raghu et al. (2020) have shown that the adaptation behaviour of MAML resembles that of finetuning when learning new tasks: most of the changes take place in the final layer of the network while the body of the network is mostly kept frozen. 
In this work, we aim to find an explanation for the observed performance differences between finetuning, MAML, and Reptile. More specifically, we aim to answer the following two research questions:

1. Why do MAML and Reptile outperform finetuning in _within-distribution_ settings?
2. Why can finetuning outperform gradient-based meta-learning techniques such as MAML and Reptile (Nichol et al., 2018) when the test data distribution diverges from the training data distribution?

Both questions focus on the **few-shot image classification setting**. We base our work on MAML, Reptile and finetuning, as these are influential techniques that have sparked a large body of follow-up methods that use the underlying ideas. Since the questions that we aim to answer are inherently harder than a simple performance comparison, answering them for the models at the basis of this body of literature is the right starting point. We think that developing a better understanding of these influential methods is of great value and can cascade further onto the more complex methods built on top of them. Based on our analysis of the learning objectives of the three techniques (finetuning, MAML, Reptile), we hypothesize that MAML and Reptile specialize for adaptation in low-data regimes of tasks from the training distribution, giving them an advantage in within-distribution settings. However, since they may settle for initial features that are inferior compared with finetuning, because they disregard, or largely disregard, the pre-adaptation performance, they may perform comparatively worse when the test data distribution diverges from the training distribution. The primary contributions of our work are the following. First, we show the importance of the output layer weights and data scarcity during training for Reptile and MAML to facilitate specialization for quick adaptation in low-data regimes of similar distributions, giving them an advantage compared with finetuning. 
Second, we show that the pre-trained features of the finetuning technique are more diverse and discriminative than those learned by MAML and Reptile, which can be advantageous in out-of-distribution settings.1 Footnote 1: All code for reproducing our results can be found at [https://github.com/mikehuisman/transfer-meta-feature-representations](https://github.com/mikehuisman/transfer-meta-feature-representations) ## 2 Related work Meta-learning is a popular approach to enable deep neural networks to learn from little data by learning an efficient learning algorithm. Many architectures and model types have been proposed, such as MAML (Finn et al., 2017), the meta-learner LSTM (Ravi and Larochelle, 2017), TURTLE (Huisman et al., 2022) and MetaOptNet (Lee et al., 2019). However, our understanding of newly proposed techniques remains limited in some cases. For example, different techniques use different backbones, which raises the question of whether performance differences between techniques are due to the new model types or to the differences in the backbones used (Huisman et al., 2021). Chen et al. (2019) were among the first to investigate this question by performing a fair comparison between popular meta-learning techniques, including MAML (Finn et al., 2017), on few-shot image classification benchmarks such as miniImageNet (Vinyals et al., 2016; Ravi and Larochelle, 2017) and CUB (Wah et al., 2011). Their results show that MAML often outperforms finetuning when the test tasks come from a similar data distribution as the training distribution when using shallow backbones. When the backbone becomes deeper and/or the domain differences between training and test tasks increase, however, this performance gap is reduced and, in some cases, finetuning outperforms MAML. In addition to these findings by Chen et al. (2019), Tian et al. 
(2020) demonstrate that simply finetuning a pre-trained feature embedding module yields better performance than popular meta-learning techniques (including MAML) on few-shot benchmarks. Mangla et al. (2020) and Yang et al. (2021) further support this finding as they have proposed new few-shot learning techniques based on finetuning pre-trained networks which significantly outperform meta-learning techniques. These performance differences between simple finetuning and more sophisticated techniques such as MAML may be surprising, as Raghu et al. (2020) found that the learning behaviour of MAML is similar to that of finetuning on image classification benchmarks. More specifically, they compared the feature representations of MAML before and after task-specific adaptation, and show that MAML relies mostly on feature re-use instead of quick adaptation because the body of the network is barely adjusted, which resembles the learning dynamics of finetuning (see Section 3.3). Collins et al. (2020) compared the feature representations of MAML and the finetuning method (expected risk minimization) in linear regression settings and found that MAML finds an initialization closer to the hard tasks, characterized by their gentle loss landscapes with small gradients. We demonstrate a similar property: MAML has greater flexibility in picking an initialization as long as the post-adaptation performance is good. In this work, we aim to unite the findings of Raghu et al. (2020) and Chen et al. (2019) by finding an answer to the question of why finetuning can outperform meta-learning techniques such as MAML and Reptile (Nichol et al., 2018) in some image classification scenarios while it is outperformed in other scenarios (when using a shallow backbone or when train/test task distributions are similar). 
## 3 Background In this section, we briefly revise supervised learning and few-shot learning (the main problem setting used in this work) and describe finetuning, MAML, and Reptile in that context. ### Supervised learning In the _supervised learning_ setting, we have a joint probability distribution over inputs \(\mathbf{x}\) and corresponding outputs \(\mathbf{y}\), i.e., \(p(\mathbf{x},\mathbf{y})\). In the context of deep learning, the goal is to build deep neural networks that can predict for any given input \(\mathbf{x}\) the correct output \(\mathbf{y}\). Throughout this paper, we assume that the neural network architecture \(f\) is fixed and that we only wish to find a set of parameters \(\theta\) such that the network predictions \(f_{\theta}(\mathbf{x})\) are as good as possible. This can be done by updating the parameters \(\theta\) in order to minimize a loss function \(\mathcal{L}_{\mathbf{x}_{i},\mathbf{y}_{i}}(\theta)\) that captures how well the network parameterized by \(\theta\) is performing on input \(\mathbf{x}_{i}\) and corresponding output \(\mathbf{y}_{i}\). Here, network parameters \(\theta\) are a weight matrix, where \(\theta_{(i:j)}\) represent the weights of the \(i^{th}\) until the \(j^{th}\) layer (inclusive), where \(0<i<j\leq L\). Thus, under the joint distribution \(p(\mathbf{x},\mathbf{y})\), we wish to find \[\operatorname*{arg\,min}_{\theta}\operatorname*{\mathbb{E}}_{ \mathbf{x}_{i},\mathbf{y}_{i}}\left[\mathcal{L}_{\mathbf{x}_{i},\mathbf{y}_{i} }(\theta)\right], \tag{1}\] where \((\mathbf{x}_{i},\mathbf{y}_{i})\) are sampled from the joint distribution \(p(\mathbf{x},\mathbf{y})\), i.e., \(\mathbf{x}_{i},\mathbf{y}_{i}\sim p(\mathbf{x},\mathbf{y})\). 
The most common way to approximate these parameters is by performing gradient descent on that loss function, which means that we update the parameters in the direction of the steepest descent \[\theta^{(t+1)}=\theta^{(t)}-\alpha\nabla_{\theta^{(t)}} \operatorname*{\mathbb{E}}_{\mathbf{x}_{i},\mathbf{y}_{i}}\left[\mathcal{L}_ {\mathbf{x}_{i},\mathbf{y}_{i}}(\theta^{(t)})\right]. \tag{2}\] Here, \(\nabla_{\theta^{(t)}}\) is the gradient with respect to \(\theta^{(t)}\), \(t\) indicates the time step, and \(\alpha\) the learning rate or step size. ### Few-shot learning Few-shot learning is a special case of supervised learning, where the goal is to learn new tasks from only a limited number of examples, which is the main focus of this work and the techniques described below. In order to enhance the learning process on a limited number of examples, the learner is presented with an additional set of tasks, so that it can learn about the learning process. Here, every task \(\mathcal{T}_{j}\) consists of a data distribution \(p_{j}(\mathbf{x},\mathbf{y})\) and a loss function \(\mathcal{L}\). Since the loss function is often assumed to be fixed across all tasks, we henceforth use the term 'task' to refer to the task data distribution \(p_{j}(\mathbf{x},\mathbf{y})\) or a sample from this distribution, depending on the context. One notable exception is made in Section 5.1, where we abstract away from data distributions and define a task purely abstractly as a loss function. Tasks are commonly sampled from a large meta-dataset \(\mathcal{D}\sim p_{s}(\mathbf{x},\mathbf{y})\), which itself is a sample from a source distribution \(p_{s}\). In the case of classification, this is often done as follows. Suppose that the source distribution from which dataset \(\mathcal{D}\) is sampled is defined over a set of classes \(\mathcal{Y}=\{c_{1},c_{2},\ldots,c_{n}\}\). 
Then, we can create tasks \(\mathcal{T}_{j}\) by considering only a subspace of this source distribution corresponding to a subset of classes \(S_{j}\subseteq\mathcal{Y}\). The method can then be evaluated on tasks sampled from a disjoint subset of classes \(S_{m}\subseteq\mathcal{Y}\), where \(S_{m}\cap S_{j}=\emptyset\). Below, we give a concrete example of this procedure for the popular **\(N\)-way \(k\)-shot classification** setting (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017). Suppose that we have a classification dataset \(\mathcal{D}=\{(\mathbf{x}_{1},\mathbf{y}_{1}),(\mathbf{x}_{2},\mathbf{y}_{2} ),\ldots,(\mathbf{x}_{M},\mathbf{y}_{M})\}\) of examples. Then, we can create an \(N\)-way \(k\)-shot task \(\mathcal{T}_{j}\) by sampling a subset of \(N\) labels \(S_{j}\subseteq\mathcal{Y}\), where \(|S_{j}|=N\). Moreover, we sample precisely \(k\) examples for every class to form a training set, or _support set_ \(D^{tr}_{\mathcal{T}_{j}}\), for that task, consisting of \(|D^{tr}_{\mathcal{T}_{j}}|=N\cdot k\) examples. Lastly, the test set, or _query set_ \(D^{te}_{\mathcal{T}_{j}}\), is obtained by sampling examples of the subset of classes \(S_{j}\) from \(\mathcal{D}\) that are not present in the support set. Techniques then train on the support set and are evaluated on the query set in order to measure how well they have learned the task. This is the problem setting that we will use throughout this work. The deployment of an algorithm for few-shot learning is often done in three stages. In the _meta-training_ stage, the algorithm is presented with training tasks and uses them to adjust the prior, such as the initialization parameters. After every X training tasks, the _meta-validation_ stage takes place, where the learner is validated on unseen meta-validation tasks. 
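The episode construction described above can be sketched as follows; `dataset` is assumed to map each class label to its list of examples, a simplification of how benchmark loaders are usually organized:

```python
import random

def sample_task(dataset, n_way, k_shot, k_query):
    # Sample an N-way k-shot episode: pick N classes, then k support
    # and k_query query examples per class; support and query are disjoint.
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(dataset[label], k_shot + k_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy dataset: five classes with twenty dummy examples each.
data = {c: [f"{c}{i}" for i in range(20)] for c in "abcde"}
support, query = sample_task(data, n_way=3, k_shot=2, k_query=4)
```

Meta-training, meta-validation, and meta-test splits are then obtained by partitioning the class set before any episode is sampled.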
Finally, after the training is completed, the learner with the best validation performance is evaluated in the _meta-test_ phase, where the learner is confronted with new tasks that have not been seen during training and validation. Importantly, the tasks between meta-training, meta-validation, and meta-test phases are disjoint. For example, in image classification, the classes in the meta-training tasks are not allowed to occur in meta-test tasks as we are interested in measuring the learning ability instead of memorization ability. In regression settings, every task has its own ground-truth function (as in Section 5.1). For example, every task could be a sine wave with a certain phase and amplitude (Finn et al., 2017). ### Finetuning Achieving good generalization by minimizing the objective in Equation 1 using gradient-based optimization often requires large amounts of data. This raises the question of how we can perform few-shot learning of tasks. The transfer learning technique called _finetuning_ tackles this problem as follows. In the _pre-training phase_, it minimizes Equation 1 on a given source distribution \(p_{s}(\mathbf{x},\mathbf{y})\) using gradient descent as shown in Equation 2. This leads to a sequence of updates that directly update the initialization parameters. Then, it freezes the feature extraction module of the network: all parameters of the network through the penultimate layer, i.e., \(\theta_{(1:L-1)}\) where \(L\) is the number of layers. When presented with a target distribution \(p_{j}(\mathbf{x},\mathbf{y})\) from which we can sample fewer data, we can simply re-use the learned feature embedding module \(f_{\theta_{(1:L-1)}}\) (all hidden layers of the network excluding the output layer) for this new problem. Then, in the _finetuning phase_, it only trains the parameters in the final layer of the network \(\theta_{(L)}\). 
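The finetuning phase can be sketched as training only the output weights, here a binary logistic head, on top of frozen features \(f_{\theta_{(1:L-1)}}(\mathbf{x})\). The "embeddings" below are tiny hand-made vectors, purely for illustration:

```python
import math

def finetune_head(features, labels, lr=1.0, steps=300):
    # Train only the output layer (a binary logistic head) by gradient
    # descent; the feature extractor that produced `features` stays frozen.
    d = len(features[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(steps):
        gw, gb = [0.0] * d, 0.0
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(d):
                gw[i] += (p - y) * x[i]
            gb += p - y
        w = [wi - lr * g / len(features) for wi, g in zip(w, gw)]
        b -= lr * gb / len(features)
    return w, b

# Frozen one-dimensional "embeddings" that happen to be separable.
feats = [[0.0], [0.2], [0.8], [1.0]]
labs = [0, 0, 1, 1]
w, b = finetune_head(feats, labs)
```

The reduced number of trainable parameters is what keeps this procedure from overfitting in the few-shot regime, at the cost of never adapting the representation itself.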
By reducing the number of trainable parameters on the target problem, this technique effectively reduces the model complexity and prevents overfitting issues associated with the data scarcity in few-shot learning scenarios. This comes at the cost of not being able to adjust the feature representations of inputs. As a consequence, this approach fails when the pre-trained embedding module fails to produce informative representations of the target problem inputs. ### Reptile Instead of joint optimization on the source distribution, _Reptile_(Nichol et al., 2018) is a meta-learning algorithm and thus aims to learn how to learn. For this, it splits the source distribution \(p_{s}(\mathbf{x},\mathbf{y})\) into a number of smaller task distributions \(p_{1}(\mathbf{x},\mathbf{y}),p_{2}(\mathbf{x},\mathbf{y}),\ldots,p_{n}( \mathbf{x},\mathbf{y})\), corresponding to tasks \(\mathcal{T}_{1},\mathcal{T}_{2},\ldots\mathcal{T}_{n}\). On a single task \(\mathcal{T}_{j}\) for \(j\in\{1,\ldots,n\}\), its objective is to minimize Equation 1 under the task distribution \(p_{j}(\mathbf{x},\mathbf{y})\) using \(T\) gradient descent update steps as shown in Equation 2. This results in a sequence of weight updates \(\theta\rightarrow\theta_{j}^{(1)}\rightarrow\ldots\rightarrow\theta_{j}^{(T)}\). After task-specific adaptation, the initial parameters \(\theta\) are moved into the direction of \(\theta_{j}^{(T)}\) \[\theta=\theta+\epsilon\left(\theta_{j}^{(T)}-\theta\right), \tag{3}\] where \(\epsilon\) is the step size. Intuitively, this update interpolates between the current initialization parameters \(\theta\) and the task-specific parameters \(\theta_{j}^{(T)}\). The updated initialization \(\theta\) is then used as starting point when presented with new tasks, and the same process is repeated. 
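On toy one-dimensional quadratic tasks \(\mathcal{L}_{j}(\theta)=(\theta-c_{j})^{2}\), the inner loop and the interpolation update of Equation 3 can be sketched as follows; all task constants and step sizes are illustrative choices, not values from the paper:

```python
import random

def reptile(task_optima, theta, inner_lr=0.05, eps=0.1,
            inner_steps=10, rounds=200):
    # Each task T_j is a quadratic L_j(t) = (t - c_j)^2 with optimum c_j.
    for _ in range(rounds):
        c = random.choice(task_optima)       # sample a training task
        t = theta
        for _ in range(inner_steps):         # inner-loop gradient descent
            t -= inner_lr * 2.0 * (t - c)
        theta += eps * (t - theta)           # Equation 3: interpolate
    return theta

random.seed(0)
theta = reptile([-2.0, 0.0, 2.0, 4.0], theta=10.0)
```

The initialization drifts into the region spanned by the task optima, so each new task from the same distribution can be learned in a few inner steps.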
It is easy to show that this update procedure corresponds to performing first-order optimization of the multi-step objective

\[\operatorname*{arg\,min}_{\theta}\operatorname*{\mathbb{E}}_{\mathcal{T}_{j}\sim p(\mathcal{T})}\left(\sum_{t=0}^{T-1}\operatorname*{\mathbb{E}}_{\mathbf{x}_{i},\mathbf{y}_{i}\sim p_{j}}\left[\mathcal{L}_{t+1}(\theta_{j}^{(t)})\right]\right), \tag{4}\]

where \(\mathcal{L}_{t+1}\) is shorthand for the loss on a mini-batch sampled at time step \(t\).

### MAML

Another popular gradient-based meta-learning technique is MAML (Finn et al., 2017). Just like Reptile, MAML also splits the source distribution \(p_{s}(\mathbf{x},\mathbf{y})\) into a number of smaller task distributions \(p_{1}(\mathbf{x},\mathbf{y}),p_{2}(\mathbf{x},\mathbf{y}),\ldots,p_{n}(\mathbf{x},\mathbf{y})\), corresponding to tasks \(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{n}\). On the training tasks, it aims to learn a weight initialization \(\theta\) from which new tasks can be learned more efficiently. However, instead of optimizing a multi-step loss function, MAML only optimizes the final performance after task-specific adaptation. More specifically, this means that MAML is only interested in the performance of the final weights \(\theta_{j}^{(T)}\) on a task and not in the intermediate performances of weights \(\theta_{j}^{(t)}\) for \(t<T\). In other words, MAML aims to find

\[\operatorname*{arg\,min}_{\theta}\operatorname*{\mathbb{E}}_{\mathcal{T}_{j}\sim p(\mathcal{T})}\left(\operatorname*{\mathbb{E}}_{\mathbf{x}_{i},\mathbf{y}_{i}\sim p_{j}}\left[\mathcal{L}_{T}(\theta_{j}^{(T)})\right]\right).
\tag{5}\]

To find these parameters, MAML updates its initialization parameters as follows:

\[\theta=\theta-\beta\nabla_{\theta}\mathcal{L}_{T+1}(\theta_{j}^{(T)}), \tag{6}\]

where \(\beta\) is the learning rate and \(\nabla_{\theta}\mathcal{L}_{T+1}(\theta_{j}^{(T)})=\nabla_{\theta_{j}^{(T)}}\mathcal{L}_{T+1}(\theta_{j}^{(T)})\nabla_{\theta}\theta_{j}^{(T)}\). The factor \(\nabla_{\theta}\theta_{j}^{(T)}\) contains second-order gradients and can be ignored by assuming that \(\nabla_{\theta}\theta_{j}^{(T)}=I\) is the identity matrix, in a similar fashion to what Reptile does. This assumption gives rise to _first-order_ MAML (fo-MAML), which significantly increases the training efficiency in terms of running time and memory usage whilst achieving roughly the same performance as the _second-order_ version of MAML (Finn et al., 2017). In short, first-order MAML updates its initialization in the gradient update direction of the final task-specific parameters. Since it performs similarly to second-order MAML, we focus on first-order MAML in this work.

## 4 A common framework and interpretation

The three discussed techniques can be seen as part of a general gradient-based optimization framework, as shown in Algorithm 1. All algorithms try to find a good set of initial parameters, as specified by their objective functions. The parameters are initialized randomly in line 1. Then, these initial parameters are iteratively updated based on the learning objectives (the loop starting from line 2).
In each iteration, a data distribution is selected (line 3), after which task-specific adaptation is performed, starting from the initialization parameters obtained in the previous stage (lines 4-8). Lastly, the initial parameters \(\theta\) are updated using the outcomes of the task-specific adaptation phase (line 9). Note that in this general gradient-based optimization framework, all techniques update their initialization parameters based on a single distribution \(p\) at a time. One could also choose to use batches of distributions, or _meta-batches_, in order to update the initialization \(\theta\). This can be incorporated by using the average of the losses of the different distributions as an aggregated loss function.

```
1:  Randomly initialize θ
2:  while not converged do
3:      Select data distribution p    ▷ finetuning: p = p_s;  Reptile and MAML: p = p_j with T_j ∼ p(T)
4:      Set θ^(0) = θ
5:      for t = 0, ..., T-1 do
6:          Sample a batch of data x, y ∼ p
7:          Compute θ^(t+1) = θ^(t) − ∇_{θ^(t)} L_{t+1}(θ^(t))
8:      end for
9:      Update θ                      ▷ finetuning: θ = θ^(T);  Reptile: Equation 3;  MAML: Equation 6
10: end while
```

**Algorithm 1** General gradient-based optimization framework, covering finetuning, Reptile, and MAML.

Table 1 gives an overview of the three algorithms. As we can see, finetuning only optimizes for the initial performance and does not take into account the performance after adaptation.
This means that its goal is to correctly classify any input \(\mathbf{x}\) from the source problem distribution \(p_{s}\). Reptile, on the other hand, optimizes both for initial performance, as well as performance after every update step. This means that Reptile may settle for an initialization with somewhat worse initial performance compared with finetuning, as long as the performance during task-specific adaptation makes up for this initial deficit. MAML is the most extreme in the sense that it can settle for an initialization with poor initial performance, as long as the final performance is good. In short, Reptile and MAML can be interpreted as _look-ahead algorithms_ as they take the performance after task-specific adaptation into account whereas finetuning does not. Moreover, fo-MAML relies purely on the look-ahead mechanism and neglects the initial performance while Reptile also takes the initial and intermediate performances into account. This means that MAML may outperform finetuning with a _low-capacity_ network (with the worst initial performance) where there is not enough capacity to store features that are directly useful for new tasks. 
\begin{table} \begin{tabular}{c c c} \hline \hline **Algorithm** & **Loss function** & **Focus** \\ \hline Finetuning & \(\underset{\mathbf{x}_{i},\mathbf{y}_{i}\sim p_{s}}{\mathbb{E}}[\mathcal{L}_{\mathbf{x}_{i},\mathbf{y}_{i}}(\theta)]\) & Initial performance \\ Reptile & \(\underset{\mathcal{T}_{j}\sim p(\mathcal{T})}{\mathbb{E}}\left(\sum_{t=0}^{T-1}\underset{\mathbf{x}_{i},\mathbf{y}_{i}\sim p_{j}}{\mathbb{E}}\left[\mathcal{L}_{t+1}(\theta_{j}^{(t)})\right]\right)\) & Multi-step performance \\ MAML & \(\underset{\mathcal{T}_{j}\sim p(\mathcal{T})}{\mathbb{E}}\left(\underset{\mathbf{x}_{i},\mathbf{y}_{i}\sim p_{j}}{\mathbb{E}}\left[\mathcal{L}_{T}(\theta_{j}^{(T)})\right]\right)\) & Final performance \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the loss functions and corresponding focus of finetuning, Reptile, and MAML.

The reason for this is likely that finetuning will be unable to obtain good embeddings for all of the training tasks and does not have a mechanism to anticipate which features would be useful for learning future tasks. MAML, on the other hand, does have this capability, and can thus settle for a set of features with worse initial performance that lends itself better to learning new tasks. In contrast, when we have _high-capacity_ networks with enough expressivity to store all relevant features for a task, finetuning may outperform MAML, as it optimizes purely for initial performance without any additional adaptation, which can be prone to overfitting to the training data of the tasks due to the limited amount of available data. Lastly, one may expect Reptile to fall in between MAML and finetuning: it works better than finetuning when using low-capacity backbones, while it may be slightly worse than finetuning when using larger-capacity networks (but better than MAML).
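The shared structure of Algorithm 1 can be sketched compactly: all three methods run the same inner loop (lines 4-8) and differ only in the line-9 update. The quadratic toy tasks, step sizes, and names below are our own illustrative choices:

```python
import numpy as np

def general_loop(outer_update, theta=0.0, iterations=100, T=3, alpha=0.1):
    """Algorithm 1 on two hypothetical 1-D quadratic tasks (optima 0 and 4)."""
    rng = np.random.default_rng(0)
    optima = [0.0, 4.0]
    for _ in range(iterations):
        opt = optima[rng.integers(2)]            # line 3: select a data distribution
        grad = lambda th: 2.0 * (th - opt)       # gradient of (theta - opt)^2
        th_t = theta                             # line 4: theta^(0) = theta
        for _ in range(T):                       # lines 5-8: inner adaptation
            th_t = th_t - alpha * grad(th_t)
        theta = outer_update(theta, th_t, grad)  # line 9: method-specific update
    return theta

outer_updates = {
    "finetuning": lambda theta, th_T, grad: th_T,                          # theta = theta^(T)
    "reptile":    lambda theta, th_T, grad: theta + 0.5 * (th_T - theta),  # Equation 3
    "fo-maml":    lambda theta, th_T, grad: theta - 0.2 * grad(th_T),      # Equation 6
}
results = {name: f_out and general_loop(f_out) for name, f_out in outer_updates.items()}
```

Swapping only the line-9 callback recovers each of the three methods, which is precisely the sense in which they instantiate one common framework.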
Although MAML focuses on the performance after learning, it has been shown that its learning behaviour is similar to that of finetuning: it mostly relies on feature re-use and not on fast learning (Raghu et al., 2020). This means that when a _distribution shift_ occurs, that is, when the test tasks become more distant from the tasks that were used for training, MAML may be ill-positioned due to poor initial performance compared with finetuning, which can fall back on more directly useful initial features.

## 5 Experiments

In this section, we perform various experiments to compare the learning behaviours of finetuning, MAML, and Reptile, in order to study their within-distribution and out-of-distribution qualities, which can help us answer the two research questions posed in Section 1. All experiments are conducted using single PNY GeForce RTX 2080TI GPUs. To study the question of why MAML and Reptile can outperform finetuning in within-distribution settings with a shallow Conv-4 backbone, we perform the first three experiments listed below. Moreover, to investigate why finetuning can outperform MAML and Reptile in out-of-distribution settings, addressing our second research question, we perform the fourth experiment listed below.

1. **Toy problem** (Section 5.1) We study the behaviour of the algorithms on a _within-distribution_ toy problem with only two tasks and no noise in the loss signals caused by a shortage of training data. This allows us to investigate the initializations that the methods settle for after training, and to see why MAML and Reptile may have an advantage over finetuning in within-distribution settings.

2. **The effect of the output layer** (Section 5.2.1) Finetuning removes the learned output layer and replaces it with a randomly initialized one when presented with a new task.
MAML and Reptile, on the other hand, do not do this, and can directly start from the learned initialization weights for both the body and output layer of the network. To investigate whether this gives these two methods an advantage over finetuning in _within-distribution_ few-shot image classification, we study the effect of replacing the learned output layers with randomly initialized ones before learning a new task. This allows us to determine the importance of having a learned weight initialization for the output layer and whether this can explain the advantage of MAML and Reptile over finetuning in these settings.

3. **Specialization for robustness against overfitting** (Section 5.2.2) Another difference between the methods is that finetuning is trained on regular mini-batches of data, whilst MAML and Reptile are trained explicitly for post-adaptation performance on noisy loss signals induced by the limited amount of available training data. To investigate the importance of explicitly training under noisy conditions, we study the performances of MAML and Reptile as a function of the number of examples present in the training condition. Here, the risk of overfitting is inversely related to the number of training examples \(k\) per task.

4. **Information content in the learned initializations** (Section 5.2.3) Lastly, we investigate the within-distribution and out-of-distribution learning performances of finetuning, MAML, and Reptile with three backbones of different expressive power (Conv-4, ResNet-10, ResNet-18). More specifically, we propose a measure of the broadness or discriminative power of the learned features and investigate whether it is related to the few-shot learning abilities of these methods, to see whether the discriminative power of the three methods differs and can account for the potential superiority of finetuning in the out-of-distribution setting.
Figure 1: Average initialization that finetuning, Reptile, and MAML converge to when using \(T=5\) or \(T=25\) adaptation steps per task. In scenario \(a\) (top figures), finetuning and Reptile both pick an initialization in the centre of the two optima where the initial loss is minimal. MAML neglects the initial performance and thus is freer to select an initialization point, especially when \(T\) is larger. In scenario \(b\) (bottom figures) the loss of task 2 is no longer convex and has a reasonably flat plateau. Finetuning and Reptile get stuck in the optimum of the first task and fail to learn the second task successfully, while MAML finds a location from which it can arrive at both optima.

### Toy problem

First, we study the behaviour of finetuning, Reptile, and MAML in two synthetic scenarios \(a\) and \(b\), consisting of two tasks each. In this subsection, we use a slightly more abstract notion of tasks compared with the rest of the text, and define tasks purely abstractly by their loss functions. These tasks can be considered the meta-train set, and the goal of the algorithms is to find good initialization parameters on this task distribution. We represent tasks by their loss landscapes, which we have constructed by hand for illustrative purposes. In scenario \(a\), the two task loss landscapes are quadratic functions of a single parameter \(x\). More specifically, the losses for this scenario are given by \(\ell_{1}^{a}(x)=1.3(x-5)^{2}\) and \(\ell_{2}^{a}(x)=(x-100)^{2}\). In scenario \(b\), the first task loss landscape is the same, \(\ell_{1}^{b}=\ell_{1}^{a}\), while the second task represents a more complex function:

\[\ell_{2}^{b}(x)=\begin{cases}(x-100)^{2}&x>50\\ -5x+2750&x\leq 50\end{cases} \tag{7}\]

The respective algorithms train by sampling tasks in an interleaved fashion, and by adapting the parameter \(x\) based on the loss landscape of the sampled task.
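The interleaved training on scenario \(a\) can be sketched directly from the gradients of \(\ell_{1}^{a}\) and \(\ell_{2}^{a}\); the learning rate and step count below are our own illustrative choices. Finetuning-style joint gradient descent settles between the two optima, near the intersection region of the two loss curves:

```python
# Scenario a: gradients of l1(x) = 1.3 * (x - 5)^2 and l2(x) = (x - 100)^2.
def grad_l1(x):
    return 2.6 * (x - 5.0)

def grad_l2(x):
    return 2.0 * (x - 100.0)

# Finetuning-style joint training: interleave gradient steps on both tasks.
x = -200.0   # one of the equally spaced starting points
lr = 0.01
for step in range(20000):
    g = grad_l1(x) if step % 2 == 0 else grad_l2(x)
    x -= lr * g
# x ends up between the two optima (5 and 100), near x ~ 47, where the
# alternating task gradients balance each other.
```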
We investigate the behaviour of Reptile and MAML when they make \(T=5\) or \(T=25\) task-specific adaptation steps. For this, we average the found solutions of the techniques over 100 different runs with initial \(x\) values that are equally spaced in the interval \([-200,+200]\). We find that finetuning converges to the same point regardless of the initialization and is thus represented by a single vertical line. For Reptile and MAML, the found solution depends on the initialization, which is why we represent the found solution as a probability density. A Jupyter notebook for reproducing these results can be found on our GitHub page. Based on the learning objectives of the techniques, we expect finetuning to settle for an initialization that has a good initial performance on both tasks (small loss values). Furthermore, we expect that MAML will pick any initialization point from which it can reach minimal loss on both tasks within \(T\) steps. Reptile is expected to find a mid-way solution between finetuning and MAML. The results of these experiments are displayed in Figure 1. In scenario \(a\) (top figures), we see that both finetuning and Reptile prefer an initialization at the intersection of the two loss curves, where the initial loss is minimal. MAML, on the other hand, neglects the initial performance when \(T=25\) and leans more to the right, whilst ensuring that it can reach the two optima within \(T\) steps. The reason that it prefers an initialization on the right of the intersection is that the loss landscape of task 1 is steeper, which means that task adaptation steps will be larger. Thus, a location at the right of the intersection ensures good learning of task 2 and yields comparatively fast learning on the first task. In scenario \(b\) (bottom figures), the loss landscape of task 2 has a relatively flat plateau on the left-hand side. 
Because of this, finetuning and Reptile will be pulled towards the optimum (also the joint optimum) of the first task due to the larger gradients compared with the small gradients of the flat region of the second task when \(T\) is small. The solution that is found by MAML when \(T=5\) depends on the random initialization of the parameter, as can be seen in plot c). That is, when the random initialization is on the left of the plateau, MAML cannot look beyond the flat region, implying that it will also be pulled towards the minimum of task 1. When \(T=25\), allowing Reptile and MAML to look beyond the flat region, we see that Reptile either finds an initialization at \(x=50\) (when the starting point \(x_{0}\) is on the right-hand side of the plateau) or at the joint optimum at \(x=0\) (when it starts with \(x_{0}\) on the plateau). In the latter case, the post-adaptation performance of Reptile on both tasks is not optimal because it cannot reach the optimum of task 2. MAML, on the other hand, does not suffer from this suboptimality because it neglects the initial and intermediate performance and simply finds an initialization at \(x\approx 85\) from which it can reach both the optima of tasks 1 and 2.

### Few-shot image classification

We continue our investigations by studying why MAML and Reptile can outperform finetuning in within-distribution few-shot image classification settings (see Section 3.2) when using a Conv-4 backbone. For these experiments, we use the \(N\)-way \(k\)-shot classification setting (see Section 3.2) on the miniImageNet (Vinyals et al., 2016; Ravi and Larochelle, 2017) and CUB (Wah et al., 2011) benchmarks. miniImageNet is a mini variant of the large ImageNet dataset (Deng et al., 2009) for image classification, consisting of \(60\,000\) colored images of size \(84\times 84\). The dataset contains \(100\) classes and \(600\) examples per class. We use the same train/validation/test class splits as in Ravi and Larochelle (2017).
The CUB dataset contains roughly \(12\,000\) RGB images of birds from \(200\) species (classes). We use the same setting and train/validation/test class splits as in Chen et al. (2019). Note that using real datasets entails that we move away from the abstract task definition as in the previous toy experiment, where the loss signal of the task was perfect. Instead, the loss signal is now approximated by sampling a finite set of data points for every task (for MAML and Reptile) or batch (for finetuning) and computing the performance of the methods on it. For finetuning and MAML, we tune the hyperparameters on the meta-validation tasks using random search with a budget of \(30\) function evaluations for every backbone and dataset. We train MAML on \(60\,000\) tasks in the 1-shot setting and on \(40\,000\) tasks in the 5-shot setting, and validate its performance every \(2\,500\) tasks. The checkpoint with the highest validation accuracy is then evaluated on \(600\) holdout test tasks. Similarly, finetuning is trained on \(60\,000\) batches of data from the training split when we evaluate it in the 1-shot setting and on \(40\,000\) batches when evaluating it in the 5-shot setting. Note that finetuning is trained on simple mini-batches of data instead of tasks consisting of a support and query set, and is later validated and tested on unseen validation and test tasks, respectively. In a similar fashion as for MAML, we validate its performance every \(2\,500\) batches. Due to the computational expenses, for Reptile, we use the best-reported hyperparameters and training iterations on 5-way 1-shot miniImageNet as found by Nichol et al. (2018). We use Torchmeta for the implementation of the data loaders (Deleu et al., 2019). We note that a single run of MAML and finetuning finish within one day, while Reptile finished within 4 days, perhaps due to the absence of parallelism in the implementation we used. 
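Episode construction for the \(N\)-way \(k\)-shot setting can be sketched as follows. The helper name and the toy class index below are our own; in practice, a loader such as Torchmeta provides this functionality for miniImageNet and CUB:

```python
import numpy as np

def sample_episode(class_to_indices, N=5, k=1, q=15, seed=0):
    """Sample one N-way k-shot task: N classes, with k support and q query
    examples per class. `class_to_indices` maps class ids to example indices."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(sorted(class_to_indices), size=N, replace=False)
    support, query = [], []
    for way, c in enumerate(classes):                # relabel classes as 0..N-1
        idx = rng.choice(class_to_indices[c], size=k + q, replace=False)
        support += [(int(i), way) for i in idx[:k]]  # k support examples
        query += [(int(i), way) for i in idx[k:]]    # q query examples
    return support, query

# A toy stand-in for a dataset with 20 classes of 600 examples each.
class_to_indices = {c: list(range(c * 600, (c + 1) * 600)) for c in range(20)}
support, query = sample_episode(class_to_indices)    # a 5-way 1-shot episode
```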
#### 5.2.1 The role of the output layer

Here, we investigate whether the fact that MAML and Reptile reuse their learned output layer when learning new tasks alters their inner-learning behaviour and gives them an advantage in performance compared with finetuning, which removes the learned output layer and replaces it with a randomly initialized one when learning a new task. In short, we study the role of the output layer on the performance and inner-loop adaptation behaviour of MAML and Reptile. For this, we perform meta-training for MAML and Reptile on 5-way 1-shot miniImageNet classification, and study the effect of replacing the learned output layer initialization weights with random weights on their ability to learn new tasks. Note that even though the weight initialization of the output layer may be random, it is still trained on the support sets of unseen tasks and is therefore finetuned to the task upon which it will be evaluated. Figure 2 displays the effect of replacing the output layer of the weight initialization meta-learned by MAML and Reptile on 5-way 1-shot miniImageNet with a randomly initialized one, measured through the gradient norms during the inner-loop adaptation procedure. As we can see, the networks of the variants with a learned output layer receive larger gradient norms at the first few updates compared with the variants using a randomly initialized output layer, indicating that the learned output layer alters the learning behaviour of the algorithms. However, at the end of adaptation for a given task, the gradient norms are close to zero for both variants, indicating that both have converged to a local minimum. This implies that the learned initialization of the output layer has a distinct influence on the learning behaviour on new tasks. More specifically, using a learned output layer may aid in finding an initialization in the loss landscape that is sensitive to tasks and can be quickly adapted, explaining the larger gradient norms.
Next, we investigate whether reusing the learned output layers also leads to performance differences. For this, we study the influence on learning performance of replacing the learned output layers in MAML and Reptile with randomly initialized ones when starting to learn new tasks, for different numbers of update steps. The results are shown in Figure 3. As we can see, replacing the output layer with a random one leads to worse performance. Increasing the number of updates improves the performance for MAML, while the reverse is true for Reptile. In the end, the performance gap introduced by replacing the output layers with random ones is not closed, indicating that the output layers play an important role in successful inner-loop adaptation.

Figure 2: The difference in the average gradient norms during inner-loop adaptation between MAML (left) and Reptile (right) with a learned output layer and a randomly initialized one on 5-way 1-shot miniImageNet (MIN; top row) and CUB (bottom row). The 95% confidence intervals are within the size of the symbols. The learned output layers have a higher gradient norm at the beginning of the training phase.

#### 5.2.2 Specialization for robustness against overfitting

In this subsection, we investigate the influence of the level of data scarcity in the support set on the performance of MAML and Reptile. We hypothesize that both algorithms learn an initialization that is robust against overfitting when the number of examples in the support set per class (\(k\)) is small. This would imply that their performance suffers when the number of examples in the support sets of training tasks is large, as the reduced need to guard against overfitting prevents the meta-learning techniques from becoming robust to overfitting during task-specific adaptation.
We investigate this for 5-way miniImageNet image classification by varying the number of examples in the support set of meta-training tasks and measuring the performance on tasks with only one example per class (1-shot setting). Figure 4 displays the results of these experiments. As we can see, there is an adverse effect of increasing the number of support examples per task on the final 1-shot performance of MAML. This shows that for MAML, it is important to match the training and test conditions so the initialization parameters can become robust against overfitting induced by data scarcity. In addition, we observe that Reptile is unstable due to its sensitivity to different hyperparameters on miniImageNet, even in the setting where \(k=1\). This is caused by the fact that Reptile is not allowed to sample mini-batches of data from the support set. Instead, we force it to use the full support set to investigate the effect of the number of support examples. When the number of examples is close to ten, which is the mini-batch size commonly used, as by the original authors (Nichol et al., 2018), there is a slight increase in performance for Reptile on miniImageNet, supporting the observation that it is sensitive to the chosen hyperparameters. On CUB, in contrast, we observe that the performance improves with the number of examples per class at training time, although the maximum number of examples investigated is 25, as not every class has more examples than that.

Figure 3: The difference in performance between MAML (left) and Reptile (right) with a learned output layer and a randomly initialized one on 5-way 1-shot miniImageNet (MIN; top row) and CUB (bottom row) for different numbers of update steps. The 95% confidence intervals are displayed as shaded regions. Learning new tasks starting with a random output layer fails to achieve the same end performance as with the learned output layer.
This illustrates that the sensitivity to hyperparameters depends on the chosen dataset.

Figure 4: The effect of the number of training examples per class in the support set on the performance of MAML (left) and Reptile (right) on 5-way 1-shot miniImageNet (MIN; top row) and CUB (bottom row) classification. The larger the number of examples, the worse the few-shot learning performance of MAML. The error bars show the maximum and minimum performance over 5 runs with different random seeds. Note that the test tasks contain only a single example per class in the support set.

#### 5.2.3 Information content in the learned initializations

Next, we investigate the relationship between the few-shot image classification performance and the discriminative power of the features learned by the three techniques for different backbones (Conv-4, ResNet-10, ResNet-18; He et al., 2015). After deploying the three techniques on the datasets in a 5-way 1-shot manner, we measure the discriminative power of the learned initializations. Figure 5 visualizes this procedure for MAML and Reptile; finetuning follows a similar procedure. First, we extract the learned initialization parameters from the techniques. Second, we load these initializations into the base-learner network, freeze all hidden layers, and replace the output layer with a new one. The new output layer contains one node for each of the \(|C_{test}|\) classes in the meta-test data. Third, we fine-tune this new output layer on the meta-test data in a _non-episodic_ manner, which corresponds to regular supervised learning on the meta-test dataset. We use a 60/40 train/test split and evaluate the final performance on the latter. We refer to the resulting performance measure as the _joint classification accuracy_, which aims to indicate the discriminative power of the learned initialization, evaluated on data from unseen classes.
Note that we use the expressions "discriminative power" and "information content" of the learned backbone synonymously.

The results of this experiment are shown in Figure 6. From this figure, we see that finetuning yields the best joint classification accuracy in all scenarios. More specifically, we observe the following:

* The within-distribution few-shot learning performance is better than the out-of-distribution performance for all techniques.
* MAML achieves the best few-shot learning performance when using a shallow backbone (Conv-4).
* When the backbone becomes deeper, the features learned by MAML become less discriminative.
* Finetuning learns the most discriminative set of features for direct joint classification on a large set of classes.

However, we note that the joint classification performance either weakly correlates or does not correlate with the few-shot learning performance across the different techniques. We note that these correlation patterns may be affected by the fact that we used the best-reported hyperparameters for Reptile for the Conv-4 backbone, while we also use ResNet-10 and ResNet-18 backbones (He et al., 2015) in different settings. For finetuning, however, we do observe an improvement in few-shot learning performance as the backbone becomes deeper.

Figure 5: Flow chart for measuring the joint classification accuracy for meta-learning techniques. First, we train the techniques in an episodic manner on all data in the meta-train set. Second, we copy and freeze the learned initialization parameters and replace the output layer with a new one. Third, we fine-tune this new output layer on all meta-test data in a non-episodic manner. As such, the meta-test data is split into a non-episodic train and a non-episodic test set. Finally, we evaluate the resulting network on the hold-out test split of the meta-test data. We refer to the resulting performance measure as the joint classification accuracy. Note that finetuning follows the same procedure, with the exception that it trains non-episodically (on batches instead of tasks) on the meta-training data.

Figure 6: The joint classification accuracy (x-axes) plotted against the 5-way 1-shot performance (y-axis) on all test classes. For every technique, there are 15 results plotted, corresponding to 3 backbones (Conv-4=red, ResNet-10=green, ResNet-18=blue) and 5 runs per setting. The Pearson correlation coefficients (r) and p-values are displayed in the subcaptions. The general correlations between the few-shot learning performance and joint classification accuracy range from weak to mild.

\begin{table} \begin{tabular}{l l l l l} \hline \hline & MIN & MIN \(\rightarrow\) CUB & CUB & CUB \(\rightarrow\) MIN \\ \hline Finetuning & **r=0.82, p=2e-4** & **r=0.71, p=3e-3** & **r=0.96, p=7e-9** & r=0.28, p=0.31 \\ MAML & **r=-0.77, p=8e-4** & **r=-0.85, p=6e-5** & r=0.36, p=0.18 & **r=0.90, p=4e-6** \\ Reptile & r=0.27, p=0.3 & r=0.50, p=0.06 & r=0.3, p=0.28 & r=0.31, p=0.27 \\ \hline \hline \end{tabular} \end{table} Table 2: Individual correlations between the joint classification accuracy and the few-shot learning performance. The Pearson correlation coefficients are indicated as \(r\) and corresponding p-values as \(p\). We note that the results for each of the three few-shot learning techniques are produced with three different backbone networks. As such, correlations should be interpreted with utmost care. Significant correlations (using a threshold of \(\alpha=0.005\)) are displayed in bold. "MIN": miniImageNet.

Next, we investigate whether there are statistically significant relationships per technique between the joint classification accuracy and the few-shot performance. Table 2 displays the Pearson correlations and corresponding p-values for the individual techniques for the experiment in Section 5.2.3.
As we can see, there are strong and significant (\(\alpha=0.005\)) correlations between the joint classification accuracy and the few-shot learning performance of finetuning in three settings. For MAML, there are strong negative correlations on miniImageNet and miniImageNet \(\rightarrow\) CUB, indicating that a lower joint classification accuracy is often associated with better few-shot learning performance. For Reptile, the correlations are non-significant and mild to weak.

## 6 Conclusion

In this work, we investigated 1) why MAML and Reptile can outperform finetuning in _within-distribution settings_, and 2) why finetuning can outperform gradient-based meta-learning techniques such as MAML and Reptile when the test data distribution diverges from the training data distribution. We have shown how the optimization objectives of the three techniques can be interpreted as maximizing the direct performance, the post-adaptation performance, and a combination of the two, respectively. That is, finetuning aims to maximize the direct performance, whereas MAML aims to maximize the performance _after_ a few adaptation steps, making it a look-ahead objective. Reptile is a combination of the two, as it focuses on both the initial performance as well as the performance after every update step on a given task. As a result, finetuning will favour an initialization that jointly minimizes the loss function, whereas MAML may settle for an inferior initialization that yields more promising results after a few gradient update steps. Reptile picks something in between these two extremes. Our synthetic example in Section 5.1 shows that these interpretations of the learning objectives allow us to understand the chosen initialization parameters. Our empirical results show that these different objectives translate into different learned initializations.
We have shown that MAML and Reptile specialize for adaptation in low-data regimes of the training tasks distribution, which explains why these techniques can outperform finetuning as observed by Chen et al. (2019); Finn et al. (2017); Nichol et al. (2018), answering our first research question. Both the weights of the output layer and the data scarcity in training tasks play an important role in facilitating this specialization, allowing them to gain an advantage over finetuning. Moreover, we have found that finetuning learns a broad and diverse set of features that allows it to discriminate between many different classes. MAML and Reptile, in contrast, optimize a look-ahead objective and settle for a less diverse and broad feature space as long as it facilitates robust adaptation in low-data regimes of the _same_ data distribution (as that is used to optimize the look-ahead objective). This can explain findings by Chen et al. (2019), who show that finetuning can yield superior few-shot learning performance in out-of-distribution settings. However, we do not observe a general correlation between the feature diversity and the few-shot learning performance across finetuning, Reptile, and MAML. Another result is that MAML yields the best few-shot learning performance when using the Conv-4 backbone in all settings. Interestingly, the features learned by MAML become less discriminative as the depth of the backbone increases. This may indicate an over-specialization, and it may be interesting to see whether adding a penalty for narrow features may prevent this and increase the few-shot learning performance with deeper backbones and in out-of-distribution settings, which has been observed to be problematic by Rusu et al. (2019) and Chen et al. (2019) respectively. As this is beyond the scope of our research questions, we leave this for future work. 
Another fruitful direction for future work would be to quantify the distance or similarity between different tasks and to investigate the behaviour of meta-learning algorithms as a function of this quantitative measure. An additional benefit of such a measure of task similarity would be that it could allow us to detect when a new task is within-distribution or out-of-distribution, which could inform the choice of which algorithm to use. In summary, our results suggest that the answer to our second research question is that MAML and Reptile may fail to quickly learn out-of-distribution tasks due to their over-specialization to the training data distribution caused by their look-ahead objective, whereas finetuning learns broad features that allow it to learn new out-of-distribution concepts. This is supported by the fact that in almost all scenarios, there are statistically significant relationships between the broadness of the learned features and the few-shot learning ability for finetuning. ## Acknowledgements This work was performed using the compute resources from the Academic Leiden Interdisciplinary Cluster Environment (ALICE) provided by Leiden University, as well as the Dutch national e-infrastructure with the support of SURF Cooperative. ## Declarations ### Conflicts of Interest Funding: Not applicable: no funding was received for this work. Employment: All authors declare that there is no recent, present, or anticipated employment by any organization that may gain or lose financially through publication of this manuscript. Interests: All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. ### Compliance with Ethical Standards Not applicable: this research did not involve human participants, nor did it involve animals. ### Consent to participate Not applicable. 
### Consent for publication Not applicable: this research does not involve personal data, and publishing of this manuscript will not result in the disruption of any individual's privacy. ### Availability of data and material All data that was used in this research have been published as benchmarks by Deng et al. (2009), Vinyals et al. (2016) (miniImageNet) and Wah et al. (2011) (CUB), and is publicly available. The data generator for sine wave regression experiments can be found in the provided code (see below). ### Code availability All code that was used for this research is made publicly available at [https://github.com/mikehuisman/revisiting-learned-optimizers](https://github.com/mikehuisman/revisiting-learned-optimizers). ### Authors' contributions MH has conducted the research presented in this manuscript. AP and JvR have regularly provided feedback on the work, contributed towards the interpretation of results, and have critically revised the whole. All authors approve the current version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. ### Ethics approval Not applicable.
2307.14183
A visit generation process for human mobility random graphs with location-specific latent-variables: from land use to travel demand
This research introduces a mathematical framework for comprehending human mobility patterns, integrating mathematical modeling and economic analysis. The study focuses on latent-variable networks, investigating the dynamics of human mobility using stochastic models. By examining actual origin-destination data, the research reveals scaling relations and uncovers the economic implications of mobility patterns, such as the income elasticity of travel demand. The mathematical analysis commences with the development of a stochastic model based on inhomogeneous random graphs to construct a visitation model with multipurpose drivers for travel demand. A directed multigraph with weighted edges is considered, incorporating trip costs and labels to represent factors like distance traveled and travel time. The study gains insights into the structural properties and dynamic correlations of human mobility networks, to derive analytical and computational solutions for key network metrics, including scale-free behavior of the strength and degree distribution, together with the estimation of assortativity and clustering coefficient. Additionally, the model's validity is assessed through a real-world case study of the New York metropolitan area. The analysis of this data exposes clear scaling relations in commuting patterns, confirming theoretical predictions and validating the efficacy of the mathematical model. The model further explains a series of scaling behaviors in origin-destination flows among areas of a region, successfully reproducing statistical regularities observed in real-world cases using extensive human mobility datasets. In particular, the model's application to estimating income elasticity of travel demand bears significant implications for urban and transport economics.
Fabio Vanni
2023-07-26T13:22:23Z
http://arxiv.org/abs/2307.14183v1
A visit generation process for human mobility random graphs with location-specific latent-variables: from land use to travel demand ###### Abstract This research introduces a mathematical framework for comprehending human mobility patterns, integrating mathematical modeling and economic analysis. The study focuses on latent-variable networks, investigating the dynamics of human mobility using stochastic models. By examining actual origin-destination data, the research reveals scaling relations and uncovers the economic implications of mobility patterns, such as the income elasticity of travel demand. The mathematical analysis commences with the development of a stochastic model based on inhomogeneous random graphs to construct a visitation model with multipurpose drivers for travel demand. A directed multigraph with weighted edges is considered, incorporating trip costs and labels to represent factors like distance traveled and travel time. The study gains insights into the structural properties and dynamic correlations of human mobility networks, to derive analytical and computational solutions for key network metrics, including scale-free behavior of the strength and degree distribution, together with the estimation of assortativity and clustering coefficient. Additionally, the model's validity is assessed through a real-world case study of the New York metropolitan area. The analysis of this data exposes clear scaling relations in commuting patterns, confirming theoretical predictions and validating the efficacy of the mathematical model. The model further explains a series of scaling behaviors in origin-destination flows among areas of a region, successfully reproducing statistical regularities observed in real-world cases using extensive human mobility datasets. In particular, the model's application to estimating income elasticity of travel demand bears significant implications for urban and transport economics. 
keywords: complex mobility networks, inhomogeneous random graph, stochastic visitation process, origin-destination transport flows, income elasticity of travel demand ## 1 Introduction How people move from one place to another represents a crucial key to depicting human relations and social interactions in complex societies. Data-driven structured studies on human mobility are valuable to identify the factors that drive the movement of individuals and goods, pointing out how they affect economic outcomes in fields such as labor market dynamics, economic growth, transportation planning and consumer behavior [1; 2]. Human mobility encompasses a wide range of spatial and temporal scales, from daily commuting within a city to long-term migration across countries, and its dynamics are influenced by factors like economic development, technological advancements, political stability and natural features of the territory. Human mobility and transport dynamics have been studied through different mathematical approaches ranging from individual path-based models to collective population-based ones, starting from a single individual's behavior up to examining collective mobility trends [3; 4; 5]. The study presented in this work aims to provide a stochastic model for a complex network of human mobility based on an origin-destination structure which represents the backbone of mobility visitation flows, with individuals moving from origin locations to destinations. Then, inferring latent-variables in a specific case study, the graph statistical patterns observed in the origin-destination network flow can be recovered to have the same expected properties computed through the latent-variable estimations. Specifically, the model is grounded in the class of inhomogeneous random graphs [6; 7], also known as latent-variable networks [8; 9; 10; 11; 12]. 
Moreover, the mobility network is a directed multigraph with weighted edges, since each travel is labeled with weights (or trip jumps) that represent trip costs such as distance traveled, travel time or emissions per trip. The use of a graph representation of human mobility helps to investigate the dynamic behavior of the system starting from the properties of single constituents, and to estimate how much one part of the network influences another by using graph metrics like degree distribution, centrality, assortativity and transitivity of the trip mobility network. Following the approach of dynamical formation of latent-variable graphs [13, 14, 15], the network has then been formalized by using a master equation for the probability density function which encodes the distribution of the number of visits in destination locations weighted by the costs of trips to get there. In particular, the evolution of the visit distribution over time is described through an integro-differential equation with an explicit asymptotic solution which highlights the relation of visiting generation patterns to intrinsic and environmental features of locations as specified by latent-variable properties. In stochastic theory, the analytical interpretation achieved is related to the literature of continuous-time Markov processes with the use of a Kolmogorov-Feller partial differential equation [16, 17]. The real-valued solution for the visit distribution of the trip mobility network has been obtained as a compound distribution of traveling packets of normal-like conditional probabilities, with an independent visitation process for each type of destination. Such visit generation process of the temporal random graph is then proved to be equivalently formalizable in terms of a mixture of compound Poisson counting processes commonly used in the financial and actuarial science literature [18, 19], for example in models of option pricing, risk and insurance analysis, high-frequency trading and market microstructure. 
In the case of interest, the process is driven by hidden graph structures represented by latent variables associated to each location, that will be identified with an attractiveness attribute and a productiveness one. The first variable encodes an intrinsic capability of a location to attract visits due to some specific properties of the destination, as extensively discussed in the paper. The second variable encodes, complementarily, the ability to produce new travelers, and it depends on some intrinsic feature of the origin area. Moreover, the way each area produces or attracts new trips is specified according to arrival and departure rates that will be expressed by some function of the latent variables. In summary, the mobility graph process is fully characterized by the latent-variable statistical characteristics, and different configurations of the latent-variable patterns will determine the mobility network scaling properties, with a particular attention on the scale-free behavior of the degree distribution and the spectral trend of graph correlations. In an economic modeling framework, those latent variables allow the characterization of travel demand for different areas in the region, and they can be inferred through the analysis of important demographic, geographic and economic indicators. As discussed in the literature [20, 21, 22], many different factors are involved for a place to be considered as attractive, such as trip purposes, job or leisure opportunities, infrastructure facilities, geographical characteristics, and urban zoning planning. Generally, all those physical and human factors can be captured by land use and travel behavior analysis, which are at the basis of two primary approaches in the transportation economics and engineering literature, i.e. trip-based models and activity-based models. In such literature, in fact, travel is considered to be a derived demand, that is generated in response to people satisfying personal needs and desires [23, 24]. 
Moreover, the costs and efforts to realize the necessary trips are then influenced and determined by geographical, social and economic factors such as the distance from means of transport, house affordability, economic status, social equality, the proximity to nature and many others. The research direction is then focused on the use of a real mobility network in order to validate the model presented. In this work, the New York metropolitan area has been used as case study, by using detailed origin-destination tables which contain important information such as the number of people that travel between different locations, along with information on the costs of the visits as well as demographic and economic properties of the travelers. The analysis of such data reveals clear scaling relations in commuting patterns of movement flow, in agreement with empirical studies and consistent with theoretical predictions as in [25, 26, 27, 28]. The first contribution of the paper is to provide a theoretical network framework able to describe human mobility flow based on the existence of latent variables which characterize the travel demand features in urban and transport dynamics. Specifically, analytical, numerical and computational solutions are provided for the strength distribution, assortativity and clustering coefficient. The second contribution consists in the analysis of a real origin-destination network, revealing the existence of scale-free behaviors in the frequency of visits of locations and scaling relations between visits and trip costs. Consequently, a more pragmatic original contribution of the study is to determine what are the latent-variable statistical features able to reproduce the observed patterns of the trip mobility network in terms of distribution and correlations. The study emphasizes a reciprocal interplay between human mobility and urban structure, which is an important topic widely discussed in the literature [21, 22, 29, 30, 31, 32]. 
Finally, as an economic application, I will show the relation between the attractiveness scaling exponents and the value of income elasticity, passing through the information about allocation and utilization of land resources for various economic and social activities. The estimated income elasticity of demand is often used to predict future changes in consumption in response to changes in income [33, 34, 35, 36]. The paper is organized in the following way: Section 2 is devoted to the definition and mathematical modeling of the latent-variable mobility network and, then, analytical estimates of scaling laws and graph correlations of visitation patterns. Section 3 will be devoted to network measurements in the case study of the New York metropolitan area with the analysis of origin-destination tables obtained from the SafeGraph dataset [37]. Visit distribution and network correlations will be measured. In Section 4, the network model is implemented and validated on the basis of the latent-variable specification. Finally, the latent-variable formalism will be applied to an economic problem: the estimation of the income elasticity of travel demand. An appendix section and supplementary material are provided to enhance the discussion on the stochastic interpretation of the visit process, further statistical analysis of the data, and a detailed description of data and interpretation of the latent variables in travel demand modeling. ## 2 Model A trip mobility network can be built upon an origin-destination rationale and it will be represented as a directed graph where the nodes are administrative units of an urban or regional area (a city, a county, a state etc...) and the directed edges represent the number of visits from an origin to a destination area. 
Let us observe that one latent-variable is the attractiveness of an area, and it can be thought of as an ancestral property of the block which ideally captures all the variables that can determine it, such as the number of residential apartments, job positions, retail stores, geographical features, geopolitical conditions, social and economical factors, transportation systems and facilities, including also the average distance of a block with attractiveness from the rest of the urban area (i.e. the physical average accessibility). Essentially, those latent-variables, from an economic perspective, can be seen as drivers of travel demand and trip behavior in general.

Figure 1: Example of a tessellation of a large urban area (i.e. New York City) in (Census) blocks (a) as in [38]. In particular, a block is characterized by an intrinsic attractiveness \(x\) as a latent variable depending on many features of the location (b).

Each node is characterized by an intrinsic attractiveness \(x\) representing the travel demand for a destination area. Such attribute is related to various properties of the location such as job opportunities, number of retail stores, geographic features, infrastructure, facilities, school districts etc. Similarly, each node is also characterized by another property \(y\) that characterizes the location as an origin of trips. That attribute captures the possible users which can depart from the area, so capturing the potential of the area to produce trips; see the Supplementary Materials for a detailed discussion. The model presented describes the occurrence dynamics of trips that occur between origins with a given population \(y\) and destination blocks with a given attractiveness label \(x\). In this paper, I will primarily focus on the case of exchangeable origin-destination blocks, so that the trip generating process can be represented by two independent processes: the trip production and the trip attraction process. 
The first accounts for the number of trips (departures) originating from a block and the latter accounts for the number of trips (visits) ending in each destination block. First, the visitation model is defined as a time-varying network made up of three ingredients: 1) the location structure of the graph in terms of latent variables, 2) the visit arrival process to destinations, and 3) the effect of trips as sizes assigned to each visit. Second, properties of the dynamic mobility network will be presented in terms of its strength degree distribution and degree-correlations, where the strength of a location is intended as the number of visits weighted by their correspondent trip size. ### Definition A geographical region of interest, \(R\), is specified as the portion of territory for which we are interested in generating the flows. Over the region of interest, a set of geographical tiles called a tessellation, \(\mathscr{T}\), made up of locations \(l_{i}\), is defined so that \(\mathscr{T}=\{l_{i}:i=1\ldots n\}\), the locations are non-overlapping, \(l_{i}\cap l_{j}=\emptyset,\forall i\neq j\), and the union of all locations completely covers the region of interest, \(\cup_{i=1}^{n}l_{i}=R\). The tessellation for real geographical regions can be obtained in many ways according to the scope. In the case of interest, location tiles are the census areas defined by national authorities for administrative and demographic purposes. In particular, census blocks are the smallest geographic unit used by the United States Census Bureau for tabulation of data collected from all houses. An example of tessellation is given in Fig. 1 for New York City, where tiles are the census block division of the urban area. We denote a graph by \(\mathscr{G}=(\mathscr{T},E)\), where \(\mathscr{T}\) is the set of \(n\) locations (nodes) and \(E\) is the set of trips (edges). The graph is directed (trip direction) and it allows self-edges and multiple edges (many travelers from and to the same origin and destination). 
Furthermore, the network is a labeled graph since the nodes will be specified by intrinsic location attributes. Finally, the trip mobility network will be defined on top of three fundamental assumptions which will be explained in the next paragraph. The first assumption defines the latent-variable backbone of the mobility graph. **A.1** (Latent variable graph) _Each location \(\ell_{i}\) is labeled with a node type by the latent variable \(x_{i}\), which represents the attractiveness of the destination where a trip ends. It will be interpreted as the driver for the travel demand to each destination. Let \((x_{i})_{i\in[n]}\) be a sequence of latent variables with values in a node-type space \(\Omega_{x}\subseteq\mathbb{R}\) such that the empirical distribution of \((x_{i})_{i\in[n]}\) approximates a probability measure \(\mu_{x}\) as \(n\to\infty\)[6; 7]. Consequently, the set of attractiveness variables \(\{x_{1},\ldots,x_{n}\}\) associated to the locations are considered realizations of independent and identically distributed latent random variables with an empirical distribution that converges almost surely to the cumulative distribution function \(F(x)\)._ Let us observe that, in the paper, the latent-variable probability measure is defined on \((\mathbb{R},\mathscr{B}(\mathbb{R}))\) where \(\mathscr{B}(\mathbb{R})\) is the Borel \(\sigma\)-algebra generated from the real line, and the probability will be assumed absolutely continuous with respect to the Lebesgue measure, so that the probability density function can be defined as \(\rho(x)=F^{\prime}(x)\). Such condition will allow for various mathematical and statistical manipulations in the paper for practical applications. Similarly, the locations, other than being destinations, are, at the same time, labeled as origins. The correspondent intrinsic feature variable will represent the resident active population which stays in the area from which a trip originates\({}^{1}\). 
The latent variable in this case is named the productiveness of the location. Consequently, the set of productiveness variables \(\{y_{1},\ldots,y_{n}\}\) in the node-type space \(\Omega_{y}\subseteq\mathbb{R}\) consists of realizations of independent and identically distributed latent random variables with probability measure \(\mu_{y}\), for which a probability density function \(\phi(y)\) is defined, which can be different from the attractiveness distribution. From a dynamical perspective, the model is presented as a random graph process, that is, a stochastic process that describes a random graph evolving in time [39, 40]. The processes studied here will have a fixed vertex set, and they will start without any edges and grow by adding edges according to a linking rule, without deleting any, since the total number of visits up to a certain time is studied. Once a regional tessellation has been defined in terms of a graph as in A.1, the graph process for directed multiedges can be defined [41] as a random graph \(\mathbb{G}_{t}\) evolving so that \(\forall t\) a new edge is added, \(\mathbb{G}_{t+1}=\mathbb{G}_{t}\cup\ell\), where \(\ell\in 2\binom{V}{2}\setminus E(\mathbb{G}_{t})\) is chosen randomly with replacement with probability proportional to a kernel function \(\mathcal{K}(x,y)\) defined as \(\Omega_{x}\times\Omega_{y}\rightarrow[0,\infty)\). Such a kernel rules the chance that a location of attractiveness \(x\) will be the destination of a visit whose trip originated in a location of feature \(y\). Here \(\mu_{y}(dy)\) represents the measure assigned to the infinitesimal interval of productiveness for origin locations. Footnote 1: Despite not being a proper hidden variable, the population which can be actively considered as travelers depends on many factors (such as residence, age, employment status, and many others). **A. 
2** (Trip generation) _The trip arrival process is defined through the infinitesimal arrival intensity \(d\mathbb{V}_{x}=\mathcal{K}(x,y)\mu_{y}(dy)\), which drives the dynamics of new travels landing in destinations of attractiveness \(x\) conditional to trips that departed from locations of infinitesimal productiveness \(y\)._ Such rate can be interpreted as the propensity (or intensity) for a traveler to move towards a destination of a given attractiveness \(x\). So, the graph evolves when a new visit from an origin to a destination is completed (a new link in the network) according to the attraction and the production rates in the degree-space of the mobility graph. Similarly, as a dual problem, a trip departure process can be defined by the infinitesimal departure intensity \(d\mathbb{V}_{y}=\mathcal{K}(x,y)\mu_{x}(dx)\). The evolving mobility graph \(\{\mathbb{G}^{(t)}\}\) is fully described by means of a time-varying adjacency matrix \(A^{(t)}\) which represents the origin-destination table of the mobility problem at time \(t\), and the sum along the columns represents the in-degree of the nodes or, equivalently, the number \(k\) of visits received up to time \(t\). Finally, an extension of the model is considered where typical lengths of single trips are associated to the corresponding visits. In the case of a weighted network, each link is associated with a weight as a random variable independent of the visitation process, and the graph is expressed in terms of a weighted directed multigraph adjacency matrix \(\tilde{A}^{(t)}=C^{(t)}\circ A^{(t)}\), where \(\circ\) indicates an element-wise matrix multiplication and \(C\) is considered either a coefficient matrix or a random one according to the real-world phenomena under study alongside the data that can be used. Specifically: **A.3** (Trip weights) _All the arrivals of new trips are weighted in terms of some trip features. 
The weights \(\mathfrak{r}\) are independent and identically distributed random variables, with a specified probability density function \(\varrho_{x}(\mathfrak{r})\)._ Those weights have the meaning of a trip 'size', as for example the distance traveled from an origin to the selected destination, or the emission impact of each trip, or any type of travel cost or visit benefit. The weight probability \(\varrho_{x}(\mathfrak{r})\) may or may not depend on the attractiveness of the destination node, according to the particular type of weight considered in the specific situation under investigation. Let us notice that the in-strength (i.e. weighted in-degree) \(\kappa\) of a destination node is defined as the sum of the weights of all the visits received. In the case that the weights are all equal to 1, the strength \(\kappa\) is equivalent to the degree \(k\). At this point, finally, the model can be defined: **Definition 1**: _The trip mobility network is a temporal inhomogeneous random graph model that describes a visit generation process satisfying the assumptions A.1, A.2, A.3._ The weighted mobility multigraph process \(\bar{G}_{t}\) can be studied in terms of the adjacency matrix \(\tilde{A}^{(t)}\) representation of the graph. A general framework which describes ensembles of dynamic networks can be assessed by making a Markov assumption on the evolution of the network, studying the probability of realization of a member of the configuration ensemble graph over time [42; 43]. Specifically, the temporal evolution of finding the trip mobility network in the configuration \(\mathcal{A}\) at time \(t\) after \(L\) steps (trips) can be written as: \[\mathbb{P}(\mathcal{A},t)=\prod_{l=0}^{L}\mathbb{P}(\tilde{A}^{(t_{l})}|\tilde {A}^{(t_{l-1})},\boldsymbol{\Lambda})\] where \(\boldsymbol{\Lambda}\) is any transition rule built on latent variables, which describes the creation, at each time \(t_{l}\), of a new trip from an origin towards a destination. 
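As a concrete illustration, the three ingredients of Definition 1 (latent labels, kernel-driven trip arrivals, and i.i.d. trip weights) can be sketched in a short simulation. The multiplicative kernel \(\mathcal{K}(x,y)=xy\), the Pareto latent laws, and the exponential weight law below are illustrative assumptions only; the paper leaves \(\mathcal{K}\), \(\rho\), \(\phi\) and \(\varrho_{x}\) generic.

```python
import random

random.seed(0)

n = 60                                             # locations in the tessellation
x = [random.paretovariate(2.5) for _ in range(n)]  # attractiveness labels (A.1)
y = [random.paretovariate(2.5) for _ in range(n)]  # productiveness labels

# A.2 -- trip generation: each new directed edge (origin j -> destination i)
# is drawn with probability proportional to a kernel, here K(x_i, y_j) = x_i * y_j.
pairs = [(j, i) for j in range(n) for i in range(n)]
probs = [x[i] * y[j] for (j, i) in pairs]

trips = 4000
A = [[0] * n for _ in range(n)]          # origin-destination multigraph counts
kappa = [0.0] * n                        # in-strength of each destination
for (j, i) in random.choices(pairs, weights=probs, k=trips):
    A[j][i] += 1
    # A.3 -- each visit carries an i.i.d. trip weight (e.g. a distance or cost).
    kappa[i] += random.expovariate(1.0)

in_degree = [sum(A[j][i] for j in range(n)) for i in range(n)]
assert sum(in_degree) == trips
# For this multiplicative kernel, the expected in-degree of destination i is
# proportional to x_i, so the strength kappa_i inherits the heavy tail of the
# attractiveness law.
```

The matrix `A` plays the role of the origin-destination table \(A^{(t)}\); weighting each added edge before summing is the element-wise product \(\tilde{A}^{(t)}=C^{(t)}\circ A^{(t)}\), whose column sums give the in-strengths \(\kappa_i\).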
From a modeling perspective, the complexity of knowing the configurational distribution of the origin-destination mobility network can be reduced through the study of the degree (or strength) distribution and its higher-order statistics. We actually model the degree distribution, which in the in-degree case turns out to be the visiting distribution defined as: \[P(\kappa,t)=\sum_{\{\mathcal{A}\}}\sum_{i}\delta(\kappa-\kappa_{i})\mathbb{P} (\mathcal{A},t)\] where \(\delta\) is the delta function, and \(\mathbb{P}(\mathcal{A},t)\) is the probability to find our trip mobility network in the configuration \(\mathcal{A}\) at time \(t\), where each node has a strength degree \(\kappa_{i}\). While the in-degree of a node is simply the number of arrivals of new travelers, the in-strength of a location node is the number of arrivals of new travelers weighted by the cost each has faced. Such a weighted version of the arrivals is named visits, where each trip has an intensity. ### Visit distribution In the case of this work, the derivation of the visit distribution is assessed on the basis of the latent-variable framework. The model is developed upon the asymptotic regime assumption of an infinite network, where the number of locations in a tessellation is extremely large and each location can generate and attract an unlimited number of edges (trips). Consequently, the continuous mean-field approach is used, so that single locations can be studied as uncorrelated nodes in the same class of locations with the same attractiveness [14; 15; 44]. So rather than studying the single location \(l_{i}\), one analyzes those locations which belong to the same class of attractiveness level, i.e. \(l_{x}\), with \(x\) seen as a continuous random variable with probability density \(\rho(x)\). 
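In this mean-field picture, each attractiveness class accumulates strength as a compound Poisson process: visits arrive at a constant rate and each contributes an i.i.d. weight, so the strength concentrates around the arrival rate times the mean weight times elapsed time. A minimal simulation of one class (the exponential weight law and the specific parameter values are illustrative choices, not prescribed by the paper):

```python
import random

random.seed(1)

def simulate_strength(rate, t, mean_w=2.0):
    """One realization of the in-strength kappa of a destination class after
    time t: a compound Poisson sum of i.i.d. trip weights."""
    n_visits = 0
    clock = random.expovariate(rate)          # Poisson arrivals via
    while clock < t:                          # exponential inter-arrival times
        n_visits += 1
        clock += random.expovariate(rate)
    # Each visit carries an exponential weight with mean mean_w.
    return sum(random.expovariate(1.0 / mean_w) for _ in range(n_visits))

rate, t, mean_w = 3.0, 50.0, 2.0
samples = [simulate_strength(rate, t, mean_w) for _ in range(2000)]
emp_mean = sum(samples) / len(samples)
# The strength concentrates around rate * mean_w * t (here 3 * 2 * 50 = 300),
# with Gaussian-like fluctuations, matching the asymptotic mean-field regime.
print(emp_mean)
```

Mixing many such classes, one per attractiveness value weighted by \(\rho(x)\), produces the overall visit distribution studied next.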
Let us call the conditional visit distribution \(p(\kappa,t|x)\) the strength distribution conditioned on destinations of attractiveness type \(x\); each class evolves independently of the others, so that a superposition of co-evolving conditional degree distributions is possible. The overall visit distribution can be derived according to the following proposition: **Proposition 1**: _According to the visitation model's assumptions in Definition 1, the visit distribution of the mobility network is fully characterized by the attraction rate \(\nu_{x}\), which defines the transition probability, per unit of time, that a destination of attractiveness \(x\) increases its number of visits by one from any origin. The attraction rate is defined as the mean intensity of the trip arrival process \(\nu_{x}=\int_{\Omega_{y}}d\mathcal{V}_{x}\)._ * _The evolution of the conditional visit distribution can be described by a master equation for destinations with attractiveness_ \(x\) _as an integro-differential Kolmogorov-Feller equation:_ \[\frac{\partial}{\partial t}p_{x}(\kappa,t)=\,\int_{0}^{\kappa}\nu_{x}\varrho_{x}(\imath)\,[p_{x}(\kappa-\imath,t)-p_{x}(\kappa,t)]d\imath\] (1) _with the initial condition_ \(p_{x}(\kappa,0)=\delta(\kappa)\)_.
In the asymptotic regime, the conditional probability_ \(p(\kappa,t|x)\) _can be written as:_ \[p(\kappa,t|x)\sim\epsilon_{t}\frac{1}{\sqrt{2\pi\nu_{x}\langle\imath^{2}\rangle_{x}t}}e^{-\frac{(\kappa-\nu_{x}\langle\imath\rangle_{x}t)^{2}}{2\nu_{x}\langle\imath^{2}\rangle_{x}t}}\qquad\text{, where }\ \epsilon_{t}=\frac{2}{1+\text{erf}\left[\frac{\nu_{x}\langle\imath\rangle_{x}t}{\sqrt{2\nu_{x}\langle\imath^{2}\rangle_{x}t}}\right]}\] (2) _where the correction factor_ \(\epsilon_{t}\to 1\) _in the asymptotic limit_ \(t\rightarrow\infty\)_, and_ \(\langle\imath\rangle_{x}\) _and_ \(\langle\imath^{2}\rangle_{x}\) _are the first and the second moment of the trip-weight distribution_ \(\varrho_{x}(\imath)\)_._ * _Finally, the temporal asymptotic expression of the visit distribution is a mixture distribution:_ \[P(\kappa,t)=\int_{\Omega_{x}}p(\kappa,t|x)\rho(x)dx\sim\ \sum_{i=1}^{m}\left|\frac{\partial z(x)}{\partial x}\right|_{x_{0,i}}^{-1}\rho\big{(}x_{0,i}\big{)}\] (3) _where_ \(x_{0,i}=x_{0,i}(\kappa,t)\) _are the zeros of the expression_ \(z(x)=\kappa-\nu_{x}\langle\imath\rangle_{x}\,t\)_, and_ \(P(\kappa,t)\) _is the in-strength distribution of the overall trip mobility network._ Proof.: Let us notice that the attraction rate \(\nu_{x}\) is the transition rate of new visits per unit of time, i.e. the chance of having a new arrival in a destination of type \(x\) originating from any location, so that the attraction rate is the mean intensity as in [6, 7, 13, 15]: \[\nu_{x}=\int_{\Omega_{y}}d\mathcal{V}_{x}=\int_{\Omega_{y}}\mathcal{K}(x,y)d\mu_{y}(y)=\int_{\Omega_{y}}\mathcal{K}(x,y)\phi(y)dy \tag{4}\] The master equation describes the evolution of the conditional probabilities \(p_{x}(\kappa,t)\) for locations with attractiveness \(x\), where the step size \(\Delta\kappa=\imath\) is the weight of each link, i.e. the decision-heuristic variable for moving to the selected destination.
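The Gaussian asymptotics in eq.(2) can be checked numerically for a single attractiveness class: the conditional in-strength is a compound Poisson variable with arrival rate \(\nu_{x}\) and i.i.d. weights, so its mean and variance should approach \(\nu_{x}\langle\imath\rangle_{x}t\) and \(\nu_{x}\langle\imath^{2}\rangle_{x}t\). A minimal sketch with assumed illustrative parameters (exponential trip weights, for which \(\langle\imath\rangle=1\) and \(\langle\imath^{2}\rangle=2\)):

```python
import numpy as np

rng = np.random.default_rng(1)

nu, t = 0.8, 500.0                      # assumed rate and horizon for one class x
n_samples = 20_000

# number of arrivals by time t ~ Poisson(nu * t); each arrival adds an
# i.i.d. exponential trip weight ("size")
counts = rng.poisson(nu * t, size=n_samples)
kappa = np.array([rng.exponential(1.0, c).sum() for c in counts])

mean_pred = nu * 1.0 * t                # nu * <i>   * t (first weight moment)
var_pred = nu * 2.0 * t                 # nu * <i^2> * t (second weight moment)
```

The empirical mean and variance of `kappa` match the predicted moments, consistent with the (truncated) normal limit of Proposition 1.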
It can be written as: \[p(\kappa,t+\tau|x)=(1-\nu_{x})p(\kappa,t|x)+\int_{0}^{\kappa}\nu_{x}p(\kappa-\imath,t|x)\varrho_{x}(\imath)d\imath\] If we also assume that \(p_{x}\) is slow, so that it changes only slightly during the time step \(\tau=\Delta t\), and redefine \(\nu_{x}:=\nu_{x}/\tau\), then we can write the continuum master equation as: \[\frac{\partial}{\partial t}p_{x}(\kappa,t)=\dot{p}_{x}(\kappa,t)=\int_{0}^{\kappa}\nu_{x}\varrho_{x}(\imath)\,[p_{x}(\kappa-\imath,t)-p_{x}(\kappa,t)]d\imath \tag{5}\] where \(\varrho_{x}(\imath)\) is the distribution of the trip distances covered to reach destination blocks of attractiveness \(x\). At this point let us apply the Laplace transform in the variable \(\kappa\), \(\mathcal{L}\{p_{x}(\kappa,t)\}\equiv\hat{p}_{x}(s,t)\), so that the master equation transforms as: \[\dot{\hat{p}}_{x}(s,t)=\nu_{x}\hat{\varrho}_{x}(s)\hat{p}_{x}(s,t)-\nu_{x}\hat{p}_{x}(s,t)=-\nu_{x}\left(1-\hat{\varrho}_{x}(s)\right)\hat{p}_{x}(s,t)\] where the convolution product has been used. The solution can be written as: \[\hat{p}_{x}(s,t)=\,ce^{-\nu_{x}(1-\hat{\varrho}_{x}(s))t} \tag{6}\] with the initial condition \(\hat{p}_{x}(s,0)=\mathcal{L}\{\delta(\kappa)\}=1\), so that \(c=1\). One can notice that the characteristic function is equivalent to the Laplace transform expressed above.
It is possible to write the Laplace transform of the jump distribution \(\varrho_{x}(\imath)\) in terms of its moments as: \[\hat{\varrho}_{x}(s)=\sum_{n=0}^{\infty}(-1)^{n}\frac{s^{n}}{n!}\langle\imath^{n}\rangle_{x} \tag{7}\] If one assumes that \(\varrho_{x}(\imath)\) is a peaked distribution, then following the central limit theorem rationale it can be described by its first two (finite) moments, so that the solution in Eq.(6) is approximately: \[\hat{p}_{x}(s,t)\approx e^{-\nu_{x}t\langle\imath\rangle_{x}s+\frac{1}{2}\nu_{x}t\langle\imath^{2}\rangle_{x}s^{2}} \tag{8}\] whose inverse Laplace transform can be identified [45], in the asymptotic case \(s\ll\frac{\langle\imath\rangle_{x}}{\langle\imath^{2}\rangle_{x}}\), with the truncated normal distribution stated in eq.(2). For the second point, the visit (in-strength) distribution of the mobility network is expressed as a compound probability distribution, which results from assuming that a random variable \(\kappa\) is distributed according to some parametrized distribution with the latent parameter \(x\) distributed according to some attractiveness distribution [46, 47]. So, the (unconditional) visiting in-strength distribution results from marginalizing the conditional distribution \(p(\kappa,t|x)\) over the non-negative real-valued random variable \(x\). The probability density function of the visiting distribution is thus given by the following mixture density: \[P(\kappa,t)=\mathbb{E}\left[p(\kappa,t|x)\right]=\int_{\Omega_{x}}p(\kappa,t|x)d\mu_{x}(x)=\int_{\Omega_{x}}p(\kappa,t|x)\rho(x)dx \tag{9}\] Here, \(p(\kappa,t|x)\) is the distribution of \(\kappa\) when \(x\) is known at time \(t\), in which the relation between \(\kappa\) and \(x\) can be seen as deterministic, i.e. \(\kappa=F(x,t)=\mathbb{E}_{t}[\kappa|x]=\nu_{x}\langle\imath\rangle_{x}t\), defining the distribution by its expected degree value through the moment-generating function from the Laplace transform above.
So, \(\kappa\) can only take a single value, and its conditional distribution is represented by the Dirac delta function \(\delta(\kappa-F(x,t))\). Consequently, the empirical visiting probability density function can be written as: \[P(\kappa,t)\sim\int_{\Omega_{x}}\delta(\kappa-F(x,t))\rho(x)dx=\int_{\Omega_{x}}\delta(\kappa-\nu_{x}\langle\imath\rangle_{x}t)\rho(x)dx \tag{10}\] which consists in approximating the visiting probability density by means of a Dirac mixture [48, 49], where \(\rho(x)\) is the attractiveness probability density. Such a procedure is equivalent to a change of variable with respect to the deterministic one-to-one function in the static model as in [12]. At this point we use the property: \(\delta(z(x))=\sum_{i=1}^{m}\frac{\delta(x-x_{0}^{(i)})}{|\partial z(x)/\partial x|}\), where \(x_{0}^{(i)}\) are the \(m\) roots of \(z(x)=0\), which in the transport model is \(z(x)=\kappa-\nu_{x}\langle\imath\rangle_{x}t\), and \(z\) is a continuously differentiable function with \(z^{\prime}\) nowhere zero. So: \[P(\kappa,t)\sim\sum_{i=1}^{m}\Big{|}\frac{\partial z(x)}{\partial x}\Big{|}_{x_{0}^{(i)}(\kappa)}^{-1}\ \int\delta\Big{(}x-x_{0}^{(i)}(\kappa)\Big{)}\rho(x)dx\sim\ \sum_{i=1}^{m}\Big{|}\frac{\partial z(x)}{\partial x}\Big{|}_{x_{0}^{(i)}(\kappa)}^{-1}\,\rho\big{(}x_{0}^{(i)}(\kappa)\big{)}\] which represents a general formula for the tail behavior of the degree distribution of a mobility network with a generic attraction rate \(\nu_{x}\). Let us observe that if the trip-weight distribution \(\varrho_{x}(\imath)\) is a Dirac delta, then the strength distribution is equivalent to the degree distribution. Another particular case is when the trip-weight distribution is identical over the attractiveness variable, \(\varrho_{x}(\imath)=\varrho(\imath)\); then the strength is proportional to the degree, \(\kappa=\langle\imath\rangle k\).
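The Dirac-mixture formula can be verified numerically: sampling \(x\) from \(\rho\) and mapping it through the deterministic function \(\kappa=F(x,t)=\nu_{x}\langle\imath\rangle_{x}t\) should reproduce the density \(|\partial z/\partial x|^{-1}\rho(x_{0})\). A minimal sketch with an assumed power-law attraction rate \(\nu_{x}=\nu_{0}x^{\alpha}\) and Pareto attractiveness density \(\rho(x)\sim x^{-\eta}\) (illustrative choices), for which the predicted log-log density slope is \(-(1+(\eta-1)/\alpha)\):

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed illustrative parameters: nu_x = nu0 * x^alpha, <i>_x = 1
nu0, alpha, t = 1.0, 1.5, 100.0
eta = 2.0                                  # rho(x) ~ (eta-1) x^{-eta}, x >= 1

x = (1.0 - rng.random(200_000)) ** (-1.0 / (eta - 1.0))  # inverse-CDF sampling
kappa = nu0 * x**alpha * t                 # deterministic map kappa = F(x, t)

slope_pred = -(1.0 + (eta - 1.0) / alpha)  # predicted log-log density slope

# empirical slope from a log-binned histogram of the mapped samples
bins = np.logspace(np.log10(kappa.min()), np.log10(np.percentile(kappa, 99.9)), 40)
hist, edges = np.histogram(kappa, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope_emp = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
```

With these parameters the fitted slope sits close to the predicted \(-5/3\), i.e. the change-of-variable formula reproduces the sampled density.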
As a further remark, the same results can be expressed in terms of stochastic-process terminology rather than through an integro-differential master equation. Let us observe that the type of process described in Proposition 1 can be reinterpreted as a mixture of compound Poisson processes, as described in the Appendix; such processes are very popular in financial mathematics for modeling stock prices, insurance claims, and other financial phenomena [50, 51]. The visitation model of the mobility network can also be interpreted in combinatorial terms by using urn processes for solving balls-in-bins problems and finding the occupancy distributions, as sketched in SM4. Such an approach is used in the world-trade literature, as in [52, 53, 54]. Despite the different approaches, the one introduced in this paper has the advantage of providing direct, though asymptotic, solutions for the scaling relations in mobility networks. As a particular choice of the occupation probability \(p\), for every couple of origin-destination locations, trips can be realized according to a kernel function \(\mathcal{K}(x,y)\) [55, 56, 12, 57]. As a crucial case in the context of scale-free networks, one can consider the attraction rate to be proportional to some power of the destinations' attractiveness: **Remark 1**: _Let us assume that the attraction rate is of the form \(\nu_{x}=\nu_{0}x^{\alpha_{0}}\), with \(\alpha_{0}>0\), and a homogeneous trip-weight distribution \(\varrho_{x}(\cdot)=\varrho(\cdot)\); then the asymptotic trip-visit distribution can be written as:_ \[P(\kappa,t)\sim t^{-\frac{1}{\alpha_{0}}}\ \kappa^{\frac{1}{\alpha_{0}}-1}\,\rho(x_{0}) \tag{11}\] _where \(x_{0}=x_{0}(\kappa,t)=\Big{(}\frac{\kappa}{\nu_{0}t}\Big{)}^{\frac{1}{\alpha_{0}}}\), and where \(\rho\) is the attractiveness probability density function.
For \(\alpha_{0}=0\) the Erdős–Rényi random graph is recovered._ _In the particular case that the attractiveness distribution is \(\rho(x)\sim\rho_{0}x^{-\eta}\), the visiting in-degree distribution has the following asymptotic tail:_ \[P(\kappa,t)\sim t^{\frac{\eta-1}{\alpha_{0}}}\kappa^{-(1+\frac{\eta-1}{\alpha_{0}})} \tag{12}\] _which shows the typical scale-free structure of an inverse power-law distribution for the visiting degree of the mobility network._ The analytical results in the previous remark are confirmed by numerical integration of the compound distribution eq.(3) using the truncated normal conditional probability eq.(2). Moreover, a graph process is performed via Monte Carlo (MC) simulation of the network, where the occupation probability is expressed via a separable linking function \(\mathcal{K}(x,y)=g(x)h(y)\) for the evolution of the adjacency matrix. Such a kernel gives rise to an attraction rate \(\nu_{x}=\nu_{0}g(x)\), where \(\nu_{0}\) is a normalization constant, as shown in detail in the supplementary materials. So, by choosing \(g(x)=x^{\alpha_{0}}\) we are in the case specified in Remark 1. At this point it is possible to compare the three approaches and confirm the consistency of the results obtained. In Fig.2 it can be observed how the three approaches provide the same visit distribution for a particular choice of the parameters, as discussed in the caption. As expressed in different research works [11; 57; 58; 59; 60], different combinations of the attraction rate and attractiveness distribution can generate the same trip-visit distribution. In particular, a scale-free behavior can be recovered through exponential drivers, where the attraction rate is of the form \(\nu_{x}=\nu_{0}e^{\gamma x}\) and the attractiveness distribution is \(\rho(x)\sim\rho_{0}e^{-\lambda x}\).
Also in this case the asymptotic trip-visit distribution is scale-free: via the variable transformation \(x_{0}=x_{0}(\kappa,t)=\frac{1}{\gamma}\log\frac{\kappa}{\nu_{0}t}\), the distribution becomes \(P(\kappa,t)\sim t^{\frac{\lambda}{\gamma}}\kappa^{-(1+\frac{\lambda}{\gamma})}\). This raises a model selection issue: since different choices of attractiveness features can generate the same effect on the strength distribution, one could resolve the ambiguity by investigating higher-order characterizations of the degree distribution of the mobility network.

### Visit correlations

A more detailed characterization concerns the exploration of the connectivity correlations in the origin-destination correspondences of the trip mobility network. Higher-order statistics of a network in degree space can be obtained through the conditional probability \(P(\kappa^{(1)},\kappa^{(2)},\ldots,\kappa^{(k)}|\kappa^{\prime},t)\) that a node with strength \(\kappa^{\prime}\) connects to nodes with strengths \(\kappa^{(1)},\kappa^{(2)},\ldots,\kappa^{(k)}\) at time \(t\). The simplest of these degree correlations is the two-point correlation, described by the conditional probability \(P(\kappa|\kappa^{\prime},t)\): the probability that a trip departing from an origin location of out-strength (departure) \(\kappa^{\prime}\) reaches a destination node of in-strength (visit) \(\kappa\).
The correlations between the degrees of nearest-neighbouring vertices are described by the probability distribution: \[P(k,k^{\prime},t)=\sum_{\{\mathcal{A}\}}\sum_{ij}\delta(k-k_{i})\mathbb{P}(\mathcal{A},t)\delta(k_{j}-k^{\prime})\] However, the empirical evaluation of such a conditional probability in real networks is cumbersome, so the weighted degree-degree correlations are commonly accounted for by the average-nearest-neighbor strength function \(k_{nn}(\kappa,t)\), which makes use of a smoothed conditional probability [61] and is often used as a measure of degree homophily of the nodes. In the latent variable framework, as shown in Fig. 3(a), the conditional assortativity \(k_{nn}(x)\) measures how much a location with attractiveness \(x\) tends to be a destination for an origin location of population \(y\), as defined in [62; 14; 63]. In a similar way, the three-point correlations can be studied in terms of the clustering coefficient spectrum \(c(\kappa,t)\), which indicates the probability that two neighbors of a strength-\(\kappa\) node are neighbors themselves.

Figure 2: Visit distribution computed in the case of constant trip weights, with attractiveness distribution \(\rho(x)\sim\rho_{0}x^{-2}\) and kernel function \(\mathcal{K}(x,y)\propto x^{1.5}h(y)\), so that \(\nu_{x}=\nu_{0}x^{1.5}\). In (a), the probability density \(P(\kappa)\) has been estimated with three different approaches: (1) numerical integration of the compound probability as in eq.(3); (2) Monte Carlo (MC) graph simulation of sequential adjacency matrices with \(N=300\) locations and a simulation time of \(t=10^{5}\) time steps; (3) the analytical asymptotic estimate eq.(12). The three approaches provide the same scale-free behavior of the visit distribution, \(P(\kappa,t)\sim t^{\frac{2}{3}}\kappa^{-\frac{5}{3}}\). In (b) the compound distribution approach is calculated at three different time snapshots.
In the case of weighted and directed networks there are many different ways to define the clustering coefficient [64; 65]. At the latent variable level, the conditional clustering coefficient of a destination with attractiveness \(x\) can be interpreted as the probability that two randomly chosen locations with trips towards a destination of attractiveness \(x\) are neighbors. Consequently, the Markovian property at the latent variable level [62; 66; 67] allows one to calculate analytical expressions for the assortativity \(k_{nn}(\kappa)\), quantifying two-vertex correlations, and the clustering coefficient spectrum \(c(\kappa)\), as a measure of three-vertex correlations. A very important result is that the degree correlations of trip-visit distributions are completely determined by the attraction (and production) rate and by the origin-destination conditional probability \(\chi(y|x)\). **Proposition 2**: _In the visiting mobility network under the latent variable assumption, the origin-destination correlation is defined as the conditional probability that a visit in a destination of attractiveness \(x\) originated from a location of population \(y\), and it is written as:_ \[\chi(y|x)=\frac{\partial}{\partial y}\log\mathcal{V}_{x} \tag{13}\] _As a consequence, the following estimates of the two-point and three-point correlations hold:_ * _the average out-strength of the origin neighbors of destinations with in-strength_ \(\kappa\) _can be written as:_ \[k_{nn}(\kappa,t)\sim\frac{t}{P(\kappa,t)}\iint\nu_{y}p(\kappa,t|x)\chi(y|x)\rho(x)dydx\] (14) _If destinations and origins are independent, then_ \(\chi(y|x)=\chi(y)\) _and_ \(k_{nn}(\kappa,t)=\text{const}\)_._ * _the clustering coefficient for destinations of in-strength_ \(\kappa\) _is:_ \[c(\kappa,t)\sim\frac{1}{2\nu_{0}P(\kappa,t)}\iiint p(\kappa,t|x)\rho(x)\left(\nu_{y^{\prime}}+\nu_{y^{\prime\prime}}\right)\,\chi(y^{\prime}|x)\,\chi(y^{\prime\prime}|x)dy^{\prime}dy^{\prime\prime}dx\] (15) Proof.: The conditional
origin-destination probability, i.e. the conditional probability that a destination block of attractiveness \(x\) is connected to an origin block of population \(y\), is: \[\chi(y|x)=\frac{\phi(y)\mathcal{K}(x,y)}{\int\phi(y)\mathcal{K}(x,y)dy}=\frac{\partial\mathcal{V}_{x}}{\partial y}\frac{1}{\nu_{x}}=\frac{\partial}{\partial y}\log\mathcal{V}_{x} \tag{16}\] where \(\mathcal{V}_{x}\) is the primitive function of the attraction rate \(\nu_{x}\). In order to write the explicit expression of the origin-destination correlation it is necessary to know the pairing rule, for example through the connection kernel \(\mathcal{K}(x,y)\) so that \(\nu_{x}=\nu_{0}\int_{\Omega_{y}}\mathcal{K}(x,y)\phi(y)dy\), or by assuming a generic primitive function for the attraction rate. The conditional origin-destination probability is thus an important indicator of correlations between origins and destinations in the degree distribution of the visiting mobility network. The conditional average-nearest-neighbor in-degree for destinations of attractiveness \(x\) can be written for a directed multigraph in the continuous limit as in [12, 13]: \[k_{nn}(x)=\int\mathbb{E}[\kappa|y]\,\chi(y|x)dy \tag{17}\] where in the mobility model the conditional expected strength is \(\mathbb{E}[\kappa|y]\propto\nu_{y}t\).

Figure 3: The two-point and three-point correlations of the trip mobility network can be calculated in terms of the hidden variables \(x\) and \(y\). In particular, the conditional average-nearest-neighbor strength at the latent variable level (a) is the in-out strength (origin-destination) assortativity coefficient, and the clustering coefficient at the latent variable level (b) is the "in" clustering (or destination clustering) coefficient for weighted and directed networks as in [64; 65], i.e. a triangle such that there are two trips coming into the destination node (\(x\gets y^{\prime},x\gets y^{\prime\prime},x^{\prime}\gets y^{\prime\prime}\lor x^{\prime\prime}\gets y^{\prime}\)).
Since, by the Markovian degree property, the two-point degree correlation is fully determined by the conditional probability \(P(\kappa^{\prime}|\kappa)\), the average degree of the neighbors of an in-degree-\(\kappa\) destination can be calculated as [14, 63]: \[k_{nn}(\kappa,t)=1+\frac{1}{P(\kappa,t)}\int p(\kappa,t|x)\rho(x)k_{nn}(x)dx=1+\frac{t}{P(\kappa,t)}\iint\nu_{y}\chi(y|x)p(\kappa,t|x)\rho(x)dydx\] which is an in-out (origin-destination) assortativity measure, independent of \(\kappa\) for \(\chi(y|x)=\chi(y)\), as in the case of a multiplicatively separable linking function \(\mathcal{K}(x,y)\). Let us notice that, since the network is directed as well as weighted, one could similarly define three other average-nearest-neighbor degree functions: destination-origin, origin-origin and destination-destination. The clustering coefficient of a destination with attractiveness \(x\) can be interpreted as the probability that two randomly chosen edges from \(x\) are origin-neighbors. The clustering of a destination of degree one or zero is defined as zero. In the space of latent variables, consider a destination \(i\) of attractiveness \(x_{i}\) and population \(y_{i}\), which is connected, with probability \(p(y_{j},y_{k}|x_{i})\), through trips originating from two other locations \(j\) and \(k\), which have attractiveness \(x_{j}\) and \(x_{k}\) and population \(y_{j}\) and \(y_{k}\) respectively. Since the network is Markovian at the latent variable level, \(p(y_{j},y_{k}|x_{i})=p(y_{j}|x_{i})p(y_{k}|x_{i})\).
Thus, similarly to the definition in [68, 14], together with the modifications [64, 69] for the directed and weighted case, the local origin-destination clustering for locations of attractiveness \(x_{i}\) can be written as: \[c(x_{i})=\sum_{j,k}p\big{(}(x_{j},y_{j}),(x_{k},y_{k})\big{)}\,p(y_{j},y_{k}|x_{i})=\frac{\sum_{j,k}\frac{1}{2}\big{(}\mathcal{K}(x_{j},y_{k})+\mathcal{K}(x_{k},y_{j})\big{)}\,\mathcal{K}(x_{i},y_{j})\,\mathcal{K}(x_{i},y_{k})}{\sum_{j,k}\mathcal{K}(x_{i},y_{j})\,\mathcal{K}(x_{i},y_{k})}\] where \(p\big{(}(x_{j},y_{j}),(x_{k},y_{k})\big{)}\) is the probability that the two origin nodes are connected to each other in both directions. Now, in the asymptotic continuous regime the clustering coefficient can be rewritten as: \[c(x)=M\iiiint\frac{1}{2}(\mathcal{K}(x^{\prime},y^{\prime\prime})+\mathcal{K}(x^{\prime\prime},y^{\prime}))\,\mathcal{K}(x,y^{\prime})\,\mathcal{K}(x,y^{\prime\prime})\,\rho(x^{\prime})\rho(x^{\prime\prime})\phi(y^{\prime})\phi(y^{\prime\prime})dx^{\prime}dx^{\prime\prime}dy^{\prime}dy^{\prime\prime}\] where \(M=(\int\mathcal{K}(x,y^{\prime})\phi(y^{\prime})dy^{\prime}\int\mathcal{K}(x,y^{\prime\prime})\phi(y^{\prime\prime})dy^{\prime\prime})^{-1}\), so it is possible to write: \[c(x)=\iiint\chi(y^{\prime}|x)\mathcal{K}(x^{\prime},y^{\prime\prime})\chi(y^{\prime\prime}|x)\rho(x^{\prime})dx^{\prime}dy^{\prime}dy^{\prime\prime}=\frac{1}{2\nu_{0}}\iint\big{(}\nu_{y^{\prime}}+\nu_{y^{\prime\prime}}\big{)}\,\chi(y^{\prime}|x)\,\chi(y^{\prime\prime}|x)dy^{\prime}dy^{\prime\prime}\] knowing that \(\nu_{y}=\nu_{0}\int_{\Omega_{x}}\mathcal{K}(x,y)\rho(x)dx\) and the definition of \(\chi(y|x)\). Let us notice that for independent origins and destinations \(c(x)=c_{0}=\text{const}\).
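The constancy claims above can be checked numerically: with a multiplicatively separable kernel \(\mathcal{K}(x,y)=g(x)h(y)\), the factor \(g(x)\) cancels in \(\chi(y|x)\), so the conditional assortativity integral \(\int\nu_{y}\chi(y|x)dy\) takes the same value for every \(x\). A minimal sketch with assumed illustrative choices of \(g\), \(h\) and \(\phi(y)\), where a discretized grid stands in for the integrals:

```python
import numpy as np

# assumed illustrative ingredients (not taken from the paper's data)
g = lambda x: x**1.5                  # destination part of K(x, y) = g(x) h(y)
h = lambda y: y                       # origin part; nu_y is proportional to h(y)

ys = np.linspace(1.0, 10.0, 2000)
dy = ys[1] - ys[0]
phi = ys**-2.0
phi /= (phi * dy).sum()               # normalized origin-population density phi(y)

def chi_given_x(x):
    # chi(y|x) = phi(y) K(x, y) / int phi(y) K(x, y) dy ; g(x) cancels out
    w = phi * g(x) * h(ys)
    return w / (w * dy).sum()

# conditional assortativity, up to constants: int h(y) chi(y|x) dy
knn = np.array([(h(ys) * chi_given_x(x) * dy).sum() for x in (1.0, 2.0, 5.0)])
```

All three entries of `knn` coincide, illustrating that \(\chi(y|x)=\chi(y)\) for separable kernels; the same cancellation makes \(c(x)\) constant.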
Moreover, in the case of the clustering coefficient of a multigraph, one can calculate the number of triangles repeated \(\kappa\) times, which in a Markovian graph can be approximated on average as \(\overline{c}_{t}(x)=tc(x)\): at each time step a possible link is a Bernoulli trial, so the observed number of links in \(t\) trials follows a binomial distribution, with expected value \(t\mathcal{K}(x,y)\). Consequently, since the clustering coefficient takes values in \([0,1]\), we normalize the adjacency matrix with respect to \(t\), so that the average local clustering coefficient of a node with strength \(\kappa\), denoted by \(c(\kappa)\), [14, 62, 63, 70] is given by: \[c(\kappa,t)=\frac{1}{tP(\kappa,t)}\int p(\kappa,t|x)\overline{c}_{t}(x)\rho(x)dx=\frac{1}{P(\kappa,t)}\int p(\kappa,t|x)c(x)\rho(x)dx\] which represents the local in-clustering spectrum for destination locations and is independent of \(\kappa\) for \(\chi(y|x)=\chi(y)\), as in the case of a multiplicatively separable linking function \(\mathcal{K}(x,y)\); here \(P(\kappa,t)\) represents the in-strength distribution. Let us notice that in the case of the clustering coefficient the adjacency matrix and the latent variables need to be normalized in order to transform the multigraph into a weighted graph, which guarantees a clustering coefficient no larger than 1. The Markovian nature of this class of networks implies that all higher-order correlations can be expressed as a function of the attraction and production rates \(\nu_{x},\nu_{y}\) and the conditional origin-destination probability \(\chi(y|x)\), allowing an exact treatment of mobility models at the mean-field level. Under the hypothesis that origins and destinations are independent, that is \(\chi(y|x)=\chi(y)\), the average-nearest-neighbor strength function and the clustering coefficient are constant in \(\kappa\), as shown for example in the simulations of Fig. 4(a) and Fig. 4(b).
Consequently, for neutral networks the two- and three-point correlations can be obtained with three different approaches that provide the same estimate: the input approach using latent variables ('latent estimate'), the output approach through the adjacency matrix ('expected value') and, finally, the algorithmic computation of assortativity and clustering coefficients for directed and weighted networks ('simulation approach'). **Remark 2**: _Under the hypothesis that the visit production process and visit attraction process are independent, the average in-strength of nearest neighbors is constant, and in the asymptotic limit:_ \[k_{nn}(\kappa,t)\sim\frac{t\mathbb{E}[h^{2}(y)]}{N\mathbb{E}[h(y)]^{2}}=\frac{\langle\kappa_{out}^{2}\rangle}{\langle\kappa_{out}\rangle} \tag{18}\] _As regards the clustering coefficient spectrum, under the same hypothesis:_ \[c(\kappa)\sim\frac{\mathbb{E}[g(x)]\mathbb{E}[h^{2}(y)]}{\mathbb{E}[h(y)]}=\frac{\langle\kappa_{in}\rangle}{tN}\left(\frac{\langle\kappa_{out}^{2}\rangle-\langle\kappa_{out}\rangle}{\langle\kappa_{out}\rangle^{2}}\right)^{2} \tag{19}\] Proof.: The equations for the average in-strength of nearest neighbors and the in-clustering coefficient can be derived directly from Proposition 2 after some algebraic manipulation, considering that the production rate \(\nu_{y}\) and the conditional probability \(\chi(y|x)\) are independent of \(x\), as in the case when the attraction and production rates are recovered from a multiplicatively separable linking function, i.e. \(\mathcal{K}(x,y)=g(x)h(y)\).
In fact, when origins and destinations are independent, that is \(\chi(y|x)=\chi(y)\), the average-nearest-neighbor strength function is \(k_{nn}(\kappa,t)=1+t\int\nu_{y}\chi(y)dy\), which is constant over the strength degrees \(\kappa\); here \(\mathbb{E}[g(x)]\) and \(\mathbb{E}[h(y)]\) are the expectation values of the functions \(g(x)\) and \(h(y)\), under the circumstance that they have finite values, which occurs even for fat-tailed distributions on a finite set for the latent variables [13]. In the case of infinite moments the equation above gives an unreliable estimate, but still shows the neutral assortativity of the graph. As regards the clustering coefficient, the result in eq.(19) is straightforward using the multiplicatively separable linking function as above, with the only difference that it has been normalized in order to provide a weighted matrix with link weights not larger than one. Other approaches for estimating the assortativity and clustering coefficient can be derived in terms of the strength degrees of the nodes, as provided in [14, 61, 62] with proper modifications for directed weighted graphs, see [13, 71, 72]. The expected values of \(k_{nn}(\kappa)\) and \(c(\kappa)\) are analytically known in the literature for neutral networks, i.e. with no degree correlations. In terms of degrees, the expected average in-strength of nearest neighbors can be written as: \[k_{nn}(\kappa_{in})=\sum_{\kappa_{out}}\kappa_{out}P(\kappa_{out}|\kappa_{in})=\sum_{\kappa_{out}}\kappa_{out}\frac{\kappa_{out}P(\kappa_{out})}{\langle\kappa_{out}\rangle}=\frac{\langle\kappa_{out}^{2}\rangle}{\langle\kappa_{out}\rangle}=\mathbb{E}[k_{nn}^{(u)}]\] where in the absence of correlations \(P(\kappa_{out}|\kappa_{in})=\kappa_{out}P(\kappa_{out})/\langle\kappa_{out}\rangle\) has been used.
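The flat assortativity predicted for uncorrelated networks can be checked with a direct simulation: when destinations are chosen independently of origins, the average out-degree of the origins feeding any destination sits at \(\langle\kappa_{out}^{2}\rangle/\langle\kappa_{out}\rangle\), regardless of the destination's in-degree. A minimal sketch with an assumed heterogeneous origin activity (illustrative parameters only):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 300, 50_000                    # assumed illustrative network size and trips

# heterogeneous origin activity, destinations chosen uniformly: by
# construction there are no origin-destination correlations
p_out = rng.pareto(2.5, N) + 1.0
p_out /= p_out.sum()

origins = rng.choice(N, size=T, p=p_out)
dests = rng.integers(0, N, size=T)

k_out = np.bincount(origins, minlength=N).astype(float)
k_in = np.bincount(dests, minlength=N).astype(float)

# per-destination average out-degree of the origins feeding it
knn_num = np.zeros(N)
np.add.at(knn_num, dests, k_out[origins])
knn = knn_num / np.maximum(k_in, 1.0)

pred = (k_out**2).mean() / k_out.mean()   # neutral prediction <k_out^2>/<k_out>
```

Averaging `knn` over destinations reproduces `pred`, the neutral-network value of eq.(18), with no trend in \(\kappa_{in}\).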
In the case of the clustering coefficient, after normalization of the multigraph, the in-clustering coefficient can be written as in [62, 73]: \[c(\kappa)=\sum_{\kappa^{\prime}_{out},\kappa^{\prime\prime}_{out}}\frac{(\kappa^{\prime}_{out}-1)(\kappa^{\prime\prime}_{out}-1)}{tN\kappa^{\prime\prime}_{out}P(\kappa^{\prime\prime}_{out})}P(\kappa^{\prime\prime}_{out}|\kappa^{\prime}_{out})P(\kappa^{\prime\prime}_{out}|\kappa_{in})P(\kappa^{\prime}_{out}|\kappa_{in})=\frac{\langle\kappa_{in}\rangle^{3}}{tN\kappa_{in}^{2}P^{2}(\kappa_{in})}\sum_{\kappa^{\prime}_{out},\kappa^{\prime\prime}_{out}}\frac{(\kappa^{\prime}_{out}-1)(\kappa^{\prime\prime}_{out}-1)P(\kappa^{\prime\prime}_{out},\kappa^{\prime}_{out})P(\kappa^{\prime\prime}_{out},\kappa_{in})P(\kappa^{\prime}_{out},\kappa_{in})}{\kappa^{\prime}_{out}\kappa^{\prime\prime}_{out}P(\kappa^{\prime}_{out})P(\kappa^{\prime\prime}_{out})}\] In the case of uncorrelated networks, substituting \(P(\kappa^{\prime\prime}_{out}|\kappa^{\prime}_{out})=\kappa^{\prime\prime}_{out}P(\kappa^{\prime\prime}_{out})/\langle\kappa_{out}\rangle\) (and analogously for the other conditionals) and carrying out the sums gives: \[c(\kappa)=\frac{1}{tN}\frac{\big{(}\langle\kappa_{out}^{2}\rangle-\langle\kappa_{out}\rangle\big{)}^{2}}{\langle\kappa_{out}\rangle^{3}}=\frac{\langle\kappa_{in}\rangle}{tN}\left(\frac{\langle\kappa_{out}^{2}\rangle-\langle\kappa_{out}\rangle}{\langle\kappa_{out}\rangle^{2}}\right)^{2}\] using \(\langle\kappa_{in}\rangle=\langle\kappa_{out}\rangle\), which is independent of \(\kappa\). Let us notice that the clustering coefficient is normalized with respect to time \(t\), since \(c(\kappa)\in[0,1]\), and it is the generalization to uncorrelated directed multigraphs of the result for simple graphs in [14, 57, 68].
Simulations of these results are shown in Fig.5, where several computational simulations of the graph for different time lengths \(t\) are presented alongside the predictions for uncorrelated networks of the assortativity and clustering coefficient. It is worth noticing that the analytical predictions are asymptotically valid, so that no isolated nodes or leaves exist; local neighborhood clustering is typically not defined if a node has one or no neighbors, and such a situation influences the estimation of the global clustering in sparse networks [74]. In the present work, the clustering coefficient algorithm removes the local clustering of all the nodes with fewer than 2 neighbors, so the global clustering coefficient is over-estimated2. Footnote 2: Another choice would be to set to zero the local clustering coefficient of all nodes with fewer than two neighbors. In such a case the global clustering coefficient would be under-estimated

## 3 Results

In the present section a network analysis of the main graph measures and topology is carried out for the particular case study of the New York metropolitan area, using the SafeGraph Mobility Dataset [75] for the year 2019.

### Data

Origin-destination (OD) data represent movement flows through geographic space, from an origin (O) to a destination (D). OD datasets represent information on trips between two geographic areas, often represented by the geographical centroids of the areas. Typically encoded as a square matrix, OD flow data contain numerical data on the aggregate quantity of individuals travelling from one geographic area to another over a specific time period. Mostly used in transportation planning, OD flows are an invaluable source of data for understanding spatial and temporal patterns of urban mobility and dynamics [76, 77, 2, 78]. Visit flows can in practice be estimated in various ways from real-world data.
In particular, mobile phone location data are provided by SafeGraph through dynamic-population Origin-Destination flow matrices with hourly temporal resolution, aggregated by census block groups (CBG) in the USA, as discussed in [37]. In the daily CBG-to-CBG visitor flows metric, each row contains an origin CBG and a destination CBG, as well as the number of mobile-phone-based visitor flows from the origin CBG to the destination CBG. Every day, the number of unique mobile phone users who live in the origin CBG and visit the destination CBG is recorded. As regards the visit production model, the population of each block is the key information to obtain from the data in order to define the variable \(y\) and its distribution. However, the population data are susceptible to the way data are collected and sampled by the provider. In fact, the demographic sampling depends on many factors, such as the geographical boundaries which define a block, a tract, or any administrative tessellation. Moreover, in the statistical sampling methodology the individual measurements in each block go through a few transformations and aggregations which impact the final measurement [79]. From SafeGraph data it is possible to build a matrix of trip flows between locations in a day for an arbitrarily large region \(R\) of the US.

Figure 4: The estimate of assortativity and transitivity of a latent variable network with the same structure as in Fig.2, with the additional specification \(\phi(y)\sim\phi_{0}y^{-2}\). The average-nearest-neighbor strength function (a) shows a neutral assortativity in the network; the dashed line represents the assortativity mean value. The local clustering coefficient for different values of \(\kappa\) in (b), showing that the transitivity is constant; the dashed line represents the average global clustering coefficient.
In particular, a county is considered with a tessellation at the resolution of the census block group level; it is then possible to reconstruct the adjacency matrix of directed trips from an origin location towards a destination from the stops of devices, as described in [75]. Fig. 6 plots a sequence of visitation counts in different census block group areas during different time windows of a day for New York City. The data have been re-organized as shown in Table 1, where the \((N+1)\times(N+1)\) matrix \(\mathbf{A}\) indicates the global origin-destination visiting flow between the \(N\) blocks of the region \(R\) plus one external node which represents the rest of the world available in the data outside the region of interest \(R\). In particular, \(a_{ij}\) is the number of visits registered as stops at destination \(j\) in \(R\) that originated at location \(i\) in \(R\). For locations outside the selected region, the entry \(w_{i}\) counts the visits at destination \(i\) of the region \(R\) that originated from a location outside the region \(R\). The in-degree of destination location \(i\) is the sum along row \(i\) of the matrix \(\mathbf{A}\), from which the empirical visit distribution is evaluated. In addition, the trip-visit distribution, i.e. the in-strength distribution, is evaluated by associating a weight to each visit. As a typical choice, the weight is taken to be the distance in kilometers between the origin block and the destination one, as an estimate of the distance from home travelled by devices.

Figure 5: Two-point and three-point network correlations using three different approaches, in the case of an uncorrelated graph as reported in Remark 2. (a) The overall average nearest-neighbor strength \(\langle k_{nn}(\kappa)\rangle_{\kappa}\) is replicated for each \(t\) so as to obtain a global mean value \(\langle k_{nn}\rangle\) over an ensemble of \(S=50\) replications.
(b) Similarly, one obtains the global mean in-clustering coefficient \(\langle c\rangle\).

This distance information is recovered from the census bureau geographical data using Census Block Group geometries with longitude and latitude coordinates of the block centroids [75, 81], calculated as the haversine distance between the visitor's home geohash-7 and the destination location geohash-7 for each visit. A more detailed estimate would be the effective distance traveled by each visitor in the trip between their home location and the selected destination. Such information is not reported in SafeGraph at the moment, nor in other data sources consistent with the data structure used in this study. However, the radial movement approximation is motivated by the fact that travelers typically seek the shortest route [27, 78].

### Mobility network analysis

Let us start with the estimate of the distribution of visits among different destinations, namely the in-strength distribution, where the in-strength of destination \(i\) is \(\kappa_{i}=\sum_{j}C_{ij}A_{ij}\) and \(A_{ij}\) is the entry of the adjacency matrix indicating the number of arrivals at destination \(i\) of trips originated at location \(j\). Here \(C_{ij}\) is the visit "size" of the traveler who departed from origin \(j\) and arrived at destination \(i\). Such values are taken from the weight matrix \(C\), which represents, in this particular case, the distances between origin-destination pairs. Such cost matrices are directly recovered from the census data included in the dataset used. In Fig. 7(a) the empirical complementary cumulative distribution function is plotted for the case of the New York metropolitan area on a typical day of November 2019. The inspection of the in-strength distribution shows that the visit distribution has a scale-free asymptotic behavior \(P(\kappa)\sim\kappa^{-\mu}\) with a power-law coefficient of \(\mu\approx 1.8\).
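A minimal sketch of this computation (with hypothetical block centroids and arrival counts, not the SafeGraph data): the haversine formula gives the cost matrix \(C\) in kilometers, and the distance-weighted in-strength follows as \(\kappa_i=\sum_j C_{ij}A_{ij}\).

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

# Hypothetical block centroids (lat, lon) and an arrivals matrix A,
# with rows indexing destinations and columns indexing origins.
cent = np.array([[40.71, -74.00],   # block 0
                 [40.75, -73.99],   # block 1
                 [40.80, -73.95]])  # block 2
A = np.array([[0, 4, 1],
              [2, 0, 3],
              [1, 1, 0]], dtype=float)

# Cost matrix C: haversine distance between every origin-destination pair.
C = haversine_km(cent[:, None, 0], cent[:, None, 1],
                 cent[None, :, 0], cent[None, :, 1])

# Distance-weighted in-strength of each destination: kappa_i = sum_j C_ij A_ij.
kappa = (C * A).sum(axis=1)
```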
Let us now discuss the degree correlations, such as assortativity and clustering, in the case of the origin-destination matrix.

\begin{table}
\begin{tabular}{|c|c c c c|c|}
\hline
 & \multicolumn{4}{c|}{\(\boldsymbol{O}\)} & \(W_{O}\) \\
\hline
 & \(a_{11}\) & \(a_{12}\) & \(\dots\) & \(a_{1N}\) & \(w_{1}\) \\
\(\boldsymbol{D}\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) & \(\vdots\) \\
 & \(a_{N1}\) & \(a_{N2}\) & \(\dots\) & \(a_{NN}\) & \(w_{N}\) \\
\hline
\(W_{D}\) & \multicolumn{4}{c|}{\(w^{1}\cdots w^{N}\)} & \(v_{w}\) \\
\hline
\end{tabular}
\end{table}
Table 1: Data matrix format. The vector \(\boldsymbol{D}\) represents the array of locations as destination units in the tessellation region. Similarly, \(\boldsymbol{O}\) represents the same array of locations but as origins of trips inside the tessellation region. The array \(W_{D}\) is the set of destinations located outside the region and \(W_{O}\) is the set of all the origin locations outside the region. Thus \(\boldsymbol{A}_{0}\) is the open O-D table, and \(\boldsymbol{A}\) is the closed O-D table.

Figure 6: Number of cumulative visits in New York City [80] for different time windows.

As already discussed, the average out-strength of the neighbors of destinations with in-strength \(\kappa\) measures the tendency to have directed trips from an origin location to a destination, defined as in [13, 82], and from here the assortativity spectrum can be built. As plotted in Fig. 7(b), the average nearest-neighbor in-strength function is flat, showing a neutral assortativity behavior with a mean value of \(\langle k_{nn}(\kappa)\rangle_{\kappa}\approx 15.4\). This estimate is in agreement with the analytical prediction of the expected average in-strength of the nearest neighbors for uncorrelated networks, \(\mathbb{E}[k_{nn}^{(u)}]=\langle\kappa_{out}^{2}\rangle/\langle\kappa\rangle=15.7\), as proposed in Remark 2.
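The uncorrelated-network benchmark of Remark 2 can be evaluated on any strength sequence; a sketch with a synthetic fat-tailed sequence (the exponent below is illustrative, not fitted to the data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fat-tailed strength sequence (shifted Lomax/Pareto, finite moments).
kappa = rng.pareto(3.5, size=20000) + 1.0

# Uncorrelated-network prediction for the average nearest-neighbor strength:
# E[k_nn^(u)] = <kappa^2> / <kappa>, a flat line over the whole spectrum.
knn_pred = np.mean(kappa ** 2) / np.mean(kappa)
```

By the Cauchy-Schwarz inequality this benchmark always sits at or above the mean strength, which is why a neutral (flat) assortativity spectrum lies above \(\langle\kappa\rangle\).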
Under the same conditions, the average in-clustering coefficient can be computed according to the definition of [69, 83], adapted to the in-clustering coefficient defined in the present work. The clustering spectrum for the data is shown in Fig. 7(c): the spectrum is flat, as for uncorrelated graphs, and the global clustering coefficient is \(\langle c_{in}(\kappa)\rangle\approx 0.02\), consistent with the analytical prediction of \(\mathbb{E}[c^{(u)}]\) for uncorrelated networks as reported in Remark 2. This shows that the O-D SafeGraph mobility network is consistent with the hypothesis of an uncorrelated graph with a scale-free visit distribution at a macroscopic scale, as also discussed in [2]. The absence of degree correlations allows one to take the origin-destination conditional probability to be \(\chi(y|x)=\chi(y)\). This means that a destination receives a visit from a randomly chosen origin location, without any particular preference for the origin other than its resident population. Consequently, the correlations between origins and destinations are entirely due to the trip costs represented in the weight matrix; this is the well-mixedness of locations at the level of attractiveness, as specified in the model assumptions. The scale-free behavior of the visit distribution, together with the neutral tendency of the degree correlations, allows us to narrow down the type of kernel function to be considered in the model. Despite that, we do not have enough information to uniquely determine the attraction rate and the latent variable distribution. We will face this issue in the next section, where we discuss possible proxies of the attractiveness latent variable.

### Network scaling topology

A very important analysis of the mobility network is to study how the trip costs affect the topology of the network.
The number of trip arrivals at the \(i\)-th destination can be written as the in-degree \(k_{i}=\sum_{j}A_{ij}\), while the visit strength of the \(i\)-th destination can be written as \(\kappa_{i}=\sum_{j}C_{ij}A_{ij}\), where \(C_{ij}\) is the entry of the origin-destination distance matrix \(C\).

Figure 7: Visit distribution and correlations of the SafeGraph Neighbor Patterns mobility network for New York City in November 2019. (a) The complementary cumulative distribution function of the trip-visit probability density, with a clear scale-free behavior. As for the trip-visit correlations, the network shows neutral assortativity (b) and neutral clustering (c), computed on the normalized version of the OD adjacency matrix.

The in-degree and in-strength distributions are plotted in Fig. 8a and Fig. 8b respectively, with a clear scale-free asymptotic behavior but with different power-law coefficients. Such evidence highlights a very interesting aspect of visiting patterns, where trip-cost weights have a significant effect on the mobility network structure. In particular, the relation between the strength and degree of a location node can be expressed through how the average strength of destinations with degree \(k\) changes: \[\kappa(k)\sim k^{1+\delta} \tag{20}\] where the exponent \(\delta\) represents the rescaling factor; \(\delta=0\) occurs in the absence of correlations between the weight of links and the degree of nodes [82], so that the strength of a node is simply proportional to its degree and the two quantities therefore provide the same information on the system. Correlations in the weights can produce cases where \(\delta\neq 0\). In such a situation, this relation induces a change between the scaling of the degree distribution (i.e. visit distribution) \(P(k)\sim k^{-\mu_{0}}\) and the strength distribution (i.e.
trip-visit distribution) \(P(\kappa)\sim\kappa^{-\mu}\), according to the relation: \[\mu=\frac{\mu_{0}+\delta}{1+\delta} \tag{21}\] and in the case of \(\delta=-1\), \(P(\kappa)\) is a delta distribution of constant strength. In the case of the data under study, there is a clear linear relation between strength and degree, as in Fig. 8c; consequently, the strength distribution shows a scale-free coefficient \(P(\kappa)\sim\kappa^{-\mu}\) different from the one in the degree distribution \(P(k)\sim k^{-\mu_{0}}\), consistent with the transformation in eq. (21). The value of the rescaling exponent \(\delta\) has been estimated through a regression analysis, which also allows one to assess the significance level of a linear relation between degree and distance-strength. As regards the data used in this study, the weight of a link corresponds to the physical distance between the origin and the destination of the trip, which defines the trip-visit distribution as indicated in Fig. 3. As reported in the caption, there is a clear change of slope between the scale-free distributions: the visit distribution based on the degree (Fig. 8a) shows a steeper power-law decay than the trip-visit distribution based on node strengths (Fig. 8b). This is due to the linear relation between the degree and its weighted version through the origin-destination distances. Let us note that the weights have an impact on the degree which is significant in determining a change in the scaling relation of the distribution. The result \(\delta>0\) suggests that the strength of nodes grows faster than their degree; in other words, the trip distances associated with highly visited locations have higher values than those expected if the trip distances were assigned at random. Such a tendency denotes a strong correlation between the trip weights and the topological properties of the mobility network, where the higher the number of visits in a location, the more traffic the location can handle.
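These scaling relations can be illustrated with a short numerical sketch (synthetic data, not the SafeGraph network): a log-log regression of strength against degree recovers the exponent \(1+\delta\), and plugging the quoted exponents \(\mu_0\approx 2.2\) and \(\delta\approx 0.53\) into eq. (21) reproduces the observed \(\mu\approx 1.8\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic degrees and strengths following kappa ~ k^(1+delta) with
# delta = 0.5, plus multiplicative log-normal noise (empirical-like scatter).
k = rng.integers(1, 1000, size=5000).astype(float)
kappa = k ** 1.5 * rng.lognormal(0.0, 0.1, size=k.size)

# Log-log linear regression: the slope estimates the exponent 1 + delta.
slope, _ = np.polyfit(np.log(k), np.log(kappa), 1)
delta_hat = slope - 1.0           # should be close to 0.5

# Consistency check of eq. (21) with the exponents quoted in the text:
mu0, delta = 2.2, 0.53
mu = (mu0 + delta) / (1 + delta)  # ~1.78, matching P(kappa) ~ kappa^(-1.8)
```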
In conclusion, a general scheme arises where the number of visitors to any destination decreases as an inverse power law of the product of visiting frequency and travel distance, as already suggested by [27]. On the contrary, random weights would have produced a \(\delta\) close to zero, which occurs when link weights are independent of the network topology, so that the strength distribution would carry no more information than the degree distribution. For a more detailed estimate of the rescaling exponent \(\delta\), a regression analysis is reported in Table 2 for different types of trip weights. As an additional study beyond the linear regression statistics, performing a residual analysis makes it possible to test the assumptions of the linear regression model, such as the errors being independent and normally distributed, as shown in the Supporting Materials (SM3).

## 4 Economic applications: from land-use to travel demand

In this section, the investigation of possible properties of the latent variables will be useful to select which kernel function is suitable to match the characteristics of the trip mobility network. Later, the latent variable will be shown to be a crucial key that justifies the income elasticity of a multi-purpose travel demand for any transportation mode.

### The latent variable interpretation

The model dynamics is driven by the presence of latent variables that are related to intrinsic properties of the areas; they cannot be directly measured (as deterministic quantities) but can only be inferred through statistical indicators (probabilistic measurements or proxies). At this point, it is possible to formulate the observed mobility network in terms of the latent-variable model, so that the scale-free distributions shown in the real-world trip mobility network can be stated in terms of the statistical attributes of the latent variables.
In particular, the attractiveness variable \(x\) and the productiveness variable \(y\) have been considered as hidden variables in the visitation model. The former summarizes the notion of the ability of a destination to attract visitors; the latter describes the ability of an origin location to produce and generate travelers with specific characteristics. However, latent variables can be interpreted more as proxies or indexes rather than proper measurements of observed phenomena. From this perspective, there are several studies which try to estimate the number of visits through a combination of urban features, such as job opportunities, retail shops, business activities, infrastructural capacity, geographical position, etc. The standard approach to categorizing urban areas classifies regions by their physical features and land use, which refers to the way in which land is utilized, developed, and transformed for different purposes, such as residential, commercial, industrial, and agricultural ones. Urban morphology is seen as the result of dynamic interactions between multiple factors, such as transportation efficiency, population size, and local land use. In particular, regional movement patterns, and consequently travelers' distributions, can be explained from land use, since the purposes of people's trips are strongly correlated with the land use of the trip's origin and destination [22, 32, 77, 84].

Figure 8: Visit patterns for the in-strength defined in terms of the distance traveled in trips. The linear fit of the degree-strength scatter plot provides a significant scaling coefficient, \(\kappa\sim k^{1+0.53}\), so that the asymptotic behavior of the visit distribution is \(P(k)\sim k^{-2.2}\), while the trip-visit distribution by distance weights is \(P(\kappa)\sim\kappa^{-1.8}\).
For example, the latent-variable framework can provide an interesting interpretation of the effect of trip distances on the visit distribution, which can be formalized in terms of the attractiveness latent-variable model. It is beyond the scope of this work to detect the best combination of factors defining the attractiveness of locations; however, as supported by several studies [21, 85, 24], let us take the non-residential land use of census blocks to be the primary driver of travel demand, so that it can be considered a proxy of the destination attractiveness \(x\) in a multi-purpose travel model. From this perspective, data from NYC Open Data [86] have been used, where land-use zones are reported as the square feet occupied by buildings, parks and areas with a given destination of use (except residential); see the Supplementary Materials for more details on the data used and the corresponding interpretations. Land use can be seen as a "ceteris paribus" candidate for the attractiveness of locations: the probability density function of the square feet of land-use lots shows a scale-free behavior with a power-law exponent of 2, as plotted in Fig. 9a. The same analysis is performed after aggregating tax lots into census block groups: the land-use areas for non-residential purposes keep the same asymptotic fat-tailed distribution, with an inverse power-law probability density function \(\rho(x)\sim x^{-\eta}\) with \(\eta\approx 2\), as plotted in Fig. 9b. At this point it is possible to investigate the relation between mobility and land-use data (as an attractiveness indicator). First, one can analyze the relation between the number of arrivals \(k\) at each destination and the non-residential land use \(x\) of that area. Similarly, the relation between land use and the visits \(\kappa\), as weighted arrivals, is analyzed as well. So, let us carry out a log-log linear regression analysis of these variables.
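The log-binned estimate of a Pareto-type density of this kind (the procedure used for Fig. 9) can be sketched on synthetic data; the sample below is drawn with tail exponent \(\eta=2\) and the fitted slope recovers it (illustrative only, not the land-use data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "lot sizes" with a Pareto-type tail rho(x) ~ x^(-2) for x >= 1,
# i.e. tail exponent eta = 2 (numpy's pareto(1) shifted by 1 has that density).
x = rng.pareto(1.0, size=200_000) + 1.0

# Log-binning: logarithmically spaced bins, counts normalized by bin width.
bins = np.logspace(0, 4, 30)
counts, edges = np.histogram(x, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
density = counts / (np.diff(edges) * x.size)

ok = counts > 10                                   # keep well-populated bins
slope, _ = np.polyfit(np.log(centers[ok]), np.log(density[ok]), 1)
eta_hat = -slope                                   # should be close to eta = 2
```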
The linear fit of the relation \(k_{x}\sim x^{\alpha_{0}}\) reveals an estimated value \(\alpha_{0}\approx 0.842\), as shown in Fig. 10a, and the linear fit of the relation \(\kappa_{x}\sim x^{\alpha}\) reveals an estimated value \(\alpha\approx 1.322\), as shown in Fig. 10b. In Table 2 such estimates of \(\alpha_{0}\) and \(\alpha\) are reported for different types of trip weights. Knowing that \(\kappa_{x}=\mathbb{E}[\kappa|x]\propto\langle\mathbf{r}\rangle_{x}\mathbf{\nu}_{x}\sim x^{\theta+\alpha_{0}}\), it is possible to estimate that the scaling of the trip size goes as \(\langle\mathbf{r}\rangle_{x}\sim x^{\theta}\), as confirmed by a direct linear fit where \(\langle\mathbf{r}\rangle_{x}\propto x^{\theta}\) with \(\theta\approx 0.48\). In these circumstances, using the latent-variable framework one can recover the visit distribution as \(P(\kappa)\sim\kappa^{-(1+\frac{\eta-1}{\alpha_{0}+\theta})}\), which is the same scale-free distribution directly observed in the analysis of the origin-destination network. It is worth noticing the relation between the scaling exponents \(\theta\) and \(\delta\). It can easily be verified that the strength distribution in eq. (21) has \(\mu=\frac{\mu_{0}+\delta}{1+\delta}=1+\frac{\eta-1}{\alpha}\), where \(\mu_{0}=1+\frac{\eta-1}{\alpha_{0}}\) and \(\alpha=\alpha_{0}+\theta\). Solving, we find that the following relation holds: \[\delta=\frac{\alpha}{\alpha_{0}}-1=\frac{\theta}{\alpha_{0}} \tag{22}\] which can be checked by comparing the values reported in Table 2 within their error margins.
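This cross-check is a one-liner with the exponents quoted in the text (\(\alpha_0\approx 0.842\), \(\alpha\approx 1.322\), \(\theta\approx 0.48\)) against the directly regressed \(\delta=0.531\) with 99% interval \([0.467, 0.595]\) from Table 2:

```python
# Exponents quoted in the text: arrivals vs land use (alpha0), visits vs
# land use (alpha), and mean trip size vs land use (theta).
alpha0, alpha, theta = 0.842, 1.322, 0.48

delta_from_alpha = alpha / alpha0 - 1.0   # eq. (22), first equality
delta_from_theta = theta / alpha0         # eq. (22), second equality

# Both evaluate to ~0.570, inside the 99% confidence interval [0.467, 0.595]
# of the directly regressed delta = 0.531 reported in Table 2.
```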
\begin{table} \begin{tabular}{c||c c c} \hline \hline & Distance & Travel time & Income \\ \hline \multirow{2}{*}{\(\delta\)} & \(0.531^{*}\) & \(0.033^{*}\) & \(0.017^{*}\) \\ & \([0.467,0.595]\) & \([0.025,0.042]\) & \([0.010,0.024]\) \\ \hline \multirow{2}{*}{\(\alpha_{0}\)} & \(0.842^{*}\) & \(0.842^{*}\) & \(0.842^{*}\) \\ & \([0.790,0.894]\) & \([0.790,0.894]\) & \([0.790,0.894]\) \\ \multirow{2}{*}{\(\alpha\)} & \(1.322^{*}\) & \(0.874^{*}\) & \(0.858^{*}\) \\ & \([1.223,1.421]\) & \([0.822,0.932]\) & \([0.805,0.912]\) \\ \hline \end{tabular} Note: for the linear fits, * denotes a p-value of the linear regression \(P<0.01\). The confidence intervals have been calculated at a significance level of 99%. \end{table}
Table 2: In-degree vs. in-strength regression analysis for different trip weights. The slope of the linear fit reported corresponds to the coefficient \(1+\delta\).

In conclusion, the attraction rate of one location is higher than that of another destination in the sense that travelers are motivated to travel a longer distance to get there. This effect reveals the action of human travel demand on mobility, as formulated in the present paper in terms of the latent-variable network model. The scaling relations between land use and both travel distances and visits are power-law-like, so the attraction rate must be of power-law type, and the distribution of land use, i.e. the attractiveness latent variable, has a Pareto-type distribution. No other attraction rate is compatible with the observations. Moreover, since degree correlations are absent, it is plausible that \(\chi(y|x)=\chi(y)\), so that the kernel function \(\mathcal{K}(x,y)\) is a multiplicatively separable function.

### Income elasticity

The visitation model approach can provide economic interpretations of some empirical urban scaling evidence [2], where scaling laws are also present in the economic values of each location with respect to its attractiveness.
Particular attention will be devoted to the income elasticity of visit demand. Let us first define the "benefit" that travelers receive by visiting a location; in particular, the variable \(I_{x}\) indicates the income level associated with the visitors who have traveled to a given location with attractiveness \(x\) for job purposes. In this way the benefit can be determined through the strength-by-income variable \(\kappa_{i}\), which converts a visit into potential economic output through a conversion factor \(i_{0}\), the traveler's income per unit of visit time 3. On the other side, let us define the "cost" \(Q_{x}\) faced by travelers to reach a location as the strength-by-distance variable \(\kappa_{q}\), taking into consideration that commuting costs are proportional to the distance or time traveled, with the proportionality conversion factor \(c_{0}\) being the cost of transportation per unit of quantity traveled4.

Figure 9: Log-binning procedure for the probability density function of square feet of land-use zones, as a hypothetical index of destination attractiveness. It shows an inverse power-law fat-tailed distribution, so that the asymptotic behavior of the probability density function can be written as \(\rho(x)\sim x^{-2}\). In the inset the whole distribution is plotted, where the data are fully represented even at a low spatial scale. In (b), the complementary cumulative density function of the land-use square feet aggregated by census block from tax lot data.

Finally, the relation between the two variables can be written in terms of the attractiveness variable as:

Footnote 4: Quantity can be measured in distance or travel time. A lower \(c_{0}\) corresponds to a better transportation system. Let us observe that in the case of cities the transportation costs and distance are well approximated by a linear relation.
However, for larger areas, empirical studies [87, 88] show that transport cost is an increasing and concave function of distance, so that \(c(\tau)=c_{0}\tau^{v}\) with \(0<v<1\), since travelers switch to faster transport modes for longer trips. \[\frac{\text{travelers' income }(I_{x})}{\text{travel quantity }(Q_{x})}=\frac{i_{0} \mathbb{E}[\kappa_{i}|x]}{c_{0}\mathbb{E}[\kappa_{q}|x]}\propto\frac{i_{0}\,x ^{\alpha_{0}+\theta_{I}}}{c_{0}\,x^{\alpha_{0}+\theta_{Q}}} \tag{23}\] where \(\theta_{I}\) is the scaling exponent derived from the regression slope by income in Table 2 and \(\theta_{Q}\) from the regression slope by amount of travel, given by the distance traveled, in the same analysis. The relation between the three variables is represented graphically in Fig. 11a. At this point it is possible to write the income elasticity of travel demand, which expresses how sensitive the demand for traveling a certain distance is to changes in income levels; the direct relation between the income reward and the quantity of travel demanded for visiting locations can be derived from eq. (23) as: \[Q_{x}\sim I_{x}^{\varepsilon}\qquad\text{, with }\;\varepsilon=\frac{\partial Q _{x}}{Q_{x}}\frac{I_{x}}{\partial I_{x}}=\frac{\alpha_{Q}}{\alpha_{I}}=\frac{ 1+\delta_{Q}}{1+\delta_{I}}\quad,\forall x\in\Omega_{x} \tag{24}\] where the exponent \(\varepsilon\) is the income elasticity, as estimated in Fig. 11; in the case under study \(\varepsilon<1\), indicating that distance traveled is a necessity good, i.e., as income increases, people spend proportionally less of it on traveling, for any transportation mode5.

Figure 10: Regression fit between the land use of census block groups and (a) arrivals and (b) visits.
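Eq. (24) can be illustrated numerically with hypothetical scaling exponents (chosen for illustration only, not the fitted values of Table 2): if income and travel quantity both scale as power laws of the attractiveness, the elasticity is simply the ratio of the exponents, recoverable as the slope of \(\log Q\) against \(\log I\).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical exponents: income I_x ~ x^a_I, travel quantity Q_x ~ x^a_Q.
a_I, a_Q = 1.2, 0.9
x = np.logspace(0, 3, 400)
I = x ** a_I * rng.lognormal(0.0, 0.05, x.size)
Q = x ** a_Q * rng.lognormal(0.0, 0.05, x.size)

# Income elasticity of travel demand, eq. (24): eps = a_Q / a_I = 0.75,
# estimated as the log-log regression slope of Q against I.
eps_hat, _ = np.polyfit(np.log(I), np.log(Q), 1)
```

Here \(\varepsilon<1\): in this toy setting travel quantity grows sub-proportionally with income, the "necessity good" regime described above.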
Such a prediction of an income elasticity of about \(\varepsilon\approx 0.65\) can be compared to the results presented in literature reviews of empirical evidence and meta-analysis studies [35, 89, 90, 91], and also discussed with theoretical interpretations [36, 88], where the average income elasticity for aggregate travel demand is estimated to be in a range of values compatible with the elasticity \(\varepsilon\) estimated here6. It is important to note that the income elasticity of travel demand can be influenced by many factors, such as the availability of transportation options, the price of transportation, and individual preferences. In conclusion, eq. (23) sheds light on an intrinsic relation between the income elasticity of travel demand and the attractiveness scaling of urban areas. The role of urban planning policies, market conditions, and social dynamics thus emerges, explaining the interplay between attractiveness and the economy through a mobility network flow which drives travel behavior. From a network analysis perspective, it is convenient to compare the number of trip arrivals at destinations against the travelers' income. Using eq. (23) and eq. (24), a visit elasticity of travel demand \(\varepsilon^{\prime}\) can be defined through:

Footnote 5: For the any-mode and any-purpose condition, an undistinguished transportation mode is considered here, so all the possible modes are combined and only the distance necessary to reach the destination is taken into account.

Footnote 6: Let us notice that income elasticity shows a large variability in the empirical evidence, since distance traveled is not homogeneous across different sources of income, types of jobs and ages [34]. Moreover, travel demand is reported in different units (individual or aggregate distance in km/day, travel time, fuel consumption) and for different behaviors (commuting vs. non-commuting, essential vs. non-essential) or travel purposes (business, job, shopping, leisure).
Travel can also vary across traveling modes according to the transportation infrastructure. Moreover, the estimates reported can even change over the years. \[Q_{x}\sim k_{x}^{\varepsilon^{\prime}}\qquad\text{ where }\ \varepsilon^{\prime}=\left(1+\frac{\theta_{I}}{\alpha_{0}}\right) \varepsilon=(1+\delta_{I})\varepsilon \tag{25}\] where the income scales as \(I_{x}\sim k_{x}^{1+\theta_{I}/\alpha_{0}}\) with respect to the number of trip arrivals. The visit elasticity of travel demand is a measure of how sensitive the number of trips to a certain destination is to changes in key travel attributes, such as fare level, service quality, journey time components, income and car ownership, and the price of competing modes. Through the previous scaling relations of income elasticity, it is possible to range from the microeconomic perspective of transport economics to macroeconomic outputs such as employment and economic growth. For example, during a period of economic growth (recession), when incomes are rising (falling), the distribution and magnitudes of attractiveness can change and then modify the mobility patterns which, in their turn, have an impact on global economic performance as well.

## 5 Conclusions and perspectives

In conclusion, the study presents a data-driven model for a human mobility network based on an origin-destination structure, which serves as the foundation for understanding mobility visitation flows. The model utilizes latent variables associated with each location, representing attractiveness and productiveness, to capture the intrinsic characteristics of destinations and origins. The contributions of this research are twofold. Firstly, it provides a theoretical framework that describes and reproduces a visit generation stochastic process through the use of an integro-differential equation for the evolution of the visit probability density and degree correlations in a trip mobility network.
Consequently, analytical, numerical, and computational solutions are provided for important network characteristics, such as the strength distribution, assortativity, and clustering coefficient. A collateral impact of the visit generation model on travel dynamics is the introduction of the mathematical formalism of compound renewal processes, commonly used in the financial and actuarial science literature. Such stochastic models, in a network perspective, are useful for capturing the randomness of and dependence between arrival times and event sizes, providing a flexible framework for modeling and analyzing various mobility dynamics as well as financial and actuarial risks. The second contribution of this research has to do with the empirical analysis of real-world phenomena. By analyzing origin-destination data, the study reveals the presence of scale-free behaviors in visit frequencies and identifies correlations between visits and trip costs. The research also explores the statistical characteristics that latent variables should have in order to reproduce the observed patterns in the trip mobility network. Hence, the model makes it possible to disentangle the effects of attractiveness (as land use), population, trip costs and the economic features of travelers on the visit dynamics in a mobility network. Clear scaling laws emerge between latent variables and travel demand. Finally, the model points out that the effect of income on travel behavior depends strictly on the latent variables, which can therefore be considered decision variables from an economic policy viewpoint. The possibility that human mobility belongs to the class of scale-free networks has had an impact on the economic, engineering and mathematical communities in the multidisciplinary fields of sustainable urban transportation, smart cities and world trade webs.
As future outlooks, on the modeling side, the mathematical formalism discussed in the paper could also be extended to more general mobility graphs by considering more granular interactions on a time interval much shorter than the one used in the data explored here. For example, one can further investigate the case in which the distribution of inter-arrival times of new trips is non-Poissonian and the trip sizes are events whose intensity does not have finite moments (Lévy jumps). Another forthcoming study concerns the reduction of mobility emissions for sustainable city planning. The approach can support decisions on the optimal allocation of economically attractive urban areas by transportation mode and land-use policies while, at the same time, minimizing the emission of pollutants and the impact of mobility trips between origins and destinations. On the data side, a larger-scale panel analysis of other regions is required to increase the robustness and validity of the model. Moreover, the study sheds light on possible applications in fields like urban, transportation and environmental economics.

Figure 11: Using the dataset for New York in 2019, (a) a 3D scatter plot shows the relation among the attractiveness \(x\) as land use, the income level of visitors, and the travel demand as distance traveled for each visit. The projection planes show three 2D scatter plots for pairs of the previous variables. In particular, the Q-I plane shows in the projected regression analysis an elasticity of \(\varepsilon\simeq 0.76\), consistent with the theoretical prediction from eq. (24), where the income scales as \(I_{x}\sim k_{x}^{1+\delta_{I}/\alpha_{0}}\). This indicates that a 10 percent increase in income leads to about a 6.5 percent increase in distance traveled or, conversely, a 10 percent increase in the demand for travel distance requires an increase of the income by about 15.4 percent.
These findings offer valuable insights to policymakers and urban planners, aiding the comprehension and prediction of mobility patterns for informed decision-making and sustainable development.

## 6 CRediT authorship contribution statement

**Fabio Vanni**: conceptualization, methodology, formal, numerical and statistical analysis, investigation, data curation, writing - original draft, revision.

## 7 Declaration of Competing Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## 8 Data availability

Origin-destination mobility data have been retrieved from SafeGraph through its academic research program [75] and cannot be made publicly available. Social, demographic and geographical data are publicly accessible at [38, 81, 86, 92]. Code for the main network analysis is provided in the GitHub repository [93].
2310.19068
Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming
Sketching algorithms have recently proven to be a powerful approach both for designing low-space streaming algorithms as well as fast polynomial time approximation schemes (PTAS). In this work, we develop new techniques to extend the applicability of sketching-based approaches to the sparse dictionary learning and the Euclidean $k$-means clustering problems. In particular, we initiate the study of the challenging setting where the dictionary/clustering assignment for each of the $n$ input points must be output, which has surprisingly received little attention in prior work. On the fast algorithms front, we obtain a new approach for designing PTAS's for the $k$-means clustering problem, which generalizes to the first PTAS for the sparse dictionary learning problem. On the streaming algorithms front, we obtain new upper bounds and lower bounds for dictionary learning and $k$-means clustering. In particular, given a design matrix $\mathbf A\in\mathbb R^{n\times d}$ in a turnstile stream, we show an $\tilde O(nr/\epsilon^2 + dk/\epsilon)$ space upper bound for $r$-sparse dictionary learning of size $k$, an $\tilde O(n/\epsilon^2 + dk/\epsilon)$ space upper bound for $k$-means clustering, as well as an $\tilde O(n)$ space upper bound for $k$-means clustering on random order row insertion streams with a natural "bounded sensitivity" assumption. On the lower bounds side, we obtain a general $\tilde\Omega(n/\epsilon + dk/\epsilon)$ lower bound for $k$-means clustering, as well as an $\tilde\Omega(n/\epsilon^2)$ lower bound for algorithms which can estimate the cost of a single fixed set of candidate centers.
Gregory Dexter, Petros Drineas, David P. Woodruff, Taisuke Yasuda
2023-10-29T16:46:26Z
http://arxiv.org/abs/2310.19068v1
# Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming ###### Abstract Sketching algorithms have recently proven to be a powerful approach both for designing low-space streaming algorithms as well as fast polynomial time approximation schemes (PTAS). In this work, we develop new techniques to extend the applicability of sketching-based approaches to the _sparse dictionary learning_ and the _Euclidean \(k\)-means clustering_ problems. In particular, we initiate the study of the challenging setting where the dictionary/clustering _assignment_ for each of the \(n\) input points must be output, which has surprisingly received little attention in prior work. On the fast algorithms front, we obtain a new approach for designing PTAS's for the \(k\)-means clustering problem, which generalizes to the first PTAS for the sparse dictionary learning problem. On the streaming algorithms front, we obtain new upper bounds and lower bounds for dictionary learning and \(k\)-means clustering. In particular, given a design matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\) in a turnstile stream, we show an \(\tilde{O}(nr/\epsilon^{2}+dk/\epsilon)\) space upper bound for \(r\)-sparse dictionary learning of size \(k\), an \(\tilde{O}(n/\epsilon^{2}+dk/\epsilon)\) space upper bound for \(k\)-means clustering, as well as an \(\tilde{O}(n)\) space upper bound for \(k\)-means clustering on random order row insertion streams with a natural "bounded sensitivity" assumption. On the lower bounds side, we obtain a general \(\tilde{\Omega}(n/\epsilon+dk/\epsilon)\) lower bound for \(k\)-means clustering, as well as an \(\tilde{\Omega}(n/\epsilon^{2})\) lower bound for algorithms which can estimate the cost of a single fixed set of candidate centers. ## 1 Introduction A classic idea in machine learning and signal processing for efficiently handling large datasets is to approximate them by simpler or more structured surrogate datasets. 
Many methods in this direction have long been considered, including low rank approximation, which approximates a given dataset by one that lies on a low-dimensional subspace, \(k\)-means clustering, which approximates a given dataset by at most \(k\) distinct points, and sparse dictionary learning Olshausen and Field (1997), which approximates a given dataset by linear combinations of elements of a small dictionary of size \(k\) with \(r\)-sparse coefficient vectors (i.e., a vector with at most \(r\) nonzero entries). We focus on the latter two problems in this work: **Definition 1.1** (\(r\)-sparse dictionary learning).: _Let \(\{a^{i}\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\) be a set of \(n\) vectors in \(d\) dimensions, and let \(\mathbf{A}\in\mathbb{R}^{n\times d}\) be the matrix with the \(i\)th row set to \(a^{i}\). Then for a matrix \(\mathbf{X}\in\mathbb{R}^{n\times k}\) with \(r\)-sparse rows and a dictionary \(\mathbf{D}\in\mathbb{R}^{k\times d}\), we define the dictionary learning cost to be_ \[\mathrm{cost}(\mathbf{X},\mathbf{D})\coloneqq\|\mathbf{X}\mathbf{D}-\mathbf{A}\| _{F}^{2}\] _In the \(r\)-sparse dictionary learning problem, we seek to minimize \(\mathrm{cost}(\mathbf{X},\mathbf{D})\) over all \(\mathbf{X}\in\mathcal{X}\) and \(\mathbf{D}\in\mathbb{R}^{k\times d}\), where \(\mathcal{X}\) denotes the set of all \(n\times k\) matrices with \(r\)-sparse rows._ **Definition 1.2** (Euclidean \(k\)-means clustering).: _Let \(\{a^{i}\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\) be a set of \(n\) vectors in \(d\) dimensions, and let \(\mathbf{A}\in\mathbb{R}^{n\times d}\) be the matrix with the \(i\)th row set to \(a^{i}\). 
Then, for a matrix \(\mathbf{X}\in\mathbb{R}^{n\times k}\) with standard basis vectors in its rows and a set of centers \(\mathbf{C}\in\mathbb{R}^{k\times d}\), we define the \(k\)-means clustering cost to be_ \[\mathrm{cost}(\mathbf{X},\mathbf{C})\coloneqq\|\mathbf{X}\mathbf{C}-\mathbf{A }\|_{F}^{2}.\] _In the \(k\)-means clustering problem, we seek to minimize \(\mathrm{cost}(\mathbf{X},\mathbf{C})\) over all \(\mathbf{X}\in\mathcal{X}\) and \(\mathbf{C}\in\mathbb{R}^{k\times d}\), where \(\mathcal{X}\) denotes the set of all \(n\times k\) matrices with standard basis vectors as rows._ While dictionary learning and clustering have found extraordinary success in various applications in practice, they are known to be computationally difficult problems to solve (Mahajan et al., 2012; Natarajan, 1995), and thus there has been intense focus on developing approximation algorithms and heuristics for these problems, such as those based on greedy methods (Lloyd, 1982; Das and Kempe, 2011) or convex relaxations (Donoho and Elad, 2003; Fuchs, 2004; Cohen-Addad et al., 2022). In this work, we study algorithms for sparse dictionary learning and \(k\)-means clustering in two distinct settings via a unified set of techniques based on _sketching_. Sketching (Woodruff, 2014), broadly speaking, refers to techniques for compressing large matrices by linear maps, and includes methods such as oblivious sketching and nonuniform sampling. Classically, sketching has been applied to design low-memory algorithms in the _streaming setting_, when the input is presented to the algorithm as a sequence of updates. More recently, sketching has been shown to be invaluable for designing fast algorithms as well. 
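The two objectives in Definitions 1.1 and 1.2 share the same Frobenius cost \(\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F}^{2}\) and differ only in the constraint set \(\mathcal{X}\). A minimal sketch of both, assuming NumPy (the toy instance and helper names are ours, not from the paper):

```python
import numpy as np

def dictionary_cost(X, D, A):
    """Frobenius cost ||XD - A||_F^2 shared by Definitions 1.1 and 1.2."""
    return np.linalg.norm(X @ D - A, ord="fro") ** 2

def is_r_sparse(X, r):
    """Feasibility for Definition 1.1: every row of X has at most r nonzeros."""
    return bool(np.all((X != 0).sum(axis=1) <= r))

def is_assignment(X):
    """Feasibility for Definition 1.2: every row of X is a standard basis vector."""
    return bool(np.all((X == 0) | (X == 1)) and np.all((X == 1).sum(axis=1) == 1))

# toy instance: n=4 points in d=2 dimensions, k=2 centers
A = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
C = np.array([[0.05, 0.0], [5.05, 5.0]])        # candidate centers / dictionary
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])  # assignment matrix
assert is_assignment(X) and is_r_sparse(X, r=1)
print(round(dictionary_cost(X, C, A), 4))  # 4 residuals of squared length 0.05^2 -> 0.01
```

In this view, \(k\)-means is exactly \(1\)-sparse dictionary learning with nonzero entries forced to equal \(1\), which is why the paper's dimensionality reduction applies to both problems unchanged.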
In particular, there has been a line of work which shows how sketching techniques can be applied to obtain _polynomial time approximation schemes_ (PTAS) for a variety of NP-hard problems ranging from clustering (Feldman et al., 2007) to weighted low rank approximation (Razenshteyn et al., 2016) to tensor decompositions (Song et al., 2019). We study such sketching-based algorithms for sparse dictionary learning and Euclidean \(k\)-means clustering, both in the offline setting where we obtain the first PTAS for sparse dictionary learning, as well as in the turnstile streaming and other streaming models. In particular, in the streaming setting, we initiate the study of solving these problems in the setting where the algorithm must output the assignment of the points to the dictionary/clustering, which has received surprisingly little attention in prior work. ### Our contributions #### 1.1.1 PTAS for dictionary learning and clustering We start with a discussion of our results on designing fast PTAS's. Our main contribution that we highlight from this section is the _first PTAS for sparse dictionary learning_, which also gives a new and simple approach towards designing a PTAS for \(k\)-means clustering. A typical approach for designing PTAS's for shape fitting problems such as dictionary learning and clustering is to first find a smaller instance whose solution approximates the original instance, and then to solve the smaller instance using any algorithm, where even an inefficient algorithm will be tractable due to the smaller size of the instance. A representative work which takes such an approach for the \(k\)-means clustering problem is that of Feldman et al. (2007), which uses _coresets_ to implement the first step of finding a smaller instance. Here, coresets for \(k\)-means clustering are a weighted subset of the original data points such that the cost of any candidate set of centers approximates the cost when applied to the original dataset. 
Furthermore, the size of this coreset can be taken to be \(\mathrm{poly}(k/\epsilon)\), and thus solving for an optimal set of centers on this subset of points can be done in time independent of the number of points \(n\). Due to this natural approach, there has been a long line of work on obtaining smaller coresets for \(k\)-means clustering (Feldman and Langberg, 2011; Braverman et al., 2016; Bachem et al., 2018; Cohen-Addad et al., 2021, 2022, 2022). On the other hand, for the sparse dictionary learning problem, similar results are strikingly lacking. The only previous work we are aware of is a coreset construction for the sparse dictionary learning problem due to Feldman et al. (2013). However, the construction of the coreset in this work requires an algorithm for computing an approximately optimal dictionary, which prevents its use in designing fast PTAS's to solve the dictionary learning problem in the first place. To address this problem, we first show that a completely different coreset technique due to Tukan et al. (2022) for the projective clustering problem can in fact be applied to the sparse dictionary learning problem. Notably, this technique uses John ellipsoids to construct coresets rather than using a nearly optimal solution to the dictionary learning problem, and thus avoids computing approximately optimal dictionaries. In turn, this allows us to obtain the first PTAS for the dictionary learning problem. Our argument additionally combines this coreset construction with a sparsity-counting technique together with polynomial system solvers Renegar (1992a,b) to efficiently solve a smaller version of the original problem. Our techniques also yield a new PTAS for \(k\)-means clustering, which is arguably simpler than prior approaches such as the algorithm of Feldman et al. (2007). We give a full discussion of our results and techniques for our PTAS for sparse dictionary learning in Section 2. 
#### 1.1.2 Dictionary learning and clustering on streams As our next contribution, we study algorithms for dictionary learning and clustering in turnstile streams and other related models of streaming. In the turnstile streaming model, the input undergoes arbitrary entrywise insertions and deletions: **Definition 1.3** (Turnstile stream).: _We say that an input matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\) is presented in a turnstile stream if \(\mathbf{A}\) is initialized to \(0\) and receives entrywise updates \(\mathbf{A}_{i,j}\leftarrow\mathbf{A}_{i,j}+\Delta\) for \(\Delta\in\mathbb{R}\)._ We initiate a systematic study of the dictionary learning and clustering problems in the setting where the assignment of the points to their sparse set of dictionary elements or clusters must be output together with the dictionary/cluster centers. Indeed, even for the popular Euclidean \(k\)-means clustering problem, almost all prior work that we are aware of focuses on outputting either the cluster partitions or the centers, but does not study the problem of recovering both. We address this problem by providing a dimensionality reduction technique that applies to \(k\)-means, sparse dictionary learning, and more generally to any problem of the form \(\min_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{XD}-\mathbf{A}\|_{F}^{2}\). A typical approach for designing low-space streaming algorithms for clustering is to apply the standard Johnson-Lindenstrauss lemma (Johnson and Lindenstrauss, 1984; Boutsidis et al., 2010; Cohen et al., 2015; Becchetti et al., 2019; Makarychev et al., 2019). 
This result states that if \(\mathbf{G}\in\mathbb{R}^{d\times s}\) is an appropriately scaled dense sub-Gaussian matrix for \(s=O(\epsilon^{-2}\log(k/\epsilon))\), then for any partition of \(\mathbf{A}\) into \(k\) clusters, the \(k\)-means clustering cost of \(\mathbf{AG}\) approximates the \(k\)-means clustering cost of \(\mathbf{A}\) up to a \((1\pm\epsilon)\) factor. Furthermore, \(\mathbf{AG}\) can be efficiently maintained in the turnstile streaming model (Definition 1.3) using just \(ns=\tilde{O}(\epsilon^{-2}n)\) space, due to the linearity of the sketch \(\mathbf{G}\). Note, however, that we cannot naively retrieve the corresponding centers of a clustering found by this method, since we have only stored the \(s\)-dimensional sketches of the \(n\) points, and additional information must be stored in order to retrieve \(d\)-dimensional cluster centers which achieve a \((1+\epsilon)\) approximation. In fact, we note in Theorem 4.1 that there is a \(\tilde{\Omega}(dk/\epsilon)\) space lower bound if we wish to output centers \(\mathbf{C}\in\mathbb{R}^{k\times d}\) which achieve a \((1+\epsilon)\) approximation, so the sketch \(\mathbf{AG}\) is _provably_ insufficient for outputting both a nearly optimal assignment \(\mathbf{X}\) and centers \(\mathbf{C}\) when \(n=\tilde{o}(\epsilon dk)\). We give a full discussion of our approaches for sketching and streaming algorithms for \(k\)-means clustering and dictionary learning, and how we overcome this problem, in Sections 2 and 3. On the other hand, a study of lower bounds for the \(k\)-means clustering problem in the streaming setting when the assignment of points must be output is notably lacking in prior work as well. The main challenge in this setting is in obtaining the right dependence on \(n\) and \(\epsilon\). 
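The reason \(\mathbf{AG}\) is cheap to maintain is worth making concrete: because the sketch is linear, a turnstile update \(\mathbf{A}_{i,j}\leftarrow\mathbf{A}_{i,j}+\Delta\) changes only row \(i\) of \(\mathbf{AG}\), by \(\Delta\) times row \(j\) of \(\mathbf{G}\). A toy illustration (a random sign matrix stands in for the sub-Gaussian \(\mathbf{G}\); the dimensions and update stream are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 6, 10, 4
G = rng.choice([-1.0, 1.0], size=(d, s))  # stand-in for the dense sub-Gaussian sketch G

sketch = np.zeros((n, s))  # maintains A @ G in n*s space instead of n*d
A = np.zeros((n, d))       # kept here only to verify the invariant; a real stream would not store A

# entrywise turnstile updates A[i, j] += delta, as in Definition 1.3
for i, j, delta in [(0, 3, 2.5), (2, 7, -1.0), (0, 3, -0.5), (5, 1, 4.0)]:
    A[i, j] += delta
    sketch[i] += delta * G[j]  # linearity: only row i of A @ G changes, by delta * G[j]

assert np.allclose(sketch, A @ G)
print("sketch equals A @ G after all updates")
```

Note that the second update to entry \((0,3)\) partially cancels the first, which is exactly the deletion behavior that linearity handles for free.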
Indeed, an \(\Omega(n)\) lower bound is immediate, since the size of the output is at least \(\Omega(n)\) when we need to output assignments of the \(n\) points to its appropriate cluster (in fact, we show in Theorems 4.3 and 4.4 that an \(\Omega(n)\) lower bound follows even for outputting a constant factor approximation of the cost or centers). On the other hand, the previous upper bound using the Johnson-Lindenstrauss lemma to compute a nearly optimal assignment to clusters requires \(\tilde{O}(\epsilon^{-2}n)\) bits of space. Note that there are many lower bounds that show that roughly \(\epsilon^{-2}\) dimensions are required to apply the Johnson-Lindenstrauss lemma in various settings Nelson and Nguyen (2014); Kane et al. (2010); Larsen and Nelson (2016, 2017); Makarychev et al. (2019). However, it is not clear whether or not this implies that \(\epsilon^{-2}\) bits must be stored for all \(n\) points in order to cluster them to a \((1+\epsilon)\)-approximately optimal clustering solution. Indeed, it may be possible that \(\epsilon^{-2}\) bits are required only for much fewer than \(n\) points, while the vast majority of the \(n\) input points requires only \(\tilde{O}(n)\) bits of space to assign to an approximately optimal center. We present two lower bounds to partially address the question of impossibility results for assigning points to clusters in turnstile streams. Our main lower bound result is the following, which establishes an \(\tilde{\Omega}(\epsilon^{-1}n)\) lower bound to output a \((1+\epsilon)\)-nearly optimal clustering. While this does not match the upper bound given by the Johnson-Lindenstrauss lemma, it shows that we cannot hope for a \(\tilde{O}(n)\) upper bound in the turnstile streaming model in general. **Theorem 1.1** (Informal restatement of Theorem C.1).: _Let \(k=d=\tilde{O}(1/\epsilon)\). 
Suppose a turnstile streaming algorithm outputs centers \(\{\hat{c}^{j}\}_{j=1}^{k}\subseteq\mathbb{R}^{d}\) as well as assignments of \(n\) points to the \(k\) centers, which achieves a \((1+\epsilon)\)-approximately optimal solution to the \(k\)-means clustering problem. Then, the algorithm must use at least \(\tilde{\Omega}(n/\epsilon)\) bits of space over any constant number of passes._ As a second lower bound result, we also show that the Johnson-Lindenstrauss lemma is nearly tight if we require our algorithm to give a nearly optimal assignment of the input points to a fixed set of candidate centers. That is, we show in Theorem 4.2 that there is a fixed set of centers such that, if a turnstile streaming algorithm can assign each of the \(n\) input points to a cluster such that the cost is at most \((1+\epsilon)\) times the cost of the optimal assignment, then at least \(\Omega(\epsilon^{-2}n)\) bits must be stored. A more detailed discussion of our lower bounds is given in Section 4. Finally, we show that under some natural settings, one can obtain upper bounds that circumvent the lower bounds presented above. Indeed, we show that if we work in the _random order row arrival_ streaming model, in which the input stream corresponds to the rows of \(\mathbf{A}\) that arrive in a uniformly random order, then we can obtain upper bounds that depend on the _maximum sensitivity_ of the input stream, and in particular, we obtain an upper bound using only \(\tilde{O}(n)\) bits of space if the maximum sensitivity is sufficiently small (Theorem 4.5). Here, a bounded sensitivity assumption states that there are no points that can take up a significant fraction of the objective function, and can also be interpreted as a way to formalize a "well-clustered" instance. 
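To make the bounded-sensitivity assumption concrete, the following toy computes each point's share of the \(k\)-means cost under one fixed set of centers; the formal sensitivity maximizes this share over all candidate solutions, which this sketch does not attempt (instance and helper names are ours):

```python
import numpy as np

def cost_shares(A, C):
    """Each point's fraction of the k-means cost for a fixed set of centers C (rows)."""
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(axis=1)  # squared distance to nearest center
    return d2 / d2.sum()

rng = np.random.default_rng(3)
# a "well-clustered" toy instance: two tight groups around known centers
A = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(8.0, 1.0, (50, 2))])
C = np.array([[0.0, 0.0], [8.0, 8.0]])

shares = cost_shares(A, C)
print(float(shares.max()))  # small: no single point dominates the objective
```

On such instances the maximum share is far below \(1\), which is the regime where the \(\tilde{O}(n)\)-space random-order result of Theorem 4.5 applies.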
## 2 Fixed parameter PTAS for sparse dictionary learning ### PTAS for \(r\)-sparse dictionary learning In this section, we provide an algorithm which solves the \(r\)-sparse dictionary learning problem (Definition 1.1) in time polynomial in the input matrix size \((n)\) and dimension \((d)\) up to \(\epsilon\)-relative error, for fixed \(k\) and \(\epsilon\). Additionally, we show that a similar approach can be used to provide an algorithm for \(k\)-means (Definition 1.2) that matches the current best dependency on \(n,d,\epsilon\) and \(k\) up to lower terms. First, we introduce a dimensionality reduction method that applies to both problems. ### Dimensionality reduction Our first step is to reduce the dimensionality of the given problem. Since the only difference between \(k\)-means and sparse dictionary learning is the constraint on the left factor, \(\mathbf{X}\), we can use the same sketching approach to reduce both problems. Consider the following general definition: **General problem**: Let \(\mathcal{X}\subset\mathbb{R}^{n\times k}\) and \(\mathbf{A}\in\mathbb{R}^{n\times d}\). Let \(k\ll n,d\). Define the optimal solution as: \[(\mathbf{X}^{*},\mathbf{D}^{*})=\operatorname*{argmin}_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F}^{2} \tag{1}\] The following theorem states that one may efficiently reduce the dimensionality of \(\mathbf{A}\) in sparse dictionary learning or \(k\)-means. We briefly sketch the ideas behind the reduction. Intuitively, the regression guarantee of Theorem 3.1 in Clarkson and Woodruff (2009) states that if \(\mathbf{S}\) is an \(\ell_{2}\)-embedding matrix for matrices of rank \(k\ll d\), then \(\tilde{\mathbf{D}}=\operatorname*{argmin}_{\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{S}(\mathbf{X}^{*}\mathbf{D}-\mathbf{A})\|_{F}^{2}\) will be a good approximation to the optimal solution of the original problem. 
While we do not know \(\mathbf{X}^{*}\), this guarantee implies that there is an approximately optimal dictionary, \(\tilde{\mathbf{D}}\), in the row space of \(\mathbf{SA}\). We can then restrict the optimization problem to consider only dictionaries in this lower dimensional space. Therefore, we only need to consider the error residual in this lower dimensional space, so we may reduce the dimension of the problem by applying an affine-embedding matrix \(\mathbf{T}\) and then applying SVD to find the dominant singular subspace of \(\mathbf{SAT}\). Finally, we project the rows of \(\mathbf{A}\) to this dominant subspace. We can then solve the lower dimensional problem and map the solution to the original space. **Theorem 2.1**.: _There is an algorithm which solves the problem in (1) up to \(\epsilon\in(0,1)\) relative error with constant probability in \(\mathcal{O}(\mathsf{nnz}(\mathbf{A})+(n+d)\operatorname{poly}(k/\epsilon))\) time plus the time needed to solve:_ \[\min_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times s}}\|\mathbf{ X}\mathbf{D}-\mathbf{A}^{\prime}\|_{F}^{2},\] _to within \(\epsilon\)-relative error for \(s=\mathcal{O}(k\log(k)/\epsilon)\) and some \(\mathbf{A}^{\prime}\in\mathbb{R}^{n\times s}\) with constant probability._ In the rest of this section, we assume that \(d=\operatorname{poly}(k/\epsilon)\) for clearer exposition, since the above theorem implies we can reduce to this case efficiently. ### Algorithm for sparse dictionary learning The first component of our algorithm for sparse dictionary learning is a coreset construction that reduces the size of the problem from \(n\) to a size that is logarithmic in \(n\). We achieve this by first leveraging an existing coreset construction for projective clustering by Tukan et al. (2022). 
In the \((\ell,m)\)-projective clustering problem, the goal is to find a set of \(\ell\)\(m\)-dimensional subspaces that minimizes the sum of the squared Euclidean distances of the input vectors \(\{a^{i}\}_{i=1}^{n}\) to the closest subspace. Observe that, in the \(r\)-sparse dictionary problem, the minimum cost of a dictionary is the sum of the squared Euclidean distances of the input vectors to the \(\binom{k}{r}\) subspaces spanned by any subset of \(r\) vectors of the \(k\) vectors in the dictionary. Hence, a coreset which preserves the projective clustering cost when \(\ell=\binom{k}{r}\) will also preserve the cost of a dictionary in sparse dictionary learning. After applying the coreset, we have reduced the size of the sparse dictionary problem to be at most logarithmic in \(n\). This allows us to guess the sparsity pattern of the optimal left factor \(\mathbf{X}^{*}\), since at most \(r\) entries in each row of \(\mathbf{X}^{*}\) may be nonzero. For each guess of the sparsity pattern of \(\mathbf{X}^{*}\), we can find an approximately optimal solution under this constraint by recognizing this as a polynomial optimization problem. We apply the decision algorithm of Renegar (1992a) using binary search to determine each entry of \(\mathbf{D}\) and the nonzero entries of \(\mathbf{X}\) as done in Razenshteyn et al. (2016). At some point we guess the sparsity pattern of \(\mathbf{X}^{*}\), and hence attain an \(\epsilon\)-relative error solution to the sparse dictionary problem. The next theorem formally states the assumptions and guarantees of our algorithm, which is formalized in Algorithm 1 in the appendix. 
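The combinatorial core of the guessing step can be illustrated in isolation: each row of \(\mathbf{X}^{*}\) has one of \(\binom{k}{r}\) possible supports, so once the coreset has only \(m=O(\log n)\) rows there are \(\binom{k}{r}^{m}=\mathrm{poly}(n)\) sparsity patterns to try in total. This toy only enumerates the patterns; it does not run the polynomial system solver that Algorithm 1 applies to each guess:

```python
from itertools import combinations, product
from math import comb

k, r, m = 4, 2, 3  # dictionary size, row sparsity, coreset size (toy values)

# each coreset row's nonzero positions form one of comb(k, r) possible supports
row_supports = list(combinations(range(k), r))
assert len(row_supports) == comb(k, r)

# the algorithm guesses one support per coreset row; the total number of
# guesses is comb(k, r)**m, which is polynomial in n when m = O(log n)
patterns = list(product(row_supports, repeat=m))
print(len(patterns))  # comb(4, 2)**3 = 216
```

For fixed \(k\) and \(r\) the base \(\binom{k}{r}\) is a constant, so \(\binom{k}{r}^{O(\log n)}=n^{O(1)}\), which is exactly why shrinking the instance to logarithmic size first is essential.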
**Theorem 2.2**.: _For an input for the \(r\)-sparse dictionary learning problem (Definition 1.1) with error tolerance \(\epsilon\in(0,1)\) such that the entries of \(\mathbf{A}\) have bounded bit complexity, Algorithm 1 returns \(\tilde{\mathbf{X}}\in\mathcal{X}\) and \(\tilde{\mathbf{D}}\in\mathbb{R}^{k\times d}\) satisfying:_ \[\|\tilde{\mathbf{X}}\tilde{\mathbf{D}}-\mathbf{A}\|_{F}\leq(1+\epsilon)\min_ {\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{X} \mathbf{D}-\mathbf{A}\|_{F},\] _in \(\operatorname{poly}(n)\) time with constant probability, when \(k\), \(r\), and \(1/\epsilon\) are bounded by a constant.1_ Footnote 1: If \(k\) and \(r\) are not assumed to be constant, then the time complexity is \(\exp((8k^{3r})^{O(k^{2r+1})}\log n)\). ### Algorithms for \(k\)-means The same general approach of applying dimensionality reduction and a coreset construction along with guessing the sparsity pattern of \(\mathbf{X}^{*}\) can be used to achieve a fixed-parameter PTAS for \(k\)-means as well. However, we can achieve an improved time complexity matching the current best dependency on \(k\) and \(\epsilon\) up to lower order terms by further reducing the problem using results on leverage score sampling. Specifically, we combine Theorem 17 in Woodruff (2014b) and Theorem 3.1 in Clarkson and Woodruff (2009) to prove the following lemma. 
**Lemma 2.1**.: _There is a set of matrices \(\mathcal{S}\subset\mathbb{R}^{s\times n}\) with exactly one non-zero entry per column such that for any \(\mathbf{A}\in\mathbb{R}^{n\times k}\) and \(\mathbf{B}\in\mathbb{R}^{n\times d}\), there exists \(\mathbf{S}\in\mathcal{S}\), so that if:_ \[\tilde{\mathbf{X}}=\operatorname*{argmin}_{\mathbf{X}\in\mathbb{R}^{k\times d}}\|\mathbf{S}(\mathbf{A}\mathbf{X}-\mathbf{B})\|_{F}\text{ and }\mathbf{X}^{*}=\operatorname*{argmin}_{\mathbf{X}\in\mathbb{R}^{k\times d}}\|\mathbf{A}\mathbf{X}-\mathbf{B}\|_{F},\] _then,_ \[\|\mathbf{A}\tilde{\mathbf{X}}-\mathbf{B}\|_{F}\leq(1+\epsilon)\|\mathbf{A}\mathbf{X}^{*}-\mathbf{B}\|_{F}.\] _Furthermore, \(\mathcal{S}\) depends only on \(n\), \(k\), and \(\epsilon\); and \(|\mathcal{S}|=n^{\mathcal{O}(\frac{k\log k}{\epsilon})}\)._ After applying a coreset construction to reduce the \(k\)-means problem to size \(\mathrm{poly}(k/\epsilon)\), we can efficiently apply the above lemma to then reduce the problem to size \(\tilde{\mathcal{O}}(k/\epsilon)\). Then, we brute force over all possible left-factors to find \(\mathbf{X}^{*}\). The following theorem states our results formally. **Theorem 2.3**.: _For any input \(\mathbf{A}\in\mathbb{R}^{n\times d}\) and \(\epsilon\in(0,1)\), Algorithm 2 will return a feasible solution to the \(k\)-means clustering problem (Definition 1.2), \((\tilde{\mathbf{X}},\tilde{\mathbf{D}}),\) satisfying:_ \[\|\tilde{\mathbf{X}}\tilde{\mathbf{D}}-\mathbf{A}\|_{F}\leq(1+\epsilon)\cdot\min_{\mathbf{D}\in\mathbb{R}^{k\times d},\mathbf{X}\in\mathcal{X}}\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F},\] _with constant probability. Furthermore, Algorithm 2 runs in \(n\cdot\mathrm{poly}(k/\epsilon)+\exp(\frac{k}{\epsilon}\,\mathrm{polylog}(k/\epsilon))\) time._ ## 3 Turnstile streaming algorithms In this section, we consider the _turnstile streaming model_ (see Definition 1.3). 
We provide upper bounds on the space needed to compute an \(\epsilon\)-relative error solution to the \(k\)-means problem and a restricted form of the sparse dictionary learning problem in a turnstile stream. We do this by showing that these approximately optimal solutions can be computed from a few small linear sketches of the original data matrix, and any linear sketch can be trivially maintained in a turnstile stream by linearity of the updates. A key idea behind these algorithms is applying the _guess-the-sketch_ approach introduced in Razenshteyn et al. (2016) along with the following theorem. **Theorem 3.1**.: _(Theorem 3.1 in Clarkson and Woodruff (2009)) Given \(\delta,\epsilon>0\), suppose \(\mathbf{A}\) and \(\mathbf{B}\) are matrices with \(n\) rows, and \(\mathbf{A}\) has rank at most \(k\). There is an \(m=O(k\log(1/\delta)/\epsilon)\) such that, if \(\mathbf{S}\) is an \(m\times n\) sign matrix, then with probability at least \(1-\delta\), if \(\tilde{\mathbf{X}}=\mathrm{argmin}_{\mathbf{X}}\|\mathbf{S}(\mathbf{A}\mathbf{ X}-\mathbf{B})\|_{F}^{2}\) and \(\mathbf{X}^{*}=\mathrm{argmin}_{\mathbf{X}}\|\mathbf{A}\mathbf{X}-\mathbf{B} \|_{F}^{2},\) then \(\|\mathbf{A}\tilde{\mathbf{X}}-\mathbf{B}\|_{F}\leq(1+\epsilon)\|\mathbf{A} \mathbf{X}^{*}-\mathbf{B}\|_{F}\)._ Notice that, if we knew the optimal solution \(\mathbf{X}^{*}\) exactly, then by the previous theorem we could compute an approximately optimal dictionary \(\tilde{\mathbf{D}}\) exactly as \(\tilde{\mathbf{D}}=(\mathbf{S}\mathbf{X}^{*})^{\dagger}\mathbf{S}\mathbf{A}\). The key observation is that, since \(\mathbf{S}\) is a random sign matrix and the rows of \(\mathbf{X}\) are standard basis vectors, the set \(\{\mathbf{S}\mathbf{X}\mid\mathbf{X}\in\mathcal{X},\mathbf{S}\in\{\pm 1\}^{ \tilde{\mathcal{O}}(k/\epsilon)\times n}\}\) is not too large. 
Also, we can approximately solve \(\min_{\mathbf{X}\in\mathcal{X}}\|\mathbf{X}\tilde{\mathbf{D}}-\mathbf{A}\|_{F} ^{2}\) for a fixed \(\tilde{\mathbf{D}}\) with constant probability by solving \(\tilde{\mathbf{X}}=\min_{\mathbf{X}\in\mathcal{X}}\|(\mathbf{X}\tilde{ \mathbf{D}}-\mathbf{A})\mathbf{T}\|_{F}^{2}\), where \(\mathbf{T}\) is a moderately sized affine embedding matrix. Since the number of possible \((\tilde{\mathbf{X}},\tilde{\mathbf{D}})\) is not too large, an \(\ell_{2}\)-embedding matrix, \(\mathbf{W}\), can be used to approximate \(\|\tilde{\mathbf{X}}\tilde{\mathbf{D}}-\mathbf{A}\|_{F}^{2}\) for every possible \((\tilde{\mathbf{X}},\tilde{\mathbf{D}})\). Our streaming algorithm relies on carefully balancing the roles of the three sketching matrices to minimize the size of the sketches, using the weakest guarantee possible for each component. In particular, it is critical to use the affine embedding matrix \(\mathbf{T}\) to only preserve the error for a fixed \(\tilde{\mathbf{D}}\) instead of every subproblem and instead use the \(\ell_{2}\)-embedding matrix \(\mathbf{W}\) to identify which subproblem provides an approximate solution to the overall problem. 
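A toy illustration of the regression step behind guess-the-sketch: for the (unknown) optimal assignment \(\mathbf{X}^{*}\), the small sketches \(\mathbf{S}\mathbf{X}^{*}\) and \(\mathbf{S}\mathbf{A}\) already determine a nearly optimal dictionary \(\tilde{\mathbf{D}}=(\mathbf{S}\mathbf{X}^{*})^{\dagger}\mathbf{S}\mathbf{A}\). Here we plant a clustering and evaluate one such guess (the full algorithm enumerates all candidate \(\mathbf{S}\mathbf{X}\)); the instance, dimensions, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, s = 200, 8, 3, 40  # s = O(k/eps) rows in the sign sketch

# planted instance: well-separated clusters plus small noise
centers = rng.normal(scale=10.0, size=(k, d))
labels = rng.integers(k, size=n)
A = centers[labels] + rng.normal(scale=0.1, size=(n, d))
X = np.zeros((n, k)); X[np.arange(n), labels] = 1.0  # the "guessed" assignment matrix

# one guess-the-sketch step: recover centers from S @ X and S @ A alone
S = rng.choice([-1.0, 1.0], size=(s, n))             # random sign matrix, as in Theorem 3.1
D_tilde = np.linalg.pinv(S @ X) @ (S @ A)            # candidate dictionary from the sketches

# compare against the exact least-squares centers for this X (the cluster means)
D_star = np.linalg.pinv(X) @ A
cost = lambda D: np.linalg.norm(X @ D - A, "fro") ** 2
print(cost(D_tilde) / cost(D_star))  # ratio near 1, as the (1+eps) guarantee predicts
```

The point is that \(\tilde{\mathbf{D}}\) never touches \(\mathbf{A}\) directly, only \(s\times k\) and \(s\times d\) sketches, which is what makes the enumeration over candidate \(\mathbf{S}\mathbf{X}\) feasible in small space.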
**Theorem 3.2**.: _(1) There are distributions of random sketching matrices \(\mathbf{T}\in\mathbb{R}^{d\times t}\), \(\mathbf{S}\in\mathbb{R}^{s\times n}\), and \(\mathbf{W}\in\mathbb{R}^{w\times nd}\), with \(t=\mathcal{O}(\log(nk)/\epsilon^{2})\), \(s=\mathcal{O}(\frac{k}{\epsilon})\), and \(w=\mathcal{O}(\frac{k^{2}}{\epsilon^{3}}\log(n))\) such that \(\mathbf{S}\mathbf{A}\), \(\mathbf{AT}\), and \(\mathbf{W}\;\mathrm{vec}(\mathbf{A})\) suffice to compute a \((1+\epsilon)\)-approximate solution to the \(k\)-means problem with at least constant probability, where \(\mathrm{vec}(\mathbf{A})\in\mathbb{R}^{nd}\) is the flattening of \(\mathbf{A}\)._ _(2) There is an algorithm which computes a \((1+\epsilon)\)-approximate solution to the \(k\)-means problem in the turnstile model with at least constant probability using \(\tilde{\mathcal{O}}(n/\epsilon^{2}+dk/\epsilon)\) space for \(n,d>\mathrm{poly}(k/\epsilon)\) in \(n^{\tilde{\mathcal{O}}(k^{2}/\epsilon)}\) additional time._ The previous proof critically relies on the fact that \(\{\mathbf{S}\mathbf{X}\mid\mathbf{X}\in\mathcal{X},\;\mathbf{S}\in\{\pm 1\}^{m \times n}\}\) is a finite set that is not too large. We must therefore introduce the following restricted form of the sparse dictionary problem. **Definition 3.1**.: _(Discrete \(r\)-sparse dictionary problem) Let \(\mathcal{X}\) be the space of \(n\times k\) matrices with at most \(r\) non-zero entries per row and non-zero entries taking values in \(\{-D,-(D-1),...,-1,0,1,...(D-1),D\}\). 
The goal of this problem is to solve the following optimization problem:_ \[\mathbf{X}^{*},\mathbf{D}^{*}=\operatorname*{argmin}_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F},\] _where \(\mathbf{A}\in\mathbb{R}^{n\times d}\) is an arbitrary input matrix._ Under this constraint that the solution lies in a discrete space, the proof of the streaming algorithm for sparse dictionary learning proceeds essentially the same as for \(k\)-means while accounting for the larger solution space. **Theorem 3.3**.: _(1) There are distributions of random sketching matrices \(\mathbf{T}\in\mathbb{R}^{d\times t}\), \(\mathbf{S}\in\mathbb{R}^{s\times n}\), and \(\mathbf{W}\in\mathbb{R}^{w\times nd}\), with \(t=\mathcal{O}(r\log(nkD)/\epsilon^{2})\), \(s=\mathcal{O}(\frac{k}{\epsilon})\), and \(w=\mathcal{O}(\frac{k^{2}}{\epsilon^{3}}\log(nD))\) such that \(\mathbf{SA}\), \(\mathbf{AT}\), and \(\mathbf{W}\ \mathrm{vec}(\mathbf{A})\) suffice to compute a \((1+\epsilon)\)-approximate solution to the discrete \(r\)-sparse dictionary problem (Definition 3.1) with at least constant probability._ _(2) There is an algorithm which computes a \((1+\epsilon)\)-approximate solution to the \(r\)-sparse dictionary problem in the turnstile model with at least constant probability using \(\tilde{\mathcal{O}}(nr/\epsilon^{2}+dk/\epsilon)\) space for \(n,d>\mathrm{poly}(k/\epsilon)\) in \(k^{r}\cdot(nD)^{\tilde{\mathcal{O}}(k^{2}/\epsilon)}\) additional time._ Removing the restriction that \(\mathbf{X}^{*}\) belongs to the restricted space would be an interesting future problem. However, two issues are that the entries of \(\mathbf{X}\) may be very large, since the rows of \(\mathbf{D}\) may not be orthogonal, and a uniform discretization is required to apply a guess-the-sketch argument. 
## 4 Streaming lower bounds for Euclidean \(k\)-means clustering We introduce slightly different definitions of the \(k\)-means clustering problem than the one used in Definition 1.2 to facilitate the notation of our lower bound arguments in this section. **Definition 4.1** (\(k\)-means clustering cost).: _Let \(\{a^{i}\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\) be a set of \(n\) vectors in \(d\) dimensions. Then, we define the \(k\)-means clustering cost of centers \(c^{1},c^{2},\dots,c^{k}\in\mathbb{R}^{d}\) to be_ \[\mathrm{cost}(c^{1},c^{2},\dots,c^{k})\coloneqq\sum_{i=1}^{n}\min_{j=1}^{k} \|a^{i}-c^{j}\|_{2}^{2}.\] **Definition 4.2** (Approximate solutions to \(k\)-means clustering).: _Let \(\{a^{i}\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\) be a set of \(n\) vectors in \(d\) dimensions. Let_ \[\mathsf{OPT}\coloneqq\min_{c^{1},c^{2},\dots,c^{k}\in\mathbb{R}^{d}}\mathrm{ cost}(c^{1},c^{2},\dots,c^{k})\] _We say that an algorithm outputs an \(\epsilon\)-approximate solution to the \(k\)-means clustering problem if the algorithm outputs one of the following:_ * _Partition__: a partition_ \(C^{1},C^{2},\dots,C^{k}\subseteq[n]\) _such that_ \[\sum_{j=1}^{k}\sum_{i\in C^{j}}\|a^{i}-\hat{c}^{j}\|_{2}^{2}\leq(1+\epsilon) \mathsf{OPT}\] _where_ \(\hat{c}^{j}\coloneqq\frac{1}{|C^{j}|}\sum_{i\in C^{j}}a^{i}\)_._ * _Centers__: centers_ \(\hat{c}^{1},\hat{c}^{2},\dots,\hat{c}^{k}\in\mathbb{R}^{d}\) _such that_ \(\mathrm{cost}(\hat{c}^{1},\hat{c}^{2},\dots,\hat{c}^{k})\leq(1+\epsilon) \mathsf{OPT}\)_._ * _Cost__: a number_ \(c\geq 0\) _such that_ \(\mathsf{OPT}\leq c\leq(1+\epsilon)\mathsf{OPT}\)_._ ### Lower bounds for \(k\)-means clustering Our most technically involved and delicate lower bound result is the following theorem, which shows that nearly optimally solving \(k\)-means clustering to \((1+\epsilon)\) accuracy requires \(\tilde{\Omega}(n/\epsilon)\) bits of space: **Theorem 1.1** (Informal restatement of Theorem C.1).: _Let \(k=d=\tilde{O}(1/\epsilon)\). 
Suppose a turnstile streaming algorithm outputs centers \(\{\hat{c}^{j}\}_{j=1}^{k}\subseteq\mathbb{R}^{d}\) as well as assignments of \(n\) points to the \(k\) centers, which achieves a \((1+\epsilon)\)-approximately optimal solution to the \(k\)-means clustering problem. Then, the algorithm must use at least \(\tilde{\Omega}(n/\epsilon)\) bits of space over any constant number of passes._ We defer the full proof to Appendix C and give a proof sketch in this section to illustrate the most important ideas. The hard instance: set disjointness.The starting point to our lower bound is the information theoretic communication complexity lower bound for the set disjointness problem due to Bar-Yossef et al. (2004). In the two-party set disjointness problem, two players Alice and Bob each have a bit vector \(A,B\in\{0,1\}^{d}\) in \(d\) dimensions, and they must determine whether there exists a coordinate \(j\in[d]\) such that \(A_{j}=B_{j}=1\) or not. The work of Bar-Yossef et al. (2004) shows that in order to solve this problem, Alice and Bob must exchange messages that reveal at least \(\Omega(d)\) bits of information about their inputs, which in turn implies an \(\Omega(d)\) communication complexity lower bound for this problem, as well as an \(\Omega(nd)\) communication complexity lower bound for solving a constant fraction of \(n\) independent instances of the same problem. Furthermore, the hard instance of Bar-Yossef et al. (2004) has a simple input distribution: the vectors \((A,B)\) are such that the \(j\)th coordinate \((A^{j},B^{j})\) is drawn either as \((0,0)\) with probability \(1/2\) or \((1,0)\) with probability \(1/4\) or \((0,1)\) with probability \(1/4\), except for one coordinate, which may take the value \((1,1)\). We aim to make use of this result as follows. Consider the vector \(Z=A+B\). This vector has entries in \(\{0,1\}\), except possibly for one entry, which could be \(2\). 
If we have \(n\) such vectors, then we expect a good clustering into \(k=d\) clusters to cluster all points with \(Z_{j}=2\) together. Such a clustering would be able to output the _index_ of the intersection of \(A\) and \(B\), which intuitively requires more information than just determining whether there is an intersection or not, and thus should also require \(\Omega(d)\) bits of information cost. Furthermore, we can choose the dimension \(d\) to be roughly \(1/\epsilon\), so that the cost of clustering \(Z\) to the "correct" center will have a cost of \(\Theta(d)=\Theta(1/\epsilon)\), while clustering \(Z\) to the incorrect center will incur an additional error of \(\Theta(1)\), which is an \(\epsilon\) fraction of the cost. Cost calculations.The main challenge in carrying out the idea in the previous paragraph is in arguing that the target optimal clustering that we wish to discover indeed is a nearly optimal clustering, and that significant deviations from this clustering result in a large cost. This involves showing a lower bound on the cost of _any_ clustering. Our first step is to obtain a lower bound on the cost of any clustering of \(n\) random bit vectors in \(d\) dimensions. If we first fix a set of \(k\) centers \(\{c^{j}\}_{j=1}^{k}\), then the minimum distance between a random bit vector \(Z\) and any of the \(c^{j}\) can be bounded by using Chernoff bounds, which implies a lower bound of \(d/4-O(\log d)\) on this quantity in expectation (Lemma C.4). Note, however, that this lower bound is not high enough to prevent a nearly optimal solution from just assigning points according to the best clustering of the random bits while ignoring the one entry that takes the value of \(Z_{j}=2\), which means that the clustering need not solve the problem of identifying the intersection coordinate between \(A\) and \(B\). To address this problem, we need to make the cost of ignoring the intersection coordinate much more costly. 
We do this by instead considering the _multi-party_ set disjointness problem, so that we now have \(t=O(\sqrt{\log d})\) players rather than just \(2\), each with an input vector \(A^{(i)}\in\{0,1\}^{d}\), so that \(Z=\sum_{i=1}^{t}A^{(i)}\) is now a random bit vector except for a single entry with a \(t\) rather than a \(2\). Now, a clustering which does not correctly identify the intersection coordinate will pay a cost of roughly \(t^{2}=O(\log d)\), which is large enough to overcome the potential savings from a good clustering of the random bit coordinates. We also "plant" the target centers \(c^{j}\) by adding roughly \(n/k\) copies of each of our target centers \(c^{j}\) as part of the input instance (Lemma C.7), so that choosing centers \(\hat{c}^{j}\) that are significantly different from \(c^{j}\) must incur a large cost. In particular, we can get the guarantee that on average, \(\|c^{j}-\hat{c}^{j}\|_{2}^{2}\leq o(1)\). At this point, we can argue that most of the \(k\) centers are the centers that we expect, i.e., roughly \(t\) on one coordinate and \(1/2\) on the rest of the coordinates. Thus, if we cluster a point \(Z\) whose center we expect to be \(c^{j}\) but that is clustered to some other \(\hat{c}^{j^{\prime}}\), and furthermore \(\hat{c}^{j^{\prime}}\) is close to our expected center \(c^{j^{\prime}}\), then we must incur an additional \(O(\log d)\) cost which is too expensive. However, there is still the possibility that for the very small number of clusters \(\hat{c}^{j}\) which do not satisfy \(\|c^{j}-\hat{c}^{j}\|_{2}^{2}\leq o(1)\), these centers could be assigned a very large number of points with very low cost. We also show that this cannot be the case, by arguing that if a large number of points are assigned to very few clusters, then the cost must be large (Lemma C.8). With this lemma in hand, we are able to show our main result in Theorem C.1 by carefully combining the various cost contribution bounds discussed previously.
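The arithmetic behind this cost accounting is easy to check directly: for a uniform bit coordinate, the squared distance to the center value \(1/2\) is exactly \(1/4\), so any center pays about \(d/4\) per point on the bit coordinates, while mishandling the planted coordinate of value \(t\) adds an extra \(t^{2}\). A toy computation (with small illustrative parameters, not those of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
d, t, n = 64, 4, 2000  # toy parameters for illustration only

# Points: uniform bits on d-1 coordinates, plus a planted coordinate of value t.
Z = rng.integers(0, 2, size=(n, d)).astype(float)
Z[:, 0] = t

# A center that "knows" the planted coordinate vs. one that ignores it.
good = np.full(d, 0.5); good[0] = t
bad = np.full(d, 0.5); bad[0] = 0.0

# Each bit coordinate contributes (z - 1/2)^2 = 1/4 regardless of z,
# so the baseline cost is exactly (d-1)/4 per point.
cost_good = ((Z - good) ** 2).sum(axis=1).mean()  # = (d-1)/4
cost_bad = ((Z - bad) ** 2).sum(axis=1).mean()    # = (d-1)/4 + t**2

print(cost_good, cost_bad)  # 15.75 vs 31.75
```

Since \(t^{2}\) is chosen to exceed the \(O(\log d)\) slack that a clever clustering of the random bits could recover, a nearly optimal solution cannot afford to ignore the planted coordinate, which is what forces it to reveal the intersection.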
#### 4.1.1 Lower bound for outputting nearly optimal centers We note that an \(\Omega(dk/\epsilon)\) lower bound follows from an earlier lower bound for low rank approximation due to Woodruff (2014a), even for row arrival streams: **Definition 4.3** (Row arrival stream).: _We say that an algorithm outputs an \(\epsilon\)-approximate solution to the \(k\)-means clustering problem in the row arrival streaming model if the input vectors \(\{a^{i}\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\) arrive one at a time._ **Theorem 4.1**.: _Suppose that an algorithm outputs centers \(\{\hat{c}^{j}\}_{j=1}^{k}\subseteq\mathbb{R}^{d}\) that achieves a \((1+\epsilon)\)-approximately optimal solution to the \(k\)-means clustering problem after one pass through a row arrival stream (Definition 4.3). Then, the algorithm must use at least \(\tilde{\Omega}(dk/\epsilon)\) bits of space._ We briefly justify why the techniques of Woodruff (2014a) imply Theorem 4.1. The result of Woodruff (2014a) constructs a distribution over \(O(k/\epsilon)\times d\) matrices such that one can recover an arbitrary random bit among \(\tilde{\Omega}(dk/\epsilon)\) random bits by appending a set of \(k\) "query" rows and then computing a \((1+\epsilon)\)-approximately optimal low rank approximation to the resulting matrix. Furthermore, it is shown that a nearly optimal rank \(k\) approximation is obtained by approximating all but \(k\) rows by zero vectors. Such a rank \(k\) approximation in fact corresponds to a clustering solution, and thus the proof of Woodruff (2014a) immediately applies to our \(k\)-means clustering setting as well. ### Lower bounds for center cost query data structures Next, we study lower bounds against streaming algorithms which have the guarantee of approximating the cost of an arbitrary but fixed set of centers. We formalize the guarantee we study in Definition 4.4. 
**Definition 4.4** (Center cost query data structure).: _We say that \(\mathcal{Q}\) is an \(\epsilon\)-approximate center cost query data structure for the \(k\)-means clustering problem for the instance \(\{a^{i}\}_{i=1}^{n}\) if, for any centers \(c^{1},c^{2},\ldots,c^{k}\in\mathbb{R}^{d}\), \(\mathcal{Q}\) outputs one of the following:_ * _Partition__: a partition_ \(C^{1},C^{2},\ldots,C^{k}\subseteq[n]\) _such that_ \[\sum_{j=1}^{k}\sum_{i\in C^{j}}\|a^{i}-c^{j}\|_{2}^{2}\leq(1+\epsilon)\mathrm{cost}(c^{1},c^{2},\ldots,c^{k}).\] * _Cost__: a number_ \(c\geq 0\) _such that_ \[\mathrm{cost}(c^{1},c^{2},\ldots,c^{k})\leq c\leq(1+\epsilon)\mathrm{cost}(c^{1},c^{2},\ldots,c^{k})\] Our first lower bound is an \(\Omega(n/\epsilon^{2})\) bit space lower bound for a center cost query data structure which can output a partition for \(k\)-means clustering with \(k=2\). We proceed by a standard encoding argument, showing that any such data structure must encode \(\Omega(n/\epsilon^{2})\) many random bits. We provide the full proof in Appendix D.1. **Theorem 4.2**.: _Let \(\epsilon\in(0,1/3)\) and \(k=2\). Suppose that an algorithm maintains an \(\epsilon/15\)-approximate center cost query data structure for \(k\)-means clustering that outputs a partition (Definition 4.4) over a row arrival stream (Definition 4.3). Then, the algorithm must use at least \(\Omega(n/\epsilon^{2})\) bits of space, over any constant number of passes._ ### Approximation of costs and centers We show \(\Omega(n)\) space memory bounds when we only need to estimate the optimal cost or centers achieving nearly optimal cost, up to a constant factor. Our lower bounds in this section are simpler reductions from the set disjointness problem Razborov (1990); Bar-Yossef et al. (2004). Proofs are provided in Appendix D.2 and D.3.
**Theorem 4.3** (Lower Bound for Estimating \(k\)-Means Clustering Cost).: _Let \(k=2\) and let \(\mathcal{X}\) be the set of matrices \(\mathbf{X}\in\mathbb{R}^{n\times k}\) with standard basis vectors as rows. Let \(d=1\). Any randomized algorithm which outputs a number \(c\geq 0\) satisfying_ \[c\leq\min_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F}^{2}<2c \tag{2}\] _in a constant number of passes over a turnstile stream requires \(\Omega(n)\) bits of space._ **Theorem 4.4** (Lower Bound for Computing Approximate Centers).: _Let \(k=3\) and let \(\mathcal{X}\) be the set of matrices \(\mathbf{X}\in\mathbb{R}^{n\times k}\) with standard basis vectors as rows. Let \(d=1\). Any randomized algorithm which outputs centers \(\tilde{\mathbf{D}}\in\mathbb{R}^{k\times d}\) satisfying_ \[\min_{\mathbf{X}\in\mathcal{X}}\|\mathbf{X}\tilde{\mathbf{D}}-\mathbf{A}\|_{F}^{2}<2\min_{\mathbf{X}\in\mathcal{X},\mathbf{D}\in\mathbb{R}^{k\times d}}\|\mathbf{X}\mathbf{D}-\mathbf{A}\|_{F}^{2}\] _in a constant number of passes over a turnstile stream requires \(\Omega(n)\) bits of space._ ### New upper bounds in random order streams In this section, we present new upper bounds showing that we can go beyond the previously presented lower bounds. In particular, in random order row arrival streams with bounded sensitivity, we show that the first segment of the stream is sufficient to obtain approximately optimal centers, and these can in turn be used to nearly optimally cluster the rest of the stream. We give the full proof of this result in Appendix D.4. **Theorem 4.5**.: _Suppose that the rows of \(\mathbf{A}\in\mathbb{R}^{n\times d}\) arrive in a random order row arrival stream._
Furthermore, suppose that the sensitivities of each row \(a^{i}\) are bounded by \(\alpha\), that is,_ \[\sup_{c^{1},c^{2},\ldots,c^{k}\in\mathbb{R}^{d}}\frac{\min_{j=1}^{k}\|a^{i}-c^{j}\|_{2}^{2}}{\sum_{i^{\prime}=1}^{n}\min_{j=1}^{k}\|a^{i^{\prime}}-c^{j}\|_{2}^{2}}\leq\alpha.\] _Then, there is an algorithm which, with constant probability, outputs a \((1+\epsilon)\)-nearly optimal clustering with partitions and centers using_ \[\tilde{O}(\alpha nkd/\epsilon^{4}+dk/\epsilon+n)\] _bits of space. In particular, if \(\alpha\leq\epsilon^{4}/kd\), then this algorithm uses just \(\tilde{O}(n+dk/\epsilon)\) bits of space._ ## 5 Open directions We conclude with several questions left open by our work. 1. In our PTAS for sparse dictionary learning of Theorem 2.2, can the bit complexity assumption be removed? 2. In the turnstile streaming setting, our main question is settling the space complexity of \(k\)-means clustering with assignments. Currently, the upper bound is \(\tilde{O}(n/\epsilon^{2})\) bits whereas our lower bound in Theorem C.1 is \(\tilde{\Omega}(n/\epsilon)\) bits. Can this \(\epsilon\) factor gap be closed by improving the upper bound or the lower bound? 3. In the random order streaming model, we gave a \(k\)-means clustering upper bound using a bounded sensitivity assumption in Theorem 4.5. Can this assumption be removed? What upper and lower bounds are possible in this model? ## Acknowledgments and Disclosure of Funding We thank the anonymous reviewers for useful feedback on improving the presentation of this work. Petros Drineas and Gregory Dexter were partially supported by NSF AF 1814041, NSF FRG 1760353, and DOE-SC0022085. David P. Woodruff and Taisuke Yasuda were supported by a Simons Investigator Award.
2305.14922
Finite reservoirs and irreversibility corrections to Hamiltonian systems statistics
We consider several Hamiltonian systems perturbed by external agents that preserve their Hamiltonian structure. We investigate the corrections to the canonical statistics resulting from coupling such systems with possibly large but finite reservoirs, and from the onset of processes breaking the time reversal symmetry. We analyze exactly solvable oscillator systems, and perform simulations of relatively more complex ones. This indicates that the standard statistical mechanical formalism needs to be adjusted, in the ever more investigated nano-scale science and technology. In particular, the hypothesis that heat reservoirs be considered infinite and be described by the classical ensembles is found to be critical when exponential quantities are considered, since the large size limit may not coincide with the infinite size canonical result. Furthermore, process-dependent emergent irreversibility affects ensemble averages, effectively frustrating, on a statistical level, the time reversal invariance of Hamiltonian dynamics, which is used to obtain numerous results.
Matteo Colangeli, Antonio Di Francesco, Lamberto Rondoni
2023-05-24T09:09:13Z
http://arxiv.org/abs/2305.14922v1
# Finite reservoirs and irreversibility corrections to Hamiltonian systems statistics ###### Abstract We consider several Hamiltonian systems perturbed by external agents that preserve their Hamiltonian structure. We investigate the corrections to the canonical statistics resulting from coupling such systems with possibly large but finite reservoirs, and from the onset of processes breaking the time reversal symmetry. We analyze exactly solvable oscillator systems, and perform simulations of relatively more complex ones. This indicates that the standard statistical mechanical formalism needs to be adjusted, in the ever more investigated nano-scale science and technology. In particular, the hypothesis that heat reservoirs be considered infinite and be described by the classical ensembles is found to be critical when exponential quantities are considered, since the large size limit may not coincide with the infinite size canonical result. Furthermore, process-dependent emergent irreversibility affects ensemble averages, effectively frustrating, on a statistical level, the time reversal invariance of Hamiltonian dynamics, which is used to obtain numerous results. Jarzynski equality, nonequilibrium process, fluctuation relations, finite size effects ## 1 Introduction The validity of the canonical ensemble is universally accepted for computing macroscopic quantities of systems in equilibrium at a given temperature \(T\), as averages of phase space functions. The corresponding formalism assumes that heat reservoirs are infinitely large, and that measurement times are much longer than the characteristic times of the microscopic events. Such mathematical idealizations yield a highly successful theory describing a vast range of macroscopic phenomena. The separation between microscopic and macroscopic scales is indeed sufficiently wide for calculations of quantities of thermodynamic interest.
Nevertheless, there are various reasons for investigating the applicability of the canonical framework to non-standard observables. For instance, exponentials of microscopically expressed variables appear in Bennett's formulae for the free energy [1], in Widom's relation [2], Zwanzig's relation [3], and in the more recent Jarzynski [4] and Crooks relations [5]. Furthermore, current science and technology deal with small systems and fast processes, as well as with quantities not immediately interpretable in thermodynamic terms, as in the case of anomalous energy transport [6, 7, 8]. Therefore, finite size effects and lack of ergodicity may become important. Indeed, standard thermodynamic properties of macroscopic objects only require a proper characterization of the bulk of the relevant probability distributions, not of their tails. On the other hand, an accurate characterization of the tails of the relevant probability distributions becomes necessary when dealing with observables that get a substantial contribution from such tails. Then, the fact that thermal baths are necessarily finite and that experiments may last very short times may require particular attention. In this work we take the quantity used in the Jarzynski Equality (JE) as a paradigmatic example of topical non-standard observables. It is worth recalling that the time reversal symmetry of the microscopic dynamics [9, 10, 11, 12] is essential for the derivation of the JE, which belongs to a class of results, known as _Fluctuation Relations_, strongly relying on the time reversibility of the microscopic dynamics, see e.g. [13, 14, 15]. More generally, the time reversal symmetry turns out to be a standard ingredient of a large variety of statistical mechanical results, including the Onsager Reciprocal Relations [16, 17], the Fluctuation-Dissipation Theorem and the Green-Kubo relations [18, 19, 20], and applications to magnetic systems [21].
In works such as Ref.[22] it was found that certain nanoscopic Hamiltonian systems violate the JE, although formally amenable to analysis within the canonical framework, which yields the JE as an exact relation [4]. In the case of Ref.[22], the failure was caused by the emergence of irreversibility due to a process dependent nonequilibrium effect, and not to the large number of degrees of freedom. At the same time, highly nonequilibrium processes do not prevent the validity of the JE in _e.g._ 1-dimensional systems described by an overdamped Langevin equation [23]. That this may be the case is clear in the words of, _e.g._, Fermi [24] or Callen [25], who state, in practice, that the ensembles work if the observation times suffice for the observables of interest to have thoroughly explored their range. Khinchin then adds that this is easy to obtain, for the observables of interest typically have a small range [26]. In all instances, the state of the system is required to be stationary, or very slowly evolving with respect to the observation times. The above considerations are topical, given the rapid development of bio- and nano-technologies, which deal with small systems. Apart from being small, such systems are often briskly driven by external agents, so that thermal baths (even if effectively infinite) only express limited energy, and the deterministic thermodynamic laws must often be replaced by statistical laws. Certainly, some experiments of bio- and nano-technological interest intentionally take very short times, so that only a small part of a thermal bath is effectively involved. This poses the question, when computing ensemble averages, about proper approaches to the finiteness of the bath or, in other terms, to the restriction to finite subsets of phase space. In this paper we thus analyze the finite size effects on the statistics concerning simple Hamiltonian systems, subjected to various external drivings.
We start by briefly reviewing the derivation of the JE in Sec. 2. In Sec. 3, three simple mechanical models are introduced to illustrate the onset of finite size effects that lead to violations of the JE, highlighting some of the limitations of the canonical statistics. In particular, it is shown that the speed of the protocol, or the frequency of periodic drivings resonating with the system's proper frequency, may wildly enhance the protocol dependence, violating the JE by up to 110%. We also highlight the fact that analogous results are obtained for infinite baths at small temperatures. In Sec. 4, we consider a model mimicking the adiabatic expansion of an ideal gas, and also describe the validity of the JE in the presence of a protocol-dependent device, concluding that the occurrence of an irreversible phenomenon (such as the free expansion of a gas) can invalidate the statistical description of a particle system through the canonical formalism. Conclusions are drawn in Sec. 5, where we also anticipate future developments. The Appendices give the details of some analytical calculations reported in the main text. ## 2 Derivation of the Jarzynski equality A well-known example involving both the canonical ensemble and exponential variables is the Jarzynski Equality (JE), which offers a useful playground to highlight the role of finite size effects on the statistics of thermodynamic quantities in the canonical framework. The JE has been derived for both stochastic and deterministic systems. We focus on the latter, which concerns a system S made of \(N\) particles, initially in equilibrium with a bath B at temperature \(T\). The system may interact with an environment, E, also initially in equilibrium with B.
The Hamiltonian of system and environment, denoted by S+E, is assumed to take the following form: \[\mathcal{H}(\mathbf{x},\mathbf{v};\lambda)=H_{S}(x_{S},v_{S};\lambda)+H_{E}(x_{E},v_{E})+h_{I}(\mathbf{x},\mathbf{v}) \tag{2.1}\] where \(\lambda\) is a parameter controlled by an external agent, \((\mathbf{x},\mathbf{v})=(x_{S},v_{S},x_{E},v_{E})\) are the positions and velocities of S and E, as indicated by the subscripts, and \(H_{S}\), \(H_{E}\) and \(h_{I}\) are, respectively, the energy of S, the energy of E and the energy of their interaction. The initial distribution of coordinates and momenta of S+E, which is in equilibrium at temperature \(T\), is given by the canonical ensemble: \[P_{0}(\Gamma)=\frac{1}{Z_{0}}e^{-\beta\mathcal{H}(\Gamma;A)}\,,\quad\beta=\frac{1}{k_{{}_{B}}T} \tag{2.2}\] where \(k_{B}\) is Boltzmann's constant, \(\Gamma=(\mathbf{x},\mathbf{v})\) is one configuration of S+E, and \(Z_{0}\) is the initial canonical partition function. At time \(t=0\), this system is isolated from the bath, and driven by an external agent that modifies the parameter \(\lambda\). This is done many times, repeating the same protocol \(\lambda:[0,\tau]\to\mathbb{R}\) over a given finite time \(\tau\), changing the parameter from its initial value \(\lambda(0)=A\), to its final value \(\lambda(\tau)=B\). Each time, a different initial condition is taken at random, according to the canonical distribution (2.2), and the following quantity, called work, is computed [4]: \[W_{J}(\Gamma_{0})=\int_{0}^{\tau}\,\frac{\partial H}{\partial\lambda}\,\dot{\lambda}\,\mathrm{d}t=\mathcal{H}(\Gamma_{\tau}(\Gamma_{0});B)-\mathcal{H}(\Gamma_{0};A) \tag{2.3}\] where \(\Gamma_{\tau}(\Gamma_{0})\) is the phase reached in the time \(\tau\) starting from the initial condition \(\Gamma_{0}\). Because the protocol \(\lambda(t)\) is fixed, the dynamics are deterministic, and the value of the work depends only on the initial condition.
However, the initial conditions change randomly, yielding a different value of \(W_{J}\) for each realization of the process, and effectively making it a random variable. In this setting, the following relation, known as the Jarzynski equality, was obtained [4]: \[\left\langle e^{-\beta W_{J}}\right\rangle_{0}=e^{-\beta\Delta F} \tag{2.4}\] where \(\langle\cdot\rangle_{0}\) is the canonical ensemble average obtained from \(P_{0}\), and \(\Delta F=F_{B}-F_{A}\) is the equilibrium free energy difference between the equilibrium canonical state with parameter \(\lambda=B\) and the one with parameter \(\lambda=A\), both at temperature \(T\). One of the most striking aspects of the JE, which is a direct effect of the canonical ensemble, is that it does not depend on the protocol. This sounds at odds with the fact that physical theories have a range of applicability limited by space and time constraints, outside of which a different description must be adopted. On the other hand, the JE depends on the validity of the canonical ensemble, whose applicability boundaries are not known in general, especially when non-standard quantities are involved. Understanding the role of the canonical ensemble is important in general, not just in relation to the JE. We will see that the quantity on the left hand side of Eq. (2.4) depends on the protocol, if the ensemble does not extend to infinity. Note that the form of the probability distribution properly describing the effect of a finite environment is not known in general, but the finite size effects can be evidenced on any distribution. In the concluding remarks we address this issue. ## 3 Models and methods Below, we investigate possible finite size effects for several different systems. In particular, we analyze three simple harmonic oscillator models, perturbed from their equilibrium states. The perturbation is applied by harmonic springs, whose center of force moves according to deterministic rules \(\lambda(t)\).
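Before specializing to these models, the content of Eq. (2.4) can be checked numerically in the simplest conceivable driving: an instantaneous quench of a harmonic trap center from \(A=0\) to \(B\), used here purely as an illustration with arbitrary parameter values. For the single-oscillator Hamiltonian considered below, the quench requires no integration of the dynamics, since \(W_{J}=\mathcal{H}(\Gamma_{0};B)-\mathcal{H}(\Gamma_{0};A)=k_{D}B^{2}/2-k_{D}Bx_{0}\):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, m, kp, kD, B = 1.0, 1.0, 1.0, 1.0, 1.0  # illustrative values, assumed here
k = kp + kD

# Initial canonical ensemble at lambda = A = 0: x0 ~ N(0, 1/(beta*k)).
N = 400_000
x0 = rng.normal(0.0, 1.0 / np.sqrt(beta * k), N)

# Instantaneous quench A=0 -> B: W is a pure energy difference, no dynamics.
W = 0.5 * kD * (B - x0) ** 2 - 0.5 * kD * x0 ** 2

je_lhs = np.exp(-beta * W).mean()
je_rhs = np.exp(-beta * kD * kp * B ** 2 / (2 * k))  # e^{-beta dF}, cf. Eq. (3.10)

print(je_lhs, je_rhs)  # agree up to Monte Carlo error
```

The Monte Carlo average reproduces \(e^{-\beta\Delta F}\) with \(\Delta F=k_{D}k_{p}B^{2}/2k\), the same protocol-independent value appearing in Eq. (3.10); the momentum drops out only because the quench leaves the kinetic term unchanged, whereas for the time-dependent protocols below both \(x_{0}\) and \(v_{0}\) enter \(W_{J}\).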
The first model consists of a single oscillator, playing the role of S, with \(\lambda(t)=\ell t\), \(t\in[0,\tau]\), where \(\ell\) and \(\tau\) are constants that can be varied, in such a way that the initial and final values of \(\lambda\) do not change: \(\lambda(0)=A\) and \(\lambda(\tau)=B\). In the second model, the protocol is changed to \(\lambda(t)=\sin(\gamma t)\). Being periodic in time, this protocol yields different phenomena when the frequency \(\gamma\) is changed, like resonances that affect \(W_{J}\) and, consequently, the JE. Neither the first nor the second model includes an environment E or, equivalently, the interaction energy vanishes: \(h_{I}=0\). The third model we consider has two oscillators, one of which is taken to be the system S and the other the environment E. As the theory requires, only S is subjected to a time dependent perturbation. ### Single oscillator under linear protocol Take a 1D system made of a single harmonic oscillator with rest position in \(x=0\), that is driven by a moving harmonic trap, centered in \(\lambda(t)=\ell t\), where \(\ell\) is a positive constant, and \(t\in[0,\tau]\). The initial value of \(\lambda\) is given by \(A=\lambda(0)=0\), and let its final value be denoted by \(B=\lambda(\tau)=\ell\tau\). To explore the effect of modifying the speed of the protocol, we vary \(\ell\) and \(\tau\), so that \(B\) is fixed. Let the oscillator mass be \(m\), and its momentum \(p=mv\), where \(v\) is the velocity. Then, the motion is determined by the following time dependent Hamiltonian: \[{\cal H}(x,v;t) = \frac{p^{2}}{2m}+\frac{k_{p}}{2}x^{2}+\frac{k_{D}}{2}\left(\lambda-x\right)^{2}=\frac{p^{2}}{2m}+\frac{k}{2}x^{2}+\frac{k_{D}\ell}{2}\left(\ell t^{2}-2xt\right)\, \tag{3.5}\] where \(k_{p}\) is the elastic constant of the spring with rest position in \(x=0\), \(k_{D}\) the elastic constant of the moving trap, and \(k=k_{D}+k_{p}\).
The equation of motion consequently takes the form: \[\ddot{x}=-\omega^{2}x+\frac{k_{D}}{m}\ell t\,\quad\mbox{with}\ \ x(0)=x_{0}\,,\ v(0)=v_{0} \tag{3.6}\] where we introduced the natural frequency of the oscillator \(\omega=\sqrt{k/m}\). In this case, the work \(W_{J}\) is expressed by: \[W_{J} =\int_{0}^{\tau}\,k_{D}\left(\ell t-x\right)\ell\,\mathrm{d}t= \frac{k_{D}\ell^{2}\tau^{2}}{2}-k_{D}\ell\int_{0}^{\tau}\,x(t;x_{0},v_{0})\, \mathrm{d}t\] \[=k_{D}B\left[\frac{B}{2}-\frac{1}{\tau}\int_{0}^{\tau}\,x(t;x_{0},v_{0})\,\mathrm{d}t\right] \tag{3.7}\] where the oscillator position is expressed by: \[x(t;x_{0},v_{0})=x_{0}\cos\omega t+\frac{v_{0}-\ell k_{D}/k}{\omega}\sin\omega t +\frac{\ell k_{D}}{k}t \tag{3.8}\] Then, performing the integration in expression (3.7), one obtains: \[W_{J}(\ell;x_{0},v_{0})=k_{D}B\left[\frac{B}{2}\left(1-\frac{k_{D}}{k}\right)- \frac{x_{0}\ell}{B\omega}\sin\omega\frac{B}{\ell}+\left(\frac{p_{0}\ell}{Bk}- \frac{\ell^{2}k_{D}}{Bk\omega^{2}}\right)\left(\cos\omega\frac{B}{\ell}-1 \right)\right]\, \tag{3.9}\] where \(B\) is fixed, while the protocol speed \(\ell\) can be varied. Although \(\exp(-\beta W_{J})\) depends on \(\ell\), its average with respect to the initial canonical ensemble, \(P_{0}\), does not. Given \(\ell\in(0,\infty)\), one has: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0}=\exp\left\{-\beta\frac{k_ {D}k_{p}B^{2}}{2k}\right\}\, \tag{3.10}\] which does not depend on the speed of the protocol, as the Jarzynski theory predicts. Explicit calculations are reported in the Appendix A. In the case in which the environment is bounded and the bath can only express a finite energy, the corresponding probability density is truncated at a given distance \(L\) from the rest position of the oscillator, and at a maximum momentum \(M\). 
For the sake of argument, we assume that the form of the finite support distribution is the canonical one, truncated and normalized, and that the two bounds \(L\) and \(M\) do not depend on each other. After all, the classical ensembles constitute a most successful postulate of statistical mechanics that, however, can only seldom be derived from the particle dynamics. Moreover, the resulting distributions are truncated Gaussians, which mitigates the effects of truncation. Then, suppose we have: \[P_{0}(x,p)=\frac{1}{Z_{0}(L,M)}\left\{\begin{array}{ll}e^{-\beta(kx^{2}+p^{2}/m)/2}&\mbox{if}\quad|x|\leq L\ \ \mbox{and}\ \ |p|\leq M\\ \\ 0&\mbox{if}\quad\quad|x|>L\ \ \mbox{or}\ \ |p|>M\end{array}\right. \tag{3.11}\] with \(Z_{0}(L,M)\) a normalizing factor. In this case, one obtains: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0;L,M}=I_{exp}\cdot I_{x}\cdot I_{p} \tag{3.12}\] where \(I_{exp}\) represents the infinite size result, which does not depend on \(\ell\), while the finite size correction factors \(I_{x}\) and \(I_{p}\) do depend on \(\ell\), hence on the protocol. The explicit expressions of \(I_{exp},I_{x},I_{p}\), along with the detailed calculations leading to Eq. (3.12), are deferred to the Appendix A. This result shows that for fixed \(\ell\) and \(m\), sufficiently large \(L\) and \(M\) exist such that the infinite size result is recovered; indeed \(I_{x}\) and \(I_{p}\) both tend to \(1\), if \(L,M\) grow at fixed \(\ell\). However, for fixed \(L\) and \(M\), sufficiently large \(\ell\), _i.e._ a sufficiently fast protocol, together with a large enough value of the product \(\omega B\), or sufficiently large \(m\), yield \(I_{x},I_{p}<1\), _i.e._ \(\langle\exp(-\beta W_{J})\rangle_{0;L,M}<\langle\exp(-\beta W_{J})\rangle_{0}\). The term \(I_{p}\) is particularly sensitive to variations of \(m\), because the argument of the error function on the right of its numerator may even turn negative, if \(m\) is sufficiently large.
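These finite size corrections can be probed numerically. The sketch below (with illustrative parameter values, assumed here rather than taken from the paper) first verifies the closed form (3.9) against a direct quadrature of Eq. (3.7) along the exact trajectory (3.8), and then estimates the truncated average \(\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0;L,M}\) for two protocol speeds \(\ell\), comparing it with the infinite size value (3.10):

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper)
m, kp, kD, B, beta = 1.0, 1.0, 1.0, 2.0, 1.0
k = kp + kD
w = np.sqrt(k / m)

def W_closed(ell, x0, v0):
    """Jarzynski work for the linear protocol, Eq. (3.9)."""
    p0 = m * v0
    c, s = np.cos(w * B / ell), np.sin(w * B / ell)
    return kD * B * (0.5 * B * (1.0 - kD / k)
                     - x0 * ell * s / (B * w)
                     + (p0 * ell / (B * k) - ell ** 2 * kD / (B * k * w ** 2)) * (c - 1.0))

def W_quadrature(ell, x0, v0, n=20001):
    """Same work from Eq. (3.7), integrating the trajectory (3.8)."""
    t = np.linspace(0.0, B / ell, n)
    x = x0 * np.cos(w * t) + (v0 - ell * kD / k) / w * np.sin(w * t) + ell * kD / k * t
    dt = t[1] - t[0]
    integral = dt * (x.sum() - 0.5 * (x[0] + x[-1]))  # trapezoid rule
    return kD * B * (0.5 * B - integral / (B / ell))

# Sanity check of the closed form against the quadrature.
w_err = abs(W_closed(3.0, 0.4, -0.2) - W_quadrature(3.0, 0.4, -0.2))

# Truncated canonical ensemble, Eq. (3.11): reject |x0| > L or |p0| > M.
rng = np.random.default_rng(0)
L, M = 0.5, 0.5
x0 = rng.normal(0.0, 1.0 / np.sqrt(beta * k), 800_000)
p0 = rng.normal(0.0, np.sqrt(m / beta), 800_000)
keep = (np.abs(x0) <= L) & (np.abs(p0) <= M)
x0, v0 = x0[keep], p0[keep] / m

full = np.exp(-beta * kD * kp * B ** 2 / (2 * k))  # infinite-size value, Eq. (3.10)
trunc = {ell: np.exp(-beta * W_closed(ell, x0, v0)).mean() for ell in (0.5, 5.0)}
print(w_err, full, trunc)  # the truncated average depends on the speed ell
```

With a severe truncation such as \(L=M=0.5\), the two speeds give clearly different averages, the fast protocol deviating strongly from the untruncated value: this is the protocol dependence encoded in the correction factors \(I_{x}\) and \(I_{p}\).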
In any event, the left hand side of the JE is protocol dependent, if the ensemble is finitely supported. While at odds with the infinite bath result, this is in accord with the fact that too fast protocols (_e.g._ comparable with microscopic rates) require a specifically developed approach.

### Single Oscillator with Periodic Forcing

Now let the single oscillator be driven by a moving harmonic trap centered at \(\lambda(t)=\sin\gamma t\), where \(\gamma=2\pi/T\), and \(T\) is the period of the center of force of the trap. Take \(t\in[0,\tau]\), \(A=\lambda(0)=0\), and \(B=\lambda(\tau)=\sin\gamma\tau\). If the final value of \(\lambda\) is fixed, as in the previous subsection, different \(\gamma\) correspond to faster or slower protocols, which last a time \(\tau=\arcsin(B)/\gamma\). The time dependent Hamiltonian now takes the form \[\mathcal{H}(x,v;t) = \frac{p^{2}}{2m}+\frac{k_{p}}{2}x^{2}+\frac{k_{D}}{2}\left(\lambda-x\right)^{2} \tag{3.13}\] \[= \frac{p^{2}}{2m}+\frac{k_{p}}{2}x^{2}+\frac{k_{D}}{2}\left(\sin\gamma t-x\right)^{2}=\frac{p^{2}}{2m}+\frac{k}{2}x^{2}+\frac{k_{D}}{2}\left(\sin^{2}\gamma t-2x\sin\gamma t\right)\, \tag{3.14}\] where \(k=k_{D}+k_{p}\). In this case, the Jarzynski work \(W_{J}\) is given by: \[W_{J}=\int_{0}^{\tau}\,\frac{\partial H}{\partial\lambda}\,\dot{\lambda}\,\mathrm{d}t= \tag{3.15}\] \[\qquad=\frac{k_{D}}{4}\left(1-\cos 2\gamma\tau\right)-k_{D}\gamma\int_{0}^{\tau}x(t;x_{0},v_{0})\cos\left(\gamma t\right)\,\mathrm{d}t \tag{3.16}\] Given the Hamiltonian (3.14), the equation of motion for this system is: \[\ddot{x}=-\omega^{2}x+\frac{k_{D}}{m}\sin\gamma t\,\quad\text{with i.c.}\ \ x(0)=x_{0}\,,\ v(0)=v_{0} \tag{3.17}\] where \(\omega=\sqrt{k/m}\).
For \(\gamma\neq\omega\), one obtains: \[x(t;x_{0},v_{0})=\frac{k_{D}/m}{\omega^{2}-\gamma^{2}}\sin\gamma t+\frac{1}{ \omega}\left(v_{0}-\frac{\gamma k_{D}/m}{\omega^{2}-\gamma^{2}}\right)\sin \omega t+x_{0}\cos\omega t \tag{3.18}\] and the work takes the form: \[W_{J}(\gamma;x_{0},v_{0})=\frac{k_{D}}{4}\left(1-\cos 2 \gamma\tau\right)-k_{D}\gamma\int_{0}^{\tau}\cos\left(\gamma t\right)\times \tag{3.19}\] \[\left[\frac{k_{D}/m}{\omega^{2}-\gamma^{2}}\sin\gamma t+\frac{1} {\omega}\left(v_{0}-\frac{\gamma k_{D}/m}{\omega^{2}-\gamma^{2}}\right)\sin \omega t+x_{0}\cos\omega t\right]\,\mathrm{d}t \tag{3.20}\] Solving the integral on the right, we finally get: \[W_{J}(\gamma;x_{0},v_{0})=\frac{k_{D}}{4}\left(1-\frac{k_{D}/m}{ \omega^{2}-\gamma^{2}}\right)\left(1-\cos 2\gamma\tau\right)+ \tag{3.21}\] \[-\frac{k_{D}\gamma}{\omega^{2}-\gamma^{2}}\left(v_{0}-\frac{k_{ D}\gamma/m}{\omega^{2}-\gamma^{2}}\right)\left(1-\frac{\gamma}{\omega}\sin \gamma\tau\sin\omega\tau-\cos\gamma\tau\cos\omega\tau\right)+\] (3.22) \[+\,x_{0}\frac{k_{D}\gamma\omega}{\omega^{2}-\gamma^{2}}\left( \frac{\gamma}{\omega}\sin\gamma\tau\cos\omega\tau-\cos\gamma\tau\sin\omega\tau\right) \tag{3.23}\] This quantity can now be multiplied by \(-\beta\), exponentiated and averaged over all the initial conditions \((x_{0},v_{0})\). In the case of the full canonical ensemble, one obtains a result that does not depend on \(\gamma\), when \(A\), \(\tau\) and consequently \(B\) are fixed. If, on the other hand, the probability density is expressed by Eq.(3.11), one finds: \[\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}=I_{exp}\cdot I_{x}\cdot I_{ p}\, \tag{3.24}\] where the explicit expressions of \(I_{exp},I_{x},I_{p}\) are given in the Appendix B. 
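The \(\gamma\)-independence of the full canonical average can be verified without carrying out the integral (3.19): for Hamiltonian dynamics the Jarzynski work equals the energy change, so it suffices to propagate the closed-form solution (3.18). The sketch below assumes the illustrative values \(\beta=m=k_{D}=k_{p}=1\) and a fixed endpoint \(B=0.5\), with \(\tau=\arcsin(B)/\gamma\):

```python
import math
import random

# Illustrative parameters (assumed): fixed endpoint B reached at
# tau = arcsin(B)/gamma, so different gamma mean faster or slower protocols.
beta, m, kD, kp, B = 1.0, 1.0, 1.0, 1.0, 0.5
k = kD + kp
omega = math.sqrt(k / m)

def hamiltonian(x, v, lam):
    """Eq. (3.13)."""
    return 0.5 * m * v**2 + 0.5 * kp * x**2 + 0.5 * kD * (lam - x)**2

def final_state(gamma, tau, x0, v0):
    """Closed-form solution (3.18) of Eq. (3.17), and its time derivative,
    valid off resonance (gamma != omega)."""
    A = (kD / m) / (omega**2 - gamma**2)
    C = (v0 - gamma * A) / omega
    x = A * math.sin(gamma * tau) + C * math.sin(omega * tau) + x0 * math.cos(omega * tau)
    v = (A * gamma * math.cos(gamma * tau) + C * omega * math.cos(omega * tau)
         - x0 * omega * math.sin(omega * tau))
    return x, v

def average(gamma, n_samples=200_000, seed=2):
    """Monte Carlo estimate of <exp(-beta W_J)> over the canonical ensemble."""
    tau = math.asin(B) / gamma
    rng = random.Random(seed)
    sx = 1.0 / math.sqrt(beta * k)
    sv = 1.0 / math.sqrt(beta * m)
    acc = 0.0
    for _ in range(n_samples):
        x0, v0 = rng.gauss(0.0, sx), rng.gauss(0.0, sv)
        xf, vf = final_state(gamma, tau, x0, v0)
        # For Hamiltonian dynamics the Jarzynski work is the energy change.
        w = hamiltonian(xf, vf, math.sin(gamma * tau)) - hamiltonian(x0, v0, 0.0)
        acc += math.exp(-beta * w)
    return acc / n_samples

target = math.exp(-beta * kD * kp * B**2 / (2.0 * k))
a_slow, a_fast = average(0.7), average(2.0)
```

Both estimates agree with \(\exp\{-\beta k_{D}k_{p}B^{2}/2k\}\) within Monte Carlo error, as the Jarzynski theory predicts.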
The _resonance_, corresponding to \(\gamma=\omega\), must be treated separately, since the solution of the equation of motion (3.17) takes the form: \[x(t;x_{0},v_{0})=x_{0}\cos\omega t+\frac{v_{0}}{\omega}\sin\omega t+\frac{k_{D }/m}{2\omega^{2}}\left(\sin\omega t-\omega t\cos\omega t\right) \tag{3.25}\] The Jarzynski work is now expressed by: \[W_{J}(\tau;x_{0},v_{0})=\frac{k_{D}^{2}/m}{8}\tau^{2}+\frac{k_{D }^{2}/m}{8\omega}\tau\sin 2\omega\tau+\frac{k_{D}}{4}\left(1-\frac{3}{4} \frac{k_{D}/m}{\omega^{2}}\right)\left[1-\cos 2\omega\tau\right]\] \[-x_{0}\frac{k_{D}}{2}\left[\omega\tau+\frac{1}{2}\sin 2\omega \tau\right]-v_{0}\frac{k_{D}}{4\omega}\left[1-\cos 2\omega\tau\right] \tag{3.26}\] and its finite energy ensemble average can again be written as: \[\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}^{\text{(res)}}=I_{exp}^{ \text{(res)}}\cdot I_{x}^{\text{(res)}}\cdot I_{p}^{\text{(res)}} \tag{3.27}\] where: \[I_{exp}^{\text{(res)}}=\exp\left\{\frac{\beta k_{D}}{2}\left(\frac{k_{D}}{k}-1 \right)\sin^{2}\omega\tau\right\} \tag{3.28}\] \[I_{x}^{\rm(res)}=\frac{1}{2\ {\rm erf}\left(\sqrt{\frac{\beta k}{2}}L \right)}\left[{\rm erf}\left(\frac{\beta kL-\frac{\beta k_{D}}{2}\left(\omega \tau+\frac{1}{2}\sin 2\omega\tau\right)}{\sqrt{2\beta k}}\right)+\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.{\rm erf} \left(\frac{\beta kL+\frac{\beta k_{D}}{2}\left(\omega\tau+\frac{1}{2}\sin 2 \omega\tau\right)}{\sqrt{2\beta k}}\right)\right] \tag{3.29}\] \[I_{p}^{\rm(res)}=\frac{1}{2\ {\rm erf}\left(\sqrt{\frac{\beta}{2m}}M \right)}\left[{\rm erf}\left(\frac{\frac{\beta}{m}M-\frac{\beta k_{D}/m}{4 \omega}\left(1-\cos 2\omega\tau\right)}{\sqrt{2\beta/m}}\right)+\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\left.{\rm erf}\left( \frac{\frac{\beta}{m}M+\frac{\beta k_{D}/m}{4\omega}\left(1-\cos 2\omega\tau\right)}{ \sqrt{2\beta/m}}\right)\right] \tag{3.30}\] Because \(\omega\) can be considered an intrinsic property of the system coupled to the driving 
mechanism, we take it as fixed. Then, Equation (3.27), together with (3.28)-(3.30), shows that the average of the exponential of the Jarzynski work for a bounded ensemble of initial states depends on the protocol time \(\tau\). Indeed, for a sinusoidal protocol, there is an infinite set of values of \(\tau\) that yields the same final value \(\lambda(\tau)=B\). In particular, Eq. (3.29) shows that \(I_{x}^{\rm(res)}\) may even approach \(0\) or \(2\), however large \(L\) is taken, for sufficiently large \(\tau\). Indeed, the first error function in Eq. (3.29) tends to \(-1\), while the other tends to \(1\), as \(\tau\) grows, all the other parameters being fixed. On the other hand, small \(\tau\) implies a sum of two equal quantities, which approaches \(2\) for large \(L\). Tuning the value of \(\tau\), one observes quite a sensitive protocol dependence of the average (3.27). This is illustrated in Figs. 3.2 and 3.3. The cause of this behaviour, in the presence of a resonance, is the fact that the amplitude of the oscillator position grows linearly in time, yielding the \(\omega\tau\) term in the arguments of the error functions of \(I_{x}^{\rm(res)}\).

### Coupled Oscillators with Periodic Forcing

In this subsection, a single oscillator, S, is harmonically tied to the origin of the real line, and is harmonically driven, as in Sec. 3.2. In addition, S is harmonically coupled to a second oscillator, E. We denote by \(k_{I}\) the stiffness of the harmonic potential linking S and E, and we also assume that E is harmonically bound to the origin of the line, with elastic constant \(k_{E}\). Let the oscillator masses be \(m_{E}\) and \(m_{S}\), and let the phase of S+E be denoted by \(\Gamma=(x_{E},x_{S},p_{E},p_{S})=({\bf x},{\bf v})\), where \(p_{E}=m_{E}v_{E}\) and \(p_{S}=m_{S}v_{S}\) are the momenta associated to each oscillator.
Then, the Hamiltonian of the total system is given by: \[\mathcal{H}({\bf x},{\bf v};\lambda) = \left[\frac{p_{S}^{2}}{2m_{S}}+\frac{k_{S}}{2}x_{S}^{2}+\frac{k_{ D}}{2}\left(\lambda-x_{S}\right)^{2}\right]+\left[\frac{p_{E}^{2}}{2m_{E}}+ \frac{k_{E}}{2}x_{E}^{2}\right]+\frac{k_{I}}{2}(x_{E}-x_{S})^{2} \tag{3.31}\] \[= H_{S}(x_{S},p_{S};\lambda)+H_{E}(x_{E},p_{E})+h_{I}({\bf x}) \tag{3.32}\] where the square brackets delimit the different contributions to the full Hamiltonian, respectively \(H_{S},\ H_{E}\) and \(h_{I}\), as in Eq.(2.1) for the JE theory. As driving term, we take the periodic protocol used above: \(\sin\gamma t\), and we set again \(k=k_{S}+k_{D}\). The equations of motion for this system are the following: \[\begin{cases}m_{S}\ddot{x}_{S}=-kx_{S}+k_{D}\sin\gamma t-k_{I}\left(x_{S}-x_{E }\right)\,\\ m_{E}\ddot{x}_{E}=-k_{E}x_{E}-k_{I}\left(x_{E}-x_{S}\right)\,\end{cases} \tag{3.33}\] with initial conditions \((\mathbf{x}(0),\dot{\mathbf{x}}(0))=(\mathbf{x}_{0},\mathbf{v}_{0})=\Gamma_{0}\). While analytical solutions for this set of equations are conceptually trivial, they are practically involved if \(k_{S}\neq k_{E}\) and \(m_{S}\neq m_{E}\), especially when integrated to compute the left hand side of the JE. On the other hand, they can be quite simply handled in numerical calculations. We have thus numerically sampled the initial conditions \(\Gamma_{0}\) from the truncated canonical distribution, and for each of them we have computed the initial energy \(\mathcal{H}(\Gamma_{0};\lambda(0))\). Then, we have numerically solved Eqs. (3.33) for that \(\Gamma_{0}\), obtaining the final condition \(\Gamma_{\tau}(\Gamma_{0})\), that has been introduced in the final Hamiltonian \(\mathcal{H}(\Gamma_{\tau}(\Gamma_{0});\lambda(\tau))\), to obtain the work as: \[W_{J}(\Gamma_{0})=\mathcal{H}(\Gamma_{\tau}(\Gamma_{0});\lambda(\tau))- \mathcal{H}(\Gamma_{0};\lambda(0)) \tag{3.34}\] as in Eq. 
(2.3), where \(\lambda(0)=A=0\) and \(\lambda(\tau)=\sin\gamma\tau=B\). Collecting many works, with \(\tau\) fixed, we have eventually estimated the quantity \[\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M} \tag{3.35}\]

### Results

Our first observation is that finite size effects make the quantity (3.35) protocol dependent, unlike the case of systems initially in contact with truly infinite reservoirs. Of course, no real reservoir is infinite, but considering it infinite introduces no errors when taking equilibrium averages of standard observables, such as power laws. The situation changes if exponentials of standard observables are considered. For the single oscillator driven by a harmonic trap moving with constant velocity, Fig. 3.1 shows the dependence of (3.35) on \(\ell\) and on \(\beta\), for different values of the harmonic potential stiffness \(k_{p}\): fast and slow protocols yield different ensemble averages. The cases with \(L=1\) and \(M=1\), represented by solid lines, show an abrupt transition at about \(\ell=1\), for small \(k_{p}\). For large \(k_{p}\), dominating the coupling with the driving agent, the result gradually becomes independent of the speed of the process. Increasing the reservoir size to \(L=5\) and \(M=5\), the quantity (3.35) no longer appears to depend on the speed of the process \(\lambda(t)\), cf. dashed lines in Fig. 3.1. In reality, the dash-dotted lines for \(L=M=2\) reveal that the process dependence merely shifts with \(L\) and \(M\), becoming evident at larger \(\ell\). Therefore, process independence for (3.35) is only obtained when \(L=\infty\) and \(M=\infty\). An analogous behaviour is observed as a function of the inverse temperature \(\beta\), with more evident transitions at higher temperatures. The second model analyzed above is even more intriguing, as resonances significantly affect the work done on the system by external perturbations, when finite size effects play a role.
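The resonance factors of Eqs. (3.28)-(3.30) can be evaluated directly. The sketch below, with the assumed values \(\beta=m=k_{D}=k_{p}=1\) and \(L=M=1\), compares two protocol times that reach the same endpoint \(B=\sin\omega\tau\), one full period apart:

```python
import math

# Assumed parameters: beta = m = kD = kp = 1, so k = 2, omega = sqrt(2),
# with a small bath, L = M = 1.
beta, m, kD, kp = 1.0, 1.0, 1.0, 1.0
k = kD + kp
omega = math.sqrt(k / m)
L = M = 1.0

def resonance_average(tau):
    """Product I_exp * I_x * I_p of Eqs. (3.28)-(3.30), at gamma = omega."""
    s = math.sin(omega * tau)
    i_exp = math.exp(0.5 * beta * kD * (kD / k - 1.0) * s * s)
    ax = 0.5 * beta * kD * (omega * tau + 0.5 * math.sin(2.0 * omega * tau))
    i_x = (math.erf((beta * k * L - ax) / math.sqrt(2.0 * beta * k))
           + math.erf((beta * k * L + ax) / math.sqrt(2.0 * beta * k))) \
          / (2.0 * math.erf(math.sqrt(0.5 * beta * k) * L))
    ap = beta * kD / m / (4.0 * omega) * (1.0 - math.cos(2.0 * omega * tau))
    i_p = (math.erf((beta * M / m - ap) / math.sqrt(2.0 * beta / m))
           + math.erf((beta * M / m + ap) / math.sqrt(2.0 * beta / m))) \
          / (2.0 * math.erf(math.sqrt(0.5 * beta / m) * M))
    return i_exp * i_x * i_p

tau1 = 0.3                            # short protocol
tau2 = tau1 + 2.0 * math.pi / omega   # one period later: same endpoint B = sin(omega*tau)
avg_short, avg_long = resonance_average(tau1), resonance_average(tau2)
```

Although the endpoints coincide, the two averages differ markedly: the \(\omega\tau\) term in the arguments of \(I_{x}^{\rm(res)}\) drives the longer protocol toward \(0\).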
Figures 3.2 and 3.3 show that the extension of the phase space volume does not suffice to tame the resonances produced over sufficiently long times \(\tau\). Unlike the case of infinitely large baths, which yield the same result for all \(\tau\), here a protocol dependence arises. The reason is that a harmonic oscillator subject to no friction and to periodic forces performs oscillations whose amplitude grows linearly in time, if the forcing frequency equals the natural frequency of the system. In our example, this happens for \(\gamma=\omega\). Thus, the work done on the system grows together with the amplitude, pushing \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}\) toward \(0\), at and near the resonance.

Figure 3.1: Values of \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}\) for the single harmonic oscillator driven by a constant speed moving harmonic trap, with final protocol value of \(\lambda(\tau)=B=1\). The result is shown as a function of \(\ell\), for different values of \(L\) and \(M\). Solid lines refer to \(L=M=1\), dash-dotted lines to \(L=M=2\), and dashed lines to \(L=M=5\). In the left panel, blue, red, yellow and purple plots refer to \(k_{p}=0.1,\ 1,\ 10,\ 100\), respectively. In the right panel, blue, red, yellow and purple plots refer to \(\beta=1,\ 4,\ 7,\ 10\), respectively. Other parameters are set to \(m=1\) and \(k_{D}=1\).

The inverse temperature \(\beta\) and the global stiffness \(k\) of the potential also have noticeable effects on the behavior of (3.35). In particular, increasing \(\beta\) (reducing the temperature) flattens the curve about the resonance, widening the interval of \(\gamma\) values that make \(\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}\) equal \(0\), rather than the infinite size theoretical value \(1\), cf. Fig. 3.2.
A larger value of \(k\) seems instead to stabilize \(\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}\) and reduce its dependence on \(\gamma\), as shown by Fig. 3.3.

Figure 3.3: Values of \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}\) for the single harmonic oscillator with \(\lambda=\sin\gamma t\), as a function of the forcing frequency \(\gamma\), for different values of \(\Gamma_{0}\) volumes and different final times \(\tau\). Left and right panels refer to \(L=M=1\) and \(L=M=10\) respectively, with \(\tau\) such that \(B=\sin(2\pi)\) for the first case and \(B=\sin(200\pi)\) for the second. Blue, red, yellow and purple plots refer to stiffnesses \(k_{s}=0.1,\ 1,\ 10,\ 100\), respectively. Other parameters are set to \(m=1\), \(\beta=1\) and \(k_{D}=1\).

The pair of oscillators S and E from Sec. 3.3, with periodic forcing on S, shows a similar behavior, at least when the coupled particles have the same mass, \(m_{S}=m_{E}=1\), and the interaction stiffness is sufficiently low (like \(k_{I}=1\)). This is illustrated by the first two panels of Fig. 3.4, where the only parameters varied are \(\beta\) and \(k_{S}\). It is interesting to note how the temperature of the system needs to decrease (thus \(\beta\) to increase) to make \(\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}\) vanish. This is especially evident in the central panel of Fig. 3.4, where none of the tested values of \(k_{S}\) yields \(0\) for \(\beta=1\). On the contrary, \(\beta=100\) yields \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}=0\) for \(k_{S}=1\) and different values of \(\gamma\). The reason is that a higher \(\beta\) implies a narrower distribution, which is hence analogous to a case with smaller \(L\) and \(M\). Note that this is relevant also for infinite baths. A small temperature induces a kind of finite size effect, due to the smallness of the distribution variance.
Such an effect, assuming the infinite space can at least in principle be explored, may be eliminated only at the cost of collecting enormous statistics, which is often impossible. Therefore, the finite ensemble result remains the only physically relevant one. The right panel of Fig. 3.4 shows the quantity (3.35) computed on a variation of the coupled oscillators model, in which the "environment" mass \(m_{E}\) is ten times bigger than the "system" mass \(m_{S}\). To include possible effects due to the efficiency of the energy exchange between system and environment, the stiffness of the interaction potential is varied: first a value of \(k_{I}=0.1\) is implemented, then \(k_{I}=10\) is employed to account for a rapid exchange of energy between the parts, which are then almost rigidly connected. The figures show that the two configurations generate similar results when the initial ensemble is restricted to \(L=M=1\), while noticeably different behavior is observed for a larger system, where \(L=M=5\). The rigidly coupled system produces oscillations of \(\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}\) around the resonance frequency, with the loosely connected case exhibiting even more evident down-peaks in the values of \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}\) about the resonance frequency, as in the case of the periodically forced single oscillator. Moreover, with \(L=M=5\) both configurations lead to peaks that exceed 1. Both weakly and strongly coupled oscillators indicate that the presence of a massive environment drastically magnifies the finite size effects, noticeably deviating from the equality \(\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}=1\).

Figure 3.4: Behaviour of \(\left\langle e^{-\beta W_{J}}\right\rangle_{L,M}\) as a function of the forcing frequency \(\gamma\) for the coupled oscillators model, simulated with \(\tau\) such that \(B=\sin(2\pi)\). Left, center and right panels report the system behavior at varying values of the parameters \(\beta\), \(k_{S}\) and \(k_{I}\), respectively. Markers represent results from numerical simulations, while dotted lines connecting them are a guide for the eye. Left panel: dark blue, light blue, light grey and dark grey lines correspond to \(\beta=0.1,\ 1,\ 10,\ 100\), respectively; other parameters are set to \(m_{S}=m_{E}=1\), \(k_{S}=k_{E}=1\), \(k_{I}=1\) and \(L=M=1\). Center panel: dark blue, light blue, light grey and dark grey lines correspond to \(k_{S}=0.1,\ 1,\ 10,\ 100\), respectively; other parameters are set to \(m_{S}=m_{E}=1\), \(k_{E}=1\), \(k_{I}=1\), \(\beta=1\) and \(L=M=1\). Right panel: dark blue and light blue lines refer to \(k_{I}=10\) and \(k_{I}=0.1\), respectively, with \(L=M=1\); dark grey and light grey plots refer to \(k_{I}=10\) and \(k_{I}=0.1\), respectively, but for \(L=M=5\); remaining parameters are set to \(m_{E}=10\), \(m_{S}=1\), \(k_{S}=k_{E}=1\), \(\beta=1\).

## 4 Irreversible Expansion of an Ideal Gas

Consider a set of \(N\) identical point particles in a 2D rectangular box of length \(2L\). The particles move in straight lines, and collide elastically with the hard boundaries of the box. The box is subdivided into two equal parts by a wall perpendicular to two sides, which can be removed and placed back, according to prescribed protocols. We perform a cycle, starting from the gas confined by the wall in the left half of the box, and in equilibrium at a given temperature \(T\). At time \(t=0\), the system is isolated from the bath and the wall is removed for a certain amount of time \(\tau\). Finally, the wall is placed back. A schematic representation of the system and its dynamics is given in Fig. 4.5. The main observable here is the fraction of particles trapped in the right half of the box at the end of the cycle. Because this fraction depends on the details of the process through which the intermediate wall is removed and reinserted, the final equilibrium state may differ from the initial one, and may depend on the protocol.
For the sake of simplicity, and without any loss of significance, because the particles do not interact, we may replace the 2D container with a straight line segment of length \(2L\). We also assume the particles to start with random initial positions, uniformly distributed in the interval \((-L,0)\), and with initial velocities normally distributed, with mean \(\mu=0\) and standard deviation \(\sigma=\sqrt{k_{B}T/m}\), where \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature of the bath and \(m\) is the mass of the particles.

Figure 4.5: Schematic representation of the irreversible expansion of an ideal gas. A 2D box contains non-interacting particles in equilibrium with a heat bath at temperature \(T\). Panel (a): state of the system before the central wall is removed; all the particles are confined in the left half of the box and undergo specular reflections with the container walls. Panel (b): dynamics of the particles once the central wall is removed. Panel (c): after the wall is reintroduced, a fraction of particles is trapped in the right half of the box.

The fraction of particles escaping from the left half of the box to the right half, in the time interval \([0,\tau]\), is obtained integrating over all initial positions \(x_{0}\) and initial velocities \(v_{0}\) the probability for a particle to move from \((-L,0)\) to \((0,L)\). In the limit of many particles, the number of those leaving the left half and reaching the right half is this probability multiplied by their total number \(N\). We denote by \(N_{L}\) the number of particles in the left half of the box, and by \(N_{R}\) the number of those in the right half of the box, so that \(N=N_{L}+N_{R}\). Now, note that the initial velocities pointing rightward (i.e.
\(v_{0}>0\)) that make a particle starting at \(x_{0}\in(-L,0)\) end in \(x_{\tau}\in(0,L)\) after a time \(\tau\), fulfill the inequalities: \[\frac{1}{\tau}(4nL-x_{0})<v_{0}<\frac{1}{\tau}[(4n+2)L-x_{0}]\,,\quad n=0,1,2,\ldots \tag{4.36}\] because travelling a distance of \((4nL-x_{0})\) brings the particle in the interval \((0,L)\), after a number \(n\) of bounces against the left wall of the container. Going beyond \([(4n+2)L-x_{0}]\) brings the same particle back to the left half of the box. The same reasoning, applied to particles initially pointing leftward (hence \(v_{0}<0\)), shows that a particle is found in the right half of the box at time \(\tau\) if its velocity \(v_{0}\) is such that: \[-\frac{1}{\tau}[(4n+4)L+x_{0}]<v_{0}<-\frac{1}{\tau}[(4n+2)L+x_{0})\,,\quad n=0,1,2,\ldots \tag{4.37}\] The number of particles residing in the right half of the box at time \(\tau>0\), denoted as \(N_{R}(\tau)\), is thus obtained by integrating over all initial positions \(x_{0}\) in the interval \((-L,0)\) and over all initial velocities \(v_{0}\) contained in the intervals given in (4.36) and (4.37). By doing this, one implicitly assumes the system is made of infinitely many particles, therefore the result applies only for large \(N\). For sufficiently large \(N\), the following: \[\frac{N_{R}(\tau)}{N}=\frac{1}{L\sigma\sqrt{2\pi}}\int_{-L}^{0}dx_{0}\left\{ \sum_{n=0}^{\infty}\left[\int_{\frac{1}{\tau}(4nL-x_{0})}^{\frac{1}{\tau}[(4n +2)L-x_{0}]}e^{-v_{0}^{2}/2\sigma^{2}}\mathrm{d}v_{0}+\int_{\frac{1}{\tau}[(4n +2)L+x_{0}]}^{\frac{1}{\tau}[(4n+4)L+x_{0}]}e^{-v_{0}^{2}/2\sigma^{2}}\mathrm{ d}v_{0}\right]\right\} \tag{4.38}\] is thus an accurate prediction for observations. Then, integrating first over the velocity space, exchanging the integral over positions with the infinite sum (which is made possible since each term of the infinite sum is a continuous, bounded function), and finally integrating over initial positions, Eq. 
(4.38) yields \[\begin{array}{rl}\frac{N_{R}(\tau)}{N}=&\frac{1}{2L}\sum_{n=0}^{\infty}\left\{\sqrt{\frac{2}{\pi}}\sigma\tau\left[\exp\left(-\frac{16L^{2}(n+1)^{2}}{2\sigma^{2}\tau^{2}}\right)+\exp\left(-\frac{16L^{2}n^{2}}{2\sigma^{2}\tau^{2}}\right)-2\exp\left(-\frac{L^{2}(4n+2)^{2}}{2\sigma^{2}\tau^{2}}\right)\right]\right.\\ &\qquad\qquad\left.-2(4n+2)L\operatorname{erf}\left[\frac{(4n+2)L}{\sqrt{2}\sigma\tau}\right]+4nL\operatorname{erf}\left[\frac{4nL}{\sqrt{2}\sigma\tau}\right]+(4n+4)L\operatorname{erf}\left[\frac{(4n+4)L}{\sqrt{2}\sigma\tau}\right]\right\}\end{array} \tag{4.39}\] A comparison between this analytical result and numerical simulations of the system is shown in Fig. 4.6. To estimate the relaxation times, in the absence of the intermediate wall, one cannot count on the mean time between two consecutive collisions with the boundaries of the box, given by \(2L\langle 1/v\rangle\), because such a mean does not exist in a 1-dimensional space. However, one may take the distance \(2L\) divided by the mean speed \(\langle|v|\rangle\), \[\widehat{t}=\frac{2L}{\langle|v|\rangle}=\sqrt{2\pi}\frac{L}{\sigma} \tag{4.40}\] as a characteristic time for the dynamics. This quantity estimates the time scale of the relaxation to a uniform distribution of particles, when their number is sufficiently high that recurrence times can be considered infinite for all practical purposes. As indicated by the vertical lines in the left panel of Fig. 4.6, systems with larger \(\beta\), _i.e._ smaller temperature, take longer to reach the uniform distribution of particles in the box, as expected. The point here is that an irreversible process is generated by the motion of the wall, which returns to its initial position at the end of the cycle. The result is that variations of the process time \(\tau\) lead to different results for \(N_{R}\), hence for the free energy of the final equilibrium states.
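The relaxation described by Eqs. (4.38) and (4.40) can be cross-checked by a direct free-flight simulation, folding each trajectory back into \([-L,L]\) to account for the specular reflections. The values \(L=5\), \(\beta=10\), \(m=1\) below mirror those quoted for Fig. 4.6; the particle number and the quadrature resolution are assumptions of this sketch:

```python
import math
import random

# Assumed values: L = 5, beta = 10, m = 1, as in Fig. 4.6 (right panel);
# particle number and quadrature resolution are choices of this sketch.
L, beta, m = 5.0, 10.0, 1.0
sigma = 1.0 / math.sqrt(beta * m)
t_hat = math.sqrt(2.0 * math.pi) * L / sigma   # Eq. (4.40)

def fold(u):
    """Position in [-L, L] after free flight with specular reflections."""
    y = (u + L) % (4.0 * L)
    if y > 2.0 * L:
        y = 4.0 * L - y
    return y - L

def simulate(tau, n=200_000, seed=3):
    """Fraction of particles found in the right half at time tau."""
    rng = random.Random(seed)
    right = 0
    for _ in range(n):
        x0 = rng.uniform(-L, 0.0)
        v0 = rng.gauss(0.0, sigma)
        if fold(x0 + v0 * tau) > 0.0:
            right += 1
    return right / n

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def theory(tau, n_terms=200, n_x=200):
    """Eq. (4.38): velocity integrals via erf, x0 integral by midpoint rule."""
    h, st, total = L / n_x, sigma * tau, 0.0
    for i in range(n_x):
        x0 = -L + (i + 0.5) * h
        for n in range(n_terms):
            total += phi(((4 * n + 2) * L - x0) / st) - phi((4 * n * L - x0) / st)
            total += phi(((4 * n + 4) * L + x0) / st) - phi(((4 * n + 2) * L + x0) / st)
    return total * h / L

tau_mid = 0.25 * t_hat
f_sim, f_th = simulate(tau_mid), theory(tau_mid)
f_long = simulate(10.0 * t_hat, seed=4)
```

At \(\tau=\widehat{t}/4\) the simulation and the semi-analytical evaluation of Eq. (4.38) agree, while for \(\tau\gg\widehat{t}\) the fraction settles at \(N_{R}/N\approx 1/2\).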
For sufficiently large \(\tau\), the process uniquely leads to \(N_{L}=N_{R}\) (although fluctuations occur at any finite \(N\), cf. the right panel of Fig. 4.6), but that is different from the initial state \(N_{L}=N\), \(N_{R}=0\). Therefore, even accepting an ideal infinite thermostat, the variation of free energy between initial and final state does not vanish and depends on the process time \(\tau\), at odds with the JE, _i.e._ with the canonical ensemble from which the JE is derived. In this case, the canonical distribution of momenta is given by a Gaussian. Were the range of momenta finite, further corrections to the canonical prediction would arise. For the dynamics to be Hamiltonian, as the derivation of the JE requires, the wall could be modelled by a repulsive potential \(\Phi\), placed in the center of the box, that diverges at times \(0\) and \(\tau\), and that vanishes at time \(\tau/2\). For instance, the following would do: \[\Phi(x;\lambda)=\left\{\begin{array}{ll}0&\mbox{if}\;\;x\notin[1/2-\epsilon,1/2+\epsilon]\\ \frac{1}{\lambda}-\frac{4}{\tau^{2}}&\mbox{if}\;\;x\in[1/2-\epsilon,1/2+\epsilon]\end{array}\right.\quad\mbox{with}\quad\lambda(t)=t(\tau-t) \tag{4.41}\] with \(2\epsilon>0\) representing the thickness of the wall. In this case, the infinitesimal contribution to \(W_{J}\), given by the interaction of particle \(i\) in position \(x_{i}\) with the wall, for a time \(\mathrm{d}t\), is given by \[\mathrm{d}w_{i}=\left\{\begin{array}{cl}0&\mbox{if}\;\;x_{i}\notin[1/2-\epsilon,1/2+\epsilon]\\ \frac{2t-\tau}{t^{2}(\tau-t)^{2}}\;\mathrm{d}t&\mbox{if}\;\;x_{i}\in[1/2-\epsilon,1/2+\epsilon]\end{array}\right. \tag{4.42}\] which has to be integrated over the time intervals within \([0,\tau]\) such that \(x_{i}\in[1/2-\epsilon,1/2+\epsilon]\), and summed over all particles.

Figure 4.6: _Left panel_: behavior of \(N_{R}/N\) as a function of the protocol duration \(\tau\), for different values of \(\beta\), with \(N=10000\), \(L=5\), \(m=1\). Blue, orange and yellow solid lines refer to Eq.(4.39) for \(\beta=1,\;10,\;100\), respectively, and are obtained by truncating the formula (4.39) to \(n=1000\). The black dotted lines denote the results of the numerical simulations. Dash-dotted vertical lines indicate the characteristic times obtained from Eq. (4.40). _Right panel_: behavior of \(N_{R}/N\) vs. \(\tau\), for different values of \(N\), obtained from numerical simulations. Black, blue and orange solid lines refer to \(N=20,\;100,\;500\) respectively. The other parameters are fixed to \(L=5\), \(\beta=10\), \(m=1\).

Now, the standard canonical formalism is not applicable to this simple example, because the phase space corresponding to the initial equilibrium state is altered when the wall potential is lowered to finite height. It switches from representing an equilibrium state in half the volume of the box, to a different equilibrium state, which occupies the whole box. From the instant in which the particles are allowed to move in the whole box, but are still confined in its left half, the initial canonical distribution no longer describes their state, and cannot be used to compute the free energy difference between the equilibrium states before and after the wall is removed. Nevertheless, this difficulty can be overcome, without affecting the result, considering, as generally and efficiently done (see _e.g._ Section 1.3 of Ref. [27]), that physically relevant space and time scales are finite, although they can be taken as large as one wants. Then, one realizes that a finite but high barrier confines a finite number of particles initially in the left half of the box, with only a negligible fraction \(\epsilon\) of them moving to the other half, for a given time. A finite, but higher barrier confines the particles with the same tolerance \(\epsilon\) for a larger time, or for the same time and a smaller \(\epsilon\).
Given the (arbitrarily large) time and the (arbitrarily small) tolerance considered physically satisfactory, there is a barrier height that produces better confinement, allowing the initial state to be considered an equilibrium state. Then, the protocol dependence of the free energy difference described above remains.

## 5 Concluding remarks

In this work we have discussed simple examples concerning the impact of finite size effects and of broken time reversal symmetry on the calculation of observable averages within the statistical mechanics formalism. It is indeed ever more important to properly describe systems that do not lie within the traditional bounds of statistical mechanics, developed for macroscopic systems at equilibrium, or slowly evolving near equilibrium states. Indeed, contemporary research widely focuses on small and far from equilibrium systems. In the case of equilibrium macroscopic systems, the use of the standard ensembles is fully justified, because the observables of interest are determined by the bulk, not the tails of the probability distributions. This approach is validated both by theory and experiments. On the contrary, fluctuations of properties of interest in the case of small or strongly nonequilibrium systems are comparable to the average signals, and require a proper characterization of the tails of the distributions, which may also be affected by a spontaneous time reversal symmetry breaking, not evident in the equations of motion. Indeed, the interaction with heat reservoirs is often limited to processes that last very short times, so that only small parts of such environments are effectively involved. To illustrate these points we have investigated simple driven harmonic oscillator systems, averaging the popular quantity \(\exp(-\beta W_{J})\) with canonical and truncated canonical averages. We have shown that:

* A single oscillator pulled by a constant speed harmonic trap yields the infinite bath result if the process is not too fast.
It departs from it appreciably and rapidly when the speed of the driving agent grows. The effect is more evident (as expected) for small than for large integration bounds \(L\) and \(M\), for a smaller harmonic constant, and for smaller bath temperatures. In the infinite \(L,M\) limits, the standard canonical result is recovered, but larger and larger \(L\) and \(M\) are required, the smaller are \(k_{p}\) or \(\beta\).
* For a single periodically driven oscillator, the infinite bath result over a multiple of the driving period equals 1. Strong deviations from this value, which even reach 0, are found for a finite bath in finite intervals about the resonance frequency. While the theoretical result is again obtained in the infinite \(L,M\) limit, this is harder if the driving acts for longer times (_i.e._, for a larger number of periods).
* In the case in which the oscillator S is coupled to an oscillator E, the infinite bath value 1 is obtained, apart from oscillations, for sufficiently large driving frequency. Noticeable deviations from that result are still present about specific values of the forcing frequency. In this case, we have no analytical expression for the finite bath result. Therefore, this conclusion is based on numerical data for a finite ensemble of initial conditions, proven robust against variations of the ensemble size.

The last example we have considered implies the breaking of the classical time reversal symmetry and consists of an ideal gas initially in equilibrium with an infinite thermal bath at temperature \(T\), which is confined in the left half of its container by a moving barrier. Initially, the barrier confines all particles to the left half of the container, and that allows an equilibrium state, represented by a uniform distribution for the positions of particles in \([-L,0]\) and a Gaussian distribution for their velocities.
As soon as the barrier allows particles to reach the right half of the container, the phase space changes to one in which positions cover the \([-L,L]\) interval, and the initial ensemble immediately fails to represent an equilibrium state. The observables instead take some time to change. This prevents the application of the standard statistical mechanical formalism, because the phase space of the equilibrium initial state is not the one of the nonequilibrium evolution and, for instance, the Liouville equation fails. A modification in time of the volume of a given system at the very least requires a suitable time dependent scaling of the phase space coordinates [28], for the formalism to apply, but that is not possible in our case, because the volume changes instantaneously. Nevertheless, the experiment can be performed, and a proper formalism for it has been identified, in terms of finite potential barriers and time scales. Discrepancies between the canonical formalism and experimental situations are known to arise when irreversibility emerges: they are intrinsic and not merely due to insufficient statistics. It all boils down to the conclusion that finite size and irreversibility effects similarly lead to protocol dependent averages of exponential quantities such as \(\exp(-\beta W_{J})\). The standard statistical mechanics formalism should be adapted to treat these cases. This fits nicely with the standard statistical mechanical justification of ensembles. For instance, in Ref. [24], Fermi states: _Studying the thermodynamical state of a homogeneous fluid of given volume at given temperature [...] we observe that there is an infinite number of states of molecular motion that correspond to it. With increasing time, the system exists successively in all the dynamical states that correspond to the given thermodynamical state. 
From this point of view we may say that a thermodynamical state is the ensemble of all the dynamical states through which, as a result of the molecular motion, the system is rapidly passing._ and Callen adds _If the transition mechanism among the atomic states is sufficiently effective, the system passes rapidly through all representative atomic states in the course of a macroscopic observation [...]. However, under certain unique conditions, the mechanism of atomic transition may be ineffective and the system may be trapped in a small subset of atypical atomic states. Or, even if the system is not completely trapped, the rate of transition may be so slow that a macroscopic measurement does not yield a proper average over all possible atomic states._ In reality, less than required by Fermi and Callen is needed for ensembles to work, because the observables of interest are generally few and well behaved [26, 29, 30]. But when the standard conditions are severely violated, and the observables call for an accurate representation of large fluctuations, canonical results must be taken with a grain of salt. **Acknowledgements.** LR acknowledges financial support by the Ministero dell'Università e della Ricerca (Italy), grant Dipartimenti di Eccellenza 2018 - 2022 (E11G18000350001). This research was performed under the auspices of the Italian National Group of Mathematical Physics (GNFM) of INdAM. ## Appendix A Some explicit calculations for Sec. 3.1 Retaining the notation of Sec. 2, we denote by \(P_{0}(\Gamma)\) the canonical distribution referring to a specific configuration \(\Gamma\) (system + environment), \[P_{0}(\Gamma)=\frac{1}{Z_{0}}e^{-\beta{\cal H}(\Gamma;A)}\,,\quad\beta=\frac{1}{k_{B}T}\,, \tag{1.43}\] with \(k_{B}\) the Boltzmann constant. 
One thus readily finds: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0}=\frac{1}{Z_{0}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\rm d}x_{0}{\rm d}p_{0}\,e^{-\beta W_{J}(\ell;x_{0},v_{0})}e^{-\beta{\cal H}(x_{0},v_{0};0)}= \tag{1.44}\] \[=\frac{1}{Z_{0}}\exp\left\{-\beta\frac{k_{D}k_{p}B^{2}}{2k}-\beta\frac{k_{D}^{2}\ell^{2}m}{k^{2}}\left(1-\cos\omega B/\ell\right)\right\}\times\] (1.45) \[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\rm d}x_{0}{\rm d}p_{0}\exp\beta\left[\frac{x_{0}k_{D}\ell}{\omega}\sin\omega\frac{B}{\ell}-\frac{p_{0}k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)-\frac{p_{0}^{2}}{2m}-\frac{kx_{0}^{2}}{2}\right]\] where \(Z_{0}\) is the partition function of the initial canonical distribution and it is given by \[Z_{0}=\int_{-\infty}^{\infty}{\rm d}x\,e^{-\beta kx^{2}/2}\int_{-\infty}^{\infty}{\rm d}p\,e^{-\beta p^{2}/2m}=\frac{2\pi}{\beta\omega}. \tag{1.46}\] The double integral in (1.45) can be separated and computed in two parts: \[\int_{-\infty}^{\infty}{\rm d}x\,e^{\beta x\frac{k_{D}\ell}{\omega}\sin\omega\frac{B}{\ell}-\frac{\beta k}{2}x^{2}}=\sqrt{\frac{2\pi}{\beta k}}\exp\left(\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\sin^{2}\omega\frac{B}{\ell}\right) \tag{1.47}\] and \[\int_{-\infty}^{\infty}{\rm d}p\,e^{-\beta\left[p\frac{k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)+\frac{p^{2}}{2m}\right]}=\sqrt{\frac{2\pi m}{\beta}}\exp\left[\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\left(\cos^{2}\omega\frac{B}{\ell}-2\cos\omega\frac{B}{\ell}+1\right)\right]\,, \tag{1.48}\] from which it follows that: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0}=\frac{\beta\omega}{2\pi}\frac{2\pi}{\beta\omega}\exp\left\{-\beta\frac{k_{D}k_{p}B^{2}}{2k}-\beta\frac{k_{D}^{2}\ell^{2}m}{k^{2}}\left(1-\cos\omega B/\ell\right)\right\}\times \tag{1.49}\] \[\exp\left[\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\sin^{2}\omega\frac{B}{\ell}+\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\left(\cos^{2}\omega
\frac{B}{\ell}-2\cos\omega\frac{B}{\ell}+1\right)\right]\] \[=\exp\left\{-\beta\frac{k_{D}k_{p}B^{2}}{2k}\right\}\,.\] Let us now turn to canonical distributions truncated at a given distance \(L\) from the rest position of the oscillator, and at a maximum momentum \(M\). Referring to the model of a single oscillator subject to a linear protocol, treated in Sec. 3.1, we denote: \[P_{0}(x,p)=\frac{1}{Z_{0}(L,M)}\left\{\begin{array}{lcl}e^{-\beta(kx^{2}+p^{2}/m)/2}&\mbox{if}&|x|\leq L\ \ \mbox{and}\ \ |p|\leq M\\ 0&\mbox{if}&|x|>L\ \ \mbox{or}\ \ |p|>M\end{array}\right. \tag{1.50}\] where: \[Z_{0}(L,M)=\int_{-L}^{L}{\rm d}x\,e^{-\beta kx^{2}/2}\int_{-M}^{M}{\rm d}p\,e^{-\beta p^{2}/2m}=\frac{2\pi}{\beta\omega}\,{\rm erf}\left(\sqrt{\frac{\beta k}{2}}L\right){\rm erf}\left(\sqrt{\frac{\beta}{2m}}M\right) \tag{1.51}\] Consequently, the average of \(\exp(-\beta W_{J})\) for a given \(\ell\) now reads: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0;L,M}=\frac{1}{Z_{0}(L,M)}\exp\left[-\beta\frac{k_{D}k_{p}B^{2}}{2k}-\beta\frac{k_{D}^{2}\ell^{2}m}{k^{2}}\left(1-\cos\omega B/\ell\right)\right]\times \tag{1.52}\] \[\int_{-L}^{L}\int_{-M}^{M}\mathrm{d}x_{0}\mathrm{d}p_{0}\exp\left\{\beta\left[\frac{x_{0}k_{D}\ell}{\omega}\sin\omega\frac{B}{\ell}-\frac{p_{0}k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)-\frac{p_{0}^{2}}{2m}-\frac{kx_{0}^{2}}{2}\right]\right\} \tag{1.53}\] where we can separately compute: \[\int_{-L}^{L}\mathrm{d}x\,e^{\beta x\frac{k_{D}\ell}{\omega}\sin\omega\frac{B}{\ell}-\frac{\beta k}{2}x^{2}}=\sqrt{\frac{\pi}{2\beta k}}e^{\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\sin^{2}\omega\frac{B}{\ell}} \tag{1.54}\] \[\times\left[\mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L+\sqrt{\frac{\beta m}{2}}\frac{k_{D}\ell\sin\omega\frac{B}{\ell}}{k}\right)+\mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L-\sqrt{\frac{\beta m}{2}}\frac{k_{D}\ell\sin\omega\frac{B}{\ell}}{k}\right)\right] \tag{1.55}\] and 
\[\int_{-M}^{M}\mathrm{d}p\,e^{-\beta\left[p\frac{k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)+\frac{p^{2}}{2m}\right]}=\sqrt{\frac{\pi m}{2\beta}}e^{\frac{\beta k_{D}^{2}\ell^{2}m}{2k^{2}}\left(\cos\omega\frac{B}{\ell}-1\right)^{2}} \tag{1.56}\] \[\times\left[\mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M-\sqrt{\frac{\beta m}{2}}\frac{k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)\right)\right.\] (1.57) \[\left.+\,\mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M+\sqrt{\frac{\beta m}{2}}\frac{k_{D}\ell}{k}\left(\cos\omega\frac{B}{\ell}-1\right)\right)\right] \tag{1.58}\] Therefore, one obtains: \[\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0;L,M}=I_{exp}\cdot I_{x}\cdot I_{p} \tag{1.59}\] with \[I_{exp}=\exp\left\{-\beta\frac{k_{D}k_{p}B^{2}}{2k}\right\}=\left\langle e^{-\beta W_{J,\ell}}\right\rangle_{0} \tag{1.60}\] \[I_{x}=\frac{\mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L+\sqrt{\frac{\beta m}{2}}\frac{k_{D}}{k}\ell\sin\omega\frac{B}{\ell}\right)+\mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L-\sqrt{\frac{\beta m}{2}}\frac{k_{D}}{k}\ell\sin\omega\frac{B}{\ell}\right)}{2\ \mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L\right)}\] (1.61) \[I_{p}=\frac{\mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M+\sqrt{\frac{\beta m}{2}}\frac{k_{D}}{k}\ell\left(\cos\omega\frac{B}{\ell}-1\right)\right)+\mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M-\sqrt{\frac{\beta m}{2}}\frac{k_{D}}{k}\ell\left(\cos\omega\frac{B}{\ell}-1\right)\right)}{2\ \mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M\right)} \tag{1.62}\] ## Appendix B Some explicit calculations for Sec. 3.2 For the model of a single oscillator subject to a periodic forcing, discussed in Sec. 
3.2, one has: \[\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}=\frac{1}{Z_{0}(L,M)}\times\] \[\exp\left\{-\frac{\beta k_{D}}{4}\left(1-\frac{k_{D}/m}{\omega^{2} -\gamma^{2}}\right)(1-\cos 2\gamma\tau)-\frac{\beta k_{D}^{2}\gamma^{2}/m}{( \omega^{2}-\gamma^{2})^{2}}\left(1-\frac{\gamma}{\omega}\sin\gamma\tau\sin \omega\tau-\cos\gamma\tau\cos\omega\tau\right)\right\}\times\] \[\int_{-L}^{L}\mathrm{d}x\,\exp\left\{-\frac{\beta k}{2}x^{2}+ \frac{\beta k_{D}\gamma\omega}{\omega^{2}-\gamma^{2}}\left(\cos\gamma\tau\sin \omega\tau-\frac{\gamma}{\omega}\sin\gamma\tau\cos\omega\tau\right)x\right\}\times\] \[\int_{-M}^{M}\mathrm{d}p\,\exp\left\{-\frac{\beta}{2m}p^{2}+ \frac{\beta k_{D}\gamma/m}{\omega^{2}-\gamma^{2}}\left(1-\frac{\gamma}{\omega }\sin\gamma\tau\sin\omega\tau-\cos\gamma\tau\cos\omega\tau\right)p\right\} \tag{2.63}\] which thus leads to: \[\left\langle e^{-\beta W_{J}}\right\rangle_{0;L,M}=I_{exp}\cdot I_{x}\cdot I_ {p} \tag{2.64}\] where \[I_{exp}=\exp\left\{-\frac{\beta k_{D}}{4}\left(1-\frac{k_{D}/m}{ \omega^{2}-\gamma^{2}}\right)(1-\cos 2\gamma\tau)\right. \tag{2.65}\] \[\left.-\frac{\beta k_{D}^{2}\gamma^{2}}{(\omega^{2}-\gamma^{2})^{2 }}\left[\frac{1}{m}\left(1-\frac{\gamma}{\omega}\sin\gamma\tau\sin\omega\tau- \cos\gamma\tau\cos\omega\tau\right)\right.\right.\] (2.66) \[\left.\left.-\frac{1}{2m}\left(1-\frac{\gamma}{\omega}\sin\gamma \tau\sin\omega\tau-\cos\gamma\tau\cos\omega\tau\right)^{2}\right.\right.\] (2.68) \[\left.\left.-\frac{\omega^{2}}{2k}\left(\cos\gamma\tau\sin\omega \tau-\frac{\gamma}{\omega}\sin\gamma\tau\cos\omega\tau\right)^{2}\right]\right\} \tag{2.69}\] \[I_{x}=\frac{1}{2\,\,\mathrm{erf}\left(\sqrt{\frac{\beta k}{2}}L \right)}\left[\mathrm{erf}\left(\frac{\beta kL-\frac{\beta k_{D}\gamma\omega} {\omega^{2}-\gamma^{2}}\left(\cos\gamma\tau\sin\omega\tau-\frac{\gamma}{ \omega}\sin\gamma\tau\cos\omega\tau\right)}{\sqrt{2\beta k}}\right)+\right. 
\tag{2.70}\] \[\left.\mathrm{erf}\left(\frac{\beta kL+\frac{\beta k_{D}\gamma \omega}{\omega^{2}-\gamma^{2}}\left(\cos\gamma\tau\sin\omega\tau-\frac{\gamma} {\omega}\sin\gamma\tau\cos\omega\tau\right)}{\sqrt{2\beta k}}\right)\right] \tag{2.71}\] \[I_{p}=\frac{1}{2\,\,\mathrm{erf}\left(\sqrt{\frac{\beta}{2m}}M \right)}\left[\mathrm{erf}\left(\frac{\frac{\beta}{m}M-\frac{\beta k_{D}\gamma /m}{\omega^{2}-\gamma^{2}}\left(1-\frac{\gamma}{\omega}\sin\gamma\tau\sin \omega\tau-\cos\gamma\tau\cos\omega\tau\right)}{\sqrt{2\beta/m}}\right)\right. \tag{2.72}\] \[\left.+\mathrm{erf}\left(\frac{\frac{\beta}{m}M+\frac{\beta k_{D} \gamma/m}{\omega^{2}-\gamma^{2}}\left(1-\frac{\gamma}{\omega}\sin\gamma\tau\sin \omega\tau-\cos\gamma\tau\cos\omega\tau\right)}{\sqrt{2\beta/m}}\right)\right]\,. \tag{2.73}\]
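The limiting behaviour of the truncated averages can be checked numerically. The sketch below (with illustrative, hypothetical parameter values, not taken from the paper) evaluates the truncation factors \(I_{x}\) and \(I_{p}\) of Appendix A with Python's `math.erf`, and verifies that the canonical infinite-bath result is recovered, i.e. \(I_{x}I_{p}\to 1\), as the integration bounds \(L\) and \(M\) grow:

```python
import math

def I_x(L, beta, k, m, kD, ell, B):
    """x-truncation factor of Appendix A; tends to 1 as L -> infinity."""
    w = math.sqrt(k / m)  # oscillator frequency omega = sqrt(k/m)
    a = math.sqrt(beta * k / 2) * L
    b = math.sqrt(beta * m / 2) * (kD / k) * ell * math.sin(w * B / ell)
    return (math.erf(a + b) + math.erf(a - b)) / (2 * math.erf(a))

def I_p(M, beta, k, m, kD, ell, B):
    """p-truncation factor of Appendix A; tends to 1 as M -> infinity."""
    w = math.sqrt(k / m)
    a = math.sqrt(beta / (2 * m)) * M
    b = math.sqrt(beta * m / 2) * (kD / k) * ell * (math.cos(w * B / ell) - 1)
    return (math.erf(a + b) + math.erf(a - b)) / (2 * math.erf(a))

# illustrative parameters (hypothetical, chosen only for this check)
beta, k, m, kD, ell, B = 1.0, 1.0, 1.0, 0.5, 2.0, 1.0

# small bounds: the truncated average deviates from the infinite-bath value
small = I_x(1.0, beta, k, m, kD, ell, B) * I_p(1.0, beta, k, m, kD, ell, B)
# large bounds: I_x * I_p -> 1 and the canonical result is recovered
large = I_x(20.0, beta, k, m, kD, ell, B) * I_p(20.0, beta, k, m, kD, ell, B)
print(small, large)
```

Since the finite-bath average is \(I_{exp}\cdot I_{x}\cdot I_{p}\), any departure of \(I_{x}I_{p}\) from 1 is exactly the finite-bath correction to the canonical value \(I_{exp}\).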
2310.15652
Semiprojectivity of the moduli of principal $G$-bundles with $\lambda$-connections
Let $X$ be a compact connected Riemann surface of genus $g \geq 2$ and $G$ a nontrivial connected reductive affine algebraic group over $\mathbb{C}$. We prove the semiprojectivity of the moduli spaces of semistable $G$-Higgs bundles and $G$-bundles with $\lambda$-connections of fixed topological type $d\in \pi_1(G)$.
Sumit Roy, Anoop Singh
2023-10-24T09:09:14Z
http://arxiv.org/abs/2310.15652v2
# Motivic invariance for principal \(G\)-bundles with \(\lambda\)-connections ###### Abstract. Let \(X\) be a compact connected Riemann surface of genus \(g\geq 2\) and \(G\) a non-trivial connected reductive affine algebraic group over \(\mathbb{C}\). We consider the moduli spaces of regularly stable \(G\)-Higgs bundles and holomorphic \(G\)-connections of fixed topological type \(d\in\pi_{1}(G)\). We show that these two moduli spaces have the same Grothendieck motives and their \(E\)-polynomials are equal. Also, we show that their Hodge structures are pure and isomorphic. Key words and phrases: Motives, Principal bundles, Higgs bundles, Holomorphic connections, Hodge structures, \(E\)-polynomial 2020 Mathematics Subject Classification: 14C15, 14C30, 13D15, 14D20, 14D23, 70G45 ## 1. Introduction Let \(X\) be a connected compact Riemann surface of genus \(g\geq 2\). Let \(G\) be a nontrivial connected reductive affine algebraic group over \(\mathbb{C}\). In this article, we consider the moduli space \(\mathcal{M}_{\rm Higgs}^{d}(G)\) (resp. \(\mathcal{M}_{\rm conn}^{d}(G)\)) of semistable \(G\)-Higgs bundles (resp. holomorphic \(G\)-connections) of fixed topological type \(d\in\pi_{1}(G)\). These two moduli spaces are not smooth. But if we consider the regularly stable locus (i.e. those elements for which the automorphism group of the underlying principal \(G\)-bundle coincides with the center of \(G\)), then the moduli spaces are smooth. Simpson in [14] considered a family over \(\mathbb{C}\), called the Hodge moduli space, whose fibers over \(0\) and \(1\) are exactly the moduli of Higgs bundles and of holomorphic connections respectively. Also, he produced a homeomorphism between the moduli space of Higgs bundles and the moduli space of holomorphic connections, which is known as the non-abelian Hodge correspondence (see [13], [14], [15]). 
In general, these two moduli spaces have singularities, but if we consider the case where the rank and degree are coprime, then they are smooth. In [7, Theorem 6.2], Hausel and Thaddeus considered the coprime rank and degree case and proved that these two moduli spaces have the same \(E\)-polynomials and their Hodge structure is pure. The Hodge moduli space is a semiprojective variety for the natural \(\mathbb{C}^{*}\)-action [8]. In [9], Hoskins and Lehalleur established a motivic version of the non-abelian Hodge correspondence. In this article, we consider the _Grothendieck motives_. Let \({\rm Var}_{\mathbb{C}}\) denote the category of quasi-projective varieties over \(\mathbb{C}\), let \(K({\rm Var}_{\mathbb{C}})\) denote the _Grothendieck ring of varieties_, and let \(\hat{K}({\rm Var}_{\mathbb{C}})\) be the dimensional completion of \(K({\rm Var}_{\mathbb{C}})\). For a quasi-projective complex variety \(Z\), we call \([Z]\in\hat{K}({\rm Var}_{\mathbb{C}})\) the _Grothendieck motive_ (or, simply, _motive_) of \(Z\). If \(Z\) has a pure Hodge structure, then the \(E\)-polynomial of \(Z\) is defined by \[E(Z)=E(Z)(u,v)=\sum_{p,q=0}^{n}(-1)^{p+q}h_{c}^{p,q}(Z)u^{p}v^{q},\] where \(n=\dim Z\) and \(h_{c}^{p,q}(Z)=\dim{\rm H}_{c}^{p,q}(Z)\). Using the smoothness and the semiprojectivity of the moduli spaces \(\mathcal{M}^{d,rs}_{\rm Higgs}(G)\subset\mathcal{M}^{d}_{\rm Higgs}(G)\) (respectively, \(\mathcal{M}^{d,rs}_{\rm conn}(G)\subset\mathcal{M}^{d}_{\rm conn}(G)\)) of regularly stable principal \(G\)-Higgs bundles (respectively, regularly stable holomorphic \(G\)-connections), we show that their Grothendieck motives and \(E\)-polynomials are equal, i.e. \[[\mathcal{M}^{d,rs}_{\rm Higgs}(G)] =[\mathcal{M}^{d,rs}_{\rm conn}(G)]\;\;\text{and}\] \[E(\mathcal{M}^{d,rs}_{\rm Higgs}(G)) =E(\mathcal{M}^{d,rs}_{\rm conn}(G))\] (see Theorem 3.3). In the same Theorem 3.3, we also show that these two moduli spaces have pure Hodge structures and that they are isomorphic, i.e. 
\[\mathrm{H}^{\bullet}(\mathcal{M}^{d,rs}_{\rm Higgs}(G))\cong\mathrm{H}^{\bullet}(\mathcal{M}^{d,rs}_{\rm conn}(G)).\] We prove that the regularly stable principal Hodge moduli space \(\mathcal{M}^{d,rs}_{\rm Hod}(G)\) satisfies the following two identities: \[[\mathcal{M}^{d,rs}_{\rm Hod}(G)] =\mathbb{L}[\mathcal{M}^{d,rs}_{\rm Higgs}(G)]\quad\text{and}\] \[E(\mathcal{M}^{d,rs}_{\rm Hod}(G)) =uvE(\mathcal{M}^{d,rs}_{\rm Higgs}(G)),\] where \(\mathbb{L}\) is the Lefschetz motive (see Theorem 3.3 for the proof). ## 2. Preliminaries Let \(K_{X}\) denote the holomorphic cotangent bundle on \(X\). Let \(G\) be a connected reductive affine algebraic group over \(\mathbb{C}\) and let \(\mathfrak{g}=\mathrm{Lie}(G)\) be the Lie algebra of \(G\). The adjoint action of \(G\) on \(\mathfrak{g}\) is denoted by \[\mathrm{ad}:G\longrightarrow\mathrm{End}(\mathfrak{g}).\] **Definition 1**.: A _holomorphic principal \(G\)-bundle_ over \(X\) is a complex manifold \(E_{G}\) together with a surjective holomorphic map \(p:E_{G}\to X\) and a holomorphic right action \(\phi:E_{G}\times G\longrightarrow E_{G}\) of \(G\) on \(E_{G}\) such that the following conditions hold: 1. \(p\circ\phi=p\circ p_{1}\), where \(p_{1}:E_{G}\times G\longrightarrow E_{G}\) is the projection map, and 2. the map \[E_{G}\times G \longrightarrow E_{G}\times_{X}E_{G}\] \[(y,g) \mapsto(y,\phi(y,g))\] to the fiber product is a biholomorphism. The right action of \(G\) on \(E_{G}\) together with the adjoint action of \(G\) on \(\mathfrak{g}\) gives a \(G\)-action on \(E_{G}\times\mathfrak{g}\) defined by \[(v,\xi)\cdot g=(v\cdot g,\mathrm{ad}(g^{-1})(\xi)),\;\;\forall\;(v,\xi)\in E_{G}\times\mathfrak{g},\;g\in G.\] The associated quotient bundle \[E_{G}\times^{G}\mathfrak{g}\coloneqq(E_{G}\times\mathfrak{g})/G\] is called the _adjoint vector bundle_ of \(E_{G}\) and it is denoted by \(\mathrm{ad}(E_{G})\). 
The topological type of a holomorphic principal \(G\)-bundle \(E_{G}\) over \(X\) corresponds to an element of the fundamental group \(\pi_{1}(G)\) (see [11]), and this is a finitely generated abelian group. **Definition 2**.: A holomorphic principal \(G\)-bundle \(E_{G}\) is called _stable_ (respectively, _semistable_) if for every maximal parabolic subgroup \(P\subset G\) and every holomorphic reduction \(E_{P}\) of the structure group of \(E_{G}\) to \(P\), \[\deg(\operatorname{ad}(E_{P}))<0\;\;(\text{respectively},\;\;\leq 0\,)\] where \(\operatorname{ad}(E_{P})\subset\operatorname{ad}(E_{G})\) is the adjoint vector bundle of \(E_{P}\). **Definition 3**.: A stable principal \(G\)-bundle \(E_{G}\) is called _regularly stable_ if \(\operatorname{Aut}(E_{G})=\operatorname{Z}(G)\), i.e. the automorphism group of \(E_{G}\) coincides with the center of \(G\). Let \(\mathcal{M}^{d}(G)\) denote the moduli space of semistable holomorphic \(G\)-bundles over \(X\) of topological type \(d\in\pi_{1}(G)\). It is well known that the moduli space \(\mathcal{M}^{d}(G)\) is an irreducible normal projective complex variety of dimension \[\dim\mathcal{M}^{d}(G)=(g-1)\cdot\dim_{\mathbb{C}}G\] (see [11], [12] for more details). The moduli space \[\mathcal{M}^{d,rs}(G)\subset\mathcal{M}^{d}(G)\] of regularly stable principal \(G\)-bundles is an open subvariety and exactly the smooth locus of \(\mathcal{M}^{d}(G)\) (see [5, Corollary 3.4]). ### \(G\)-Higgs bundles **Definition 4**.: A principal \(G\)_-Higgs bundle_ over \(X\) is a pair \((E_{G},\varphi)\) where \(E_{G}\) is a holomorphic principal \(G\)-bundle and \[\varphi\in\operatorname{H}^{0}(X,\operatorname{ad}(E_{G})\otimes K_{X})\] is a holomorphic section, called the _Higgs field_[10, 13]. 
**Definition 5**.: A principal \(G\)_-Higgs bundle_ \((E_{G},\varphi)\) is called _stable_ (respectively, _semistable_) if for every holomorphic reduction \(E_{P}\) of the structure group of \(E_{G}\) to a \(\varphi\)-invariant maximal parabolic subgroup \(P\subsetneq G\), i.e. \(\varphi\in\operatorname{H}^{0}(X,\operatorname{ad}(E_{P})\otimes K_{X})\), we have \[\deg(\operatorname{ad}(E_{P}))<0\;\;(\text{respectively},\;\;\leq 0\,).\] A principal \(G\)-Higgs bundle \((E_{G},\varphi)\) over \(X\) is called _regularly stable_ if \(E_{G}\) is regularly stable over \(X\). Let \(\mathcal{M}^{d}_{\text{Higgs}}(G)\) denote the moduli space of semistable principal \(G\)-Higgs bundles over \(X\) of topological type \(d\in\pi_{1}(G)\). Following [15], we know that \(\mathcal{M}^{d}_{\text{Higgs}}(G)\) is a normal irreducible quasi-projective variety over \(\mathbb{C}\) of dimension \[\dim\mathcal{M}^{d}_{\text{Higgs}}(G)=2\dim\mathcal{M}^{d}(G)=2(g-1)\cdot\dim_{\mathbb{C}}G.\] Observe that \(\mathcal{M}^{d}(G)\subset\mathcal{M}^{d}_{\text{Higgs}}(G)\) is a closed subvariety of \(\mathcal{M}^{d}_{\text{Higgs}}(G)\) via the embedding \[\mathcal{M}^{d}(G) \lhook\joinrel\longrightarrow\mathcal{M}^{d}_{\text{Higgs}}(G)\] \[E_{G} \longmapsto(E_{G},0).\] There is a natural \(\mathbb{C}^{*}\)-action on \(\mathcal{M}^{d}_{\text{Higgs}}(G)\) given by \[t\cdot(E_{G},\varphi)\coloneqq(E_{G},t\varphi). \tag{2.1}\] The moduli space \(\mathcal{M}^{d,rs}_{\rm Higgs}(G)\subset\mathcal{M}^{d}_{\rm Higgs}(G)\) of regularly stable \(G\)-Higgs bundles is open and coincides with the smooth locus of \(\mathcal{M}^{d}_{\rm Higgs}(G)\). By deformation theory, the tangent space of \(\mathcal{M}^{d,rs}(G)\) at \(E_{G}\) is isomorphic to \(\mathrm{H}^{1}(X,\mathrm{ad}(E_{G}))\). 
By Serre duality, we have \[\mathrm{H}^{0}(X,\mathrm{ad}(E_{G})\otimes K_{X})\cong\mathrm{H}^{1}(X,\mathrm{ad}(E_{G}))^{*}.\] Thus the cotangent bundle of \(\mathcal{M}^{d,rs}(G)\), \[T^{*}\mathcal{M}^{d,rs}(G)\subset\mathcal{M}^{d,rs}_{\rm Higgs}(G),\] is an open dense subvariety of \(\mathcal{M}^{d,rs}_{\rm Higgs}(G)\). Thus, we have \[\dim\mathcal{M}^{d,rs}_{\rm Higgs}(G)=2\dim\mathcal{M}^{d,rs}(G).\] ### Holomorphic \(G\)-connections Let \(p\) denote the projection morphism from the total space of \(E_{G}\) to \(X\). For any open subset \(U\subset X\), let \(\mathcal{A}(U)\) denote the space of \(G\)-equivariant holomorphic vector fields on \(p^{-1}(U)\). Let \(\mathcal{A}\) be the coherent sheaf on \(X\) which associates to any \(U\) the vector space \(\mathcal{A}(U)\). The corresponding vector bundle is called the _Atiyah bundle_ for \(E_{G}\) and it is denoted by \(\mathrm{At}(E_{G})\) (see [2]). In fact, it is given by the quotient \[\mathrm{At}(E_{G})\coloneqq(TE_{G})/G\] where \(TE_{G}\) is the holomorphic tangent bundle of \(E_{G}\); so \(\mathrm{At}(E_{G})\) is a holomorphic vector bundle over \(E_{G}/G=X\). Consequently, we have an exact sequence of vector bundles \[0\longrightarrow\mathrm{ad}(E_{G})\longrightarrow\mathrm{At}(E_{G})\stackrel{{\eta}}{{\longrightarrow}}TX\longrightarrow 0, \tag{2.2}\] where \(TX\) is the holomorphic tangent bundle of \(X\). The morphism \(\eta\) is defined using the differential \(dp\) of \(p:E_{G}\to X\). Also, note that the adjoint bundle \(\mathrm{ad}(E_{G})\) is the subbundle of \(\mathrm{At}(E_{G})\) obtained as the \(G\)-quotient of the kernel of \(dp\) in \(TE_{G}\). The above short exact sequence (2.2) is known as the _Atiyah exact sequence_ for the principal \(G\)-bundle \(E_{G}\). 
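For orientation (a standard fact, not spelled out in the text): when \(G=\mathrm{GL}(n,\mathbb{C})\) and \(E_{G}\) is the frame bundle of a rank-\(n\) holomorphic vector bundle \(E\), one has \(\operatorname{ad}(E_{G})\cong\operatorname{End}(E)\), and the Atiyah exact sequence becomes

```latex
\[
0 \longrightarrow \operatorname{End}(E) \longrightarrow \operatorname{At}(E)
  \stackrel{\eta}{\longrightarrow} TX \longrightarrow 0 .
\]
% A holomorphic splitting of this sequence is the same datum as a holomorphic
% connection on E in the classical sense, i.e. a C-linear map
\[
\nabla : E \longrightarrow E \otimes K_{X}, \qquad
\nabla(f s) = f\,\nabla(s) + s \otimes \mathrm{d}f ,
\]
% and two such splittings differ by an element of H^0(X, K_X ⊗ End(E)).
```

This classical picture is what the general definitions below recover for an arbitrary reductive group \(G\).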
A _holomorphic connection_ on \(E_{G}\) is a holomorphic splitting of the Atiyah exact sequence, i.e., a holomorphic homomorphism \[\mathcal{D}:TX\longrightarrow\mathrm{At}(E_{G})\] such that \[\eta\circ\mathcal{D}=\mathrm{id}_{TX}\] for the morphism \(\eta\) in the Atiyah sequence (2.2). If \(\mathcal{D}^{\prime}\) is another splitting of (2.2), then \(\mathcal{D}-\mathcal{D}^{\prime}\) is a holomorphic homomorphism from \(TX\) to \(\mathrm{ad}(E_{G})\). Conversely, for any holomorphic section \(s\in\mathrm{H}^{0}(X,K_{X}\otimes\mathrm{ad}(E_{G}))\), if \(\mathcal{D}\) is a splitting of (2.2) then so is \(\mathcal{D}+s\). Therefore, the space of all holomorphic connections on \(E_{G}\) is an affine space for the vector space \(\mathrm{H}^{0}(X,K_{X}\otimes\mathrm{ad}(E_{G}))\). Since \(X\) has complex dimension one, any holomorphic connection on \(E_{G}\) is automatically a flat holomorphic connection compatible with its holomorphic structure. Moreover, since \(\mathrm{H}^{1}(X,K_{X}\otimes\mathrm{ad}(E_{G}))\) parametrizes the space of all extensions of \(TX\) by \(\mathrm{ad}(E_{G})\), the existence of a flat holomorphic connection on the principal bundle \(E_{G}\) is equivalent to the vanishing of the class \(\alpha\in\mathrm{H}^{1}(X,K_{X}\otimes\mathrm{ad}(E_{G}))\) corresponding to the sequence (2.2) (see [3]). By [3, Theorem 4.1], a holomorphic connection on \(E_{G}\) always exists if \(E_{G}\) is semistable. **Definition 6**.: A _holomorphic \(G\)-connection_ is a pair \((E_{G},\mathcal{D})\) where \(E_{G}\) is a holomorphic principal \(G\)-bundle and \(\mathcal{D}\) is a holomorphic connection on \(E_{G}\). Since the degree of a flat vector bundle is zero, a holomorphic \(G\)-connection \((E_{G},\mathcal{D})\) is automatically semistable. Let \(\mathcal{M}^{d}_{\mathrm{conn}}(G)\) denote the moduli space of holomorphic \(G\)-connections over \(X\) of fixed topological type \(d\in\pi_{1}(G)\). 
By [4], the moduli space \(\mathcal{M}^{d}_{\mathrm{conn}}(G)\) is a normal irreducible quasi-projective variety over \(\mathbb{C}\) of dimension \[\dim\mathcal{M}^{d}_{\mathrm{conn}}(G)=\dim\mathcal{M}^{d}_{\mathrm{Higgs}}(G )=2(g-1)\cdot\dim_{\mathbb{C}}G.\] The moduli space \(\mathcal{M}^{d,rs}_{\mathrm{conn}}(G)\subset\mathcal{M}^{d}_{\mathrm{conn}}(G)\) of regularly stable holomorphic \(G\)-connections is open and the smooth locus of \(\mathcal{M}^{d}_{\mathrm{conn}}(G)\). ### \(\lambda\)-connections Let \(p:E_{G}\to X\) be a holomorphic principal \(G\)-bundle over \(X\) and let \(\lambda\in\mathbb{C}\). **Definition 7**.: A \(\lambda\)_-connection_ on \(E_{G}\) over \(X\) is a holomorphic map of vector bundles \[\nabla:TX\longrightarrow\mathrm{At}(E_{G})\] such that \(\eta\circ\nabla=\lambda\cdot\mathrm{id}_{TX}\) for the morphism \(\eta\) in the Atiyah sequence (2.2). If \(\nabla\) is a \(\lambda\)-connection on \(E_{G}\) with \(\lambda\neq 0\), then \(\lambda^{-1}\nabla\) is a holomorphic \(G\)-connection on \(E_{G}\). Therefore, \((E_{G},\nabla)\) is automatically semistable for \(\lambda\neq 0\). Let \(\mathcal{M}^{d}_{\mathrm{Hod}}(G)\) be the moduli space consisting of triples \((E_{G},\lambda,\nabla)\), where \(\lambda\in\mathbb{C}\), \(E_{G}\) is a principal \(G\)-bundle over \(X\) of topological type \(d\in\pi_{1}(G)\) and \(\nabla\) is a semistable \(\lambda\)-connection on \(E_{G}\) (see [15], [4] for details). Let \[\mathcal{M}^{d,rs}_{\mathrm{Hod}}(G)\subset\mathcal{M}^{d}_{\mathrm{Hod}}(G)\] denote the open smooth locus of triples \((E_{G},\lambda,\nabla)\) such that \(E_{G}\) is regularly stable. There is a canonical surjective algebraic map \[\pi:\mathcal{M}^{d}_{\mathrm{Hod}}(G) \longrightarrow\mathbb{C} \tag{2.3}\] \[(E_{G},\lambda,\nabla) \longmapsto\lambda.\] The fiber \(\pi^{-1}(0)\) over \(0\in\mathbb{C}\) is actually the moduli space of semistable \(G\)-Higgs bundles over \(X\), i.e. 
\[\mathcal{M}^{d}_{\mathrm{Higgs}}(G)=\pi^{-1}(0)\subset\mathcal{M}^{d}_{\mathrm{Hod}}(G).\] The natural \(\mathbb{C}^{*}\)-action (2.1) on \(\mathcal{M}^{d}_{\mathrm{Higgs}}(G)\) extends to a \(\mathbb{C}^{*}\)-action on the Hodge moduli space \(\mathcal{M}^{d}_{\mathrm{Hod}}(G)\) defined by \[t\cdot(E_{G},\lambda,\nabla)\coloneqq(E_{G},t\lambda,t\nabla). \tag{2.4}\] If we consider the case \(\lambda=1\), then the fiber \(\pi^{-1}(1)\) is the moduli space \(\mathcal{M}^{d}_{\mathrm{conn}}(G)\) of holomorphic \(G\)-connections on \(X\). ### Semiprojective varieties **Definition 8**.: Let \(V\) be a quasi-projective variety over \(\mathbb{C}\), equipped with a \(\mathbb{C}^{*}\)-action \(v\mapsto t\cdot v\), \(v\in V,t\in\mathbb{C}^{*}\). We say that \(V\) is _semiprojective_ if it satisfies: 1. for all \(v\in V\), the limit \[\lim_{t\to 0}(t\cdot v)\in V\] exists in \(V\), 2. the fixed point subvariety \(V^{\mathbb{C}^{*}}\subset V\) is proper in \(V\). ### Semiprojectivity of the moduli space of \(G\)-Higgs bundles Recall that the moduli space \(\mathcal{M}^{d}_{\rm Higgs}(G)\) of \(G\)-Higgs bundles admits a standard \(\mathbb{C}^{*}\)-action \[t\cdot(E_{G},\varphi)=(E_{G},t\varphi),\] i.e. if \((E_{G},\varphi)\) is semistable (resp. stable) then \((E_{G},t\varphi)\) is semistable (resp. stable) for all \(t\in\mathbb{C}^{*}\). Let \(\operatorname{rank}(G)=r\). Then the Hitchin map is given by \[h:\mathcal{M}^{d}_{\rm Higgs}(G) \longrightarrow\mathcal{H}:=\bigoplus_{i=1}^{r}\mathrm{H}^{0}(X,K_{X}^{d_{i}})\] \[(E_{G},\varphi) \mapsto(p_{1}(\varphi),\ldots,p_{r}(\varphi))\] where \(\{p_{1},\ldots,p_{r}\}\) is a homogeneous basis for the ring of invariant polynomials on \(\operatorname{Lie}(G)=\mathfrak{g}\) and \(d_{i}\) is the degree of \(p_{i}\). 
**Lemma 2.1**.: _The Hitchin map \(h:\mathcal{M}^{d}_{\rm Higgs}(G)\to\mathcal{H}\) is \(\mathbb{C}^{*}\)-equivariant._ Proof.: The Hitchin base \(\mathcal{H}\) admits a standard \(\mathbb{C}^{*}\)-action which is given by \[t\cdot(s_{1},s_{2},\ldots,s_{r})=(t^{d_{1}}s_{1},t^{d_{2}}s_{2},\ldots,t^{d_{r}}s_{r}).\] Let \(h(E_{G},\varphi)=(s_{1},s_{2},\ldots,s_{r})\). Then, \[h(t\cdot(E_{G},\varphi)) =h(E_{G},t\varphi)\] \[=(p_{1}(t\varphi),\ldots,p_{r}(t\varphi))\] \[=(t^{d_{1}}p_{1}(\varphi),t^{d_{2}}p_{2}(\varphi),\ldots,t^{d_{r}}p_{r}(\varphi))\] \[=(t^{d_{1}}s_{1},t^{d_{2}}s_{2},\ldots,t^{d_{r}}s_{r})\] \[=t\cdot(s_{1},s_{2},\ldots,s_{r})\] \[=t\cdot h(E_{G},\varphi).\] Hence, \(h\) is \(\mathbb{C}^{*}\)-equivariant. To prove the semiprojectivity of the moduli space, we need to show that the moduli space \(\mathcal{M}^{d}_{\rm Higgs}(G)\) satisfies the conditions of Definition 8. **Lemma 2.2**.: _Let \((E_{G},\varphi)\in\mathcal{M}^{d}_{\rm Higgs}(G)\) be a semistable \(G\)-Higgs bundle. Then the limit \(\lim_{t\to 0}(E_{G},t\varphi)\) exists in \(\mathcal{M}^{d}_{\rm Higgs}(G)\)._ Proof.: Consider the morphism \[f:\mathbb{C}^{*}\longrightarrow\mathcal{M}^{d}_{\rm Higgs}(G)\] given by \(t\mapsto(E_{G},t\varphi)\). Since \(h\) is \(\mathbb{C}^{*}\)-equivariant (by Lemma 2.1), we have \[\lim_{t\to 0}h(E_{G},t\varphi)=\lim_{t\to 0}t\cdot h(E_{G},\varphi)=0.\] Thus, the composition map \(F\coloneqq h\circ f:\mathbb{C}^{*}\longrightarrow\mathcal{H}\) extends to a morphism \(\hat{F}:\mathbb{C}\longrightarrow\mathcal{H}\). By the valuative criterion of properness (since \(h\) is proper), \(f\) extends to a morphism \[\hat{f}:\mathbb{C}\longrightarrow\mathcal{M}^{d}_{\rm Higgs}(G).\] Hence, \(\lim_{t\to 0}(E_{G},t\varphi)\) exists in \(\mathcal{M}^{d}_{\rm Higgs}(G)\). 
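The weight computation in the proof of Lemma 2.1 can be illustrated in a toy numerical case. In the sketch below (illustrative only; the matrix `phi` is an arbitrary stand-in for the value of a Higgs field at a point), we take \(G=\mathrm{GL}(2,\mathbb{C})\), for which a homogeneous basis of invariant polynomials is \(p_{1}=\operatorname{tr}\) (degree \(d_{1}=1\)) and \(p_{2}=\det\) (degree \(d_{2}=2\)), so the Hitchin map scales as \((s_{1},s_{2})\mapsto(ts_{1},t^{2}s_{2})\):

```python
def tr(A):
    """Trace of a 2x2 matrix: invariant polynomial of degree 1."""
    return A[0][0] + A[1][1]

def det(A):
    """Determinant of a 2x2 matrix: invariant polynomial of degree 2."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# arbitrary stand-in for a Higgs field evaluated at a point of X
phi = [[1.0, 2.0], [3.0, -4.0]]
t = 5.0
t_phi = [[t * a for a in row] for row in phi]  # the C*-action phi -> t*phi

assert tr(t_phi) == t * tr(phi)        # weight d1 = 1
assert det(t_phi) == t**2 * det(phi)   # weight d2 = 2
print(tr(t_phi), det(t_phi))
```

The same pattern holds for any reductive \(G\): each coordinate of \(h\) picks up the weight \(d_{i}\) of the corresponding invariant polynomial, which is exactly the equivariance used in the proof.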
**Lemma 2.3**.: _The fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}^{d}_{\rm Higgs}(G)\) is proper in \(h^{-1}(0)\)._ Proof.: Note that the origin is the only point on the Hitchin base \(\mathcal{H}\) which is fixed under the \(\mathbb{C}^{*}\)-action. Thus, the fixed point subvariety \(\mathcal{H}^{\mathbb{C}^{*}}\) is the singleton set \(\{0\}\). Since \(h\) is \(\mathbb{C}^{*}\)-equivariant, the fixed point locus \(\mathcal{M}^{d}_{\rm Higgs}(G)^{\mathbb{C}^{*}}\) must be closed in \(h^{-1}(\mathcal{H}^{\mathbb{C}^{*}})=h^{-1}(0)\). Also, since \(h\) is proper, the fiber \(h^{-1}(0)\) is proper. Hence, \(\mathcal{M}^{d}_{\rm Higgs}(G)^{\mathbb{C}^{*}}\) is proper in \(h^{-1}(0)\). **Proposition 1**.: _The moduli space \(\mathcal{M}^{d}_{\rm Higgs}(G)\) of semistable \(G\)-Higgs bundles is a semiprojective variety._ Proof.: Since the moduli space \(\mathcal{M}^{d}_{\rm Higgs}(G)\) is a quasi-projective variety, semiprojectivity follows from Lemmas 2.2 and 2.3. ### Semiprojectivity of the principal Hodge moduli space Recall the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}^{d}_{\rm Hod}(G)\) given as in (2.4). **Lemma 2.4**.: _Let \((E_{G},\lambda,\nabla)\in\mathcal{M}^{d}_{\rm Hod}(G)\) be a \(\lambda\)-connection on \(E_{G}\). Then the limit_ \[\lim_{t\to 0}(E_{G},t\lambda,t\nabla)\] _exists in \(\pi^{-1}(0)\subset\mathcal{M}^{d}_{\rm Hod}(G)\), where \(\pi:\mathcal{M}^{d}_{\rm Hod}(G)\longrightarrow\mathbb{C}\) is the projection map (2.3)._ Proof.: The proof is similar to [16, Corollary 10.2]. 
Consider the following projections \[\pi_{1}:X\times\mathbb{C}\longrightarrow X\ \ \text{and}\ \ \pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}.\] Now consider the \(\mathbb{C}^{*}\)-flat family over \(\pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}\) given by \[(\mathcal{E},t\lambda,\nabla_{\pi_{2}})\coloneqq(\pi_{1}^{*}E_{G},t\lambda,t\pi_{1}^{*}\nabla).\] For any \(t\neq 0\), we know that a principal \(t\lambda\)-connection \((E_{G},t\lambda,t\nabla)\) is semistable if and only if \((E_{G},\lambda,\nabla)\) is semistable. Therefore, the fibers of the above family are semistable for \(t\neq 0\). Following [16, Theorem 10.1], there exists a \(\mathbb{C}\)-flat family \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\) over \(\pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}\) such that \[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\mathbb{C}^{*}}\cong(\pi_{1}^{*}E_{G},t\lambda,t\pi_{1}^{*}\nabla)\] and \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\{0\}}\) is semistable. Therefore, \[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\{0\}}\in\pi^{-1}(0)\] is the limit of the \(\mathbb{C}^{*}\)-orbit of \((E_{G},\lambda,\nabla)\) at \(t=0\) in the moduli space \(\mathcal{M}^{d}_{\rm Hod}(G)\). **Lemma 2.5**.: _The fixed point subvariety \(\mathcal{M}^{d}_{\rm Hod}(G)^{\mathbb{C}^{*}}\) of \(\mathcal{M}^{d}_{\rm Hod}(G)\) is proper in \(\mathcal{M}^{d}_{\rm Hod}(G)\)._ Proof.: The \(\mathbb{C}^{*}\)-action on \(\mathcal{M}^{d}_{\rm Hod}(G)\) is given by \[t\cdot(E_{G},\lambda,\nabla)=(E_{G},t\lambda,t\nabla).\] Thus the fixed point subvariety is exactly the same as the fixed point subvariety under the \(\mathbb{C}^{*}\)-action on \(\pi^{-1}(0)=\mathcal{M}^{d}_{\rm Higgs}(G)\). 
Hence by Lemma 2.3, the fixed point subvariety \[\mathcal{M}^{d}_{\rm Hod}(G)^{\mathbb{C}^{*}}\subset\mathcal{M}^{d}_{\rm Hod}(G)\] is proper. Since the moduli space \(\mathcal{M}^{d,rs}_{\rm Hod}(G)\subset\mathcal{M}^{d}_{\rm Hod}(G)\) of regularly stable Hodge moduli is smooth, we have the following proposition: **Proposition 2**.: _The moduli space \(\mathcal{M}^{d,rs}_{\rm Hod}(G)\) is a smooth semiprojective variety over \(\mathbb{C}\)._ _Moreover, the restriction map_ \[\pi_{\rm rs}\mathrel{\mathop{:}}=\left.\pi\right|_{\mathcal{M}^{d,rs}_{\rm Hod}(G)}:\mathcal{M}^{d,rs}_{\rm Hod}(G)\longrightarrow\mathbb{C}\] _given as in (2.3) is a \(\mathbb{C}^{*}\)-equivariant surjective submersion which also covers the scaling action on \(\mathbb{C}\)._ Proof.: Following arguments similar to those in Lemmas 2.4 and 2.5, we conclude that \(\mathcal{M}^{d,rs}_{\rm Hod}(G)\) is semiprojective. The second part follows immediately from the smoothness of the moduli space \(\mathcal{M}^{d,rs}_{\rm Hod}(G)\). ## 3. Grothendieck motives of the moduli spaces In this section, we recall some basic properties of the Grothendieck ring of varieties and define the notion of a Grothendieck motive. ### Grothendieck ring of varieties We denote by \(\operatorname{Var}_{\mathbb{C}}\) the category of quasi-projective varieties over \(\mathbb{C}\). Let \(G\) be the quotient of the free abelian group generated by isomorphism classes of (quasi-projective) varieties \([Z]\) (where \(Z\in\operatorname{Var}_{\mathbb{C}}\)), modulo the relation \[[Z]=[Z^{\prime}]+[Z\setminus Z^{\prime}]\] for every closed subvariety \(Z^{\prime}\subseteq Z\). In \(G\), the additive and multiplicative structures are given by \[[Z_{1}]+[Z_{2}]\mathrel{\mathop{:}}=[Z_{1}\sqcup Z_{2}],\] and \[[Z_{1}]\cdot[Z_{2}]\mathrel{\mathop{:}}=[Z_{1}\times Z_{2}].\] This gives us a commutative ring \((G,+,\cdot)\), called the _Grothendieck ring of varieties_. We will denote it by \(K(\operatorname{Var}_{\mathbb{C}})\). 
The additive and multiplicative identities of the ring \(K(\operatorname{Var}_{\mathbb{C}})\) are \(0=[\emptyset]\) and \(1=[\operatorname{Spec}(\mathbb{C})]\) respectively. We denote the class of the affine line \(\mathbb{A}^{1}\) by \[\mathbb{L}\mathrel{\mathop{:}}=[\mathbb{A}^{1}]=[\mathbb{C}].\] The class \(\mathbb{L}\) is called the _Lefschetz object_ and its \(n\)-th power is given by \[\mathbb{L}^{n}=[\mathbb{A}^{n}]=[\mathbb{C}^{n}].\] Let \(K(\operatorname{Var}_{\mathbb{C}})[\mathbb{L}^{-1}]\) denote the localization of \(K(\operatorname{Var}_{\mathbb{C}})\) at \(\mathbb{L}\). It has the filtration defined by the subgroups generated by \([Z]\mathbb{L}^{-k}\) with \(\dim(Z)-k\leq n\) for a fixed natural number \(n\). Let \[\hat{K}(\operatorname{Var}_{\mathbb{C}})=\left\{\,\sum_{k\geq 0}[Z_{k}]\mathbb{L}^{-k}\,\,\left|\,\,\right.\,[Z_{k}]\in K(\operatorname{Var}_{\mathbb{C}})\,\text{ with }\,\dim Z_{k}-k\longrightarrow-\infty\right\}\] be the dimensional completion of \(K(\operatorname{Var}_{\mathbb{C}})[\mathbb{L}^{-1}]\). By the Grothendieck motive, we mean the following: **Definition 9**.: For a quasi-projective complex variety \(Z\), the class \([Z]\in K(\operatorname{Var}_{\mathbb{C}})\) or in \(\hat{K}(\operatorname{Var}_{\mathbb{C}})\) is called the _Grothendieck motive_, or just the _motive_ of \(Z\). ### Mixed Hodge structure and \(E\)-polynomial Let \(Z\) be a quasi-projective variety over \(\mathbb{C}\) and \(d=\dim(Z)\) be its dimension. In [6], Deligne showed that the compactly supported \(k\)-th cohomology \(\mathrm{H}^{k}_{c}(Z)\coloneqq\mathrm{H}^{k}_{c}(Z,\mathbb{C})\) is equipped with a mixed Hodge structure for all \(k\in\{0,\dots,2d\}\). 
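The ring operations and the scissor relation can be made concrete for classes that happen to be polynomials in the Lefschetz motive \(\mathbb{L}\). The following sketch (illustrative Python, not from the paper; a class is represented as a dict mapping powers of \(\mathbb{L}\) to integer coefficients) reproduces \([\mathbb{P}^{n}]=\mathbb{L}^{n}+\dots+\mathbb{L}+1\) from the cell decomposition and \([\mathbb{C}^{*}]=\mathbb{L}-1\) from the scissor relation:

```python
# Classes in K(Var_C) that are polynomials in the Lefschetz motive L,
# stored as {power of L: integer coefficient}.

def add(a, b):
    # [Z1] + [Z2] = [Z1 disjoint-union Z2]: add coefficients degreewise.
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def mul(a, b):
    # [Z1] . [Z2] = [Z1 x Z2]: multiply as polynomials in L.
    out = {}
    for i, u in a.items():
        for j, w in b.items():
            out[i + j] = out.get(i + j, 0) + u * w
    return {k: v for k, v in out.items() if v != 0}

def sub(a, b):
    return add(a, {k: -v for k, v in b.items()})

L = {1: 1}           # [A^1]
pt = {0: 1}          # [Spec C]

def affine(n):       # [A^n] = L^n
    return {n: 1}

def proj(n):         # [P^n] = L^n + ... + L + 1 (cell decomposition)
    return {k: 1 for k in range(n + 1)}

# Scissor relation A^1 = C^* + pt gives [C^*] = L - 1.
c_star = sub(L, pt)
```

For instance, the product gives \([\mathbb{P}^{1}\times\mathbb{P}^{1}]=\mathbb{L}^{2}+2\mathbb{L}+1\), and \([\mathbb{P}^{n}]-[\mathbb{A}^{n}]=[\mathbb{P}^{n-1}]\) recovers the scissor relation for the cell decomposition of projective space.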
Let \[h^{k,p,q}(Z)\coloneqq\dim\mathrm{Gr}^{p}_{F}\mathrm{Gr}^{W}_{p+q}(\mathrm{H}^{k}_{c}(Z)),\] where \(p,q\in\{0,\dots,k\}\), \(\mathrm{Gr}^{W}_{p+q}(\mathrm{H}^{k}_{c}(Z))\) is the \((p+q)\)-th graded piece of the increasing weight filtration of \(\mathrm{H}^{k}_{c}(Z)\) and \(\mathrm{Gr}^{p}_{F}\mathrm{Gr}^{W}_{p+q}(\mathrm{H}^{k}_{c}(Z))\) is the \(p\)-th graded piece of the induced decreasing Hodge filtration of \(\mathrm{Gr}^{W}_{p+q}(\mathrm{H}^{k}_{c}(Z))\). One can easily verify that \(h^{k,p,q}(Z)\,=\,h^{k,q,p}(Z)\) and \(\dim\mathrm{H}^{k}_{c}(Z)\,=\,\sum_{p,q=0}^{d}h^{k,p,q}(Z)\). The \(E\)_-polynomial_ of \(Z\) is defined by \[E(Z)\,=\,E(Z;u,v)\,\coloneqq\,\sum_{p,q}\sum_{k}(-1)^{k}h^{k,p,q}(Z)u^{p}v^{q}\in\mathbb{Z}[u,v].\] Note that \(E(Z;1,1)=\chi(Z)\) is exactly the Euler characteristic of the variety \(Z\). **Examples 3.1**.: * \(E(\mathbb{C})=E(\mathbb{A}^{1})=E(\mathbb{P}^{1})-E(\mathrm{pt})=uv\eqqcolon x\), * \(E(\mathbb{P}^{n})=E(\mathbb{A}^{n})+E(\mathbb{A}^{n-1})+\dots+E(\mathbb{A}^{1})+E(\mathbb{A}^{0})=x^{n}+x^{n-1}+\dots+x+1\). If \(Z\) has a pure Hodge structure, then the \(E\)-polynomial of \(Z\) is given by \[E(Z)=\sum_{p,q=0}^{d}(-1)^{p+q}h^{p,q}(Z)u^{p}v^{q} \tag{3.1}\] where \(d=\dim Z\) and \(h^{p,q}(Z)=\dim\mathrm{H}^{p,q}_{c}(Z)\). **Remark 3.2**.: _The \(E\)-polynomial of a quasi-projective variety can be seen as a ring homomorphism_ \[E:K(\mathrm{Var}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\] _which extends to the completion of the Grothendieck ring of varieties_ \[E:\hat{K}(\mathrm{Var}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\left[\left[\frac{1}{uv}\right]\right]\] _(also denoted by \(E\)) with values in the Laurent series in \(uv\). Therefore we can conclude that the \(E\)-polynomials of two quasi-projective varieties with the same motive are equal._ We will apply the following result for a smooth semiprojective complex variety in our setup. 
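As a quick check of (3.1), the Hodge structure of \(\mathbb{P}^{1}\) is pure with \(h^{0,0}=h^{1,1}=1\) and \(h^{1,0}=h^{0,1}=0\), so

```latex
E(\mathbb{P}^{1};u,v)=(-1)^{0}h^{0,0}+(-1)^{2}h^{1,1}\,uv=1+uv=1+x,
\qquad
E(\mathbb{P}^{1};1,1)=2=\chi(\mathbb{P}^{1}),
```

in agreement with Examples 3.1.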
**Proposition 3** ([1], Theorem 5.6).: _Let \(Z\) be a smooth semiprojective complex variety endowed with a \(\mathbb{C}^{*}\)-equivariant surjective submersion \(\tau:Z\to\mathbb{C}\) covering the standard scaling action on \(\mathbb{C}\). Then the following motivic equalities hold in the Grothendieck ring \(\hat{K}(\mathrm{Var}_{\mathbb{C}})\),_ \[[\tau^{-1}(0)]=[\tau^{-1}(1)]\;\;\mathrm{and}\;\;[Z]=\mathbb{L}[\tau^{-1}(0)],\] _where \(\mathbb{L}\) is the Lefschetz motive._ Proof.: See [1, Theorem 5.6] for details. **Theorem 3.3**.: _In \(\hat{K}(\operatorname{Var}_{\mathbb{C}})\) the following equalities hold,_ \[[\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)]=[\mathcal{M}^{d,rs}_{\operatorname{conn}}(G)]\,\,\,\text{and}\,\,\,[\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G)]=\mathbb{L}[\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)].\] _Therefore, their \(E\)-polynomials satisfy_ \[E(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G))=E(\mathcal{M}^{d,rs}_{\operatorname{conn}}(G))\,\,\,\text{and}\,\,\,E(\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G))=uvE(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)).\] _Moreover, the Hodge structures of the moduli spaces \(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)\) and \(\mathcal{M}^{d,rs}_{\operatorname{conn}}(G)\) are pure and isomorphic, i.e._ \[\operatorname{H}^{\bullet}(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G))\cong\operatorname{H}^{\bullet}(\mathcal{M}^{d,rs}_{\operatorname{conn}}(G)).\] Proof.: By Proposition 2, the moduli space \(\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G)\) is a smooth semiprojective variety equipped with the \(\mathbb{C}^{*}\)-action given in (2.4). Also, from the same Proposition 2 it follows that the projection map \[\pi:\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G)\longrightarrow\mathbb{C}\] is a \(\mathbb{C}^{*}\)-equivariant surjective submersion covering the natural scaling action of \(\mathbb{C}^{*}\) on \(\mathbb{C}\). 
Therefore, by Proposition 3, we have \[[\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)]=[\pi^{-1}(0)]=[\pi^{-1}(1)]=[\mathcal{M}^{d,rs}_{\operatorname{conn}}(G)]\] and \[[\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G)]=\mathbb{L}[\pi^{-1}(0)]=\mathbb{L}[\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)].\] Also, from Remark 3.2, we have the following equalities of \(E\)-polynomials, \[E(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)) =E(\mathcal{M}^{d,rs}_{\operatorname{conn}}(G))\,\,\,\text{and}\] \[E(\mathcal{M}^{d,rs}_{\operatorname{Hod}}(G)) =uvE(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G)).\] Finally, following [8, Corollary 1.3.2 and Corollary 1.3.3], we conclude that \[\operatorname{H}^{\bullet}(\mathcal{M}^{d,rs}_{\operatorname{Higgs}}(G))\cong\operatorname{H}^{\bullet}(\mathcal{M}^{d,rs}_{\operatorname{conn}}(G)).\] ## Acknowledgment The first author is supported by the INSPIRE faculty fellowship (Ref No.: IFA22-MA 186) funded by the DST, Govt. of India.
2301.03771
Chatbots in a Honeypot World
Question-and-answer agents like ChatGPT offer a novel tool for use as a potential honeypot interface in cyber security. By imitating Linux, Mac, and Windows terminal commands and providing an interface for TeamViewer, nmap, and ping, it is possible to create a dynamic environment that can adapt to the actions of attackers and provide insight into their tactics, techniques, and procedures (TTPs). The paper illustrates ten diverse tasks that a conversational agent or large language model might answer appropriately to the effects of a command-line attacker. The original result features feasibility studies for ten model tasks meant for defensive teams to mimic expected honeypot interfaces with minimal risks. Ultimately, the usefulness outside of forensic activities stems from whether the dynamic honeypot can extend the time-to-conquer or otherwise delay attacker timelines short of reaching key network assets like databases or confidential information. While ongoing maintenance and monitoring may be required, ChatGPT's ability to detect and deflect malicious activity makes it a valuable option for organizations seeking to enhance their cyber security posture. Future work will focus on cybersecurity layers, including perimeter security, host virus detection, and data security.
Forrest McKee, David Noever
2023-01-10T03:43:35Z
http://arxiv.org/abs/2301.03771v1
# Chatbots in a Honeypot World

###### Abstract

Question-and-answer agents like ChatGPT offer a novel tool for use as a potential honeypot interface in cyber security. By imitating Linux, Mac, and Windows terminal commands and providing an interface for TeamViewer, nmap, and ping, it is possible to create a dynamic environment that can adapt to the actions of attackers and provide insight into their tactics, techniques, and procedures (TTPs). The paper illustrates ten diverse tasks that a conversational agent or large language model might answer appropriately to the effects of a command-line attacker. The original result features feasibility studies for ten model tasks meant for defensive teams to mimic expected honeypot interfaces with minimal risks. Ultimately, the usefulness outside of forensic activities stems from whether the dynamic honeypot can extend the time-to-conquer or otherwise delay attacker timelines short of reaching key network assets like databases or confidential information. While ongoing maintenance and monitoring may be required, ChatGPT's ability to detect and deflect malicious activity makes it a valuable option for organizations seeking to enhance their cyber security posture. Future work will focus on cybersecurity layers, including perimeter security, host virus detection, and data security.

Transformers, Text Generation, Malware Generation, Generative Pre-trained Transformers, GPT

## 1 Introduction

A honeypot is a significant cyber security tool that is used to detect, deflect, and study malicious activity on a computer network [1-4]. It is essentially a trap set up to lure in potential attackers, who are then observed and their actions recorded for later threat analysis. Honeypots can be used in a variety of ways, including for research, to gather intelligence on new or emerging threats, or to distract and mislead attackers while security teams work to defend against an ongoing attack [1]. 
A spectrum exists between low-interaction honeypots, which may expose only ports and no real services, and high-interaction honeypots, which virtualize entire networks using VMWare or User-mode Linux with application-, network- and system-layer features [5]. Making effective traps relies on the realism of the honeypot. Attackers may quickly discover the static elements or missing functional files that tip off a fake asset or operating system facade. Probing services and ports can reveal a fake network asset [6-7]. The rise of cloud and virtual machine images has exacerbated the challenge of mimicking real networks with a passive store-front approach [2]. More dynamic approaches build honeypots that feature real applications but host fake data [6]. An example dynamic honeypot deploys a real SQL database, open to real hacking attempts, all of which culminate in revealing fake personnel or salary data. A hybrid version of the real vs. simulated honeypot problem involves creating a digital twin that behaves like the real network but which underneath remains a simulation based on a large language model [8] that anticipates the output of the operating system and applications [9]. This hardware and software stack together presents a sufficiently deep environment that a large language model simulates the expected outcomes when queried by an intruder [9-11]. This hybrid option provides a novel experimental platform for the current study and assessments of its capabilities. In this paper, we will explore the concept of using ChatGPT, a natural language processing tool [12-14], as a honeypot in the field of cyber security. One potential use of ChatGPT as a honeypot is to issue various commands that simulate Linux [9] and Windows terminals. This can be used to lure in attackers who are specifically targeting these types of systems, and to allow security teams to observe and study their actions [15-16]. 
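The brittleness of a purely static facade described above can be sketched in a few lines (illustrative Python; the canned commands and outputs are invented for this example, not taken from the paper). Any command outside the lookup table exposes the trap, which is exactly the gap the dynamic, LLM-backed approach aims to close:

```python
# A toy static honeypot shell: canned responses for a handful of
# commands. Anything outside the table betrays the facade, which is
# the weakness a dynamic, LLM-driven honeypot tries to avoid.

CANNED = {
    "pwd": "/home/user",
    "ls": "Desktop  Documents  Downloads",
    "whoami": "user",
}

def static_honeypot(command):
    # Unknown commands fall through to a generic, unconvincing reply;
    # a probing attacker quickly notices the missing functionality.
    return CANNED.get(command, "command not found")
```

A dynamic honeypot replaces the fixed `CANNED` table with a model that can answer arbitrary follow-up commands in context.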
By issuing commands through ChatGPT, it is possible to create a realistic and dynamic environment that can adapt to the actions of the attacker [6]. As an attacker explores this new network asset, their commands elicit ever more sophisticated emulation patterns derived from the internet-scale training data underpinning the OpenAI GPT series of transformer architectures [13]. Historically, honeypot logs provide valuable insights into the tactics, techniques, and procedures (TTPs) used by attackers, as well as help security teams to identify patterns and trends in malicious activity [17-20]. Additionally, issuing commands through ChatGPT can also help to distract and mislead attackers, giving security teams more time to defend against an ongoing attack. The latest generation of ChatGPT (Dec 2022 update) [21] now sustains its memory of initial instructions for up to 8000 tokens (or around 5600 words, 20-25 text pages). To translate this coherent "command-driven" conversation to a typical intrusion, the attacker might interact with the emulated honeypot (aka, the chatbot interface) for hours before the simulation requires an instructional reset.

## 2 Methods

The structure of the paper closely follows the detailed instructions and attacker interactions outlined in Appendices A-J as ten tasks related to honeypot construction, detection, or harvesting [1]. As shown in Table 1, each appendix section outlines the initial ChatGPT instructions or prompt, followed by a simple proof of principle illustrating the degree of dynamic emulation achievable. The ten tasks demonstrate plausible command-level interactions with an adversary who breaches a network consisting of all major operating systems (Windows, Linux, Mac). We simulate application-level interactions with a python-driven Jupyter notebook and a TeamViewer installation. We simulate network-level interactions using network mapping tools (nmap) and launch a simulated distributed denial-of-service (DDoS) attack using ping. 
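The shared prompt template used across the ten tasks of Table 1 can be generated programmatically. A minimal sketch (illustrative Python; `build_prompt` and the `TASKS` table are hypothetical helpers introduced here, and no API call is made — the wording follows the paper's prompt template):

```python
# Build the ten Table 1 honeypot prompts from the shared template.
# The template text mirrors the paper's prompts; only the role and
# first command vary between appendices.

TEMPLATE = (
    "I want you to act as a {role}. I will type commands and you will "
    "reply with what the {role} should show. I want you only to reply "
    "with the output inside one unique code block, and nothing else. "
    "Do not write explanations. Do not type commands unless I instruct "
    "you to do so. When I need to tell you something in English I will "
    "do so by putting text inside only curly brackets {{like this}}. "
    'My first command is "{first_command}"'
)

TASKS = {
    "A": ("Linux terminal", "pwd"),
    "B": ("jupyter notebook", "print('hello world')"),
    "C": ("Windows DOS terminal running as admin", "reg /?"),
    "D": ("Windows DOS terminal", "dir"),
    "E": ("MacIntosh Terminal app", "ls"),
    "F": ("Linux terminal", "pwd"),
    "G": ("Windows DOS terminal", "dir"),
    "H": ("Windows Powershell terminal", "dir"),
    "I": ("Windows DOS terminal", "dir"),
    "J": ("Linux terminal", "dir"),
}

def build_prompt(appendix):
    # Fill the template for one of the ten appendix tasks.
    role, first_command = TASKS[appendix]
    return TEMPLATE.format(role=role, first_command=first_command)
```

In a deployment, the returned string would seed the chat session as the initial instruction, after which each attacker command is forwarded verbatim and the model's reply is surfaced as the honeypot's terminal output.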
We simulate an attacker's deception by changing the time-stamp on a malicious file ("time-stomping") so forensic analysis might fail to uncover the file changes. We simulate the modification of a Windows registry key, such that a malicious file or scheduled task might transform into an installed application or routine network operation. We finally emulate nefarious actions or spoofing of IP or machine addresses (MAC) using realistic output from a Linux terminal and the "arp" command interactions from ChatGPT. The paper approaches these ten tasks empirically and methodically, first constructing the prompt to convince the chatbot that its behavior follows the pattern of a given operating system (OS) or application, followed by repeated probing of expected responses or honeypot output. The simplest realization arises for a directory listing unique to each OS. The most sophisticated simulation mirrors both the OS, an example application like ping (Appendix G) or Jupyter notebook (Appendix B), and a model network with realistic connection maps.

Table 1: Ten tasks for dynamic honeypot demonstrations including OS, Application, and Network Interactions. Each initial prompt follows the same template: "I want you to act as a ⟨role⟩. I will type commands and you will reply with what the ⟨role⟩ should show. I want you only to reply with the output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside only curly brackets {like this}. My first command is ⟨first command⟩."

| Appendix | Task | Emulated role | First command | Layer |
| --- | --- | --- | --- | --- |
| A | Emulate an Operating System Terminal | Linux terminal | `pwd` | Linux |
| B | Emulate a Jupyter Notebook | jupyter notebook | `print('hello world')` | Application |
| C | Create and Delete a Registry Key | Windows DOS terminal running as admin | `reg /?` | Windows |
| D | Emulate a DOS Terminal | Windows DOS terminal | `dir` | Windows |
| E | MacIntosh Terminal as User | MacIntosh Terminal app | `ls` | Mac |
| F | Installing TeamViewer from the Terminal | Linux terminal | `pwd` | Linux |
| G | Launching a DDoS Attack from the Windows CMD | Windows DOS terminal | `dir` | Windows |
| H | Changing the Write Time of a File Using Powershell | Windows Powershell terminal | `dir` | Windows |
| I | Poison the ARP Network Table on Windows | Windows DOS terminal | `dir` | Windows |
| J | Emulate an Attacker's Lateral Movement | Linux terminal | `dir` | Linux |

(For Appendix F, the example session also records ChatGPT's first response, `/home/user`.)

## 3 Results

The main results feature the demonstration of each of the ten honeypot tasks. Appendices A-J summarize the output of the command-line interactivity for honeypots as emulated conversations between a sophisticated attacker and a trained chatbot [9-10,12]. For concreteness, we group the honeypot tasks into three categories based on their focus addressing layers of modern enterprises: operating systems [Appendices A,D,E], applications [Appendices B,F], or networks [Appendices C,G-J]. As a dynamic honeypot interface, the large language model emulates the expected "prompt-response" sequence that real applications and operating systems would generate when queried. Unlike previous models, the ChatGPT interface not only provides sufficient API memory to carry forward previous instructions without defaulting to repeated introductory tasks but also provides a responsive honeypot welcome to sustain the attacker's interest over multiple queries. Based on previous pentesting results, an external attacker can breach 93% of company networks [22]. 
The initial intrusion, on average, takes two days [22] usually based on some credential access derived from email phishing campaigns, brute force attacks, or leakage to the cloud, code repositories, and poor training in social engineering tactics. Among the new security tools (encryption, threat intel and detection, firewalls, etc.) decoys and honeypots disguise the real crown jewels of an organization (such as HR or financial information) while also delaying attackers beyond their economic horizon or patience. ### Operating Systems Appendices A,D,E describe the front-facing command line interface for the major operating systems: Linxu, Windows and MacIntosh. Unlike virtual machines or containerized honeypot frameworks [23-24], the overhead for emulating a conversational agent that answers all command line inquiries with correct or expected responses remains a simple API call rather than an installation or download option. The major commands illustrated reveal expected directory structures specific to each default in the three major operating systems. The conversational agent knows the file structure and at increasing depths of the expected file tree can traverse between user documents and root or administrator programs. ### Applications Appendices B,F describe the appropriate responses that an application might yield to an intruder who breaches a running application like Jupyter notebooks or installs a Linux program like TeamViewer. These application level responses illustrate the diversity of underlying cybersecurity knowledge from ChatGPT as a zero-shot or few-shot learner. No explicit context guides the conversational responses, although the model continues to produce the expected application-specific responses that an intruder might expect when probing for application functionality. 
Among the ten tasks these concrete examples rank highest in diversity such that they respond correctly in two ways, both to understand the default states ("out-of-the-box") but also the modified states following a new program installation (apt-get install TeamViewer2017.asc). ### Attacker Tactics Appendices C,G-J describe the network behavior for common command-line tools that provide key attacker inputs, such as network maps (nmap, App. J), responsive services (ping, App. G), and program installation registry (regedit, App. C). Nmap particularly provides an attacker with an expected output in a honeypot setting that simulates lateral movement and reconnaissance to discover new network assets. Appendix H highlights a frequent attacker deception that changes the creation or modification time stamp on a program change, such that any malicious insertions fail to trigger later discovery as outliers or recent modifications to the operating system. Appendix I illustrates a chat conversation that an unaware attacker modifies the ARP network table and provisions spoofed IP addresses or MAC identifiers. Appendix G provides an example of launching a network-wide denial of service (ping flood) with the expected feedback provided by a large language model placed as the flat front to a would-be attacker probing the honeypot for new resources. ## 4 Discussion and Conclusions In conclusion, ChatGPT has the potential to be a valuable tool as a honeypot in the field of cyber security. By issuing commands that simulate Linux, Mac and Windows terminals, provide a seamless application interface for TeamViewer, nmap, and ping, and finally log the attacker traversal path as new fake assets get owned or discovered. It is possible to create a realistic and dynamic environment that can adapt to the actions of attackers and provide valuable insights into their TTPs. 
While there are potential limitations to using ChatGPT as a honeypot, such as the need for ongoing maintenance and monitoring, the benefits of having a dynamic and adaptable tool for detecting and deflecting malicious activity make it a promising option for organizations looking to improve their cyber security posture. Overall, ChatGPT offers a unique and innovative approach to the use of honeypots and is worth considering as a component of a comprehensive cybersecurity strategy. Future work explores the cybersecurity layers with an initiative to investigate the firewall or router emulation steps (perimeter security), endpoint steps (host virus detection), and data security (credentials, human behavior, and mission-critical assets). ## Acknowledgments The authors thank the PeopleTec Technical Fellows program for encouragement and project assistance. The authors thank the researchers at OpenAI for developing large language models and allowing public access to ChatGPT.
2305.01491
Electromagnetic and gravitational local spatial densities for spin-1 systems
The matrix elements of the electromagnetic current and the energy-momentum tensor for sharply localized states of spin-1 systems are considered. Their interpretation as local spatial densities of various characteristics of the considered system is discussed.
J. Yu. Panteleeva, E. Epelbaum, J. Gegelia, U. -G. Meißner
2023-05-02T15:13:56Z
http://arxiv.org/abs/2305.01491v1
# Electromagnetic and gravitational local spatial densities for spin-1 systems ###### Abstract The matrix elements of the electromagnetic current and the energy-momentum tensor for sharply localized states of spin-1 systems are considered. Their interpretation as local spatial densities of various characteristics of the considered system is discussed. ## I Introduction Since the development of quantum mechanics it is well known that classical physics is not adequate for describing atomic and subatomic objects. Still, our intuition and the language are so strongly dominated by the classical picture of the world, that we often trade rigorous mathematical expressions for less accurate but better understandable concepts. The charge density of the nucleon serves as a good example. While hadrons certainly possess complicated electromagnetic properties, low-energy electron-hadron scattering can be well described utilizing the one-photon-exchange approximation parameterized in terms of electromagnetic form factors. Motivated by this approximation, three-dimensional Fourier transforms of the form factors in the Breit frame are often interpreted as spatial densities of the corresponding hadrons. This picture fits well to our classical intuition. It originates from the seminal papers on electron-proton scattering by Hofstadter, Sachs and others in the 60ties of the last century [1; 2; 3]. Similar interpretations have also been proposed for the Fourier transforms of the gravitational form factors and for various local distributions [4; 5; 6]. While the classical analogy implies that, e.g., electromagnetic properties of the nucleon can, to some extent, be described by the charge and magnetization densities, in reality there is no "true charge density" which characterizes the actual distribution of the charge "inside" the nucleon. In this sense the spatial densities depend on the adopted definition. 
It has been repeatedly pointed out that the identification of spatial density distributions with the Fourier transforms of the corresponding form factors in the Breit frame suffers from conceptual problems [7; 8; 9; 10; 11; 12; 13]. In Ref. [11], it was shown on the example of a spin-0 system that the expression for the charge density in terms of the Breit frame distribution follows only in the static limit of an infinitely heavy particle. The issue of a proper definition of the spatial distributions of matrix elements of local operators has attracted much attention in the last few years. For example, the light-front approach allows one to define purely intrinsic spatial densities, which have probabilistic interpretation [7; 8; 9; 10; 14; 15], however, the corresponding densities are obtained only as two-dimensional distributions. The relationship between these densities and the non-relativistic three-dimensional distributions in the Breit frame in terms of the Abel transforms was studied in Refs. [16; 17; 18; 19; 20; 21]. Alternatively, the phase-space approach [22; 23; 24; 25; 26; 27] allows one to define fully relativistic and unambiguous spatial densities, which in contrast to the light-front ones are three-dimensional. However, these densities do not possess a strict probabilistic interpretation. A proper definition of the three-dimensional charge density by using sharply localized states has been revisited for a spin-0 system in Ref. [28]. It turned out that the same definition was actually suggested long ago in the largely overlooked work by Fleming in Ref. [29]. In Ref. [28], closely following the logic of Ref. 
[11], the charge density possessing the usual probabilistic interpretation has been defined in the zero average momentum frame (ZAMF) of the system as well as in moving frames by using spherically symmetric sharply localized wave packets.1 This definition has also been generalized to spin-1/2 and spin-3/2 systems and to the gravitational densities [30; 31; 32; 33]. Footnote 1: The ZAMF is defined as a Lorentz frame with the vanishing expectation value of the three-momentum for the state, specified by a spherically symmetric packet. For wave packets with a sharp localization around an eigenstate of the four-momentum operator, the ZAMF coincides with the rest-frame of the system. The aim of the current paper is to work out the details of the novel definition for spin-1 systems for the electromagnetic as well as the gravitational local spatial densities. The electromagnetic densities of spin-1 systems have attracted much attention. In Ref. [34] the relativistic 2D charge densities of the deuteron and their frame dependence have been studied in the phase-space approach with the result that less frame dependence compared to the case of the spin-1/2 systems has been found. In Ref. [20] the relation between the 3D and 2D the 2D infinite-momentum frame (IMF) charge densities have been investigated using the definition of the Wigner distributions and the Abel transformation. The densities were expressed in terms of multipole expansions which provide a more clear physical meaning than the helicity-amplitudes of the form factors usually used [34; 35; 36; 37]. The spatial gravitational densities for spin-1 systems have been also extensively discussed in recent years. In particular, in Ref. [38] the multipole expansion of the gravitational densities for spin-1 systems was suggested and computed in the Breit frame. At the same time, important properties of the EMT of spin-1 systems were derived and another parametrization of this quantity was suggested in Ref. [39]. 
Further, the multipole expansion of the densities for the \(\rho\)-meson in the light-cone quark model was studied in Ref. [40]. Recently the two-dimensional light front densities of spin-1 systems were calculated and discussed in Ref. [41]. In this paper we express the spatial densities of spin-1 systems in terms of the multipole expansion. Analogously to other cases, we consider sharply localized and _spherically symmetric_ wave packets and obtain local spatial distributions for the ZAMF and moving frames, as well as traditional distributions for the Breit frame. Our work is organized as follows. In Sect. II we specify the details of the localized states used in the definitions of local spatial densities. In Section III we define the electromagnetic densities corresponding to the matrix elements of the electromagnetic current in the ZAMF and discuss the static approximation. Gravitational spatial densities of the EMT operator in the ZAMF and the static approximation are considered in Section IV. In Sect. V we obtain the expressions of spatial densities in moving frames, and Sect. VI contains our summary. ## II Sharply localized states We are interested in matrix elements of the electromagnetic current and the EMT operators in spatially localized normalizable Heisenberg-picture states. Such states can be specified in terms of wave packets \[|\Phi,\mathbf{X},\sigma\rangle=\int\frac{d^{3}p}{\sqrt{2E(2\pi)^{3}}}\,\phi( \sigma,\mathbf{p})\,e^{-i\mathbf{p}\cdot\mathbf{X}}|p,\sigma\rangle\,, \tag{1}\] with the eigenstates of the four-momentum \(|p,\sigma\rangle\), characterizing our spin-1 system with momentum \(p\) and polarization \(\sigma\), normalized as \[\langle p^{\prime},\sigma^{\prime}|p,\sigma\rangle=2E(2\pi)^{3}\delta_{\sigma ^{\prime}\sigma}\delta^{(3)}(\mathbf{p}^{\prime}-\mathbf{p})\,. \tag{2}\] Here, \(p=(E,\mathbf{p})\), \(E=\sqrt{m^{2}+\mathbf{p}^{2}}\) and \(m\) is the mass of the system. 
The spatial translation vectors \(\mathbf{X}\) can be interpreted as the position of the electromagnetic or the gravitational center of the system depending on the considered distributions, see Refs. [30; 28; 29]. It follows from the normalization of the wave packet that the profile function satisfies the condition \[\int d^{3}p\,|\phi(\sigma,\mathbf{p})|^{2}=1\,. \tag{3}\] To _define_ the spatial density distributions of a physical system we use spherically symmetric wave packets with profile functions \(\phi(\sigma,\mathbf{p})=\phi(\mathbf{p})=\phi(|\mathbf{p}|)\) that are also spin-independent in case of systems of non-zero spin. The average momentum of the system in the state specified by such a packet is equal to zero. Therefore, we identify the corresponding density distributions as characterizing the system in the ZAMF. For our calculations below it is convenient to define dimensionless profile functions \[\phi(\mathbf{p})=R^{3/2}\,\tilde{\phi}(R\mathbf{p})\,, \tag{4}\] where \(R\) specifies the size of the wave packet with small values of \(R\) corresponding to sharp localization. ## III Electromagnetic densities The matrix element of the electromagnetic current operator for a spin-1 system for momentum eigenstates can be parameterized in terms of three form-factors [42] \[\langle p^{\prime},\sigma^{\prime}|\hat{j}^{\mu}(\mathbf{r},0)|p,\sigma \rangle=-e^{-i(\mathbf{p}^{\prime}-\mathbf{p})\cdot\mathbf{r}}\epsilon_{ \alpha}^{\star}(p^{\prime},\sigma^{\prime})\epsilon_{\beta}(p,\sigma)\left[2P ^{\mu}g^{\alpha\beta}G_{1}(q^{2})+(q^{\alpha}g^{\mu\beta}-q^{\beta}g^{\mu \alpha})G_{2}(q^{2})-P^{\mu}q^{\alpha}q^{\beta}\frac{G_{3}(q^{2})}{M^{2}} \right], \tag{5}\] where \(q=p^{\prime}-p\) and \(M\) is an arbitrary mass parameter, which is introduced to make the form factors dimensionless. It is natural to take \(M\) equal to the physical mass \(m\) of the system. 
However, to avoid the mixing of terms of different orders of \(1/m\), when considering the static limit below, it is important to distinguish between \(m\) and \(M\). Therefore we put the parameter \(M\) equal to \(m\) only at the end of calculations, i.e. after performing the systematic expansion in \(1/m\), whenever applicable. In appendix A, an explicit example is given to further corroborate this issue. The polarization 4-vectors in Eq. (5) are defined in standard way [43]: \[\epsilon^{\mu}(p,\sigma)=\left(\frac{\mathbf{p}\cdot\mathbf{\hat{\epsilon}}_{ \mathbf{\sigma}}}{m},\mathbf{\hat{\epsilon}}_{\mathbf{\sigma}}+\frac{\mathbf{p}\cdot\mathbf{ \hat{\epsilon}}_{\mathbf{\sigma}}}{m(m+E)}\mathbf{p}\right), \tag{6}\] where \(\sigma\in\{+,-,0\}\) and the three-dimensional polarization basis vectors in the spherical representation are given by \[\mathbf{\hat{\epsilon}}_{\mathbf{\pm}}=\mp\frac{1}{\sqrt{2}}(1,\pm i,0),\ \ \mathbf{\hat{\epsilon}}_{\mathbf{0}}=(0,0,1). \tag{7}\] The matrix element of the electromagnetic current operator for the state defined in Eq. 
(1) takes the following form \[j^{\mu}_{\phi}(\mathbf{r}) \equiv \langle\Phi,\mathbf{X},\sigma^{\prime}|\hat{j}^{\mu}(\mathbf{x},0 )|\Phi,\mathbf{X},\sigma\rangle \tag{8}\] \[= -\int\frac{d^{3}P\,d^{3}q}{(2\pi)^{3}\sqrt{4EE^{\prime}}}\, \epsilon^{*}_{\alpha}(p^{\prime},\sigma^{\prime})\epsilon_{\beta}(p,\sigma) \left[2P^{\mu}g^{\alpha\beta}G_{1}(q^{2})+(q^{\alpha}g^{\mu\beta}-q^{\beta}g^{ \mu\alpha})G_{2}(q^{2})-P^{\mu}q^{\alpha}q^{\beta}\frac{G_{3}(q^{2})}{M^{2}}\right]\] \[\times \phi\bigg{(}\mathbf{P}-\frac{\mathbf{q}}{2}\bigg{)}\,\phi^{*} \bigg{(}\mathbf{P}+\frac{\mathbf{q}}{2}\bigg{)}\,e^{-i\mathbf{q}\cdot\mathbf{ r}},\] where \(\mathbf{P}=(\mathbf{p}^{\prime}+\mathbf{p})/2\), \(\mathbf{q}=\mathbf{p}^{\prime}-\mathbf{p}\), \(E=\sqrt{m^{2}+\mathbf{P}^{2}-\mathbf{P}\cdot\mathbf{q}+\mathbf{q}^{2}/4}\), \(E^{\prime}=\sqrt{m^{2}+\mathbf{P}^{2}+\mathbf{P}\cdot\mathbf{q}+\mathbf{q}^{ 2}/4}\) and \(\mathbf{r}=\mathbf{x}-\mathbf{X}\). ### Electromagnetic densities in the ZAMF To obtain the electromagnetic spatial densities corresponding to internal structure of a spin-1 system we consider sharply localized wave packets in Eq. (8). Using the method of dimensional counting of Ref. [44] for the form factors \(G_{1}(q^{2})\), \(G_{2}(q^{2})\) and \(G_{3}(q^{2})\) decaying for large \(q^{2}\) as \(1/q^{4}\), \(1/q^{4}\) and \(1/q^{6}\) (or faster), respectively, the \(R\to 0\) limit in Eq. (8) can be taken as discussed in Ref. [28]. 
The final result for spherically symmetric wave packets with \(\phi(\mathbf{P})=\phi(|\mathbf{P}|)\), takes the form \[j^{0}(\mathbf{r}) = \int\frac{d^{2}\hat{n}}{4\pi}\frac{d^{3}q}{(2\pi)^{3}}\Bigg{\{} \delta_{\sigma^{\prime}\sigma}\mathcal{G}_{0}(-\mathbf{q}_{\perp}^{2})+\hat{ Q}^{kl}_{\sigma^{\prime}\sigma}\hat{n}^{k}\hat{n}^{l}\frac{\mathbf{q}_{\perp}^{2}}{2m^ {2}}\mathcal{G}_{1}(-\mathbf{q}_{\perp}^{2})+\hat{Q}^{kl}_{\sigma^{\prime} \sigma}q_{\perp}^{k}q_{\perp}^{l}\frac{\mathcal{G}_{2}(-\mathbf{q}_{\perp}^{2} )}{2m^{2}}\Bigg{\}}e^{-i\mathbf{q}\cdot\mathbf{r}},\] \[\mathbf{j}(\mathbf{r}) = \frac{1}{m}\int\frac{d^{2}\hat{n}}{4\pi}\frac{d^{3}q}{(2\pi)^{3}} \,\,\mathbf{\hat{n}}\,\,\mathbf{\hat{n}}\cdot(i\mathbf{\hat{S}}_{\sigma^{ \prime}\sigma}\times\mathbf{q})\mathcal{M}(-\mathbf{q}_{\perp}^{2})e^{-i \mathbf{q}\cdot\mathbf{r}}, \tag{9}\] where \(\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\) and \(\hat{Q}^{kl}_{\sigma^{\prime}\sigma}\) are the spin and the quadrupole operators, respectively, defined in Appendix B, and \(\mathbf{\hat{n}}\) is a three-dimensional unit vector. Here and in what follows, \(\mathbf{a}_{\parallel}\equiv\mathbf{a}\cdot\mathbf{\hat{n}}\,\mathbf{\hat{n}}\) and \(\mathbf{a}_{\perp}\equiv\mathbf{a}-\mathbf{a}\cdot\mathbf{\hat{n}}\,\mathbf{ \hat{n}}\) denote the components of a vector \(\mathbf{a}\) parallel and perpendicular to the unit vector \(\mathbf{\hat{n}}\), respectively, and \(a_{\parallel}\equiv|\mathbf{a}_{\parallel}|\), \(a_{\perp}\equiv|\mathbf{a}_{\perp}|\). The spatial densities defined via Eq. (9) do not depend on the form of the radial profile function of the wave packet. The combinations of the form factors appearing in Eq. 
(9) are given by \[\mathcal{G}_{0}(-\mathbf{q}_{\perp}^{2}) = G_{1}(-\mathbf{q}_{\perp}^{2})\left(1-\frac{\mathbf{q}_{\perp}^{2 }}{6m^{2}}\right)+G_{2}(-\mathbf{q}_{\perp}^{2})\frac{\mathbf{q}_{\perp}^{2}}{6 m^{2}}+G_{3}(-\mathbf{q}_{\perp}^{2})\frac{\mathbf{q}_{\perp}^{2}}{6m^{2}} \left(1-\frac{\mathbf{q}_{\perp}^{2}}{4m^{2}}\right)\,,\] \[\mathcal{G}_{1}(-\mathbf{q}_{\perp}^{2}) = G_{1}(-\mathbf{q}_{\perp}^{2})-G_{2}(-\mathbf{q}_{\perp}^{2})+ \frac{\mathbf{q}_{\perp}^{2}}{4m^{2}}G_{3}(-\mathbf{q}_{\perp}^{2})\,,\] \[\mathcal{G}_{2}(-\mathbf{q}_{\perp}^{2}) = -G_{3}(-\mathbf{q}_{\perp}^{2})\,,\] \[\mathcal{M}(-\mathbf{q}_{\perp}^{2}) = -G_{1}(-\mathbf{q}_{\perp}^{2})+\frac{G_{2}(-\mathbf{q}_{\perp}^{2 })}{2}-\frac{\mathbf{q}_{\perp}^{2}}{4m^{2}}G_{3}(-\mathbf{q}_{\perp}^{2})\,. \tag{10}\] The densities of Eq. (9) can be also parameterized in the following form \[j^{0}(\mathbf{r}) = \delta_{\sigma^{\prime}\sigma}\rho_{0}(r)+\hat{Q}^{ij}_{\sigma^{ \prime}\sigma}\mathcal{Y}^{ij}_{2}(\mathbf{\hat{r}})\rho_{2}(r),\] \[\mathbf{j}(\mathbf{r}) = \mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\times\mathbf{Y}_{1}(\mathbf{ \hat{r}})\rho_{M}(r)\,, \tag{11}\] where \(Y_{i}(r)\) are multipoles defined in the Appendix B and \[\rho_{0}(r) = \int\frac{d^{2}\hat{n}}{4\pi}\,\delta(r_{\parallel})\tilde{\cal G}_ {0}(r_{\perp})\,,\] \[\rho_{2}(r) = -\frac{1}{4}\int\frac{d^{2}\hat{n}}{4\pi}\,\delta(r_{\parallel}) \left[\left(3\frac{r_{\parallel}^{2}}{r^{2}}-1\right)\frac{1}{m^{2}}\hat{O}_{ 2}(r_{\perp})\tilde{\cal G}_{1}(r_{\perp})+\left(3\frac{r_{\perp}^{2}}{r^{2}} -1\right)\frac{1}{m^{2}}r_{\perp}^{2}\hat{O}_{1}(r_{\perp})\tilde{\cal G}_{2}( r_{\perp})\right]\,,\] \[\rho_{M}(r) = -\frac{1}{2m}\int\frac{d^{2}\hat{n}}{4\pi}\,\delta(r_{\parallel}) \,\frac{r_{\perp}}{r}\frac{d}{dr_{\perp}}\tilde{\cal M}(r_{\perp})\,. 
\tag{12}\] Here, the differential operators \(\hat{O}_{1}(r_{\perp})\) and \(\hat{O}_{2}(r_{\perp})\) are given by \[\hat{O}_{1}(r_{\perp}) = \frac{1}{r_{\perp}}\frac{d}{dr_{\perp}}\frac{1}{r_{\perp}}\frac{d }{dr_{\perp}}\,,\] \[\hat{O}_{2}(r_{\perp}) = \frac{1}{r_{\perp}^{2}}\frac{d}{dr_{\perp}}r_{\perp}^{2}\frac{d}{ dr_{\perp}}\,, \tag{13}\] and we have introduced the two-dimensional Fourier transforms of the form factors \[\tilde{\cal G}_{i}(r_{\perp}) = \int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-i{\bf q}_{\perp}\cdot{\bf r }_{\perp}}{\cal G}_{i}(-{\bf q}_{\perp}^{2})\,,\] \[\tilde{\cal M}(r_{\perp}) = \int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-i{\bf q}_{\perp}\cdot{\bf r }_{\perp}}{\cal M}(-{\bf q}_{\perp}^{2})\,. \tag{14}\] ### Electromagnetic densities in the Breit frame The traditional ("naive") densities in terms of the Fourier transforms of the form factors in the Breit frame emerge by first expanding the integrand in Eq. (8) in inverse powers of \(m\) up to leading order prior to performing the integration [11; 12] (notice that for this expansion it is important to distinguish between \(m\) and \(M\)), and then expanding the integrands in powers of \(R\) around \(R=0\) and keeping terms up to the zeroth order. 
The resulting expressions read: \[j^{0}_{\rm naive}({\bf r}) = \int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}\left(G_{C} (-{\bf q}^{2})\delta_{\sigma\sigma^{\prime}}+\frac{G_{Q}(-{\bf q}^{2})}{2m^{2} }\hat{Q}^{km}_{\sigma^{\prime}\sigma}q^{m}q^{k}\right)\equiv\delta_{\sigma \sigma^{\prime}}\rho_{C}^{\rm naive}(r)+\hat{Q}^{km}_{\sigma^{\prime}\sigma}Y_ {2}^{km}({\bf\hat{r}})\rho_{Q}^{\rm naive}(r)\,,\] \[{\bf j}_{\rm naive}({\bf r}) = \int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}\frac{G_{M }(-{\bf q}^{2})}{2m}i(\hat{\bf S}_{\sigma^{\prime}\sigma}\times{\bf q}) \equiv\frac{(\hat{\bf S}_{\sigma^{\prime}\sigma}\times{\bf\nabla})}{2m}\rho_{M }^{\rm naive}(r)\,, \tag{15}\] where the electric monopole \(G_{C}\), the electric quadrupole \(G_{Q}\), and the magnetic dipole \(G_{M}\) form factors in the Breit frame are given by \[G_{C}(-{\bf q}^{2}) = G_{1}(-{\bf q}^{2})+\frac{{\bf q}^{2}}{6m^{2}}G_{3}(-{\bf q}^{2})\,,\] \[G_{Q}(-{\bf q}^{2}) = -G_{3}(-{\bf q}^{2})\,,\] \[G_{M}(-{\bf q}^{2}) = G_{2}(-{\bf q}^{2})\,. \tag{16}\] These expressions coincide with the traditional expressions for the current densities of a spin-1 system obtained in the Breit frame, see for example Ref. [34], after expanding the latter in inverse powers of \(m\) and keeping the leading-order terms.2 The electric charge density distribution \(\rho_{C}^{\rm naive}(r)\), the electric quadrupole charge distribution \(\rho_{Q}^{\rm naive}(r)\), and the magnetic density \(\rho_{M}^{\rm naive}(r)\) have the following form \[\rho_{C}^{\rm naive}(r) = \int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}G_{C}(-{\bf q }^{2})\,, \tag{17}\] \[\rho_{Q}^{\rm naive}(r) = -\frac{1}{2m^{2}}r\frac{d}{dr}\frac{1}{r}\frac{d}{dr}\int\frac{d^ {3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}G_{Q}(-{\bf q}^{2})\,,\] (18) \[\rho_{M}^{\rm naive}(r) = \int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}G_{M}(-{\bf q }^{2})\,. \tag{19}\] As it was already discussed in Refs. 
[11; 28], these densities describe the leading-order approximation to the matrix element of the current operator of systems in a state with localization much larger than the Compton wavelength \(1/m\) yet much smaller than all intrinsic scales encoded in form factors. Clearly, for light hadrons with the intrinsic size being smaller than or comparable to the Compton wavelength, such an approximation becomes invalid. ## IV Gravitational densities Next we consider the local spatial densities corresponding to the matrix elements of the EMT operator. As emphasized in Ref. [32] these densities differ significantly from the ones of the electromagnetic current. This is due to the fact that a superposition of eigenstates of the electric charge operator, which makes the localized packet, is again an eigenstate of the charge operator with the same eigenvalue, while this is not the case for the energy-momentum operator. The matrix elements of the EMT of a spin-1 system in one-particle eigenstates of the energy-momentum operator can be parametrized in terms of form factors as follows [38] \[\langle p^{\prime},\sigma^{\prime}|\hat{T}_{\mu\nu}({\bf x},0)|p,\sigma\rangle=\epsilon^{*\beta}(p^{\prime},\sigma^{\prime})\epsilon^{\alpha} (p,\sigma)e^{-i{\bf q}\cdot{\bf x}}\Bigg{[}2P_{\mu}P_{\nu}\left(-g_{\alpha \beta}A_{0}(q^{2})+\frac{P_{\alpha}P_{\beta}}{M^{2}}A_{1}(q^{2})\right)\] \[+2\big{(}P_{\mu}\left[g_{\nu\beta}P_{\alpha}+g_{\nu\alpha}P_{ \beta}\right]+P_{\nu}\left[g_{\mu\beta}P_{\alpha}+g_{\mu\alpha}P_{\beta} \right]\big{)}J(q^{2})+\frac{1}{2}\left(q_{\mu}q_{\nu}-g_{\mu\nu}q^{2}\right) \left(g_{\alpha\beta}D_{0}(q^{2})+\frac{P_{\alpha}P_{\beta}}{M^{2}}D_{1}(q^{2} )\right)\] \[+\Big{[}\frac{1}{2}q^{2}\left(g_{\mu\alpha}g_{\nu\beta}+g_{\mu \beta}g_{\nu\alpha}\right)-\left(g_{\nu\beta}q_{\mu}+g_{\mu\beta}q_{\nu} \right)P_{\alpha}+\left(g_{\nu\alpha}q_{\mu}+g_{\mu\alpha}q_{\nu}\right)P_{ \beta}-4g_{\mu\nu}P_{\alpha}P_{\beta}\Big{]}E(q^{2})\] 
\[+\left(g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}-\frac {1}{2}g_{\mu\nu}g_{\alpha\beta}\right)M^{2}\overline{f}(q^{2})+g_{\mu\nu} \left(g_{\alpha\beta}M^{2}\overline{c}_{0}(q^{2})+P_{\alpha}P_{\beta}\, \overline{c}_{1}(q^{2})\right)\Bigg{]}\,. \tag{20}\] Here, we again distinguish between the mass of the system \(m\) and the mass parameter \(M\), which can be absorbed in the normalization of the form factors. Notice that in the parametrization we also included the non-conserved part of the EMT (namely the form factors \(\bar{f}(q^{2}),\ \bar{c}_{0}(q^{2})\) and \(\bar{c}_{1}(q^{2})\)), so that e.g. in QCD, one can consider the quark and gluon EMTs separately. However, for a conserved EMT these form factors vanish. To define the spatial densities associated with the EMT we consider its matrix element in a state specified by Eq. (1) and take the limit of sharply localized states. The considered matrix element of the EMT operator has the form \[t_{\phi}^{\mu\nu}({\bf r}) = \langle\Phi,{\bf X}|T^{\mu\nu}({\bf x},0)|\Phi,{\bf X}\rangle= \int\frac{d^{3}Pd^{3}q}{(2\pi)^{3}\sqrt{4E^{E}}}\,\phi^{*}({\bf p}^{\prime})\, \phi({\bf p})e^{i{\bf q}\cdot{\bf X}}(p^{\prime}|T^{\mu\nu}({\bf x},0)|p\rangle \tag{21}\] \[= \int\frac{d^{3}P\,d^{3}q}{(2\pi)^{3}\sqrt{4E^{E}}}\,\phi\bigg{(} {\bf P}-\frac{{\bf q}}{2}\bigg{)}\,\phi^{*}\bigg{(}{\bf P}+\frac{{\bf q}}{2} \bigg{)}\,e^{-i{\bf q}\cdot{\bf r}}\epsilon^{*\beta}(p^{\prime},\sigma^{ \prime})\epsilon^{*\beta}(p,\sigma)\Bigg{[}2P_{\mu}P_{\nu}\left(-g_{\alpha \beta}A_{0}(q^{2})+\frac{P_{\alpha}P_{\beta}}{M^{2}}A_{1}(q^{2})\right)\] \[+ 2\big{(}P_{\mu}\left[g_{\nu\beta}P_{\alpha}+g_{\nu\alpha}P_{ \beta}\right]+P_{\nu}\left[g_{\mu\beta}P_{\alpha}+g_{\mu\alpha}P_{\beta} \right]\big{)}J(q^{2})+\frac{1}{2}\left(q_{\mu}q_{\nu}-g_{\mu\nu}q^{2}\right) \left(g_{\alpha\beta}D_{0}(q^{2})+\frac{P_{\alpha}P_{\beta}}{M^{2}}D_{1}(q^{2} )\right)\] \[+ \Big{[}\frac{1}{2}q^{2}\left(g_{\mu\alpha}g_{\nu\beta}+g_{\mu 
\beta}g_{\nu\alpha}\right)-\left(g_{\nu\beta}g_{\mu}+g_{\mu\beta}g_{\nu} \right)P_{\alpha}+\left(g_{\nu\alpha}q_{\mu}+g_{\mu\alpha}q_{\nu}\right)P_{ \beta}-4g_{\mu\nu}P_{\alpha}P_{\beta}\Big{]}E(q^{2})\] \[+ \left(g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}-\frac{1}{ 2}g_{\mu\nu}g_{\alpha\beta}\right)M^{2}\overline{f}(q^{2})+g_{\mu\nu}\left(g_{ \alpha\beta}M^{2}\overline{c}_{0}(q^{2})+P_{\alpha}P_{\beta}\overline{c}_{1}(q ^{2})\right)\Bigg{]}\,.\] ### Gravitational densities in the ZAMF Analogously to the case of the electromagnetic current we take the limit of sharply localized packets by applying the method of dimensional counting of Ref. [44]. However, when expanding in powers of \(R\) around \(R=0\), we now keep explicitly only the leading-order terms for each form factor separately and denote by "Rest" all other contributions. This is because different parts of the EMT require a different physical interpretation as discussed in Refs. [16; 32; 45]. For the form factors decaying for large \(q^{2}\) as \(A_{0}(q^{2})\sim 1/q^{4}\), \(A_{1}(q^{2})\sim 1/q^{6}\), \(J(q^{2})\sim 1/q^{4}\), \(D_{0}(q^{2})\sim 1/q^{6}\), \(D_{1}(q^{2})\sim 1/q^{8}\), \(E(q^{2})\sim 1/q^{4}\), \(\bar{f}(q^{2})\sim 1/q^{2}\), \(\bar{c}_{0}(q^{2})\sim 1/q^{4}\) and \(\bar{c}_{1}(q^{2})\sim 1/q^{6}\) or faster, the final result reads \[t_{\phi}^{00} = N_{\phi,R}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q }\cdot{\bf r}}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}{\cal E}_{0}(-{\bf q}_{ \perp}^{2})+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{{ \bf q}_{\perp}^{2}}{m^{2}}{\cal E}_{1}(-{\bf q}_{\perp}^{2})+\frac{{\cal E}_{2 }(-{\bf q}_{\perp}^{2})}{m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{ k}q_{\perp}^{l}\Bigg{\}}\ +{\rm Rest}\,,\] \[t_{\phi}^{0i} = N_{\phi,R}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}\frac{\hat {n}^{i}}{m}{\cal J}(-{\bf q}_{\perp}^{2})e^{-i{\bf q}\cdot{\bf r}}(i\,\hat{ \bf S}_{\sigma^{\prime}\sigma}\times{\bf q})\cdot\hat{\bf 
n}\ +\ {\rm Rest}\,,\] \[t_{\phi}^{ij} = N_{\phi,R}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}\hat{n}^{i} \hat{n}^{j}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}{\cal E}_{0}(-{\bf q}_{ \perp}^{2})+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{{ \bf q}_{\perp}^{2}}{m^{2}}{\cal E}_{1}(-{\bf q}_{\perp}^{2})+\frac{{\cal E}_{2 }(-{\bf q}_{\perp}^{2})}{m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{ k}q_{\perp}^{l}\Bigg{\}}e^{-i{\bf q}\cdot{\bf r}} \tag{22}\] \[+ N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}\Bigg{\{} \delta_{i\bf q}{\bf q}_{\perp}^{2}-q_{i}q_{j}\Bigg{\}}\Bigg{[}\delta_{\sigma ^{\prime}\sigma}{\cal D}_{0}(-{\bf q}_{\perp}^{2})+\hat{Q}_{\sigma^{\prime} \sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{{\bf q}_{\perp}^{2}}{m^{2}}{\cal D}_{( -{\bf q}_{\perp}^{2})}+\frac{{\cal D}_{2}(-{\bf q}_{\perp}^{2})}{m^{2}}\hat{Q }_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{k}q_{\perp}^{l}\Bigg{]}\] \[+ \delta_{ij}\Bigg{[}\delta_{\sigma^{\prime}\sigma}m^{2}{\cal C}_{ 0}(-{\bf q}_{\perp}^{2})+{\bf q}_{\perp}^{2}\hat{Q}_{\sigma^{\prime}\sigma}^{ kl}\hat{n}^{k}\hat{n}^{l}{\cal C}_{1}(-{\bf q}_{\perp}^{2})+{\cal C}_{2}(-{\bf q}_{ \perp}^{2})\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{k}q_{\perp}^{l} \Bigg{]}\Bigg{\}}e^{-i{\bf q}\cdot{\bf r}}\ +{\rm Rest}\,,\] where the explicit form of the linear combinations of the form factors, \({\cal E}_{i}(-{\bf q}_{\perp}^{2})\), \({\cal J}(-{\bf q}_{\perp}^{2})\), \({\cal D}_{i}(-{\bf q}_{\perp}^{2})\) and \({\cal C}_{i}(-{\bf q}_{\perp}^{2})\) is specified in Appendix C. As mentioned above, we kept explicitly the leading-order contributions of the terms with the \({\cal D}_{i}(-{\bf q}_{\perp}^{2})\) and \({\cal C}_{i}(-{\bf q}_{\perp}^{2})\) form factors, while the contributions of the same order (and lower) in \(R\) stemming from the terms with the \({\cal E}_{i}(-{\bf q}_{\perp}^{2})\) and \({\cal J}(-{\bf q}_{\perp}^{2})\) form factors are not shown for the reason explained above. The spatial densities of Eq. 
(22) depend on the wave packet only via the overall normalization constants \[N_{\phi,R} = \frac{1}{R}\int\,d\tilde{P}\tilde{P}^{3}|\tilde{\phi}(|\tilde{\bf P }|)|^{2}\,,\] \[N_{\phi,R,2} = \frac{R}{2}\int\,d\tilde{P}\tilde{P}|\tilde{\phi}(|\tilde{\bf P}|)| ^{2}\,. \tag{23}\] Notice that for \(R\to 0\), the first normalization constant in Eq. (23) goes to infinity while the second constant vanishes. The energy distribution \(t_{\phi}^{00}(r)\) can be written in the form of a three-dimensional multipole expansion as follows: \[t_{\phi}^{00}(r)=\rho_{E0}(r)\delta_{\sigma^{\prime}\sigma}+\hat{Q}_{\sigma^{ \prime}\sigma}^{kl}Y_{2}^{kl}(\tilde{\bf r})\rho_{E2}(r)\,, \tag{24}\] where the monopole and quadrupole energy distributions have the form \[\rho_{E0}(r) = N_{\phi,R}\int d^{2}\hat{n}\,\delta(r_{\parallel})\varepsilon_{0} (r_{\perp})\,,\] \[\rho_{E2}(r) = \frac{N_{\phi,R}}{2}\int d^{2}\hat{n}\,\delta(r_{\parallel})\left[ \left(3\frac{r_{\parallel}^{2}}{r^{2}}-1\right)\varepsilon_{1}(r_{\perp})+ \left(3\frac{r_{\perp}^{2}}{r^{2}}-1\right)\varepsilon_{2}(r_{\perp}) \right]\,, \tag{25}\] with \[\varepsilon_{0}(r_{\perp}) = \tilde{\cal E}_{0}(r_{\perp})\,,\] \[\varepsilon_{1}(r_{\perp}) = -\frac{1}{m^{2}}\hat{O}_{2}(r_{\perp})\tilde{\cal E}_{1}(r_{\perp})\,,\] \[\varepsilon_{2}(r_{\perp}) = -\frac{1}{m^{2}}r_{\perp}^{2}\hat{O}_{1}(r_{\perp})\tilde{\cal E}_{2 }(r_{\perp})\,,\] \[\tilde{\cal E}_{i}(r_{\perp}) = \int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-i{\bf q}_{\perp}\cdot{ \bf r}_{\perp}}{\cal E}_{i}(-{\bf q}_{\perp}^{2})\,, \tag{26}\] where the differential operators \(\hat{O}_{i}\) are defined in Eq. (13). 
The multipole expansion of the momentum-density distribution has the form \[t_{\phi}^{0i}(r)=\left(\hat{\bf S}_{\sigma^{\prime}\sigma}\times{\bf Y}_{1}(\hat {\bf r})\right)\tilde{J}(r)\,, \tag{27}\] where \[\tilde{J}(r) = \frac{N_{\phi,R}}{2}\int d^{2}\hat{n}\,\delta(r_{\parallel})\frac {r_{\perp}}{r}\,J(r_{\perp})\,,\] \[J(r) = -\frac{1}{m}\frac{d}{dr_{\perp}}\,\tilde{\cal J}(r_{\perp})\,, \tag{28}\] with \[\tilde{\cal J}(r_{\perp})=\int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}\,e^{-i{\bf q} _{\perp}\cdot{\bf r}_{\perp}}{\cal J}(-{\bf q}_{\perp}^{2})\,. \tag{29}\] The \(ij\)th components of the EMT can be written as the sum of three parts \[t_{\phi}^{ij}(r)=t_{0}^{ij}(r)+t_{2}^{ij}(r)+t_{3}^{ij}(r)\,, \tag{30}\] where the first term is called the flow tensor and has the form \[t_{0}^{ij}({\bf r})=N_{\phi,R}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}\hat {n}^{i}\hat{n}^{j}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}{\cal E}_{0}(-{\bf q }_{\perp}^{2})+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\frac {{\bf q}_{\perp}^{2}}{m^{2}}{\cal E}_{1}(-{\bf q}_{\perp}^{2})+\frac{{\cal E} _{2}(-{\bf q}_{\perp}^{2})}{m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp }^{k}q_{\perp}^{l}\Bigg{\}}e^{-i{\bf q}\cdot{\bf r}}\,. \tag{31}\] After integrating over the momentum \({\bf q}\) and the unit vector \(\hat{\bf n}\) in Eq. 
(31) we obtain the following expression \[t_{0}^{ij}({\bf r})=\delta_{\sigma^{\prime}\sigma}\left(\delta^ {ij}A_{0}(r)+Y_{2}^{ij}(\hat{\bf r})\,B_{0}(r)\right) + \hat{Q}_{\sigma^{\prime}\sigma}^{ij}A_{2}(r)+2\left(\hat{Q}_{\sigma ^{\prime}\sigma}^{ik}Y_{2}^{jk}(\hat{\bf r})+\hat{Q}_{\sigma^{\prime}\sigma}^ {kj}Y_{2}^{ik}(\hat{\bf r})-\delta_{ij}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}Y_ {2}^{kl}(\hat{\bf r})\right)B_{2}(r) \tag{32}\] \[+ Y_{2}^{kl}(\hat{\bf r})\hat{Q}_{\sigma^{\prime}\sigma}^{kl} \left[\delta^{ij}\left(A_{1}(r)+\frac{1}{3}B_{1}(r)+2B_{2}(r)\right)+Y_{2}^{ij }(\hat{\bf r})\,B_{1}(r)\right]\,,\] where \[A_{0}(r) = \frac{N_{\phi,R}}{3}\int d^{2}\hat{n}\,\delta(r_{\parallel}) \varepsilon_{0}(r_{\perp}),\] \[B_{0}(r) = \frac{N_{\phi,R}}{2}\int d^{2}\hat{n}\,\delta(r_{\parallel}) \left(3\frac{r_{\parallel}^{2}}{r^{2}_{\perp}}-1\right)\varepsilon_{0}(r_{ \perp}),\] \[A_{1}(r) = N_{\phi,R}\int d^{2}\hat{n}\,\delta(r_{\parallel})\,\frac{r_{ \perp}^{4}}{8r^{4}}\left[\left(\frac{4r_{\parallel}^{2}}{r_{\perp}^{2}}-1 \right)\varepsilon_{1}(r_{\perp})+\left(4-\frac{r_{\parallel}^{2}}{r_{\perp}^{ 2}}\right)\varepsilon_{2}(r_{\perp})\right],\] \[A_{2}(r) = 2N_{\phi,R}\int d^{2}\hat{n}\,\delta(r_{\parallel})\,\frac{r_{ \perp}^{4}}{8r^{4}}\left[\frac{1}{3}\left(\frac{8r_{\parallel}^{2}}{r_{\perp}^ {2}}+1\right)\varepsilon_{1}(r_{\perp})-\frac{7r_{\parallel}^{2}}{3r_{\perp}^ {2}}\,\varepsilon_{2}(r_{\perp})\right],\] \[B_{1}(r) = N_{\phi,R}\int d^{2}\hat{n}\,\delta(r_{\parallel})\,\frac{r_{ \perp}^{4}}{8r^{4}}\left[\left(\frac{35r_{\parallel}^{4}}{r^{4}}+3-\frac{30r_{ \perp}^{2}}{r^{2}}\right)\varepsilon_{1}(r_{\perp})+\left(\frac{35r_{\parallel }^{2}r_{\perp}^{2}}{r^{4}}-4\right)\varepsilon_{2}(r_{\perp})\right],\] \[B_{2}(r) = N_{\phi,R}\int d^{2}\hat{n}\,\delta(r_{\parallel})\,\frac{r_{ \perp}^{4}}{8r^{4}}\left[\left(\frac{4r_{\parallel}^{2}}{r_{\perp}^{2}}-1 \right)\varepsilon_{1}(r_{\perp})-\frac{5r_{\parallel}^{2}}{r_{\perp}^{2}} 
\,\varepsilon_{2}(r_{\perp})\right]. \tag{33}\] The second part of \(t_{\phi}^{ij}\) is the stress tensor, which describes the internal structure of the system and has the form: \[t_{2}^{ij}({\bf r})=N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}} \Bigg{\{}\left(\delta_{ij}{\bf q}_{\perp}^{2}-q_{i}q_{j}\right)\Bigg{[}\delta_{ \sigma^{\prime}\sigma}{\cal D}_{0}(-{\bf q}_{\perp}^{2})+\hat{Q}_{\sigma^{ \prime}\sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{{\bf q}_{\perp}^{2}}{m^{2}}{ \cal D}_{1}(-{\bf q}_{\perp}^{2})+\frac{{\cal D}_{2}(-{\bf q}_{\perp}^{2})}{m^{2} }\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{k}q_{\perp}^{l}\Bigg{]}\Bigg{\}}e ^{-i{\bf q}\cdot{\bf r}}\cdot \tag{34}\] It can be reduced to \[t_{2}^{ij}({\bf r}) = N_{\phi,R,2}\int d^{2}\hat{n}\Bigg{\{}\delta_{\sigma^{\prime} \sigma}\left[\delta^{ij}\hat{d}_{1}(r)+Y_{2}^{ij}(\mathbf{\hat{r}})\hat{d}_{2}( r)\right]+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}Y_{2}^{kl}(\mathbf{\hat{r}})\left( \delta^{ij}\hat{d}_{3}(r)+Y_{2}^{ij}(\mathbf{\hat{r}})\hat{d}_{4}(r)\right) \tag{35}\] \[+ \hat{Q}_{\sigma^{\prime}\sigma}^{ij}\hat{d}_{5}(r)+\left(\hat{Q} _{\sigma^{\prime}\sigma}^{ik}Y_{2}^{jk}(\mathbf{\hat{r}})+\hat{Q}_{\sigma^{ \prime}\sigma}^{jk}Y_{2}^{ik}(\mathbf{\hat{r}})\right)\hat{d}_{6}(r)+\hat{Q}_{ \sigma^{\prime}\sigma}^{kl}Y_{2}^{kl}(\mathbf{\hat{r}})\left(\delta^{ij}\hat{e }_{1}(r)+Y_{2}^{ij}(\mathbf{\hat{r}})\hat{e}_{2}(r)\right)\] \[+ \hat{Q}_{\sigma^{\prime}\sigma}^{ij}\hat{e}_{3}(r)+\left(\hat{Q} _{\sigma^{\prime}\sigma}^{ik}Y_{2}^{jk}(\mathbf{\hat{r}})+\hat{Q}_{\sigma^{ \prime}\sigma}^{jk}Y_{2}^{ik}(\mathbf{\hat{r}})\right)\hat{e}_{4}(r)\Bigg{\}},\] where the functions \(\hat{d}_{i}\) and \(\hat{e}_{i}\) are given in Appendix D. Different parametrizations of the multipole expansion of the EMT distributions have been applied in Refs. [46; 47; 38; 40]. Using Eq. (35) and the parametrization from Ref. 
[46] we obtain the following pressure and shear force distributions (notice that this interpretation in terms of the pressure and shear forces has been criticized recently in Ref. [48]): \[p_{0}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\hat{d}_{1}(r)\,,\] \[s_{0}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\hat{d}_{2}(r)\,,\] \[p_{2}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{1}{9}\left[-6\hat{d}_{3}(r)+2\hat{d}_{4}(r)+9\hat{d}_{5}(r)-6\hat{d}_{6}(r)-6\hat{e}_{1}(r)+2\hat{e}_{2}(r)+9\hat{e}_{3}(r)-6\hat{e}_{4}(r)\right]\,,\] \[s_{2}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{1}{6}\left[6\hat{d}_{3}(r)-2\hat{d}_{4}(r)+9\hat{d}_{6}(r)+6\hat{e}_{1}(r)-2\hat{e}_{2}(r)+9\hat{e}_{4}(r)\right]\,,\] \[s_{3}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{1}{3}\left[-3\hat{d}_{3}(r)+4\hat{d}_{4}(r)-3\hat{d}_{6}(r)-3\hat{e}_{1}(r)+4\hat{e}_{2}(r)-3\hat{e}_{4}(r)\right]\,,\] \[p_{3}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{1}{9}\left[15\hat{d}_{3}(r)-2\hat{d}_{4}(r)+15\hat{d}_{6}(r)+15\hat{e}_{1}(r)-2\hat{e}_{2}(r)+15\hat{e}_{4}(r)\right]\,. \tag{36}\] The third part of the \(ij\)th components of the EMT is not conserved and it also contributes to the multipole pressure and shear force distributions \[t_{3}^{ij}(r)=\delta_{ij}N_{\phi,R,2}\int d^{2}\hat{n}\,\frac{d^{3}q}{(2\pi)^{3}}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}m^{2}\mathcal{C}_{0}(-\mathbf{q}_{\perp}^{2})+\mathbf{q}_{\perp}^{2}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\hat{n}^{k}\hat{n}^{l}\mathcal{C}_{1}(-\mathbf{q}_{\perp}^{2})+\mathcal{C}_{2}(-\mathbf{q}_{\perp}^{2})\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q_{\perp}^{k}q_{\perp}^{l}\Bigg{\}}e^{-i\mathbf{q}\cdot\mathbf{r}}\,.
\tag{37}\] It can be rewritten as \[t_{3}^{ij}(r)=\delta_{ij}\left(\delta_{\sigma^{\prime}\sigma}g_{1}(r)+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}Y_{2}^{kl}(\mathbf{\hat{r}})g_{2}(r)\right)\,, \tag{38}\] where \[g_{1}(r) = N_{\phi,R,2}\int d^{2}\hat{n}\,m^{2}\,\tilde{\mathcal{C}}_{0}({\bf r}_{\perp})\delta(r_{\parallel})\,, \tag{39}\] \[g_{2}(r) = -\frac{N_{\phi,R,2}}{2}\int d^{2}\hat{n}\,\left[\left(\frac{3r_{\parallel}^{2}}{r^{2}}-1\right)\hat{O}_{2}(r_{\perp})\tilde{\mathcal{C}}_{1}({\bf r}_{\perp})\delta(r_{\parallel})+\left(\frac{3r_{\perp}^{2}}{r^{2}}-1\right)\hat{O}_{2}(r_{\perp})\tilde{\mathcal{C}}_{2}({\bf r}_{\perp})\delta(r_{\parallel})\right]\,, \tag{40}\] and \[\tilde{\mathcal{C}}_{i}(r_{\perp})=\int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-i\mathbf{q}_{\perp}\cdot{\bf r}_{\perp}}\mathcal{C}_{i}(-\mathbf{q}_{\perp}^{2})\,. \tag{41}\] In all the above expressions we have dropped the corresponding subleading contributions contained in "Rest". It is not surprising that the normalization factors of the energy and momentum distributions diverge in the limit of sharply localized states. This is because for such states, the weight of the energy-momentum eigenstates with larger eigenvalues in the wave packet increases with the reduction of the localization. On the other hand, the overall normalization of the internal pressure and shear force distributions vanishes, as these functions are related to the variation of the action with respect to the spatial metric \(g_{ik}({\bf r})\). This variation corresponds to a change of the location of the system in three-dimensional space, which vanishes for sharply localized states. Notice that for spherically symmetric packets the shape of all distributions does not depend on the localization of the system and is uniquely determined by the corresponding form factors.
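The transverse Fourier transforms \(\tilde{\mathcal{E}}_{i}\), \(\tilde{\mathcal{J}}\) and \(\tilde{\mathcal{C}}_{i}\) of Eqs. (26), (29) and (41) reduce to Hankel transforms of order zero in the radial variable \(r_{\perp}\). A minimal numerical sketch, assuming a toy dipole profile \(F(-\mathbf{q}^{2})=(1+\mathbf{q}^{2}/\Lambda^{2})^{-2}\) in place of the paper's form factors (the cutoff \(\Lambda\) is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k1

LAM = 1.0  # arbitrary cutoff of the toy dipole profile

def form_factor(q2):
    # toy dipole stand-in for E_i(-q^2), J(-q^2), C_i(-q^2)
    return 1.0 / (1.0 + q2 / LAM**2) ** 2

def transverse_density(r):
    """int d^2 q_perp/(2 pi)^2 e^{-i q_perp . r_perp} F(-q_perp^2),
    reduced to a Hankel transform of order zero."""
    integrand = lambda q: q * j0(q * r) * form_factor(q * q) / (2.0 * np.pi)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

def analytic(r):
    # closed form of the 2D Fourier transform of the dipole:
    # Lambda^3 r K_1(Lambda r) / (4 pi)
    return LAM**3 * r * k1(LAM * r) / (4.0 * np.pi)

for r in (0.5, 1.0, 2.0):
    print(r, transverse_density(r), analytic(r))
```

For the actual densities one would substitute the form factor combinations of Appendix C for the toy profile.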
### Gravitational densities in the Breit frame The "naive" densities in terms of the Fourier transforms of the form factors in Breit frame emerge in static approximation by expanding the integrand in Eq. (21) in powers of \(1/m\) up to leading-order terms before performing integration. The resulting expressions have the form: \[t_{\phi}^{00} = m\int\frac{d^{3}Pd^{3}q}{(2\pi)^{3}}\left[\delta_{\sigma^{ \prime}\sigma}\left(A_{0}(-{\bf q}^{2})-\frac{{\bf q}^{2}}{12M^{2}}A_{1}(-{ \bf q}^{2})\right)+\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l}\frac{A_{1}(-{ \bf q}^{2})}{4M^{2}}\right]\phi\left({\bf P}-\frac{{\bf q}}{2}\right)\phi^{*} \left({\bf P}+\frac{{\bf q}}{2}\right)e^{-i{\bf q}\cdot{\bf r}}\,,\] \[t_{\phi}^{0i} = \int\frac{d^{3}Pd^{3}q}{(2\pi)^{3}}\left[\delta_{\sigma^{\prime} \sigma}{\bf P}^{i}\left(A_{0}(-{\bf q}^{2})-\frac{{\bf q}^{2}}{12M^{2}}A_{1}( -{\bf q}^{2})\right)+{\bf P}^{i}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l }\frac{A_{1}(-{\bf q}^{2})}{4M^{2}}+\frac{J(-{\bf q}^{2})}{2}\left(i\hat{ \bf S}_{\sigma^{\prime}\sigma}\times{\bf q}\right)^{i}\right]\] \[\times \phi\left({\bf P}-\frac{{\bf q}}{2}\right)\phi^{*}\left({\bf P}+ \frac{{\bf q}}{2}\right)e^{-i{\bf q}\cdot{\bf r}}\,,\] \[t_{\phi}^{ij} = \frac{1}{m}\int\frac{d^{3}Pd^{3}q}{(2\pi)^{3}}\times\phi\left({\bf P }-\frac{{\bf q}}{2}\right)\phi^{*}\left({\bf P}+\frac{{\bf q}}{2}\right)e^{-i{ \bf q}\cdot{\bf r}}\Bigg{[}\frac{J(-{\bf q}^{2})}{2}\left(P^{i}\left(i\hat{\bf S }_{\sigma^{\prime}\sigma}\times{\bf q}\right)^{j}+P^{j}\left(i\hat{\bf S}_{ \sigma^{\prime}\sigma}\times{\bf q}\right)^{i}\right) \tag{42}\] \[+ P^{i}P^{j}\left(\delta_{\sigma^{\prime}\sigma}\left(A_{0}(-{\bf q }^{2})-\frac{{\bf q}^{2}}{12M^{2}}A_{1}(-{\bf q}^{2})\right)+\hat{Q}_{\sigma^ {\prime}\sigma}^{kl}q^{k}q^{l}\frac{A_{1}(-{\bf q}^{2})}{4M^{2}}\right)\] \[+ \left({\bf q}^{2}\delta_{ij}-q_{i}q_{j}\right)\left\{\delta_{ \sigma^{\prime}\sigma}\left(\frac{D_{0}(-{\bf q}^{2})}{4}+\frac{{\bf q}^{2}}{ 48M^{2}}D_{1}(-{\bf 
q}^{2})-\frac{E(-{\bf q}^{2})}{3}\right)-\hat{Q}_{\sigma^ {\prime}\sigma}^{kl}q^{k}q^{l}\frac{D_{1}(-{\bf q}^{2})}{16M^{2}}\right\}\] \[+ \delta_{\sigma^{\prime}\sigma}\delta_{ij}\left(\overline{f}(-{ \bf q}^{2})\frac{M^{2}}{6}+\overline{c}_{0}(-{\bf q}^{2})\frac{M^{2}}{2}+ \overline{c}_{1}(-{\bf q}^{2})\frac{{\bf q}^{2}}{24}\right)-\overline{f}(-{ \bf q}^{2})M^{2}\hat{Q}_{\sigma^{\prime}\sigma}^{ij}-\overline{c}_{1}(-{\bf q }^{2})\frac{1}{8}\delta_{ij}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l}\] \[- \frac{E(-{\bf q}^{2})}{2}\Big{(}-\delta_{ij}\hat{Q}_{\sigma^{ \prime}\sigma}^{kl}q^{k}q^{l}+q^{k}(\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{j} +\hat{Q}_{\sigma^{\prime}\sigma}^{kj}q^{i})-{\bf q}^{2}\hat{Q}_{\sigma^{\prime }\sigma}^{ij}\Big{)}\,.\] To consider sharply localized wave packets we expand around \(R=0\) by using the method of dimensional counting and obtain \[t_{\rm naive}^{00} = m\int\frac{d^{3}q}{(2\pi)^{3}}\left[\delta_{\sigma^{\prime} \sigma}{\cal E}_{0}^{BF}(-{\bf q}^{2})+\frac{{\cal E}_{2}^{BF}(-{\bf q}^{2})}{ m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l}\right]e^{-i{\bf q}\cdot{\bf r}}\ +{\rm Rest}\,,\] \[t_{\rm naive}^{0i} = \int\frac{d^{3}q}{(2\pi)^{3}}\,i(\hat{\bf S}_{\sigma^{\prime} \sigma}\times{\bf q}){\cal J}^{BF}(-{\bf q}^{2})e^{-i{\bf q}\cdot{\bf r}}\ +{\rm Rest}\,,\] \[t_{\rm naive}^{ij} = \frac{4\pi\delta_{ij}}{3R^{2}m}\int d\tilde{P}\tilde{P}^{4}| \tilde{\phi}(\vec{\bf P})|^{2}\int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{ \bf r}}\left(\delta_{\sigma^{\prime}\sigma}{\cal E}_{0}^{BF}(-{\bf q}^{2})+ \frac{{\cal E}_{2}^{BF}(-{\bf q}^{2})}{m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{ kl}q^{k}q^{l}\right) \tag{43}\] \[+ \frac{1}{m}\int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r} }\Bigg{[}\left({\bf q}^{2}\delta_{ij}-q_{i}q_{j}\right)\delta_{\sigma^{\prime} \sigma}{\cal D}_{0}^{BF}(-{\bf q}^{2})+\frac{{\cal D}_{3}^{BF}(-{\bf q}^{2})}{ m^{2}}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{l}\left({\bf q}^{2}\delta_{ij}-q_{i}q_{j}\right)\] \[+ 
\Big{(}-\delta_{ij}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l}+q^{k}(\hat{Q}_{\sigma^{\prime}\sigma}^{ki}q^{j}+\hat{Q}_{\sigma^{\prime}\sigma}^{kj}q^{i})-{\bf q}^{2}\hat{Q}_{\sigma^{\prime}\sigma}^{ij}\Big{)}{\cal D}_{2}^{BF}(-{\bf q}^{2})\] \[+ \delta_{ij}\delta_{\sigma^{\prime}\sigma}m^{2}{\cal C}_{0}^{BF}(-{\bf q}^{2})-\hat{Q}_{\sigma^{\prime}\sigma}^{ij}m^{2}\overline{f}(-{\bf q}^{2})+\delta_{ij}\hat{Q}_{\sigma^{\prime}\sigma}^{kl}q^{k}q^{l}{\cal C}_{2}^{BF}(-{\bf q}^{2})\Bigg{]}\ +{\rm Rest}\,,\] where the explicit form of the linear combinations of the form factors, \({\cal E}_{i}^{BF}(-{\bf q}_{\perp}^{2})\), \({\cal J}^{BF}(-{\bf q}_{\perp}^{2})\), \({\cal D}_{i}^{BF}(-{\bf q}_{\perp}^{2})\) and \({\cal C}_{i}^{BF}(-{\bf q}_{\perp}^{2})\), is specified in Appendix C, and we have substituted \(M=m\). The \(t_{\rm naive}^{00}\), \(t_{\rm naive}^{0i}\) and the second term of \(t_{\rm naive}^{ij}\) in Eq. (43) coincide with the corresponding expressions of spatial densities obtained as the Fourier transforms of the gravitational form factors in the Breit frame in Ref. [38], provided that one takes into account the normalization factor \(2m\) and performs the \(1/m\) expansion up to the required orders in the expressions of the last reference. ## V Spatial densities in moving frames In this section we consider a spin-1 system in the same physical state of Eq. (1) from the point of view of a moving frame.
In a moving frame, our system is described by the following wave packet [49] \[|\Phi,\mathbf{X},\sigma\rangle_{\mathbf{v}} = \int\frac{d^{3}p}{\sqrt{2E(2\pi)^{3}}}\,\sqrt{\gamma\Big{(}1-\frac{\mathbf{v}\cdot\mathbf{p}}{E}\Big{)}}\,\phi\big{[}\Lambda_{\mathbf{v}}^{-1}\mathbf{p}\big{]}\,e^{-i\mathbf{p}\cdot\mathbf{X}}\sum_{\sigma_{1}}D_{\sigma_{1}\sigma}\Big{[}W\Big{(}\Lambda_{\mathbf{v}},\frac{\Lambda_{\mathbf{v}}^{-1}\mathbf{p}}{m}\Big{)}\Big{]}|p,\sigma_{1}\rangle\,, \tag{44}\] where \(\gamma=(1-v^{2})^{-1/2}\), \(E=\sqrt{m^{2}+\mathbf{p}^{2}}\) and \(\Lambda_{\mathbf{v}}^{-1}\mathbf{p}=\mathbf{\hat{v}}\times\big{(}\mathbf{p}\times\mathbf{\hat{v}}\big{)}+\gamma\big{(}\mathbf{p}\cdot\mathbf{\hat{v}}-vE\big{)}\mathbf{\hat{v}}\) with \(\Lambda_{\mathbf{v}}\) denoting the Lorentz boost from the ZAMF to the moving frame, characterized by the vector of velocity \(\mathbf{v}\), and \(\mathbf{\hat{v}}=\mathbf{v}/|\mathbf{v}|\). The \(D_{\sigma_{1}\sigma}\left[W\right]\) matrices in Eq. (44) refer to the spin-1 representation of Wigner rotations [50]. The calculation of the local spatial densities for spin-1 systems in moving frames proceeds in close analogy to Refs. [32; 28; 30].
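The explicit decomposition of \(\Lambda_{\mathbf{v}}^{-1}\mathbf{p}\) quoted above can be checked numerically: it must preserve the mass shell, and the corresponding energy must transform as \(E^{\prime}=\gamma(E-\mathbf{v}\cdot\mathbf{p})\). A quick sketch (the numerical values of \(m\), \(\mathbf{p}\) and \(\mathbf{v}\) are arbitrary):

```python
import numpy as np

def boost_inverse(p, m, v):
    """Spatial part of Lambda_v^{-1} p, following the decomposition
    v_hat x (p x v_hat) + gamma (p . v_hat - v E) v_hat."""
    p = np.asarray(p, float)
    v = np.asarray(v, float)
    speed = np.linalg.norm(v)
    vhat = v / speed
    gamma = 1.0 / np.sqrt(1.0 - speed**2)
    E = np.sqrt(m**2 + p @ p)
    p_perp = np.cross(vhat, np.cross(p, vhat))   # transverse part, unchanged
    p_par = gamma * (p @ vhat - speed * E) * vhat  # boosted longitudinal part
    return p_perp + p_par

m = 1.0
p = np.array([0.3, -0.2, 0.5])
v = np.array([0.0, 0.0, 0.6])
pp = boost_inverse(p, m, v)
Ep = np.sqrt(m**2 + pp @ pp)
E = np.sqrt(m**2 + p @ p)
gamma = 1.0 / np.sqrt(1.0 - v @ v)
# the energy of the boosted momentum must equal gamma (E - v . p)
print(Ep, gamma * (E - v @ p))
```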
In the limit of sharply localized packets the leading contribution to the matrix element of the electromagnetic current in the above specified moving frame has the form: \[j_{\mathbf{v}}^{\mu}(\mathbf{r}) = \int d^{3}\tilde{P}\frac{d^{3}q}{(2\pi)^{3}}\,\gamma\left(1- \mathbf{\hat{v}}\cdot\mathbf{\hat{\tilde{P}}}\right)\left|\tilde{\phi}\left( \mathbf{\tilde{P}}^{\prime}\right)\right|^{2}e^{-i\mathbf{q}\cdot\mathbf{r}}D _{\sigma^{\prime}\sigma^{\prime}_{1}}^{\dagger}\left[W\left(\Lambda_{\mathbf{v }},\hat{\mathbf{m}}\right)\right]D_{\sigma_{1}\sigma}\left[W\left(\Lambda_{ \mathbf{v}},\hat{\mathbf{m}}\right)\right] \tag{45}\] \[\times \hat{\tilde{P}}^{\mu}\Bigg{\{}\delta_{\sigma^{\prime}_{1}\sigma _{1}}\mathcal{G}_{0}\left(\left(\mathbf{\hat{\tilde{P}}}\cdot\mathbf{q}\right) ^{2}-\mathbf{q}^{2}\right)+\frac{1}{2m^{2}}\,\mathcal{G}_{2}\left(\left(\mathbf{ \hat{\tilde{P}}}\cdot\mathbf{q}\right)^{2}-\mathbf{q}^{2}\right)\hat{Q}_{ \sigma^{\prime}_{1}\sigma_{1}}^{kl}\left(q^{k}q^{l}+\left(\mathbf{\hat{\tilde{ P}}}\cdot\mathbf{q}\right)^{2}\hat{\tilde{P}}_{k}\hat{\tilde{P}}_{l}-2\left(\mathbf{ \hat{\tilde{P}}}\cdot\mathbf{q}\right)\hat{\tilde{P}}_{k}q_{l}\right)\] \[+ \frac{1}{2m^{2}}\hat{Q}_{\sigma^{\prime}_{1}\sigma_{1}}^{kl}\hat{ \tilde{P}}_{k}\hat{\tilde{P}}_{l}\left(\mathbf{q}^{2}-\left(\mathbf{\hat{ \tilde{P}}}\cdot\mathbf{q}\right)^{2}\right)\mathcal{G}_{1}\left(\left(\mathbf{ \hat{\tilde{P}}}\cdot\mathbf{q}\right)^{2}-\mathbf{q}^{2}\right)+\frac{i}{m} \,\hat{\tilde{\mathbf{P}}}\cdot\left(\mathbf{\hat{S}}_{\sigma^{\prime}_{1} \sigma_{1}}\times\mathbf{q}\right)\mathcal{M}\left(\left(\mathbf{\hat{\tilde {P}}}\cdot\mathbf{q}\right)^{2}-\mathbf{q}^{2}\right)\Bigg{\}}\,,\] where \(\hat{\tilde{P}}^{\mu}=(1,\mathbf{\hat{\tilde{P}}})\), \(\mathbf{\hat{\tilde{P}}}=\mathbf{\tilde{P}}/|\mathbf{\tilde{P}}|\), \(\mathbf{\tilde{P}}^{\prime}=\mathbf{\hat{v}}\times\left(\mathbf{\tilde{P}} \times\mathbf{\hat{v}}\right)+\gamma\big{(}\mathbf{\tilde{\tilde{P}}}\cdot 
\mathbf{\hat{v}}-v\tilde{P}\big{)}\mathbf{\hat{v}}\) and the unit vector \(\mathbf{\hat{m}}\) is defined as \(\mathbf{\hat{m}}\equiv\mathbf{\hat{\tilde{P}}}^{\prime}\). The combinations of form factors in Eq. (45) are defined as in the ZAMF, i.e. by Eq. (10). We change the integration variable \(\mathbf{\tilde{P}}\rightarrow\mathbf{\tilde{P}}^{\prime}\) and define a vector-valued function \[\mathbf{n}\big{(}\mathbf{v},\mathbf{\hat{m}}\big{)}=\mathbf{\hat{v}}\times \big{(}\mathbf{\hat{\hat{m}}}\times\mathbf{\hat{v}}\big{)}+\gamma\big{(} \mathbf{\hat{\hat{m}}}\cdot\mathbf{\hat{v}}+v\big{)}\mathbf{\hat{v}}\,. \tag{46}\] Given that \(\mathbf{\tilde{P}}=\mathbf{\hat{v}}\times\big{(}\mathbf{\tilde{P}}^{\prime} \times\mathbf{\hat{v}}\big{)}+\gamma\big{(}\mathbf{\tilde{P}}^{\prime}\cdot \mathbf{\hat{v}}+v\tilde{P}^{\prime}\big{)}\mathbf{\hat{v}}\), it follows that \(\mathbf{\hat{n}}=\mathbf{\hat{\tilde{P}}}\). The Jacobian of the change of variables \(\mathbf{\tilde{P}}\rightarrow\mathbf{\tilde{P}}^{\prime}\) cancels the first factor in the integrands and after some simplifications we obtain \[j_{\mathbf{v}}^{\mu}(\mathbf{r}) = \frac{1}{4\pi}\int d\mathbf{\hat{m}}\,\frac{d^{3}q}{(2\pi)^{3}}\,e ^{-i\mathbf{q}\cdot\mathbf{r}}D_{\sigma^{\prime}\sigma^{\prime}_{1}}^{\dagger} \left[W\left(\Lambda_{\mathbf{v}},\mathbf{\hat{m}}\right)\right]D_{\sigma_{1} \sigma}\left[W\left(\Lambda_{\mathbf{v}},\mathbf{\hat{m}}\right)\right]\hat{n}^{ \mu}\Bigg{\{}\frac{i}{m}\,\mathbf{\hat{n}}\cdot\Big{(}\mathbf{\hat{S}}_{ \sigma^{\prime}_{1}\sigma_{1}}\times\mathbf{q}\Big{)}\,\mathcal{M}\left(-\mathbf{ q}_{\perp}^{2}\right) \tag{47}\] \[+ \delta_{\sigma^{\prime}_{1}\sigma_{1}}\mathcal{G}_{0}\left(-\mathbf{ q}_{\perp}^{2}\right)+\frac{\mathbf{q}_{\perp}^{2}}{2m^{2}}\hat{Q}_{\sigma^{ \prime}_{1}\sigma_{1}}^{kl}\hat{n}^{k}\hat{n}^{l}\mathcal{G}_{1}\left(-\mathbf{ q}_{\perp}^{2}\right)+\hat{Q}_{\sigma^{\prime}_{1}\sigma_{1}}^{kl}\frac{q_{\perp}^{k}q_{ 
\perp}^{l}}{2m^{2}}\mathcal{G}_{2}\left(-\mathbf{q}_{\perp}^{2}\right)\Bigg{\}}\,,\] where \(\hat{n}^{\mu}=(1,\mathbf{\hat{n}})\) and \(\mathbf{q}_{\perp}^{2}=\mathbf{q}^{2}-(\mathbf{\hat{n}}\cdot\mathbf{q})^{2}\). In the IMF with \(v\to 1\) and \(\gamma\rightarrow\infty\), \(\mathbf{\hat{n}}\) turns to \(\mathbf{\hat{v}}\), and using the explicit form of the Wigner rotation matrices the integration over \(\mathbf{\hat{m}}\) can be carried out explicitly. The resulting expression has the form: \[j_{\mathbf{v}}^{\mu}(\mathbf{r}) = \int\frac{d^{3}q}{(2\pi)^{3}}\,e^{-i\mathbf{q}\cdot\mathbf{r}}\,\hat{v}^{\mu}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}\,\mathcal{G}_{0}\left(-\mathbf{q}_{\perp}^{2}\right)+\frac{i}{2m}\mathbf{\hat{v}}\cdot\Big{(}\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\times\mathbf{q}\Big{)}\,\,\mathcal{M}\left(-\mathbf{q}_{\perp}^{2}\right) \tag{48}\] \[+ \frac{1}{6m^{2}}\left(q_{\perp}^{k}q_{\perp}^{l}+\frac{\mathbf{q}_{\perp}^{2}}{2}\,\hat{v}^{k}\hat{v}^{l}\right)\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\mathcal{G}_{2}\left(-\mathbf{q}_{\perp}^{2}\right)\Bigg{\}}\,.\] Analogously to the electromagnetic current, the matrix element of the EMT in a moving frame for a sharply localized state can be written as \[t_{\phi}^{00} = \int d\hat{\mathbf{m}}\frac{d\tilde{P}^{\prime}\tilde{P}^{\prime 2}d^{3}q}{(2\pi)^{3}}\left|\tilde{\phi}\left(\tilde{P}^{\prime}\right)\right|e^{-i\mathbf{q}\cdot\mathbf{r}}D_{\sigma^{\prime}\sigma_{1}^{\prime}}^{\dagger}\left[W\left(\Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]D_{\sigma_{1}\sigma}\left[W\left(\Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]\frac{\gamma(\tilde{P}^{\prime}+v\tilde{P}_{\parallel}^{\prime})}{R}\] \[\times \left\{\delta_{\sigma_{1}^{\prime}\sigma_{1}}\mathcal{E}_{0}(-\mathbf{q}_{\perp}^{2})+\hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{\mathbf{q}_{\perp}^{2}}{m^{2}}\mathcal{E}_{1}(-\mathbf{q}_{
\perp}^{2})+\frac{\mathcal{E}_{2}(-\mathbf{q}_{\perp}^{2})}{m^{2}}\hat{Q}_{ \sigma_{1}^{\prime}\sigma_{1}}^{kl}q_{\perp}^{k}q_{\perp}^{l}+\mathbf{\hat{n} }\cdot(\mathbf{\hat{S}}_{\sigma_{1}^{\prime}\sigma_{1}}\times\mathbf{q})\frac{ i\mathcal{J}(-\mathbf{q}_{\perp}^{2})}{m}\right\}\ +\text{Rest}\,,\] \[t_{\phi}^{0i} = \int d\hat{\mathbf{m}}\frac{d\tilde{P}^{\prime}\tilde{P}^{\prime 2}d^{3}q}{(2 \pi)^{3}}\left|\tilde{\phi}\left(\tilde{P}^{\prime}\right)\right|e^{-i\mathbf{ q}\cdot\mathbf{r}}D_{\sigma^{\prime}\sigma_{1}^{\prime}}^{\dagger}\left[W\left( \Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]D_{\sigma_{1}\sigma}\left[W \left(\Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]\frac{\gamma(\tilde{P }^{\prime}+v\tilde{P}_{\parallel}^{\prime})}{R}\] \[\times \mathbf{\hat{n}}\Bigg{\{}\frac{i\mathcal{J}(-\mathbf{q}_{\perp}^ {2})}{m}(\mathbf{\hat{S}}_{\sigma_{1}^{\prime}\sigma_{1}}\times\mathbf{q}) \cdot\mathbf{\hat{n}}+\delta_{\sigma_{1}^{\prime}\sigma_{1}}\mathcal{E}_{0}(- \mathbf{q}_{\perp}^{2})+\hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}\hat{n}^{ k}\hat{n}^{l}\frac{\mathbf{q}_{\perp}^{2}}{m^{2}}\mathcal{E}_{1}(-\mathbf{q}_{ \perp}^{2})+\frac{\mathcal{E}_{2}(-\mathbf{q}_{\perp}^{2})}{M^{2}}\hat{Q}_{ \sigma_{1}^{\prime}\sigma_{1}}^{kl}q_{\perp}^{l}\Bigg{\}}\ +\text{Rest}\,,\] \[t_{\phi}^{ij} = \int d\hat{\mathbf{m}}\frac{d\tilde{P}^{\prime}\tilde{P}^{\prime 2}d^{3}q}{(2 \pi)^{3}}\left|\tilde{\phi}\left(\tilde{P}^{\prime}\right)\right|e^{-i\mathbf{ q}\cdot\mathbf{r}}D_{\sigma^{\prime}\sigma_{1}^{\prime}}^{\dagger}\left[W\left( \Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]D_{\sigma_{1}\sigma}\left[W \left(\Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]\frac{\gamma(\tilde {P}^{\prime}+v\tilde{P}_{\parallel}^{\prime})}{R} \tag{49}\] \[\times \hat{n}^{i}\hat{n}^{j}\Bigg{\{}\delta_{\sigma_{1}^{\prime}\sigma _{1}}\mathcal{E}_{0}(-\mathbf{q}_{\perp}^{2})+\hat{Q}_{\sigma_{1}^{\prime} \sigma_{1}}^{kl}\hat{n}^{k}\hat{n}^{l}\frac{\mathbf{q}_{\perp}^{2}}{m^{2}} 
\mathcal{E}_{1}(-\mathbf{q}_{\perp}^{2})+\frac{\mathcal{E}_{2}(-\mathbf{q}_{ \perp}^{2})}{m^{2}}\hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}q_{\perp}^{k}q _{\perp}^{l}+\frac{i\mathcal{J}(-\mathbf{q}_{\perp}^{2})}{m}(\mathbf{\hat{S}}_ {\sigma_{1}^{\prime}\sigma_{1}}\times\mathbf{q})\cdot\hat{\mathbf{n}}\Bigg{\}}\] \[+ \int d\hat{\mathbf{m}}\frac{d\tilde{P}^{\prime}\tilde{P}^{\prime 2}d^{3}q}{(2 \pi)^{3}}\left|\tilde{\phi}\left(\tilde{P}^{\prime}\right)\right|e^{-i\mathbf{ q}\cdot\mathbf{r}}D_{\sigma^{\prime}\sigma_{1}^{\prime}}^{\dagger}\left[W\left( \Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]D_{\sigma_{1}\sigma}\left[W \left(\Lambda_{\mathbf{v}},\hat{\mathbf{m}}\right)\right]\frac{R}{2\gamma( \tilde{P}^{\prime}+v\tilde{P}_{\parallel}^{\prime})}\] \[\times \Bigg{\{}\left(\delta_{ij}\mathbf{q}_{\perp}^{2}-q_{i}q_{j} \right)\Bigg{[}\delta_{\sigma_{1}^{\prime}\sigma_{1}}\mathcal{D}_{0}(- \mathbf{q}_{\perp}^{2})+\frac{\mathcal{D}_{2}(-\mathbf{q}_{\perp}^{2})}{m^{2}} \hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}q_{\perp}^{k}q_{\perp}^{l}+\left( \hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}\hat{n}^{l}\frac{\mathbf{q}_{ \perp}^{2}}{m^{2}}-2\frac{i}{m}\hat{\mathbf{n}}\cdot(\mathbf{\hat{S}}_{\sigma_{1 }^{\prime}\sigma_{1}}\times\mathbf{q})\right)\mathcal{D}_{1}(-\mathbf{q}_{ \perp}^{2})\Bigg{]}\] \[+ \delta_{ij}\Bigg{[}\delta_{\sigma_{1}^{\prime}\sigma_{1}}m^{2} \mathcal{C}_{0}(-\mathbf{q}_{\perp}^{2})+\mathcal{C}_{1}(-\mathbf{q}_{\perp}^{2}) \left(\mathbf{q}_{\perp}^{2}\hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}\hat{n}^ {k}\hat{n}^{l}-2\frac{i}{m}(\mathbf{\hat{S}}_{\sigma_{1}^{\prime}\sigma_{1}} \times\mathbf{q})\cdot\mathbf{\hat{n}}\right)+\mathcal{C}_{2}(-\mathbf{q}_{ \perp}^{2})\hat{Q}_{\sigma_{1}^{\prime}\sigma_{1}}^{kl}q_{\perp}^{k}q_{\perp}^{ l}\Bigg{]}\Bigg{\}}+\text{Rest}\,,\] where in exact analogy to the ZAMF we show explicitly only the leading-order contributions for each form factor, and the explicit form of the linear combinations of the form factors, 
\(\mathcal{E}_{i}(-\mathbf{q}_{\perp}^{2})\), \(\mathcal{J}(-\mathbf{q}_{\perp}^{2})\), \(\mathcal{D}_{i}(-\mathbf{q}_{\perp}^{2})\) and \(\mathcal{C}_{i}(-\mathbf{q}_{\perp}^{2})\), is specified in Appendix C. In the IMF with \(\hat{\mathbf{n}}\stackrel{v\to 1}{\longrightarrow}\hat{\mathbf{v}}\) and \(\gamma\to\infty\), the integration over \(\hat{\mathbf{m}}\) can be carried out explicitly using the explicit form of the Wigner rotation matrices, in full analogy to the electromagnetic case. The resulting expressions after dropping the "Rest" contributions have the form: \[t_{\phi}^{00} = 4\pi\gamma N_{\phi,R}\int\frac{d^{3}q}{(2\pi)^{3}}e^{-i\mathbf{q}\cdot\mathbf{r}}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}\mathcal{E}_{0}(-\mathbf{q}_{\perp}^{2})+\left(q_{\perp}^{k}q_{\perp}^{l}+\frac{\mathbf{q}_{\perp}^{2}}{2}\,\hat{v}^{k}\hat{v}^{l}\right)\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\frac{\mathcal{E}_{2}(-\mathbf{q}_{\perp}^{2})}{2m^{2}}+\mathbf{\hat{v}}\cdot(\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\times\mathbf{q})\frac{2i\mathcal{J}(-\mathbf{q}_{\perp}^{2})}{3m}\Bigg{\}}\] \[t_{\phi}^{0i} = 4\pi\gamma N_{\phi,R}\int\frac{d^{3}q}{(2\pi)^{3}}e^{-i\mathbf{q}\cdot\mathbf{r}}\mathbf{\hat{v}}\Bigg{\{}\delta_{\sigma^{\prime}\sigma}\mathcal{E}_{0}(-\mathbf{q}_{\perp}^{2})+\left(q_{\perp}^{k}q_{\perp}^{l}+\frac{\mathbf{q}_{\perp}^{2}}{2}\,\hat{v}^{k}\hat{v}^{l}\right)\hat{Q}_{\sigma^{\prime}\sigma}^{kl}\frac{\mathcal{E}_{2}(-\mathbf{q}_{\perp}^{2})}{2m^{2}}+\mathbf{\hat{v}}\cdot(\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\times\mathbf{q})\frac{2i\mathcal{J}(-\mathbf{q}_{\perp}^{2})}{3m}\Bigg{\}}\,.\] The expressions for the densities in the ZAMF can be restored by averaging the IMF expressions over all directions of \(\mathbf{\hat{v}}\) up to the normalization factor, while the quadrupole structure \(\sim\hat{Q}^{kl}_{\sigma^{\prime}\sigma}\) cannot be obtained this way. ## VI Summary In this work we considered matrix elements of the electromagnetic current and the EMT operators for spin-1 systems calculated for sharply localized one-particle states. We obtained the resulting expressions of the local spatial distributions in terms of the form factors in the ZAMF as well as in moving frames.
By considering the static approximation we also obtained the traditional expressions in terms of the form factors in the Breit frame. Next we discussed the physical interpretation of obtained spatial densities. Having calculated the spatial densities in the IMF, we found that the expressions for the ZAMF densities coincide with the ones obtained by integrating the corresponding IMF expressions over all possible directions, as was also found for spin-0 and spin-1/2 systems. The only exceptions are the quadrupole densities for spin-1 systems, where the mismatch can be traced back to the fact that Wigner rotations modify the quadrupole structure. As the next step we plan to apply the obtained results to the electromagnetic and gravitational densities of the deuteron within the framework of the low-energy effective field theory of QCD. ###### Acknowledgements. This work was supported in part by BMBF (Grant No. 05P21PCFP1), by DFG and NSFC through funds provided to the Sino-German CRC 110 "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 11621131001, DFG Project-ID 196253076 - TRR 110), by ERC NuclearTheory (grant No. 885150), by CAS through a President's International Fellowship Initiative (PIFI) (Grant No. 2018DM0034), by the VolkswagenStiftung (Grant No. 93562), by the EU Horizon 2020 research and innovation programme (STRONG-2020, grant agreement No. 824093), and by the MKW NRW under the funding code NW21-024-A. ## Appendix A Distinguishing between \(m\) and \(M\). Below we demonstrate the importance of distinguishing between \(m\) and \(M\) when taking the static limit. To obtain the charge density in the static approximation we expand the integrand in Eq. (8) in powers of \(1/m\) and keep only the leading order term. Then we expand the integrand in powers of \(R\) around \(R=0\) and keep terms up to the zeroth order. 
Integration over \(P\) now results in \[j^{0}_{\rm naive}({\bf r}) = \int\frac{d^{3}q}{(2\pi)^{3}}e^{-i{\bf q}\cdot{\bf r}}\Bigg{\{} \delta_{\sigma\sigma^{\prime}}\left(G_{1}(-{\bf q}^{2})+\frac{{\bf q}^{2}}{6M ^{2}}G_{3}(-{\bf q}^{2})\right)-G_{3}(-{\bf q}^{2})\hat{Q}^{km}\frac{q^{k}q^{ m}}{2M^{2}}\Bigg{\}}\,. \tag{10}\] By substituting \(M=m\) in Eq. (10) we obtain the expression displayed in Eq. (15). Notice that there would be no contribution of the form factor \(G_{3}\) in Eq. (10) if we would not distinguish between \(m\) and \(M\) and keep only the leading order term of the \(1/m\) expansion. One might think that the expression of the charge density given in Eq. (15) could be also obtained by taking \(M=m\) from the very beginning and keeping the terms up to \(1/m^{2}\) in the \(1/m\) expansion of the integrand. Doing so we obtain \[j^{0}_{\rm naive}({\bf r}) = \int\!\!\frac{\tilde{P}^{2}d\tilde{P}d^{2}\hat{n}d^{3}q}{(2\pi) ^{3}}\,\tilde{\phi}\left(|\tilde{\bf P}|\right)\tilde{\phi}^{\star}\left(| \tilde{\bf P}|\right)e^{-i{\bf q}\cdot{\bf r}}\Bigg{\{}\delta_{\sigma\sigma^ {\prime}}\left(G_{1}(-{\bf q}^{2})+\frac{{\bf q}^{2}}{6m^{2}}G_{3}(-{\bf q}^{ 2})\right)-G_{3}(-{\bf q}^{2})\hat{Q}^{km}\frac{q^{k}q^{m}}{2m^{2}} \tag{11}\] \[+ \frac{\delta_{\sigma^{\prime}\sigma}}{6m^{2}R^{2}}\left(6\tilde{ P}^{2}{\bf q}_{\parallel}^{2}G_{1}^{\prime}(-{\bf q}^{2})+{\bf q}^{2}R^{2}G_{1}(-{ \bf q}^{2})-{\bf q}^{2}R^{2}G_{2}(-{\bf q}^{2})\right)\!+\!\hat{Q}^{kl}_{\sigma ^{\prime}\sigma}\frac{q^{k}q^{l}}{2m^{2}}\left(G_{2}(-{\bf q}^{2})-G_{1}(-{ \bf q}^{2})\right)\!\Bigg{\}}\,.\] Eq. (11) apparently does not reproduce the expression of Eq. (15). Moreover, it contains terms which diverge in \(R\to 0\) limit. This is caused by the non-commutativity of the \(1/m\) expansion with the expansion around \(R=0\). ## Appendix B Spin operators The spin (\(S\)) and quadrupole (\(Q\)) operators defined in terms of the polarization vectors of Eq.(7) (for more details see Ref. 
[43]): \[\langle\sigma^{\prime}|\hat{S}^{i}|\sigma\rangle \equiv (\hat{S}^{i})_{\sigma^{\prime}\sigma}=-i\epsilon^{ijk}\epsilon^{ \star j}_{\sigma^{\prime}}\epsilon^{k}_{\sigma},\] \[\hat{Q}^{ij}_{\sigma^{\prime}\sigma} = \frac{1}{2}\left(\hat{S}^{i}\hat{S}^{j}+\hat{S}^{j}\hat{S}^{i}- \frac{2}{3}S(S+1)\delta^{ij}\right)_{\sigma^{\prime}\sigma}=\frac{1}{3}\delta^ {ij}\delta_{\sigma\sigma^{\prime}}-\frac{1}{2}\left(\hat{\epsilon}^{\star i}_ {\sigma^{\prime}}\hat{\epsilon}^{j}_{\sigma}+\hat{\epsilon}^{\star j}_{\sigma ^{\prime}}\hat{\epsilon}^{i}_{\sigma}\right). \tag{11}\] Using these definitions the following useful relations can be obtained: \[\mathbf{\epsilon}_{\sigma}(\mathbf{\hat{\epsilon}}^{\star}_{\sigma^{ \prime}}\cdot\mathbf{q})-\mathbf{\epsilon}^{\star}_{\sigma^{\prime}}(\mathbf{\hat{ \epsilon}}_{\sigma}\cdot\mathbf{q}) = i(\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}\times\mathbf{q}), \tag{12}\] \[(\mathbf{\hat{\epsilon}}_{\sigma}\cdot\mathbf{q})(\mathbf{\hat{\epsilon}} ^{\star}_{\sigma^{\prime}}\cdot\mathbf{q}) = \frac{\mathbf{q}^{2}}{3}\delta_{\sigma\sigma^{\prime}}-\hat{Q}^{ kl}_{\sigma^{\prime}\sigma}q^{k}q^{l},\] (13) \[(\mathbf{\hat{\epsilon}}_{\sigma}\cdot\hat{\mathbf{n}})(\mathbf{\hat{ \epsilon}}^{\star}_{\sigma^{\prime}}\cdot\mathbf{\hat{n}}) = \frac{1}{3}\delta_{\sigma\sigma^{\prime}}-\hat{Q}^{kl}_{\sigma^{ \prime}\sigma}\hat{n}^{k}\hat{n}^{l},\] (14) \[(\mathbf{\hat{\epsilon}}_{\sigma}\cdot\hat{\mathbf{n}})(\mathbf{\hat{ \epsilon}}^{\star}_{\sigma^{\prime}}\cdot\mathbf{q})+(\mathbf{\hat{\epsilon}}_{ \sigma}\cdot\mathbf{q})(\mathbf{\hat{\epsilon}}^{\star}_{\sigma^{\prime}}\cdot \hat{\mathbf{n}}) = \frac{2}{3}(\mathbf{\hat{n}}\cdot\mathbf{q})\delta_{\sigma\sigma^ {\prime}}-2\hat{Q}^{nk}_{\sigma^{\prime}\sigma}\hat{n}^{k}\hat{q}^{l},\] (15) \[\mathbf{\hat{\epsilon}}^{i}_{\sigma}\mathbf{\hat{\epsilon}}^{\star j}_{ \sigma}+\mathbf{\hat{\epsilon}}^{j}_{\sigma}\mathbf{\hat{\epsilon}}^{\star i}_{\sigma^ {\prime}} = 
\frac{2}{3}\delta^{ij}\delta_{\sigma\sigma^{\prime}}-2\hat{Q}^{ij}_{\sigma^{\prime}\sigma}\,, \tag{16}\] \[(\mathbf{\hat{\epsilon}}^{\star}_{\sigma^{\prime}}\times\mathbf{\hat{\epsilon}}_{\sigma}) = i\mathbf{\hat{S}}_{\sigma^{\prime}\sigma}. \tag{17}\] The multipole tensor of the \(n\)th rank is defined for \(r\neq 0\) as [47]: \[Y^{i_{1},i_{2},...,i_{n}}_{n}(\mathbf{\hat{r}})=\frac{(-1)^{n}}{(2n-1)!!}r^{n+1}\partial^{i_{1}}...\partial^{i_{n}}\frac{1}{r}\,. \tag{18}\] From Eq. (18) it follows, in particular, that \[Y_{0}(\mathbf{\hat{r}})=1,\ \ Y^{i}_{1}(\mathbf{\hat{r}})=\frac{r^{i}}{r},\ \ Y^{ij}_{2}(\mathbf{\hat{r}})=\frac{r^{i}r^{j}}{r^{2}}-\frac{1}{3}\delta^{ij}\,. \tag{19}\] ## Appendix C Linear combinations of the gravitational form factors Linear combinations of gravitational form factors in the ZAMF: \[{\cal E}_{0}(-{\bf q}_{\perp}^{2}) = A_{0}(-{\bf q}_{\perp}^{2})-\frac{{\bf q}_{\perp}^{2}}{12m^{2}}A_{1}(-{\bf q}_{\perp}^{2})+\frac{{\bf q}_{\perp}^{2}}{12m^{2}}\left(4J(-{\bf q}_{\perp}^{2})-2E(-{\bf q}_{\perp}^{2})-2A_{0}(-{\bf q}_{\perp}^{2})+A_{1}(-{\bf q}_{\perp}^{2})\frac{{\bf q}_{\perp}^{2}}{4m^{2}}\right)+\frac{M^{2}}{3m^{2}}\overline{f}(-{\bf q}_{\perp}^{2})\,,\] \[{\cal E}_{2}(-{\bf q}_{\perp}^{2}) = \frac{A_{1}(-{\bf q}_{\perp}^{2})}{4}\,,\] \[{\cal E}_{1}(-{\bf q}_{\perp}^{2}) = \frac{1}{2}\left(A_{0}(-{\bf q}_{\perp}^{2})+E(-{\bf q}_{\perp}^{2})-2J(-{\bf q}_{\perp}^{2})-A_{1}(-{\bf q}_{\perp}^{2})\frac{{\bf q}_{\perp}^{2}}{8m^{2}}\right)-\frac{M^{2}}{{\bf q}_{\perp}^{2}}\overline{f}(-{\bf q}_{\perp}^{2})\,,\] \[{\cal J}(-{\bf q}_{\perp}^{2}) = J(-{\bf q}_{\perp}^{2})-A_{0}(-{\bf q}_{\perp}^{2})+A_{1}(-{\bf q}_{\perp}^{2})\frac{{\bf q}_{\perp}^{2}}{8m^{2}}\,,\] \[{\cal D}_{0}(-{\bf q}_{\perp}^{2}) = \frac{D_{0}(-{\bf q}_{\perp}^{2})}{2}+\frac{{\bf q}_{\perp}^{2}}{24m^{2}}D_{1}(-{\bf q}_{\perp}^{2})-\frac{{\bf q}_{\perp}^{2}}{12m^{2}}\left(D_{0}(-{\bf q}_{\perp}^{2})+\frac{{\bf q}_{\perp}^{2}}{8m^{2}}D_{1}(-{\bf q}_{\perp}^{2})\right)\,,\]
\[{\cal D}_{1}(-{\bf q}_{\perp}^{2}) = \frac{1}{4}\Bigg{[}D_{0}(-{\bf q}_{\perp}^{2})+D_{1}(-{\bf q}_{ \perp}^{2})\frac{{\bf q}_{\perp}^{2}}{8m^{2}}\Bigg{]}\,,\] \[{\cal D}_{2}(-{\bf q}_{\perp}^{2}) = -\frac{1}{8}D_{1}(-{\bf q}_{\perp}^{2})\,,\] \[{\cal C}_{0}(-{\bf q}_{\perp}^{2}) = \overline{c}_{0}(-{\bf q}_{\perp}^{2})+\frac{{\bf q}_{\perp}^{2}} {12m^{2}}\overline{c}_{1}(-{\bf q}_{\perp}^{2})-\frac{{\bf q}_{\perp}^{2}}{6m^ {2}}\left(\overline{c}_{0}(-{\bf q}_{\perp}^{2})+\frac{{\bf q}_{\perp}^{2}}{8 m^{2}}\overline{c}_{1}(-{\bf q}_{\perp}^{2})\right)\,,\] \[{\cal C}_{1}(-{\bf q}_{\perp}^{2}) = \frac{1}{2}\left(\overline{c}_{0}(-{\bf q}_{\perp}^{2})+\frac{{ \bf q}_{\perp}^{2}}{8m^{2}}\overline{c}_{1}(-{\bf q}_{\perp}^{2})\right),\] \[{\cal C}_{2}(-{\bf q}_{\perp}^{2}) = -\frac{\overline{c}_{1}(-{\bf q}_{\perp}^{2})}{4}\,. \tag{101}\] Linear combinations of gravitational form factors in the Breit frame: \[{\cal E}_{0}^{BF}(-{\bf q}^{2}) = A_{0}(-{\bf q}^{2})-\frac{{\bf q}^{2}}{12m^{2}}A_{1}(-{\bf q}^{2})\,,\] \[{\cal E}_{2}^{BF}(-{\bf q}^{2}) = \frac{A_{1}(-{\bf q}^{2})}{4}\,,\] \[{\cal J}^{BF}(-{\bf q}^{2}) = \frac{J(-{\bf q}^{2})}{2}\,,\] \[{\cal D}_{0}^{BF}(-{\bf q}^{2}) = \frac{D_{0}(-{\bf q}^{2})}{4}+\frac{{\bf q}^{2}}{48m^{2}}D_{1}(-{ \bf q}^{2})-\frac{E(-{\bf q}^{2})}{3}\,,\] \[{\cal D}_{2}^{BF}(-{\bf q}^{2}) = -\frac{E(-{\bf q}^{2})}{2}\,,\] \[{\cal D}_{3}^{BF}(-{\bf q}^{2}) = -\frac{D_{1}(-{\bf q}^{2})}{16}\,,\] \[{\cal C}_{0}^{BF}(-{\bf q}^{2}) = \overline{f}(-{\bf q}^{2})\frac{1}{12}+\overline{c}_{0}(-{\bf q}^ {2})\frac{1}{2}+\frac{{\bf q}^{2}}{24m^{2}}\overline{c}_{1}(-{\bf q}^{2})\,,\] \[{\cal C}_{2}^{BF}(-{\bf q}^{2}) = -\frac{\overline{c}_{1}(-{\bf q}^{2})}{8}\,. 
\tag{102}\] ## Appendix D The coefficients \(\hat{d}_{i}\) and \(\hat{e}_{i}\) The differential operators \(\hat{d}_{i}\) and \(\hat{e}_{i}\): \[\hat{d}_{1}(r) = \left(\frac{1}{3}\hat{O}_{2}(r_{\parallel})-\frac{2}{3}\hat{O}_{2} (r_{\perp})\right)\tilde{\mathcal{D}}_{0}(r_{\perp})\delta(r_{\parallel})\,,\] \[\hat{d}_{2}(r) = \left[-\frac{1}{2}\hat{O}_{2}(r_{\perp})-\frac{1}{2}\hat{O}_{2}(r _{\parallel})+\frac{3}{r^{2}}r_{\perp}^{k}r_{\parallel}^{l}\frac{d^{2}}{dr_{ \perp}^{k}dr_{\parallel}^{l}}+\frac{3}{2r^{2}}\left(r_{\perp}^{k}r_{\perp}^{l} \frac{d^{2}}{dr_{\perp}^{k}dr_{\perp}^{l}}+r_{\parallel}^{k}r_{\parallel}^{l} \frac{d^{2}}{dr_{\parallel}^{k}dr_{\parallel}^{l}}\right)\right]\tilde{ \mathcal{D}}_{0}(r_{\perp})\delta(r_{\parallel})\,,\] \[\hat{d}_{3}(r) = \left[\frac{1}{2}\left(3\,\frac{r_{\parallel}^{2}}{r^{2}}-1 \right)\hat{O}_{2}(r_{\perp})+\hat{O}_{3}(r_{\perp},r_{\parallel})\right]\hat{ O}_{2}(r_{\perp})\frac{\tilde{\mathcal{D}}_{1}(r_{\perp})}{m^{2}}\delta(r_{ \parallel})\,,\] \[\hat{d}_{4}(r) = \hat{O}_{4}(r_{\perp},r_{\parallel})\hat{O}_{2}(r_{\perp})\frac{ \tilde{\mathcal{D}}_{1}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,,\] \[\hat{d}_{5}(r) = \hat{O}_{5}(r_{\perp},r_{\parallel})\hat{O}_{2}(r_{\perp})\frac{ \tilde{\mathcal{D}}_{1}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,,\] \[\hat{d}_{6}(r) = \hat{O}_{6}(r_{\perp},r_{\parallel})\hat{O}_{2}(r_{\perp})\frac{ \tilde{\mathcal{D}}_{1}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,,\] \[\hat{e}_{1}(r) = \left[\frac{1}{2}\left(3\,\frac{r_{\perp}^{2}}{r^{2}}-1\right) \hat{O}_{2}(r_{\perp})+\hat{O}_{3}(r_{\parallel},r_{\perp})\right]\hat{O}_{2} (r_{\perp})\frac{\tilde{\mathcal{D}}_{2}(r_{\perp})}{m^{2}}\delta(r_{ \parallel})\,,\] \[\hat{e}_{2}(r) = \hat{O}_{4}(r_{\parallel},r_{\perp})\hat{O}_{2}(r_{\perp})\frac{ \tilde{\mathcal{D}}_{2}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,,\] \[\hat{e}_{3}(r) = \hat{O}_{5}(r_{\parallel},r_{\perp})\hat{O}_{2}(r_{\perp})\frac{ 
\tilde{\mathcal{D}}_{2}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,,\] \[\hat{e}_{4}(r) = \hat{O}_{6}(r_{\parallel},r_{\perp})\hat{O}_{2}(r_{\perp})\frac{ \tilde{\mathcal{D}}_{2}(r_{\perp})}{m^{2}}\delta(r_{\parallel})\,, \tag{45}\] where \[\hat{O}_{3}(x,y) := \frac{x^{4}-6x^{2}y^{2}-2y^{4}}{6r^{4}}\hat{O}_{2}(x)-\frac{5y^{2 }-3r^{2}}{6r^{4}}\left(2y^{a}x^{b}\frac{\partial^{2}}{\partial y^{a}\partial x ^{b}}+y^{2}\hat{O}_{2}(y)\right), \tag{46}\] \[\hat{O}_{4}(x,y) := \frac{4r^{4}-35y^{2}x^{2}}{8r^{4}}\hat{O}_{2}(x)+\frac{5\left(3r ^{2}-7y^{2}\right)}{4r^{4}}y^{a}x^{b}\frac{\partial^{2}}{\partial y^{a}\partial x ^{b}}+\frac{5y^{2}\left(6r^{2}-7y^{2}\right)-24r^{4}}{8r^{4}}\,\hat{O}_{2}(y),\] (47) \[\hat{O}_{5}(x,y) := \frac{7y^{2}x^{2}}{12r^{4}}\hat{O}_{2}(x)+\frac{7y^{2}-3r^{2}}{6r ^{4}}y^{a}x^{b}\frac{\partial^{2}}{\partial y^{a}\partial x^{b}}-\frac{x^{2} \left(r^{2}+7y^{2}\right)}{12r^{4}}\hat{O}_{2}(y),\] (48) \[\hat{O}_{6}(x,y) := \frac{5y^{2}x^{2}}{4r^{4}}\hat{O}_{2}(x)+\frac{5y^{2}-3r^{2}}{2r^ {4}}y^{a}x^{b}\frac{\partial^{2}}{\partial y^{a}\partial x^{b}}+\frac{x^{2} \left(r^{2}-5y^{2}\right)}{4r^{4}}\hat{O}_{2}(y)\,, \tag{49}\] and \[\tilde{\mathcal{D}}_{i}(r_{\perp})=\int\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-i{ \bf q}_{\perp}\cdot{\bf r}_{\perp}}\mathcal{D}_{i}(-{\bf q}_{\perp}^{2})\,. \tag{50}\] The operators \(\hat{O}_{1}\) and \(\hat{O}_{2}\) are defined in Eq. (13).
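As a closing sanity check of the appendix definitions (our own illustration, not part of the paper): assuming the standard spherical polarization vectors for a spin-1 state, which Eq. (7) is not reproduced here to confirm, the spin and quadrupole operators of Appendix B and the relation in Eq. (13) can be verified numerically with plain Python.

```python
# Numerical check of the Appendix B operator definitions for spin 1.
# Assumption (Eq. (7) is not shown in this excerpt): standard spherical
# polarization vectors eps_{+1} = -(x+iy)/sqrt(2), eps_0 = z, eps_{-1} = (x-iy)/sqrt(2).

def levi(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return (i - j) * (j - k) * (k - i) // 2

SQ2 = 2 ** 0.5
pol = {+1: (-1 / SQ2, -1j / SQ2, 0), 0: (0, 0, 1), -1: (1 / SQ2, -1j / SQ2, 0)}
sigmas = (+1, 0, -1)

def conj(v):
    return tuple(x.conjugate() for x in v)

# (S^i)_{sigma' sigma} = -i eps^{ijk} eps*^j_{sigma'} eps^k_{sigma}
S = [[[-1j * sum(levi(i, j, k) * conj(pol[sp])[j] * pol[s][k]
                 for j in range(3) for k in range(3))
       for s in sigmas] for sp in sigmas] for i in range(3)]

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(3)) for c in range(3)]
            for r in range(3)]

def Q_spin(i, j):
    """Q^{ij} from (S^i S^j + S^j S^i)/2 - (2/3) S(S+1) delta^{ij}, with S(S+1) = 2."""
    A, B = matmul(S[i], S[j]), matmul(S[j], S[i])
    return [[0.5 * (A[r][c] + B[r][c]) - (2.0 / 3.0) * (i == j) * (r == c)
             for c in range(3)] for r in range(3)]

def Q_pol(i, j):
    """Q^{ij} from the polarization-vector form in the second equality."""
    out = []
    for sp in sigmas:
        row = []
        for s in sigmas:
            row.append((1.0 / 3.0) * (i == j) * (sp == s)
                       - 0.5 * (conj(pol[sp])[i] * pol[s][j]
                                + conj(pol[sp])[j] * pol[s][i]))
        out.append(row)
    return out

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

# Relation (13) for an arbitrary real vector q.
q = (0.3, -1.1, 0.7)
rel13_lhs = [[dot(pol[s], q) * dot(conj(pol[sp]), q) for s in sigmas] for sp in sigmas]
rel13_rhs = [[dot(q, q) / 3 * (r == c)
              - sum(Q_spin(k, l)[r][c] * q[k] * q[l]
                    for k in range(3) for l in range(3))
              for c in range(3)] for r in range(3)]
```

With these conventions \(\hat{S}^{3}\) comes out diagonal with eigenvalues \(1,0,-1\), both forms of \(\hat{Q}^{ij}\) coincide, and the Casimir \(\sum_{i}\hat{S}^{i}\hat{S}^{i}=S(S+1)\mathbb{1}=2\,\mathbb{1}\) is reproduced.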
2306.06876
On the thermodynamic geometry of one-dimensional spin-3/2 lattice models
Four-dimensional state space geometry is worked out for the exactly solved one-dimensional spin-3/2 lattice with a Blume-Emery-Griffiths (BEG) Hamiltonian as well as a more general one with a term containing a non-zero field coupling to the octopole moments. The phase behaviour of the spin-3/2 chain is also explored extensively and novel phenomena suggesting anomalies in the hyperscaling relation and in the decay of fluctuations are reported for a range of parameter values. Using the method of constrained fluctuations worked out earlier in \cite{asknbads,riekan1} three sectional curvatures and a $3d$ curvature are obtained and shown to separately encode dipolar, quadrupolar and octopolar correlations both near and away from pseudo-criticality. In all instances of a seeming hyperscaling violation the $3d$ scalar curvature is found to encode the correlation length while the relevant $2d$ curvature equals the inverse of singular free energy. For parameter values where the order parameter fluctuation anomalously decays despite a divergence in correlation length the relevant scalar curvature undergoes a sign change to positive values, signalling a possible change in statistics.
Riekshika Sanwari, Soumen Khatua, Anurag Sahay
2023-06-12T05:18:06Z
http://arxiv.org/abs/2306.06876v1
# On the thermodynamic geometry of one-dimensional spin-3/2 lattice models ###### Abstract Four-dimensional state space geometry is worked out for the exactly solved one-dimensional spin-3/2 lattice with a Blume-Emery-Griffiths (BEG) Hamiltonian as well as a more general one with a term containing a non-zero field coupling to the octopole moments. The phase behaviour of the spin-3/2 chain is also explored extensively and novel phenomena suggesting anomalies in the hyperscaling relation and in the decay of fluctuations are reported for a range of parameter values. Using the method of constrained fluctuations worked out earlier in [1; 2] three sectional curvatures and a \(3d\) curvature are obtained and shown to separately encode dipolar, quadrupolar and octopolar correlations both near and away from pseudo-criticality. In all instances of a seeming hyperscaling violation the \(3d\) scalar curvature is found to encode the correlation length while the relevant \(2d\) curvature equals the inverse of singular free energy. For parameter values where the order parameter fluctuation anomalously decays despite a divergence in correlation length the relevant scalar curvature undergoes a sign change to positive values, signalling a possible change in statistics. ## I Introduction Thermodynamic geometry (TG) has been extensively used to investigate phase transition phenomena in a wide class of physical systems ranging from fluids to black holes [3]. By introducing a Riemannian distance measure in the equilibrium state space TG forges a remarkable connection between the geometric invariants of the state space manifold and the underlying microscopic description of the physical system, thus providing a seemingly surprising thermodynamic route to statistical mechanical results. 
For example, up to a constant of order unity, the state space scalar curvature \(R\) has been conjectured to be equal to the correlation volume in the vicinity of a critical point [4; 5; 6] and, furthermore, it can be used to infer the coexistence curves in discontinuous phase transitions as well as the Widom line in the supercritical phase [8; 9; 10; 11]. In addition to its well known correspondence with the correlation length both near and away from criticality, the state space scalar curvature \(R\) also possibly encodes some information of higher order statistics. For instance, it has been widely observed that the sign of \(R\) discriminates between the repulsive and attractive nature of statistical interactions or between solid-like and fluid-like states of aggregation [11; 12; 13; 14; 15; 16; 17]. There has been a sustained interest in the TG of exactly solved models, especially the one-dimensional lattice spin models. In spite of the absence of a finite temperature critical point, the exactly solved models often exhibit sufficiently rich ground state phase structures which contain clues to understanding their higher dimensional counterparts. With analytical control at hand, they also provide fertile testing grounds for the predictions and claims of thermodynamic geometry. For example, the scalar curvature \(R\) can be directly compared to the correlation length obtained via the transfer matrix of one-dimensional spin models. One of the first such models used to successfully verify TG is the one-dimensional Ising model which has a pseudocritical point at zero temperature in zero magnetic field. The scalar curvature earlier worked out numerically in [18] was later found to be a surprisingly simple expression [19].
Some other cases where TG has been applied to exactly solved models are the Ising model on a Bethe lattice [20], the Ising model on planar random graphs [21], the spherical model [22], the one-dimensional Potts model [23], a decorated two-parameter Ising spin chain with frustration [24], and a ferromagnetic Ising model in an external magnetic field [25]. The TG of the one-dimensional spin-one model and of its mean field approximation was investigated by us earlier in [2] and [26]. See also [27; 28] for an alternative approach to the geometry of the spin-one model. With their rich phase structure comprising critical lines, coexisting surfaces and triple points, higher spin lattice models have been of continued interest in the study of phase transition phenomena in systems with competing order parameters. One of the most popular spin-one models, the Blume-Emery-Griffiths (BEG) model, originally formulated to study the phase behaviour of He\({}^{3}\)-He\({}^{4}\) mixtures, has been used to study the phase behaviour in a variety of systems [29; 30; 31; 32]. The spin-3/2 model, the system of interest in this work, was first used by Krinsky and Mukamel [33] as an improvement over the spin-1 lattice gas model of ternary fluid mixtures of Mukamel and Blume [32]. It was argued in [33] that a true representation of the non-symmetric tricritical point found in ternary mixtures requires a four-dimensional parameter space which necessitates at least a spin-3/2 lattice. In addition to the dipolar and quadrupolar order parameters found in the spin-1 model, the spin-3/2 case includes an additional octopolar order parameter. Thus, the Krinsky-Mukamel model is able to accommodate three different types of particles and a vacancy as required for a ternary mixture instead of just two particles and a vacancy in the Mukamel-Blume model.
Indeed, the mean field phase structure of the spin-3/2 lattice gas model gives a qualitatively correct picture of multicritical phase behaviour in ternary mixtures including the non-symmetric tricritical points found experimentally. On the other hand, a spin-3/2 BEG model containing only dipolar and quadrupolar interactions without crystal field was first used to model the phase behaviour of \(DyVO_{4}\) [34]. Other works include a study of the spin-3/2 BEG model with a crystal field within the mean field approximation and also via a Monte Carlo simulation [35] and by taking the renormalization-group approach [36]. The antiferromagnetic spin-3/2 Blume-Capel model was studied in [37]. Higher spin lattice models are especially interesting from a TG perspective. With their state space manifold of dimension three or more (e.g., three dimensions in spin-1 and four in spin-3/2) they provide enough opportunity to explore higher dimensional thermodynamic geometry [38]. As against the two dimensional case where the only independent curvature is the state space scalar curvature \(R\), now the full Riemannian curvature tensor can be exploited for more detailed information about underlying statistical interactions. In particular, curvatures on appropriately chosen slices of the higher dimensional thermodynamic manifold could be harnessed for information about specific order parameter correlations. In a basic form this approach of investigating curvatures on hypersurfaces was used in [39; 40] in the context of Kerr-Newman black holes and in [41] for Kerr-Newman AdS black holes. A formal mathematical basis termed the method of constrained fluctuations was worked out in [1] in the context of extended phase space Kerr-AdS black holes. The method was further developed in our earlier work [2] where it was applied to an ordinary thermodynamic system, namely the one-dimensional spin-one model.
We were able to show in detail how curvatures on suitable hypersurfaces separately encode correlations in the dipole and quadrupole fluctuations. In this work we obtain the state space Riemannian geometry for the exactly solved one-dimensional spin-3/2 lattice model and extensively investigate its phase behaviour in the light of geometry. Three sectional curvatures and a \(3d\) curvature are obtained by taking suitable hypersurfaces in the four dimensional state space manifold and their properties are investigated vis-a-vis the fluctuations and correlations in order parameters. There are three order parameters in the model, which gives rise to the possibility of two or more correlation lengths. We present our results on the extent to which geometry encodes the rich phase behaviour of the system. Further, we explore the connection between the sign changes in the curvatures and the change in pattern of fluctuations in order parameters. This paper is organised as follows. In section II we obtain the most general Hamiltonian of a spin-3/2 lattice with nearest-neighbour interactions and then discuss its BEG limit and a more general one with a term containing a non-zero field coupling to the octopole moments. Restricting to the spin-3/2 chain we obtain the free energy, fluctuation moments and correlation lengths via its transfer matrix. In section III we discuss in detail the ground state phase behaviour of the spin-3/2 chain both for the BEG case and the more general case. Several new results are reported. In section IV we briefly outline our development of the relevant spin-3/2 curvatures via the method of constrained fluctuations mentioned earlier. In section V we report the asymptotic expressions of the singular free energy and the correlation lengths in different parameter regimes. In section VI we present the results of our detailed investigations into the state space geometry of the spin-3/2 chain.
Finally, in the concluding section VII we summarize our key results and try to define the scope of our work. ## II One-dimensional spin-3/2 model Following [33] and adding some explanation of our own, we first briefly outline a justification for the most general form of the spin-3/2 Hamiltonian from a ternary mixture lattice gas perspective. Let \(S=3/2,1/2,-1/2\) represent, respectively, particles 1, 2, and 3, and \(S=-3/2\) represent a vacancy or particle 0. The interaction between the particles is given by the coupling strengths \(K_{11},K_{22},K_{33},K_{12},K_{13},K_{23}\) where the subscripts indicate the type of particles. The Hamiltonian can then be written in terms of projection operators (functions), \[\mathcal{H}=-\sum_{\langle ij\rangle}\sum_{\lambda}\sum_{\sigma}K_{\lambda\sigma}\mathcal{P}_{\lambda}(S_{i})\mathcal{P}_{\sigma}(S_{j})-\sum_{i}\sum_{\lambda}\mu_{\lambda}\mathcal{P}_{\lambda}(S_{i}) \tag{1}\] where \(\lambda,\sigma=1,2,3\) label the different particles, \(\mu_{\lambda}\) are the chemical potentials and the \(\mathcal{P}_{\lambda}\) are the projection functions which we explain now. The projection functions have the property that \[\sum_{\lambda=0}^{3}\mathcal{P}_{\lambda}(S)=1\hskip 28.452756pt;\hskip 28.452756pt\mathcal{P}_{\lambda}(S)\mathcal{P}_{\nu}(S)=\delta_{\lambda\nu}\mathcal{P}_{\nu}(S) \tag{2}\] and are easy to construct as polynomials of third order in the spin variable \(f(S)=a_{0}+a_{1}\,S+a_{2}\,S^{2}+a_{3}\,S^{3}\) such that, for example, \(\mathcal{P}_{1}(S)=1\) for \(S=3/2\) and zero otherwise, etc. The projection functions turn out to be \[\mathcal{P}_{1}(S) = \frac{1}{48}\left(8S^{3}+12S^{2}-2S-3\right)\] \[\mathcal{P}_{2}(S) = \frac{1}{16}\left(-8S^{3}-4S^{2}+18S+9\right)\] \[\mathcal{P}_{3}(S) = \frac{1}{16}\left(8S^{3}-4S^{2}-18S+9\right)\] \[\mathcal{P}_{0}(S) = \frac{1}{48}\left(-8S^{3}+12S^{2}+2S-3\right).
\tag{3}\] Plugging the projection operators back into the Hamiltonian, eq.(1), and rearranging the spin terms, one obtains the most general nearest neighbour spin-3/2 Hamiltonian. We present its one-dimensional version, \[{\cal H} = -J\sum_{i}S_{i}\,S_{i+1}-K\sum_{i}S_{i}^{2}\,S_{i+1}^{2}-L\sum_{i}S_{i}^{3}\,S_{i+1}^{3} \tag{5}\] \[- \frac{M_{1}}{2}\sum_{i}(S_{i}^{2}\,S_{i+1}+S_{i}\,S_{i+1}^{2})-\frac{M_{2}}{2}\sum_{i}(S_{i}\,S_{i+1}^{3}+S_{i}^{3}\,S_{i+1})\] \[- \frac{M_{3}}{2}\sum_{i}(S_{i}^{2}\,S_{i+1}^{3}+S_{i}^{3}\,S_{i+1}^{2})-H\sum_{i}S_{i}+D\sum_{i}S_{i}^{2}\] \[- W\sum_{i}S_{i}^{3}.\] The spin-3/2 case admits three order parameters, namely the mean magnetization per site \(M\), the mean quadrupole moment per site \(Q\) and the mean octopole moment per site \(\Omega\). \[M = \left<S_{i}\right>,\] \[Q = \left<S_{i}^{2}\right>\ \ \mbox{and}\] \[\Omega = \left<S_{i}^{3}\right>.\] We note that owing to translation invariance the lattice index subscript \(i\) on the spin variable \(S\) is of no consequence. Each of the nine spin coupling strengths \(J,K,\ldots,W\) in eq.(5) above is given in terms of the nine lattice gas couplings, namely the six \(K\)'s and the three \(\mu\)'s. The spin coupling strengths \(H,D\,\mbox{and}\,W\) depend on all the six \(K\)'s and the three \(\mu\)'s as can be checked. Considering the potentials \(\mu\) as external, tunable parameters, we can reinterpret the couplings \(H\), \(D\) and \(W\) as external, tunable 'fields'. On the other hand the remaining six spin couplings \(J,K,L,M_{1},M_{2}\) and \(M_{3}\) depend only on the six \(K\)'s and can be thought of as 'internal'.
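The projection functions of Eq. (3) can be confirmed with exact rational arithmetic. The short check below is our own illustration; the vacancy projector (the last entry of Eq. (3)) is labelled \(0\) here, following the lattice-gas convention \(\lambda=0,\ldots,3\) of Eq. (2).

```python
# Exact verification of the spin-3/2 projection functions, Eqs. (2)-(3).
# Convention: S = 3/2, 1/2, -1/2 are particles 1, 2, 3 and S = -3/2 is the
# vacancy, labelled 0 here (the last projector of Eq. (3)).
from fractions import Fraction as F

def P(lam, S):
    """Projection polynomial P_lambda(S) for lam in {0, 1, 2, 3}."""
    S = F(S)
    if lam == 1:
        return (8*S**3 + 12*S**2 - 2*S - 3) / 48
    if lam == 2:
        return (-8*S**3 - 4*S**2 + 18*S + 9) / 16
    if lam == 3:
        return (8*S**3 - 4*S**2 - 18*S + 9) / 16
    if lam == 0:  # projects onto the vacancy S = -3/2
        return (-8*S**3 + 12*S**2 + 2*S - 3) / 48
    raise ValueError(lam)

SPIN_OF = {1: F(3, 2), 2: F(1, 2), 3: F(-1, 2), 0: F(-3, 2)}
LABELS = (0, 1, 2, 3)
SPINS = tuple(SPIN_OF.values())
```

Each \(\mathcal{P}_{\lambda}\) acts as an exact indicator of its spin value, and the partition-of-unity and orthogonality properties of Eq. (2) hold identically over the four allowed spin values.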
To keep things simple and symmetric we switch off the 'cross' interactions as well as the octopole-octopole coupling strength \(L\) to obtain the working Hamiltonian for this investigation, \[{\cal H}_{3/2} = -J\sum_{i}S_{i}\,S_{i+1}-K\sum_{i}S_{i}^{2}\,S_{i+1}^{2} \tag{6}\] \[- H\sum_{i}S_{i}+D\sum_{i}S_{i}^{2}-W\sum_{i}S_{i}^{3}.\] While the above Hamiltonian is drastically simplified, it is complex enough to include non-trivial effects peculiar to the spin-3/2 lattice1. The transfer matrix for the above Hamiltonian can be solved numerically to obtain the largest eigenvalue and the correlation function (more about it later). Setting \(H\) and \(W\) to zero renders additional symmetry to the transfer matrix, and it becomes amenable to closed form solutions. Of course, now the model becomes a spin-3/2 BEG model as treated in [34] in the context of the \(\mbox{DyVO}_{4}\) phase structure. But, as we shall see in the sequel, its geometry and phase structure already reflect the complexity of spin-3/2. Footnote 1: Since the spin-spin coupling \(J\) is always positive in this work, in all the calculations we shall always scale it away, though it will occasionally appear in the formulae in the usual way.
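The transfer matrix construction rests on splitting the Boltzmann weight of eq.(6) into symmetric bond weights. The sketch below is our own illustration (with \(J=1\), \(H=W=0\), and the site term \(DS_{i}^{2}\) shared equally between the two bonds meeting at site \(i\)); it verifies the factorization \(e^{-\beta\mathcal{H}_{3/2}}=\prod_{i}\exp[\beta(S_{i}S_{i+1}+KS_{i}^{2}S_{i+1}^{2}-\tfrac{D}{2}(S_{i}^{2}+S_{i+1}^{2}))]\) on a random periodic configuration.

```python
# Check that exp(-beta*H) for the working Hamiltonian, eq.(6), with
# H = W = 0 and J = 1, factorizes into symmetric bond weights on a
# periodic chain (the D*S_i^2 site term is shared by the two adjacent bonds).
import math
import random

SPINS = (1.5, 0.5, -0.5, -1.5)

def chain_energy(cfg, D, K):
    """H_{3/2} of eq.(6) with H = W = 0 on a periodic chain (J = 1)."""
    n = len(cfg)
    E = sum(-cfg[i] * cfg[(i + 1) % n] - K * cfg[i]**2 * cfg[(i + 1) % n]**2
            for i in range(n))
    return E + D * sum(s**2 for s in cfg)

def bond_weight(s, sp, beta, D, K):
    return math.exp(beta * (s * sp + K * s**2 * sp**2 - D * (s**2 + sp**2) / 2))

def factorized_weight(cfg, beta, D, K):
    n = len(cfg)
    w = 1.0
    for i in range(n):
        w *= bond_weight(cfg[i], cfg[(i + 1) % n], beta, D, K)
    return w

random.seed(1)
cfg = [random.choice(SPINS) for _ in range(8)]
beta, D, K = 0.6, 0.4, -0.15
direct = math.exp(-beta * chain_energy(cfg, D, K))
factored = factorized_weight(cfg, beta, D, K)
```

The agreement of `direct` and `factored` is exact up to floating-point rounding, which is why the bond weight can serve as the transfer matrix element in the zero-field case.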
The transfer matrix for the zero field spin-3/2 BEG model obtained by setting \(H=W=0\) in the Hamiltonian in eq.(6) is \[T=\left(\begin{array}{cccc}e^{\frac{81K\beta}{16}-\frac{9D\beta}{4}+\frac{9\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}+\frac{3\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}-\frac{3\beta}{4}}&e^{\frac{81K\beta}{16}-\frac{9D\beta}{4}-\frac{9\beta}{4}}\\ e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}+\frac{3\beta}{4}}&e^{\frac{K\beta}{16}-\frac{D\beta}{4}+\frac{\beta}{4}}&e^{\frac{K\beta}{16}-\frac{D\beta}{4}-\frac{\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}-\frac{3\beta}{4}}\\ e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}-\frac{3\beta}{4}}&e^{\frac{K\beta}{16}-\frac{D\beta}{4}-\frac{\beta}{4}}&e^{\frac{K\beta}{16}-\frac{D\beta}{4}+\frac{\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}+\frac{3\beta}{4}}\\ e^{\frac{81K\beta}{16}-\frac{9D\beta}{4}-\frac{9\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}-\frac{3\beta}{4}}&e^{\frac{9K\beta}{16}-\frac{5D\beta}{4}+\frac{3\beta}{4}}&e^{\frac{81K\beta}{16}-\frac{9D\beta}{4}+\frac{9\beta}{4}}\end{array}\right) \tag{7}\] with rows and columns ordered by the spin values \(S=3/2,1/2,-1/2,-3/2\), so that \(T_{SS^{\prime}}=\exp\left[\beta\left(S\,S^{\prime}+K\,S^{2}S^{\prime 2}-\frac{D}{2}\left(S^{2}+S^{\prime 2}\right)\right)\right]\) with \(J\) scaled to unity. This allows for a closed form expression of the largest eigenvalue, \[\lambda_{+} = \frac{1}{2}e^{-\frac{9\beta}{4}-\frac{9\beta D}{4}+\frac{K\beta}{16}}\Big{(}e^{\frac{5\beta}{2}+2\beta D}+e^{2\beta+2\beta D}+e^{5\beta K}+e^{\frac{9\beta}{2}+5\beta K}+\sqrt{X^{2}-4Y}\Big{)}, \tag{8}\] where \[X=-e^{2\beta+2\beta D}-e^{\frac{5\beta}{2}+2\beta D}-e^{5\beta K}-e^{\frac{9\beta}{2}+5\beta K}\] and \[Y = -e^{3\beta+2\beta D+\beta K}-2e^{\frac{9\beta}{2}+2\beta D+\beta K}-e^{6\beta+2\beta D+\beta K}+e^{2\beta+2\beta D+5\beta K}+e^{\frac{5\beta}{2}+2\beta D+5\beta K}+e^{\frac{13\beta}{2}+2\beta D+5\beta K}+e^{7\beta+2\beta D+5\beta K}.\] The Massieu function per spin (free energy, in short) can be obtained as the log of the largest eigenvalue \(\lambda_{+}\), \[\psi=\log\lambda_{+}. \tag{9}\] More
generally, we will retain a non-zero \(H\) and \(W\) so that the transfer matrix can now be solved only numerically, implying that the free energy and subsequent operations on it are performed numerically. The correlation function between spins \(R\) lattice sites apart is \[\langle S_{1}^{\alpha}S_{1+R}^{\beta}\rangle-\langle S_{1}^{\alpha}\rangle\langle S_{1+R}^{\beta}\rangle=\sum_{j\neq 1}\left(\frac{\lambda_{j}}{\lambda_{1}}\right)^{R}\,\langle t_{j}|{\bf S}^{\alpha}|t_{1}\rangle\,\langle t_{1}|{\bf S}^{\beta}|t_{j}\rangle, \tag{10}\] where \({\bf S}\) is the spin matrix 2 Footnote 2: Similarly, the quadrupole and octopole matrices are \({\bf S^{2}}\) and \({\bf S^{3}}\). \[{\bf S}=\left(\begin{array}{cccc}\frac{3}{2}&0&0&0\\ 0&\frac{1}{2}&0&0\\ 0&0&-\frac{1}{2}&0\\ 0&0&0&-\frac{3}{2}\end{array}\right) \tag{11}\] Here \(|t_{j}\rangle\) denote the eigenvectors of the transfer matrix so that we have \(T=\sum_{i}|t_{i}\rangle\lambda_{i}\langle t_{i}|\) with eigenvalues \(\lambda_{+}=\lambda_{1}>\lambda_{2}>\lambda_{3}>\lambda_{4}\). For non-zero \(H\) or \(W\), it can be checked that the matrix elements \(\langle t_{2}|{\bf S}^{\alpha}|t_{1}\rangle\) are non-zero for \(\alpha=1,2,3\) so that in this case there is only one correlation length \[\xi_{m}^{-1}=-\log{\left|\frac{\lambda_{2}}{\lambda_{1}}\right|}. \tag{12}\] However, when both \(H\) and \(W\) are set to zero the matrix element \(\langle t_{2}|{\bf S}^{2}|t_{1}\rangle\) becomes zero, with the leading non-zero element being \(\langle t_{3}|{\bf S}^{2}|t_{1}\rangle\). Thus, in this case there are two correlation lengths, the above one for the dipole moment and \[\xi_{q}^{-1}=-\log{\left|\frac{\lambda_{3}}{\lambda_{1}}\right|} \tag{13}\] for the quadrupole moment. On the other hand, the correlation length for the octopolar fluctuations is always the same as the dipolar correlation length, namely \(\xi_{m}\) in eq.(12).
This is because for the octopolar case the matrix element \(\langle t_{2}|{\bf S}^{3}|t_{1}\rangle\) is already non-zero everywhere. Satisfyingly, as we shall show in the sequel, geometry robustly encodes this feature. We also point out an interesting feature of the quadrupole fluctuations. Using eq. (10) it can be checked that for low temperatures the second moment of quadrupole fluctuations per lattice site \(\sigma_{q}^{2}\) goes as \[\sigma_{q}^{2}=\frac{|\langle t_{1}|{\bf S}^{2}|t_{3}\rangle|^{2}}{1-\lambda_{3}/\lambda_{1}}\sim\frac{|Q-Q_{0}|}{1-\lambda_{3}/\lambda_{1}}. \tag{14}\] This expression is similar to the spin-one case in [43]. In the cases for which \(\xi_{q}\) diverges, the denominator of eq.(14) goes to zero. However, depending on the _relative speed_ with which the numerator and denominator tend towards zero, \(\sigma_{q}^{2}\) might or might not diverge. In particular, there will also occur counter-intuitive cases where \(\sigma_{q}^{2}\) decays even as the correlation length \(\xi_{q}\) diverges. In the special case where both the numerator and denominator decay equally fast, \(\sigma_{q}^{2}\) would approach a constant value at low temperature. Remarkably, as we shall demonstrate in the sequel, geometry will efficiently encode all these cases. ## III Ground state phase structure of the spin-3/2 chain The zero temperature phase structure is realized by comparing the energies of different ground state configurations obtained from the spin-3/2 Hamiltonian in eq.(6). Thus one obtains the ground state energy of a nearest-neighbour pair with spins \(S\) and \(S^{\prime}\), [35] \[{\cal E}_{SS^{\prime}} = -J\,S\,S^{\prime}-K\,S^{2}\,S^{\prime 2}-\frac{H}{2}(S+S^{\prime})+\frac{D}{2}(S^{2}+S^{\prime 2})-\frac{W}{2}(S^{3}+S^{\prime 3}) \tag{15}\] which may be minimized over different nearest-neighbour configurations conveniently labeled as \(\{33\},\{\bar{3}3\},\{31\},\{\bar{3}1\},\{11\},\{\bar{1}1\}\), etc.
Here, for example \(\{3\bar{3}\}\) would represent the nearest-neighbour configuration \(\{\frac{3}{2},-\frac{3}{2}\}\), etc. Since the Hamiltonian is reflection symmetric, a change in sign of \(H\) and \(W\) together will simply flip the sign of the stable spin configurations so that we will not lose any generality by considering only the cases of \(H,W\) both positive and the case of \(H\) positive and \(W\) negative. For ferromagnetic \(J\) (which is always the case in our investigation) the opposite sign pairs like \(\{3\bar{1}\}\), etc. are always of a higher energy so they will not figure anywhere in the phase diagrams to follow. For the case \(H,W\) both non-negative we obtain from the pair-energy equation eq.(15) three planes of coexistence separating the \(H-D-K\) space, parametrized by \(W\), into three ground-state configurations \(\{33\}\), \(\{11\}\), and \(\{31\}\). Thus, the planes \({\cal L}_{1}\), \({\cal L}_{2}\), and \({\cal L}_{3}\) given as \[8D-20K-4H = 8+13W\] \[8D-36K-4H = 12+13W\ \mbox{and}\] \[8D-4K-4H = 4+13W \tag{16}\] separate, respectively, \(\{33\}\) and \(\{11\}\), \(\{33\}\) and \(\{31\}\), and, finally, \(\{31\}\) and \(\{11\}\) ground state configurations. As we shall see subsequently, while the coexistence plane \({\cal L}_{1}\) shows criticality even for non-zero \(H\) and \(W\), the \({\cal L}_{2},{\cal L}_{3}\) planes become critical only when they intersect with the plane \(H=W=0\) in what we shall term the \(l_{2}\) and \(l_{3}\) lines in the following. The coexistence planes intersect in the triple line in the \(H-D-K\) space, given by the equations \[2D-H=\frac{3}{4}+\frac{13W}{4}\ \ \mbox{and}\ \ K=-\frac{1}{4}. \tag{17}\] We shall discuss separately the phase structures of the case \(W=0\), namely the spin-3/2 BEG case, and the more general case of non-zero \(W\). The former, restricted case will be investigated more comprehensively while in the latter, more general case we shall not claim any completeness.
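The coexistence planes of eq.(16) and the triple line of eq.(17) follow from equating competing pair energies in eq.(15); this can be confirmed with exact arithmetic. A short illustrative check (our own sketch, with \(J=1\)):

```python
# Exact derivation check: the coexistence planes, eq.(16), and the triple
# line, eq.(17), follow from equating pair energies of eq.(15) with J = 1.
from fractions import Fraction as F

def E_pair(S, Sp, H, D, K, W):
    """Ground state energy of a nearest-neighbour pair, eq.(15), J = 1."""
    S, Sp = F(S), F(Sp)
    return (-S * Sp - K * S**2 * Sp**2 - H * (S + Sp) / 2
            + D * (S**2 + Sp**2) / 2 - W * (S**3 + Sp**3) / 2)

P33 = (F(3, 2), F(3, 2))   # the {33} pair
P31 = (F(3, 2), F(1, 2))   # the {31} pair
P11 = (F(1, 2), F(1, 2))   # the {11} pair

# Arbitrary exact sample values for the remaining fields/couplings.
H, K, W = F(1, 3), F(-1, 10), F(1, 5)

# Solve each plane of eq.(16) for D; on it, the two pair energies must coincide.
D1 = (8 + 13*W + 20*K + 4*H) / 8    # L1: separates {33} and {11}
D2 = (12 + 13*W + 36*K + 4*H) / 8   # L2: separates {33} and {31}
D3 = (4 + 13*W + 4*K + 4*H) / 8     # L3: separates {31} and {11}

# On the triple line, eq.(17): K = -1/4 and 2D - H = 3/4 + 13W/4.
Kt = F(-1, 4)
Dt = (F(3, 4) + 13*W/4 + H) / 2
```

With `Fraction` arithmetic the energy equalities hold exactly, so the three plane equations and the triple line condition are reproduced without numerical tolerance.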
### III.1 Phase behaviour in the BEG case, \(W=0\) Fig.(1a) shows the phase diagram in the \(H-D-K\) parameter space of the spin-3/2 BEG chain, namely the Hamiltonian in eq.(6) with \(W\) set to zero. Above the green coloured plane \(\mathcal{L}_{1}\) extending to the top right and the red coloured plane \(\mathcal{L}_{2}\) extending to the left, \(\{33\}\) remains the ground state of the chain. Below the plane \(\mathcal{L}_{1}\) the configuration \(\{11\}\) becomes the ground state. Finally, sandwiched between \(\mathcal{L}_{2}\) and the black coloured plane \(\mathcal{L}_{3}\) extending to the bottom left, the \(\{31\}\) configuration becomes the most stable. Henceforth, we shall label the region with globally stable \(\{33\}\) ground state configuration as the \(\{\mathbf{33}\}\) region, etc. Fig.(1b) is a zero-field projection of the phase diagram of fig.(1a) on the \(D-K\) plane with \(H=W=0\). The lines labelled \(l_{1}\), \(l_{2}\) and \(l_{3}\) represent the intersections of the corresponding coexistence planes with the plane \(H=0\) in the \(H-D-K\) space. The three coexistence lines intersect at the triple point \(\mathbf{T}\) with \(\{D=3/8,K=-1/4\}\). This phase diagram was first reported in [35]. The whole of the zero field phase diagram (\(H=W=0\)) of the BEG case is critical for the spin fluctuations and octopole fluctuations. However, it can be checked that, except for a segment of the coexistence line \(l_{1}\), the quadrupole fluctuations are finite everywhere and mostly decay to zero for low temperatures. Starting with the triple point \(\mathbf{T}\) at \(D=3/8\), where the variance \(\sigma_{q}^{2}\) approaches unity, the quadrupole fluctuations show a slow divergence to infinity which becomes steeper along \(l_{1}\) until the point \(D=1,K=0\) where the divergence is the sharpest.
Moving further to the right where \(K\) is positive, the divergence slows down until \(\sigma_{q}^{2}\) flattens to \(8\) at the point \(\{D=21/16,K=1/8\}\), labeled \(\mathbf{P}\) in fig.(1b). Beyond \(\mathbf{P}\) the quadrupole fluctuations decay to zero on the line \(l_{1}\). Finally, the moment \(\sigma_{q}^{2}\) always approaches small positive values on both lines \(l_{2}\) and \(l_{3}\), while everywhere else in the zero field plane it decays to zero as mentioned earlier.

Figure 1: Ground state phase diagram for the spin-\(3/2\) model in \((a)\) \(D\)-\(K\)-\(H\) space with \(W=0\). The three coexistence planes partition the space into three phases. The triple line is the intersection of these three planes. \((b)\) The projection of \((a)\) in the \(K\)-\(D\) plane with \(H=0\). Here the projection of the triple line is the point T.

Figure 2: Ground state phase diagram for the spin-\(3/2\) model \((a)\) in the \(H\)-\(D\) plane with \(W=0,K=0.1\) and \((b)\) the non-zero field ‘\(l_{1}\)’, ‘\(l_{2}\)’ and ‘\(l_{3}\)’ lines in the \(D\)-\(K\) plane with \(H=0.15\). The point \(\mathbf{P}\) beyond which both dipole and quadrupole fluctuations decay is at \(D=2.6\) and \(K=0.6\). Compare with the zero-field \(l_{1}\) line in fig.(1b).

Moving on to non-zero values of \(H\), we note that the coexistence planes \(\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) of fig.(1a) remain non-critical with a decaying correlation length3, while the coexistence plane \({\cal L}_{1}\) shows criticality, with the correlation length diverging everywhere on it4.

Footnote 3: We recall that for non-zero \(H\) or \(W\) there is only one correlation length (\(\xi_{m}\) of eq.(12)) for all the order parameters.

Footnote 4: Indeed, the order parameter fluctuations do not diverge everywhere on \({\cal L}_{1}\) as we shall soon see.

We study the non-zero field phase behaviour of the spin-3/2 BEG case in fig.(2a) which shows the intersection of the coexistence planes of fig.
(1a) with a fixed \(K\) plane for a representative value of \(K\) above the triple line at \(K=-1/4\). For \(K=0.1\) there is only one phase boundary, the \(f\) line corresponding to the \({\cal L}_{1}\) plane as shown in fig.(2a), where the correlation length diverges everywhere. In fig.(2b) we depict the intersection of the phase coexistence surfaces of fig.(1a) with the surface \(H=0.15\). The coexistence lines are similar to the lines \(l_{1}\), \(l_{2}\) and \(l_{3}\) of fig.(1b); however, now the non-zero field lines \(l_{2}\) and \(l_{3}\) are completely non-critical, while the line \(l_{1}\) shows critical fluctuations in order parameters up to the point \({\bf P}\), which is \(H\) dependent (and whose equation we shall soon discuss). Beyond the point \({\bf P}\) the fluctuations decay, though the correlation length still diverges. We therefore see that the plane \({\cal L}_{1}\) is not critical everywhere. It shows further interesting structure which, we believe, is presented for the first time here. Thus, in fig.(3a), where the coexistence plane \({\cal L}_{1}\) has been projected onto the \(H-D\) plane, the numerically obtained straight lines \(m_{1}\) (blue, dashed) and \(m_{2}\) (blue, smooth) divide the plane into a critical left part, where the quadrupole, dipole and octopole fluctuations all diverge, and a non-critical right part, where they decay towards zero. These \(m\) lines are nothing but the locus of the points \({\bf P}\) for different values of \(H\). On the lines themselves all the fluctuations flatten to small finite values. For example, \(\sigma_{m}^{2}\to 2\) everywhere on the line \(m_{2}\), while on the line \(m_{1}\) it flattens to decreasing values starting from 2 as \(H\) approaches zero.
The equations for the \(m_{1}\) and \(m_{2}\) lines for \(W=0\) are obtained by numerical investigation as \[80D = 160H+183\qquad m_{1}\ \mbox{line, $W=0$}\] \[8D = 19H+18\qquad m_{2}\ \mbox{line, $W=0$} \tag{18}\] The dotted lines in fig.(3a) are the projections of fixed \(K\) lines in the \({\cal L}_{1}\) plane onto the \(H-D\) plane, thus being the \(f\) lines of fig.(2a). The \(f\) line labeled by \(K=0.5\) does not intersect the \(m\) lines anywhere, so that it is critical everywhere. On the other hand the \(f\) lines marked \(K=0.545\) and \(K=1.23\) remain critical only to the left of the \(m\) lines. From the equation for the \(m_{1}\) line we can verify that for \[-0.25<K<0.515 \tag{19}\] the \(f\) line remains critical everywhere. Note that the line \(m_{1}\) ends on the \(H=0\) axis in an open circle, indicating that it is defined only for non-zero \(H\). Namely, the \({\bf P}\) point for zero magnetic field is discontinuous with such points for non-zero magnetic field. It is notable that, irrespective of the \(m\) lines, everywhere on the \({\cal L}_{1}\) plane the correlation length \(\xi\) diverges as mentioned earlier. A mathematical explanation of the curious fact that fluctuations decay to the right of the \(m\) lines even as the correlation length diverges there is along similar lines as the discussion around eq.(14) for \(q\)-fluctuations. Thus, at low temperatures the spin and quadrupole fluctuation moments go respectively as \[\sigma_{m}^{2} \sim \frac{|\langle t_{1}|{\bf S}|t_{2}\rangle|^{2}}{1-\lambda_{2}/\lambda_{1}},\] \[\mbox{and}\ \ \sigma_{q}^{2} \sim \frac{|\langle t_{1}|{\bf S}^{2}|t_{2}\rangle|^{2}}{1-\lambda_{2}/\lambda_{1}}. \tag{20}\]

Figure 3: (\(a\)) Projection of the \({\cal L}_{1}\) coexistence plane onto the \(D-H\) plane for the spin-3/2 BEG chain, \(W=0\). The \(m_{1}\) and \(m_{2}\) lines separate regions of diverging fluctuations to their left and decaying fluctuation moments to their right.
The dotted lines are the \(f\) lines like the one in fig.(2a) with different \(K\) values. (\(b\)) Similar to (\(a\)) but for the general spin-3/2 chain with \(W=1/10\). Open circles on the \(H\) axis in both sub-figures indicate that the \(m\) lines are defined only for non-zero \(H\). The \({\bf P}\) point at \(H=0\) is discontinuous with other \({\bf P}\) points.

Once again, as with the \(q\)-fluctuations on the \(l_{1}\) line, it can be checked that to the right of the \(m\) lines in fig.(3a) the numerator approaches zero faster than the denominator. It will be demonstrated in the sequel that the fluctuations and correlations in the dipolar and quadrupolar order parameters are efficiently encoded by geometry.

#### iii.2.2 **Phase behaviour in the general case, \(W\neq 0\)**

We now briefly report a few instances of phase behaviour in the general spin-\(3/2\) chain. As mentioned earlier, in the presence of a finite coupling \(W\) the zero \(H\) field plane in the \(H-D-K\) space is no longer a plane of symmetry and so we do not expect it to be critical everywhere, unlike the BEG case. Indeed, this holds true everywhere on the plane \(H=0\), where the \(m\) and \(q\) fluctuations do not diverge, except, however, for a segment of the \(l_{1}\) line from (but excluding) the triple point \(\mathbf{T}\) to a \(W\) dependent point \(\mathbf{P}\)5. For example, for \(W=0.1\) and \(W=0.5\) the \(\mathbf{P}\) points are at \(\{D,K\}=\{0.51,2.44\}\) and \(\{0.65,3.44\}\) respectively. We also note that to the left of the \(\mathbf{P}\) point the magnetization saturates to \(1\) on the \(l_{1}\) line and to its right it saturates to \(0.5\). As in the previous cases, the correlation length diverges along the whole of the \(l_{1}\) line even as the fluctuations decay beyond the \(\mathbf{P}\) point.
Footnote 5: Note that we are using the same labels for the \(W\neq 0\) case coexistence lines and planes, triple point, etc. as for the BEG case.

For non-zero \(H\) (same sign as \(W\)) the phase structure of the spin-\(3/2\) chain remains qualitatively the same as the BEG case. Thus, the planes \(\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) are non-critical while \(\mathcal{L}_{1}\) shows criticality, with the correlation length diverging everywhere on it. Once again, the \(\mathcal{L}_{1}\) plane can be separated into a part where the fluctuations diverge and a part where the fluctuations decay. From fig.(3b) for a general spin-\(3/2\) chain with \(W=1/10\) we find that the features on the critical surface \(\mathcal{L}_{1}\) are qualitatively similar.

A competition for the ground state arises when \(H\) and \(W\) are of opposite signs. This is because both the fields are coupled to odd powers of the lattice spins, so that they have the same symmetry. Thus, for instance, when \(H\) is positive and \(W\) negative the former will compete for a positive spin oriented ground state, as it will lower the energy, while the latter will prefer a negative spin ground state. In one such case of competing configurations we just record our observation that for \[W<-\frac{4}{9}\,H \tag{21}\] the \(\{\bar{3}\bar{3}\}\) ground state becomes more stable than the \(\{33\}\) state, irrespective of the values of \(D\) and \(K\). The latter do determine the sharpness with which the magnetization switches sign at a finite temperature, though we have not attempted any detailed investigation here. In fig.(4a) and fig.(4b) we show magnetization vs \(\beta\) plots for, respectively, positive-\(H\)-dominant and negative-\(W\)-dominant parameter values. In subsequent sections we shall see that geometry will provide important clues to the underlying statistics.
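The threshold eq.(21) follows from the odd, sign-sensitive part of the energy alone: the even \(J\), \(K\) and \(D\) terms are identical for the \(\{33\}\) and \(\{\bar{3}\bar{3}\}\) configurations and cancel, which is why the condition is independent of \(D\) and \(K\). A sketch assuming the fields enter as per-site couplings \(-H S_i - W S_i^{3}\) (our assumption about the form of eq.(6)):

```python
import sympy as sp

H, W = sp.symbols('H W')

def field_energy(s):
    # odd (field) part of the energy for a uniform pair {s, s},
    # assuming couplings -H*S_i - W*S_i**3 per site
    return -2*H*s - 2*W*s**3

E_up = field_energy(sp.Rational(3, 2))    # {3 3} pair
E_dn = field_energy(sp.Rational(-3, 2))   # {3bar 3bar} pair

# the two states become degenerate exactly at W = -(4/9) H
boundary = sp.solve(sp.Eq(E_dn, E_up), W)[0]
assert boundary == -sp.Rational(4, 9)*H

# below that threshold the negative-spin pair is lower in energy
subs = {H: 1, W: sp.Rational(-1, 2)}      # W < -4H/9 for H = 1
assert E_dn.subs(subs) < E_up.subs(subs)
```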
Figure 4: Magnetization \(m\) vs \(\beta\) plots for positive \(H\) and negative \(W\). In (a), where \(H\) dominates \(W\), the spin sharply increases to \(3/2\) from small values. The parameters are \(H=0.3\), \(K=1\), \(D=3\), \(W=-0.1333\). In (b), where \(W\) just about dominates \(H\), the spin flips sharply to \(-3/2\) starting from positive values at high temperature. The parameters are \(H=0.3\), \(K=1/8\), \(D=6/8\), \(W=-0.1335\).

## IV Method of constrained fluctuations: curvature invariants on hypersurfaces

Thermodynamic geometry envisages a Riemannian manifold structure for the thermodynamic state space, which it achieves by introducing a non-negative distance measure between nearby equilibrium states [4; 5; 6; 7]. In the entropy representation introduced and developed by Ruppeiner [5], the thermodynamic metric can be conveniently represented as the second derivative of the Massieu function, \[g_{\mu\nu}=\frac{\partial^{2}\psi}{\partial x^{\mu}\partial x^{\nu}}, \tag{22}\] where the co-ordinates \(x^{\mu}\) are the entropic intensive variables. In a two dimensional state space, which is the case for simple fluids, the Ising model, etc., the only independent curvature, and hence the only source of microscopic information, is the Riemann curvature scalar \(R\). In three dimensions or higher the full Riemann curvature tensor \(R_{\mu\nu\rho\sigma}\) comes into effect and it is worth exploring if different components of the Riemann tensor provide complementary information about the underlying microscopics. Alternatively, we could slice the higher dimensional state space via well-defined lower dimensional hypersurfaces and investigate the resulting curvatures. The latter method, termed the method of constrained fluctuations, was adopted earlier in the context of the three-dimensional state space of the extended Kerr-AdS black holes in [1] and of the spin-one chain in [2].
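For intuition on eq.(22): the Hessian of the Massieu function with respect to the entropic intensities is the covariance matrix of the conjugate moments, which is what ties the metric to order parameter fluctuations. A minimal sketch with a toy single-site spin-3/2 partition function (our simplification, not the full chain of eq.(23)):

```python
import sympy as sp

nu, mu, gam = sp.symbols('nu mu gamma')   # entropic intensities (beta*H, beta*D, beta*W)
spins = [sp.Rational(k, 2) for k in (-3, -1, 1, 3)]

# single-site Massieu function, with weights chosen so that
# d(psi) = M d(nu) - Q d(mu) + Omega d(gamma)
weight = lambda s: sp.exp(nu*s - mu*s**2 + gam*s**3)
Z = sum(weight(s) for s in spins)
psi = sp.log(Z)

def avg(f):
    return sum(f(s)*weight(s) for s in spins) / Z

# metric component g_{nu nu} is the spin variance <S^2> - <S>^2 ...
g_nunu = sp.diff(psi, nu, 2)
assert sp.simplify(g_nunu - (avg(lambda s: s**2) - avg(lambda s: s)**2)) == 0

# ... and the off-diagonal g_{nu mu} is (minus) the dipole-quadrupole covariance
g_numu = sp.diff(psi, nu, mu)
cov = avg(lambda s: s**3) - avg(lambda s: s)*avg(lambda s: s**2)
assert sp.simplify(g_numu + cov) == 0
```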
The advantage of using carefully chosen sections is that a physically transparent meaning in terms of constrained fluctuations can be ascribed to the induced thermal metrics and thus also to the resulting lower dimensional curvatures. Thus, in the spin-one case in [2] for example, we chose hypersurfaces of constant \(H\) and of constant \(D\) and were able to demonstrate that on the constant \(H\) surface the fluctuations in magnetization \(M\) were suppressed and those in the quadrupole \(Q\) were unrestricted, while on the constant \(D\) surface the reverse was the case. In the former case of free \(Q\) fluctuations the resulting sectional curvature was labelled \(R_{q}\) and the latter sectional curvature was similarly labelled \(R_{m}\). In the present case, with a four-dimensional state space for the spin-3/2 chain, we can now construct three two-dimensional hypersurfaces by fixing any two of the three parameters \(H,D\) and \(W\)6. Thus, the sectional curvature on the hypersurface of fixed \(D\) and \(W\), called the \(DW\)-surface, will be labelled \(R_{m}\) as it might better encode the correlations in fluctuations of the dipole moment \(M\), which are free on that surface, as against the quadrupole and the octopole fluctuations which remain somewhat suppressed 7. Similarly, the sectional curvature on the \(HW\)-surface is labelled \(R_{q}\) for encoding correlations in \(Q\) fluctuations and the one on the \(HD\)-surface is labelled \(R_{\omega}\) for encoding \(\Omega\) correlations. It is important to note here that limiting thermodynamic fluctuations to the specified hypersurfaces is ultimately a mathematical device that helps filter out information about competing correlations. The only guiding principle is that the freezing out of fluctuations in certain directions should not be unphysical, as was clarified in [2].
On the other hand, it is also true that any physical process leading to a suppression or slowing down of certain fluctuations will automatically suggest a relevant hypersurface in the state space on which all thermodynamic motion (including fluctuations) remains restricted, as was the case in [1].

Footnote 6: We have not pursued the full four-dimensional scalar curvature in this work. Apart from the high computational cost involved in its evaluation, we feel it will contain mixed information about separate order parameter correlations and thus might not be very useful.

Footnote 7: See, however, the later discussion in this subsection on a three dimensional scalar curvature.

For mathematical details we refer the reader to [1] and [2] wherein a general formalism for choosing relevant hypersurfaces in a higher dimensional thermodynamic manifold was outlined. Also, the pullback of the ambient thermal metric on the chosen hypersurfaces was interpreted in terms of constrained fluctuations in the thermodynamic quantities. Here we shall simply outline the procedure for obtaining the three sectional curvatures, starting with the free energy (Massieu function) per spin which is obtained as the logarithm of the partition function, \[\psi=\frac{1}{N}\log\sum_{\{S_{i}\}}e^{-\beta\,{\cal H}_{3/2}}, \tag{23}\] where \({\cal H}_{3/2}\) is the Hamiltonian in eq.(6).
Considered as a function of \(\beta\) with \(H,D,W\) held constant, the \(\beta\) derivative of the free energy will give the "total" energy \(E\), which includes the "internal" energy \(U\) of the lattice spins parameterized by the fixed couplings \(J\) and \(K\), and the "external" energy of coupling between the spins, quadrupoles and octopoles with the corresponding tunable couplings \(H\), \(D\) and \(W\), \[-\left.\frac{\partial\psi}{\partial\beta}\right|_{H,D,W}=E=\frac{1}{N}\langle{\cal H}_{3/2}\rangle \tag{24}\] On the other hand the internal energy \(U\) can be obtained by taking the \(\beta\) derivative of the free energy at a fixed value of the remaining entropic intensive variables \((\,\nu,\mu,\gamma\,)=(\,\beta\,H,\beta\,D,\,\beta\,W)\), \[-\left.\frac{\partial\psi}{\partial\beta}\right|_{\nu,\mu,\gamma}=U=E+HM-DQ+W\Omega \tag{25}\] In terms of the entropic intensive variables the differential of free energy becomes \[d\psi=-U\,d\beta+M\,d\nu-Q\,d\mu+\Omega\,d\gamma. \tag{26}\] By setting to constant two of the three parameters \(H,D,W\) in the above equation we can obtain the governing equations for the three hypersurfaces. Thus, for the \(DW\)-surface eq.(26) becomes \[d\psi_{DW}(\beta,\nu)=-(E+HM)\,d\beta+M\,d\nu, \tag{27}\] and on the \(HW\)-surface the free energy differential becomes \[d\psi_{HW}(\beta,\mu)=-(E-DQ)\,d\beta-Q\,d\mu, \tag{28}\] while the \(HD\) surface gives \[d\psi_{HD}(\beta,\gamma)=-(E+W\Omega)\,d\beta+\Omega\,d\gamma. \tag{29}\] The calculations for the induced metrics and the sectional curvatures for the three hypersurfaces follow directly from the above three equations. In addition to the sectional curvatures mentioned above, the 4-dimensional Riemannian manifold also allows for the possibility of meaningful 3-dimensional scalar curvatures. One such scalar curvature that we will explore in some detail is the one living on constant \(D\) surfaces in the 4-dimensional parameter space of the spin-3/2 chain.
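The reductions eqs.(27)–(29) are bookkeeping: on each hypersurface the frozen couplings force \(d\nu=H\,d\beta\), \(d\mu=D\,d\beta\) or \(d\gamma=W\,d\beta\) in eq.(26), and eq.(25) then collapses the \(d\beta\) coefficient. A symbolic check of all three:

```python
import sympy as sp

E, H, M, D, Q, W, Om = sp.symbols('E H M D Q W Omega')

U = E + H*M - D*Q + W*Om  # eq.(25)

# d(psi) = -U dbeta + M dnu - Q dmu + Omega dgamma; on each hypersurface
# the frozen intensities feed extra terms into the dbeta coefficient
dbeta_DW = -U - Q*D + Om*W   # D, W fixed: dmu = D dbeta, dgamma = W dbeta
dbeta_HW = -U + M*H + Om*W   # H, W fixed: dnu = H dbeta, dgamma = W dbeta
dbeta_HD = -U + M*H - Q*D    # H, D fixed: dnu = H dbeta, dmu = D dbeta

assert sp.expand(dbeta_DW + (E + H*M)) == 0    # eq.(27): coefficient -(E + HM)
assert sp.expand(dbeta_HW + (E - D*Q)) == 0    # eq.(28): coefficient -(E - DQ)
assert sp.expand(dbeta_HD + (E + W*Om)) == 0   # eq.(29): coefficient -(E + W*Omega)
```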
Restricting thermal fluctuations to within the constant \(D\)-hypersurface will partially suppress the quadrupole fluctuations while allowing unrestricted variations in the spin and octopole moments. The latter two order parameters have a similar odd symmetry under sign change and hence their fluctuation statistics are also expected to be similar, given that they have the same correlation length as mentioned earlier. Therefore it might be useful to compare the \(3d\) scalar curvature \(R_{D}\) with the \(2d\) scalar curvature \(R_{m}\) discussed earlier. Similar to the \(R_{D}\) we could in principle also investigate other \(3d\) scalar curvatures like \(R_{H}\) and \(R_{W}\) which live on state space sub-manifolds where, respectively, fluctuations in \(H\) and \(W\) are held frozen. However, since the \(H\) and \(W\) fields couple to the dipole and octopole moments which have a similar symmetry the respective curvatures \(R_{H}\) and \(R_{W}\) might contain mixed information about spin and octopole correlations on the one hand and the quadrupole correlation on the other. We shall therefore restrict our investigations to the \(3d\) scalar curvature \(R_{D}\). ## V Scaling near pseudo-criticality We recall that the Ruppeiner equation relates the state space scalar curvature to the singular part of free energy (or the Massieu function \(\psi_{s}\)), [5], \[R=\frac{\kappa_{1}}{\psi_{s}} \tag{30}\] where \(\kappa_{1}\) is an order unity negative dimensionless constant whose value depends on the universal scaling exponents and the dimension of the state space manifold, [38]. Using hyperscaling, which relates the singular free energy to the correlation volume, in the asymptotic critical region we have \[R=\kappa_{2}\xi^{d} \tag{31}\] where \(d\) is the physical dimension of the system and \(\kappa_{2}\) is similarly an order unity constant. 
However, as we shall verify for several cases here, the correspondence of \(R\) with the correlation length \(\xi\) continues in non-critical regimes as well, even if it is not as mathematically precise as in eq.(31) near criticality. Thus, more generally, \[R\sim\xi^{d} \tag{32}\] This feature of \(R\) has been successfully exploited in inferring the coexistence curves for fluid as well as magnetic spin systems (cite widom line, musbach, tapo, spin-one mean field). Before delving into the behaviour of the scalar curvatures in (pseudo)critical and non-critical regions, we first summarize our findings for the scaling behaviour near (pseudo)criticality of the singular part of free energy, followed by that of the correlation lengths.

### Scaling of the free energy

The scaling form of the free energy for the spin-3/2 chain near its pseudocritical points may be written as \[\psi_{s}=n|\tau|^{p}Y\left(n_{1}\frac{h_{1}}{|\tau|^{q}},n_{2}\frac{h_{2}}{|\tau|^{r}},n_{3}\frac{h_{3}}{|\tau|^{s}}\right) \tag{33}\] where \(p,q,r,s\) are the universal critical exponents and the \(n\)'s are non-universal constants. The scaling fields \(h_{1},h_{2}\), and \(h_{3}\) could in general be obtained as linear combinations of the displacements of the fields \(H,D\,\mbox{and}\,W\) from their critical values, and \(\tau\) is the reduced temperature. \(Y\) is the scaling function, which becomes a constant when all the scaling fields are set to zero. In other words, at the (pseudo)critical values of \(H,D\,\mbox{and}\,W\) the singular part of free energy depends only on the 'reduced temperature' scaling field \(\tau\). Note that, as in the spin-one case [2], the scaling field \(\tau=e^{-\beta X}\), where \(X\) can be a linear combination of the coupling constants \(J,K\) and the fields. In the case of zero field pseudocriticality of the BEG chain the free energy can be expressed in a closed form, as already mentioned in the preceding.
This allows us to analytically obtain the leading singular term in the free energy by filtering out the regular terms and the fast decaying terms, as in [2]. While for convenience we restrict ourselves in the following to the BEG case with \(W=0\), we have checked that the free energy scaling does not change for lines and regions which continue to remain critical in the general case \(W\neq 0\). We now present our results for the zero field scaling of the free energy for different regions and lines.

In the region {**33**} the singular free energy presents two types of scaling in sub-regions separated by the line \(l_{33}\) given by \(K=2D/9+1/6\), see fig.(5). Note that \(l_{33}\) is parallel to \(l_{2}\) and shifted above \(l_{2}\) by \(J/2\). Below and on the line \(l_{33}\), and above it, the singular free energy has the following limiting expressions in the region {**33**}, \[\psi_{s} = e^{-(9K+3J-2D)\beta}\qquad\mbox{below and on }l_{33},\] \[\psi_{s} = e^{-9\,J\beta/2}\qquad\qquad\ \mbox{above }l_{33}. \tag{34}\] We shall label the sub-region of \(\{\mathbf{33}\}\) below \(l_{33}\) as \(\{\mathbf{33}\}^{\prime}\). Similarly, in the region \(\{\mathbf{11}\}\) the two types of scaling are separated by the line \(l_{11}\), parallel to \(l_{3}\), given by \(K=2D-3J/2\) and shifted below \(l_{3}\) by \(J/2\) such that the limiting expression of free energy in the region becomes \[\psi_{s} = e^{-(2D-K-J)\beta}\qquad\mbox{above }l_{11},\] \[\psi_{s} = 2e^{-J\,\beta/2}\qquad\quad\mbox{below and on }l_{11}. \tag{35}\] We shall label the light blue shaded sub-region of \(\{\mathbf{11}\}\) between \(l_{3}\) and \(l_{11}\) as \(\{\mathbf{11}\}^{\prime}\). In the region \(\{\mathbf{31}\}\), \[\psi_{s} = \frac{1}{2}e^{-\beta(K-2D+J)/2}. \tag{36}\] Along the line \(l_{1}\) the singular part of free energy scales as \[\psi_{s} = e^{-J\beta/2}\qquad\qquad\quad(D>1,K>0)\] \[\psi_{s} = \frac{(1+\sqrt{5})}{2}e^{-J\beta/2}\quad\ \ (D=1,K=0)\] \[\psi_{s} = e^{-(8D-3J)\beta/10}\qquad\quad(D<1,K<0). \tag{37}\] Along \(l_{2}\) it scales as \[\psi_{s} = \frac{(\sqrt{5}-1)}{(5+\sqrt{5})}e^{-(3J-8D)\beta/9}, \tag{38}\] and along the line \(l_{3}\), \[\psi_{s} = \frac{1}{\sqrt{5}}e^{-J\beta/2}. \tag{39}\] Finally, at the triple point \(\mathbf{T}\), the scaling of free energy goes as \[\psi_{s} = \frac{1}{4}e^{-J\beta/2}. \tag{40}\] On the \(f\)-line, with reference to fig.(2a), the singular free energy becomes \[\psi_{s} = 2e^{-(J+4K)\beta/2}. \tag{41}\]

### Scaling of the correlation lengths

In a similar manner the scaling of the correlation length can also be worked out. We present some results now. On the lines \(l_{1}\) and \(l_{3}\) the limiting expression of correlation length \(\xi_{m}\) becomes \[\xi_{m}=e^{J\,\beta/2} \tag{42}\] while on \(l_{2}\) it becomes \[\xi_{m}=\frac{5+\sqrt{5}}{2\sqrt{5}-2}\,e^{\beta(5J/6-8D/9)}.
\tag{43}\] On the other hand the correlation length \(\xi_{q}\) scales as the inverse of free energy everywhere on \(l_{1}\). Its asymptotic behaviour on \(l_{1}\) is as follows, \[\xi_{q} = 0\qquad(D=3/8)\] \[\xi_{q} = \frac{1}{2}\psi_{s}^{-1}\ \ \left(3/8<D<1\right)\] \[\xi_{q} = \frac{1}{\sqrt{5}}\,\xi_{m}=\frac{1}{10}\left(5+\sqrt{5}\right)\psi_{s}^{-1}\ \ \left(D=1\right)\] \[\xi_{q} = \xi_{m}\quad=\psi_{s}^{-1}\ \ \left(D>1\right) \tag{44}\] In the region \(\{\mathbf{11}\}\) the limiting expression of \(\xi_{m}\) becomes \[\xi_{m}=\frac{1}{2}e^{\beta\,J/2} \tag{45}\] and in \(\{\mathbf{31}\}\) as \[\xi_{m}\sim e^{\beta(K-2D+2J)/2}. \tag{46}\] In the region \(\{\mathbf{33}\}\) the scaling of correlation length varies according to a changing pattern which we could not detect fully but, nonetheless, deep enough into the \(\{\mathbf{33}\}\) region we have checked that the limiting expression is \[\xi_{m}=\frac{1}{2}\,e^{9\,\beta\,J/2}. \tag{47}\] On the \(f\) line the asymptotic expression for the correlation length becomes \[\xi_{m}=\frac{1}{2}\,e^{(J+4K)\beta/2}. \tag{48}\] Interestingly, there appear to be several instances here where the correlation length does not scale as the inverse free energy, in an apparent violation of the hyperscaling relation. It is noteworthy that such anomalies occur only in the \(H=0\) plane. On the "genuinely" critical surface hyperscaling is always followed. Significantly, as we shall see in the following, geometry efficiently encodes this anomaly via its sectional and three dimensional state space curvatures.

Figure 5: Ground state phase diagram for \(W=0\) as in fig.(1b) with additional lines \(l_{33}\) and \(l_{11}\) dividing respectively the regions \(\{\mathbf{33}\}\) and \(\{\mathbf{11}\}\) into subregions with different scaling behaviours of the free energy. See text for details.
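Several of the scaling statements above can be cross-checked mechanically with \(J=1\): the two \(\{33\}\) exponents of eq.(34) cross over exactly on \(l_{33}\); equating the two \(\{11\}\) exponents of eq.(35) yields a line parallel to the zero-field \(l_{3}\) line (\(K=2D-1\), from eq.(16)) and \(J/2\) below it; and on \(l_{2}\) the growth exponent of \(\xi_{m}\) (eq.(43)) exceeds that of \(\psi_{s}^{-1}\) (eq.(38)) by exactly \(J/2\) — the hyperscaling anomaly just noted. A sketch:

```python
import sympy as sp

D, K = sp.symbols('D K')
J = 1

# eq.(34): exponents (9K + 3J - 2D) and 9J/2 coincide on l_33: K = 2D/9 + 1/6
K33 = sp.solve(sp.Eq(9*K + 3*J - 2*D, sp.Rational(9, 2)*J), K)[0]
assert sp.simplify(K33 - (2*D/9 + sp.Rational(1, 6))) == 0

# eq.(35): exponents (2D - K - J) and J/2 coincide on K = 2D - 3J/2,
# which sits exactly J/2 below the zero-field l_3 line K = 2D - 1
K11 = sp.solve(sp.Eq(2*D - K - J, sp.Rational(1, 2)*J), K)[0]
assert sp.simplify(K11 - (2*D - sp.Rational(3, 2))) == 0
assert sp.simplify((2*D - 1) - K11) == sp.Rational(1, 2)

# on l_2: xi_m ~ e^{beta(5J/6 - 8D/9)} (eq.43) vs psi_s^{-1} ~ e^{beta(3J - 8D)/9}
# (eq.38); the exponent gap is exactly J/2, so hyperscaling fails there
gap = (sp.Rational(5, 6)*J - 8*D/9) - (3*J - 8*D)/9
assert sp.simplify(gap) == sp.Rational(1, 2)
```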
We add that, physically, this anomaly seems surprising if not intriguing, since cases of hyperscaling violation normally imply that some other fluctuation is at work in addition to the usual thermal one, as is the case in random field Ising models [42]. Admittedly, we have not explored the apparent hyperscaling violation in any depth in this work but only made some observations of the zero field scaling of the free energy (setting the scaling fields to zero in eq.(33)) and compared it to the scaling of the correlation length(s). Our emphasis here is to investigate whether or not geometry is sensitive to such anomalies. We hope to return to this interesting issue in the future.

## VI Geometry of the spin-3/2 chain

In this section we present our results following the geometric analysis of the spin-3/2 chain. We discuss mainly the sectional curvatures \(R_{m}\), \(R_{q}\) and \(R_{\omega}\) and the \(3d\) scalar curvature \(R_{D}\), in both the scaling and the non-scaling regions of the parameter space. In both cases we shall be able to amply demonstrate the power of geometry in encoding the underlying correlations in the order parameter(s). In subsections VI.1 and VI.2 we shall discuss, respectively, the geometry of the BEG chain and the more general \(W\neq 0\) case.

### Geometry of the BEG case (\(W=0\))

In this subsection we present our results for the one-dimensional spin-3/2 BEG model. Following the previous discussion on phase structure, we know that the spin-3/2 BEG chain already exhibits many qualitatively new features not found in the spin-one BEG chain. We first describe the geometry associated with the zero \(H\) field, dealing separately with the coexistence lines \(l_{1},l_{2},l_{3}\) and the ground state configurations \(\{33\},\{31\}\) and \(\{11\}\). This is followed by the geometry of the case \(H\neq 0\).
#### vi.1.1 Zero magnetic field, \(H=0\)

The whole of the zero field (\(H=W=0\)) phase diagram is a plane of symmetry for the dipole and octopole moments and is critical for their fluctuations. Correspondingly, as we will see, the sectional curvatures \(R_{m},R_{\omega}\) and the 3-\(d\) curvature \(R_{D}\) all diverge everywhere on the symmetry plane. However, the sectional curvature \(R_{q}\), which is expected to encode the quadrupolar correlations, does not diverge everywhere. In addition, we shall see that geometry encodes the aforementioned hyperscaling anomaly in that \(R_{D}\) encodes the correlation length while \(R_{m}\) encodes the inverse free energy whenever their respective scaling behaviours do not match. In this subsection we will focus on the detailed behaviour of the quadrupolar curvature \(R_{q}\), the dipolar curvature \(R_{m}\) and the \(3d\) curvature \(R_{D}\). We will first present our detailed results for the coexistence lines \(l_{1}\), \(l_{2}\), \(l_{3}\) and the lines \(l_{11}\) and \(l_{33}\). Finally, we will outline our results for the curvatures \(R_{m}\) and \(R_{D}\) in parameter regions for different ground states.

_The quadrupolar curvature \(R_{q}\)_: We first report our results for the sectional curvature \(R_{q}\) on the coexistence lines. On the line \(l_{1}\), which is the only site of criticality in the quadrupolar order for the zero field BEG chain, the asymptotic expression of \(R_{q}\) is as follows \[R_{q} = -1\ \ \ (D=3/8,\,\text{triple point})\] \[R_{q} = -\psi_{s}^{-1}\ \ (3/8<D\leq 1)\] \[R_{q} = \frac{1}{25}(4D+1)(16D-21)\,\psi_{s}^{-1}\ \left(D>1\right) \tag{49}\] Interestingly, \(R_{q}\) undergoes a sign change on \(l_{1}\) at low temperatures beyond the point \(\mathbf{P}\) of fig.(1b) with \(D>21/16\), where it asymptotes to positive infinity even though it continues to scale as the inverse of free energy8.
The positive divergence at criticality of the state space curvature is in contrast to its expected negative divergence [4]. This anomalous behaviour of the quadrupolar curvature is related to the equally anomalous variations in quadrupolar fluctuations discussed earlier in subsection III.1. We check that the mean square quadrupolar fluctuation \(\sigma_{q}^{2}\) has the following asymptotic expression along \(l_{1}\), \[\sigma_{q}^{2} = 1\quad(D=3/8,\ \mbox{triple point}). \tag{50}\]

Footnote 8: Exactly at the point \(\mathbf{P}\), \(R_{q}\) asymptotes to a finite value of \(-9\).

In fig.(6) we show plots of \(R_{q}\), \(\sigma_{q}^{2}\) and \(\xi_{q}\) vs. \(\beta\) along the line \(l_{1}\) for a range of \(D\) values. Thus, in fig.(6a), where \(D\) is less than one, the quadrupolar curvature is twice in magnitude compared to the quadrupolar correlation length. At the point \(\mathbf{P}\) with \(D=21/16\) in fig.(6b) the quadrupolar curvature and the fluctuation moment both become constant while the correlation length continues to diverge. In fig.(6c) \(R_{q}\) changes sign to positive for \(D=30/16\), but not before dipping to a negative minimum at about the same time as \(\sigma_{q}^{2}\) undergoes a local maximum before the latter eventually decays to zero. As discussed earlier, the sign change in \(R_{q}\) could possibly be related to the change in the underlying statistics of quadrupolar correlations. Fig.(6d), which displays the plot in fig.(6c) for a larger range of \(\beta\), clearly brings out the fact that even as \(R_{q}\) turns positive it continues to scale as the correlation length and is \(153/50\) times \(\xi_{q}\) as per eq.(49).
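The prefactor in the last line of eq.(49) reproduces the numbers quoted above: it vanishes at \(D=21/16\) (the sign change at \(\mathbf{P}\)), is negative just below \(\mathbf{P}\) and positive just above, and equals \(153/50\) at \(D=30/16\); combined with \(\xi_{q}=\psi_{s}^{-1}\) for \(D>1\) from eq.(44), this gives \(R_{q}=(153/50)\,\xi_{q}\). A quick check:

```python
import sympy as sp

D = sp.symbols('D')
# prefactor of psi_s^{-1} in the D > 1 line of eq.(49)
coeff = sp.Rational(1, 25)*(4*D + 1)*(16*D - 21)

roots = sp.solve(sp.Eq(coeff, 0), D)
assert sp.Rational(21, 16) in roots                     # sign change at P
assert coeff.subs(D, sp.Rational(20, 16)) < 0           # just below P: R_q diverges negatively
assert coeff.subs(D, sp.Rational(22, 16)) > 0           # just above P: R_q diverges positively

# at D = 30/16, R_q = (153/50) psi_s^{-1} = (153/50) xi_q (using eq.(44) for D > 1)
assert coeff.subs(D, sp.Rational(30, 16)) == sp.Rational(153, 50)
```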
On the lines \(l_{2}\) and \(l_{3}\) there is no criticality in the quadrupolar order parameter and the correlation length \(\xi_{q}\) is seen to asymptote to the same value on both the coexistence lines, \[\xi_{q} \rightarrow \log\,\frac{1}{2}\left(3+\sqrt{5}\right)\sim 0.962\qquad(\mathrm{on}\ l_{2}\ \mathrm{and}\ l_{3}) \tag{51}\] Satisfyingly, the quadrupolar curvature \(R_{q}\) too asymptotes to small constant values on \(l_{2}\) and \(l_{3}\), \[R_{q} = -\frac{1}{\sqrt{5}}\sim-0.45\qquad(\mathrm{on}\ l_{2})\] \[R_{q} = -1-\frac{1}{\sqrt{5}}\sim-1.45\qquad(\mathrm{on}\ l_{3}) \tag{52}\] Note that the asymptotic values of the curvature are close to, or even smaller than, the unit lattice size, which is consistent with the correlation length being of order unity.

_The dipolar curvature \(R_{m}\) and the \(3d\) curvature \(R_{D}\)_: It is observed that \(R_{m}\) (and also \(R_{\omega}\)) diverges to negative infinity on all the lines \(l_{1}\), \(l_{2}\) and \(l_{3}\) including the triple point, and this is consistent with the divergence of \(\sigma_{m}^{2}\) (and \(\sigma_{\omega}^{2}\)) everywhere on the \(H=0\) plane of the spin-3/2 BEG model. Similarly, the \(3d\) curvature \(R_{D}\), which we expect to better capture the correlations in magnetization, also diverges everywhere on the plane \(H=0\). However, the scaling of the two differs in some instances on the lines and, interestingly, this variation seems to encode the hyperscaling anomaly discussed previously in section V.

Figure 6: Plots with respect to \(\beta\) of the sectional curvature \(R_{q}\), mean square quadrupole fluctuation \(\sigma_{q}^{2}\) and the quadrupole correlation length \(\xi_{q}\) on the line \(l_{1}\) for the BEG case, with \((a)\) \(D=12/16,K=-1/10\), \((b)\) \(D=21/16,K=1/8\), \((c)\) \(D=30/16,K=7/20\), and \((d)\) same as \((c)\) but for a larger range of \(\beta\).

Interestingly, we observe that on \(l_{3}\) where hyperscaling
is followed both \(R_{m}\) and \(R_{D}\) are asymptotically equal, \[-R_{m}=-R_{D}=\psi_{s}^{-1}=2\,\xi_{m}\ \ \mbox{on}\ l_{3} \tag{53}\] On the other hand, on \(l_{2}\), where the spin correlation length scales faster than the inverse free energy (compare eq.(38) with eq.(43)), the two scalar curvatures encode separate behaviours, \[\left.\begin{array}{l}R_{m}\sim\psi_{s}^{-1}\\ R_{D}=-\xi_{m}\end{array}\right\}\ \mbox{on}\ l_{2} \tag{54}\] Thus, while the dipolar curvature \(R_{m}\) continues to scale as the inverse free energy, the \(3d\) curvature \(R_{D}\) now encodes the spin correlation length \(\xi_{m}\). On the triple point we get the following asymptotic behaviour of curvatures, \[R_{m}=R_{D}=\psi_{s}^{-1}\ \ \mbox{triple point}. \tag{55}\] Along the line \(l_{1}\) for \(3/8<D<1\) we recall that the inverse singular free energy scales as the quadrupolar correlation length \(\xi_{q}\) (see eq.(44)) but not as the dipolar correlation length \(\xi_{m}\), which diverges at a faster rate of \(e^{\beta J/2}\). We could say that, as far as the quadrupolar correlations are concerned, the free energy scaling is consistent with hyperscaling. For \(D\geq 1\) the scaling of both \(\xi_{q}\) and \(\xi_{m}\) becomes the same and is consistent with the inverse singular free energy, as can be checked again from eq.(44). Along \(l_{1}\) the asymptotic expressions/scaling behaviour of \(R_{m}\) and \(R_{D}\) is as follows, \[R_{m}=R_{D}=\psi_{s}^{-1}\qquad(D=3/8)\] \[R_{m}=R_{D}=\kappa\,\psi_{s}^{-1}\qquad(3/8<D<1)\] \[R_{m}=\frac{2}{3}\,R_{D}=\psi_{s}^{-1}\qquad(D=1)\] \[R_{m}=\kappa_{1}R_{D}=\kappa_{2}\,\psi_{s}^{-1}\qquad(D>1), \tag{56}\] where, in the last case for \(D>1\), the proportionality constants \(\kappa_{1}\) and \(\kappa_{2}\) are always of order unity. For the case \(D<1\) the constant \(\kappa\) is \(D\) dependent and is anomalously high for \(D\) near the triple point. Thus, near \(D=7/16\) it is about 100, but it reduces quickly to order unity values for \(D>8/16\).
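As a quick numerical aside (our check, not part of the paper), the asymptotic value quoted in eq.(51) is easy to verify; incidentally, \(\frac{1}{2}(3+\sqrt{5})\) is the square of the golden ratio \(\varphi=(1+\sqrt{5})/2\), so the limit can equivalently be written as \(2\log\varphi\):

```python
import math

# Asymptotic quadrupolar correlation length on l2 and l3, eq.(51):
#   xi_q -> log[(3 + sqrt(5))/2]
xi_q = math.log((3 + math.sqrt(5)) / 2)
print(round(xi_q, 3))  # 0.962, matching the value quoted in eq.(51)

# (3 + sqrt(5))/2 equals phi^2 with phi the golden ratio,
# so the limit is also 2*log(phi).
phi = (1 + math.sqrt(5)) / 2
assert abs(xi_q - 2 * math.log(phi)) < 1e-12
```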
We see that on \(l_{1}\), where hyperscaling is broadly followed (albeit with different correlation lengths), both \(R_{m}\) and \(R_{D}\) have a similar scaling behaviour as the inverse of free energy (though sometimes with an anomalously large proportionality constant). In fig.(7) we plot the dipolar curvature \(R_{m}\) and the octopolar curvature \(R_{\omega}\) on \(l_{1}\) to demonstrate that the two sectional curvatures approach each other. Given that the correlation lengths of the dipole and octopole moments are the same and their statistics are similar, the equality of their respective sectional curvatures further substantiates our approach of representing correlations via suitable hypersurface geometries. _\(R_{m}\), \(R_{q}\) and \(R_{D}\) in the regions_: Recalling that the plane \(H=0\) remains critical for spin (and octopolar) fluctuations, we now report our observations for \(R_{m}\) and \(R_{D}\) in the parameter regions corresponding to the stable ground state configurations. Quite satisfyingly, we find that in all regions where hyperscaling is followed, namely \(\{\bf 33\}\) above and on \(l_{33}\) and \(\{\bf 11\}\) below and on \(l_{11}\) in fig.(5), the curvatures \(R_{m}\) and \(R_{D}\) both scale as the inverse of free energy. On the other hand, where it is not followed, namely in \(\{\bf 13\}\), \(\{\bf 33\}^{\prime}\) and \(\{\bf 11\}^{\prime}\), the curvature \(R_{m}\) continues to scale as the inverse free energy while \(R_{D}\) goes as the correlation length \(\xi_{m}\). To summarize, therefore, we have \[R_{m}\sim R_{D}\sim\psi_{s}^{-1}\sim\xi_{m},\ \ \mbox{where hyperscaling holds} \tag{57}\] and \[\left.\begin{array}{l}R_{m}\sim\psi_{s}^{-1}\\ R_{D}\sim\xi_{m}\end{array}\right\}\ \mbox{where hyperscaling fails}. \tag{58}\]

Moving on to our observations of the quadrupolar curvature in the regions, we show in fig.(8a) plots of \(R_{q}\), \(\xi_{q}\) and \(\sigma_{q}^{2}\) in the region \(\{\bf 33\}\) slightly above the \(l_{1}\) line and then follow it up with fig.(8b) for a similar plot a bit higher above the \(l_{1}\) line. We see that close to the line \(R_{q}\) approaches small negative values, almost mirroring the correlation length \(\xi_{q}\), while sufficiently deeper into the \(\{\bf 33\}\) region it flips to positive values and undergoes a positive divergence. Possibly, the quadrupolar curvature is again encoding a change in the statistics of fluctuations here. In fig.(9a) and fig.(9b) respectively we plot \(R_{q}\), \(\xi_{q}\) and \(\sigma_{q}^{2}\) in the region \(\{\bf 11\}\) slightly below the \(l_{1}\) line and then go further deep into the region. This time the geometry seems to suggest the opposite behaviour. Thus, close to the \(l_{1}\) line in the \(\{\bf 11\}\) region \(R_{q}\) changes sign to positive and diverges, indicating a change in the statistics of quadrupole correlations. On the other hand, deeper into the region it switches to negative and asymptotes to \(-1\), thus suggesting that 'normal' correlation statistics prevails at low enough temperatures.

#### vi.2.2 **Non-zero magnetic field, \(H\neq 0\)**

We recall from discussions around fig.(2b) that for non-zero \(H\) the \(l_{2}\) and \(l_{3}\) coexistence lines are non-critical while \(l_{1}\) is critical up to an \(H\)-dependent \(\mathbf{P}\) point, beyond which the spin and quadrupole fluctuations decay even as the correlation length continues to diverge (see eq.(20) and the \(m\) lines in fig.(3a)). In fig.(10a) we show a plot of \(R_{m}\), \(R_{q}\) and \(\xi_{m}\) on the \(l_{2}\) coexistence line for \(H=0.1\). It is seen that both the sectional curvatures closely follow each other for low temperatures and asymptote to a value of \(-1.5\) while the correlation length tends to \(1\).
Exactly the same behaviour is seen on the coexistence line \(l_{3}\), not depicted here. In fig.(10b) we show a plot of \(R_{m}\), \(R_{q}\) and \(\xi_{m}\) somewhere on the \(l_{1}\) line, at a point before the \(\mathbf{P}\) point and for \(H=0.1\). They are clearly seen to merge with each other, with the curvatures asymptotically double the correlation length. At other points and for other values of \(H\) the proportionality constant \(R_{m}/\xi_{m}\) might change but is always close to \(1\). We now move to a geometric description of the \(\mathcal{L}_{1}\) surface near the \(m\) lines of fig.(3a) for values of the quadrupolar coupling \(K\) greater than \(0.515\) (see eq.(19)). As mentioned previously, the order parameter fluctuations decay below the \(m\) lines while \(\xi_{m}\) diverges everywhere. We carefully observe the sectional curvature \(R_{q}\) along with \(\sigma_{q}^{2}\) for a value of \(H\) below the \(m_{2}\) line for \(K>0.515\). In fig.(11a) we plot \(R_{q}\) vs. \(\beta\) (with \(\sigma_{q}^{2}\) vs. \(\beta\) in the inset) at \(H=0.1\) on the \(K=0.62\) line of fig.(3a). We find that, instead of a negative divergence proportional to \(\xi_{m}\) as is the case above the \(m\) lines, the curvature \(R_{q}\) now changes sign and turns positive at about the same \(\beta\) as \(\sigma_{q}^{2}\) drops down after a local peak. In other words, the ability of \(R_{q}\) to encode the anomalous decay of fluctuations beyond the \(\mathbf{P}\) point on the \(l_{1}\) line in the zero field case (see fig.(6c) and the discussion around eq.(14)) continues for the respective \(\mathbf{P}\) points in the non-zero field case. Notice once again that, as in fig.(6c) for \(H=0\), the negative peak of \(R_{q}\) is at the same place (\(\beta=10\)) as the positive peak of \(\sigma_{q}^{2}\). For both zero and non-zero field cases we suspect that the statistics of correlations changes at around the temperature where \(R_{q}\) turns positive.
Also, note that unlike the zero field case, the statistics of dipole and quadrupole fluctuations are similar in the non-zero field case. We add that the sectional curvature \(R_{m}\) is not sensitive to the change in the statistics of fluctuations across the \(m\) lines and continues its negative divergence proportional to \(\xi_{m}\) both above and below the \(m\) lines. Significantly, the \(3d\) curvature \(R_{D}\), which is expected to encode dipolar and octopolar correlations, does respond to a change in fluctuation statistics below the \(m\) lines. In fig.(11b) \(R_{D}\) is seen to change sign to positive at about the same place as the octopolar fluctuation moment (shown in the inset) begins to decay. Once again, the statistics of dipole, quadrupole and octopole fluctuations are similar in the non-zero field case.

### Geometry of the general case (\(W\neq 0\))

For the general spin-3/2 chain with \(W\neq 0\) we have seen earlier that the phase structure remains mostly qualitatively similar, as is borne out by a comparison of fig.(3b) for \(W=1/10\) with fig.(3a) for \(W=0\). Without going into the details of scaling, etc., we report that the geometry of the non-BEG spin chain is qualitatively similar to the BEG case, with the behaviour of \(R_{m}\), \(R_{q}\) and \(R_{D}\) along similar lines in similar parameter regions. In fig.(12a) we plot the sectional curvatures and the correlation length on the \(l_{1}\) line, where we see that both \(R_{m}\) and \(R_{q}\) coincide and are in a fixed ratio to the correlation length, with the proportionality constant of order unity. In fig.(12b) we show plots of \(R_{m},\xi_{m},R_{q}\) and \(\sigma_{q}^{2}\) in the \(\{\mathbf{33}\}\) region. \(R_{m}\) is seen to mirror the correlation length \(\xi_{m}\). On the other hand, \(R_{q}\) flips to positive at around the same place as the quadrupole fluctuation substantially decays. Once again, \(R_{q}\) seems to be encoding a change in quadrupole correlation statistics.
Finally, we report briefly on the geometry of the interesting case of opposite signs of \(H\) and \(W\), as discussed in eq.(21) and represented through the magnetization plots in fig.(4a) and fig.(4b). In fig.(13a) we plot the dipolar sectional curvature \(R_{m}\) and the correlation length \(\xi_{m}\), along with the spin fluctuation moment \(\sigma_{m}^{2}\), for the same parameter values as the magnetization plot in fig.(4a). Here, the magnitude of negative \(W\) is a little less than its 'critical' value for the given \(H\) (see eq.(19)), so that the magnetization still remains positive, though it makes a rapid transition at a finite \(\beta\), as is evident from fig.(4a). Here we see a case of anomalous decay of spin fluctuations even as the correlation length continues to diverge. \(R_{m}\) this time is indeed sensitive to the change and accordingly flips sign to positive at about the same place as \(\sigma_{m}^{2}\) begins its decay. This is reminiscent of the similar behaviour of \(R_{q}\) in several places. In fig.(13b) we plot \(R_{m}\) and \(\xi_{m}\) for the same parameter values as fig.(4b), where the magnetization flips sign to \(-3/2\) at lower temperatures owing to a stronger energy-lowering influence of \(W\) as compared to \(H\). As is clear from fig.(13), there is an accompanying sharp rise in the correlation length \(\xi_{m}\) followed by a more relaxed decay to zero at lower temperatures. Interestingly, the curvature \(R_{m}\) too undergoes a negative peak of about the same magnitude, but then it flips to positive values and diverges. Thus, this becomes an instance of the dipolar curvature \(R_{m}\) suggesting a change of statistics in a manner different from the erstwhile cases. Here it seems to be taking a cue from the changing behaviour of the correlation length itself. Footnote 10: The ratio between the \(R_{m}\) peak and the \(\xi_{m}\) peak remains order one in all cases.
## VII Conclusions

In this work we have investigated in detail the phase structure and the state space geometry of the one-dimensional spin-3/2 lattice model, both for the case when the Hamiltonian is BEG and for a more general case with the octopolar coupling field \(W\) turned on. On several occasions geometry has been able to shine a light on the interesting phase behaviour of the spin-3/2 chain. This work finds interesting aspects of the ground state phase behaviour of the spin-3/2 chain not reported elsewhere. Thus, for both the BEG and the general case, the critical \(\mathcal{L}_{1}\) surface extending in the \(H-D-K\) parameter space in fig.(1a) has portions where the order parameter fluctuations anomalously decay while the correlation length continues to diverge, as represented in fig.(3). Furthermore, we have also documented region-wise variations in the scaling of the singular free energy, as represented in fig.(5). Yet another interesting, if curious, observation is that in several regions of the parameter space the hyperscaling relation is apparently violated, in that the inverse of the singular free energy scales slower than the correlation length. We investigate in detail the dipolar sectional curvature \(R_{m}\), the quadrupolar sectional curvature \(R_{q}\), and the \(3d\) curvature \(R_{D}\). Our geometrical investigations amply confirm Ruppeiner's strong as well as weak conjecture. Namely, near the (pseudo)critical point the scalar curvature is found to be equal to the correlation length up to an order unity constant, and away from criticality the appropriate scalar curvature corresponds well with a decaying or asymptotically small correlation length. Satisfyingly, the sectional curvatures chosen on appropriate surfaces are seen to efficiently encode correlations in the corresponding order parameters.
The sectional curvature method, employed earlier in the context of Kerr-AdS black holes in [1] and for the spin-one chain in [2], thus appears to be a robust geometrical means to probe systems with multiple order parameters. Significantly, we have been able to systematically document the sign change in sectional curvatures with the onset of the anomalous fluctuation behaviour mentioned above. Thus, at about the same temperatures as the fluctuations in an order parameter begin to decay (despite a continued divergence in the correlation length), the relevant sectional curvature flips sign from negative to positive. In the zero-field BEG case we could also succeed in obtaining an asymptotic expression for the amplitude of \(R_{q}\), which changes sign exactly as the scaling of the quadrupole fluctuation moment becomes negative. All this ties well with previous, long-standing assertions about the signature of the scalar curvature encoding a change in statistics from 'statistically attractive' (as in the case of the critical point, fluids, Bose gases) to 'statistically repulsive' (as in solid-like states, Fermi gases, etc.); see e.g. [3]. The anomalous decay of fluctuations, mathematically represented by a faster decay of the numerator compared to the denominator in eq.(50) or eq.(14), of course also has statistical undertones, since the aforementioned equations are themselves a result of statistical averaging over correlations of all length scales. Further, geometry seems to encode the aforementioned hyperscaling anomaly between the singular free energy and the correlation length. For the zero-field BEG case we have reported that, where hyperscaling fails, the sectional curvature \(R_{m}\) continues to scale as the inverse of the free energy while the \(3d\) curvature \(R_{D}\) encodes the correlation length. On the other hand, where hyperscaling is followed, the two curvatures become asymptotically equal or proportional. It will indeed be worthwhile investigating these interesting issues in the future.
In this work we have not presented results from our ongoing work on the geometry of the mean-field spin-3/2 lattice model. We shall soon report on some interesting results of the mean-field case.

###### Acknowledgements.

We gratefully acknowledge useful discussions with Tapobrata Sarkar, Suresh Govindarajan and George Ruppeiner.
2308.10560
Wide-Aperture MIMO via Reflection off a Smooth Surface
This paper provides a deterministic channel model for a scenario where wireless connectivity is established through a reflection off a smooth planar surface of an infinite extent. The developed model is rigorously built upon the physics of wave propagation and is as precise as tight are the unboundedness and smoothness assumptions on the surface. This model allows establishing how line-of-sight multiantenna communication is altered by a reflection off an electrically large surface, a situation of high interest for mmWave and terahertz frequencies.
Andrea Pizzo, Angel Lozano, Sundeep Rangan, Thomas Marzetta
2023-08-21T08:31:36Z
http://arxiv.org/abs/2308.10560v1
# Wide-Aperture MIMO via Reflection off a Smooth Surface

###### Abstract

This paper provides a deterministic channel model for a scenario where wireless connectivity is established through a reflection off a smooth planar surface of an infinite extent. The developed model is rigorously built upon the physics of wave propagation and is as precise as tight are the unboundedness and smoothness assumptions on the surface. This model allows establishing how line-of-sight multiantenna communication is altered by a reflection off an electrically large surface, a situation of high interest for mmWave and terahertz frequencies.

## I Introduction

The wealth of unexplored spectrum in the millimeter wave (mmWave) and terahertz ranges brings an onrush of wireless research seeking its fortune at higher frequencies [2, 3, 4]. The short range for which these frequencies are most suitable, in conjunction with the tiny wavelength, enables reasonably sized arrays to access multiple spatial degrees of freedom (DOF) even in line-of-sight (LOS) [5]. Precisely, LOS spatial multiplexing is made possible by the rich pattern of phase variations of the radiated field's spherical wavefront, which mimics the diversity richness of multipath propagation at lower frequencies. This potential has unleashed much research activity on wide-aperture multiple-input multiple-output (MIMO) communication over LOS channels [6, 7]. A downside of these high frequencies is blockage and lack of diffraction around obstacles, which may render LOS MIMO vulnerable to interruptions. This naturally raises the interest in studying whether wide-aperture MIMO could also operate through a reflection, capitalizing on the availability in many environments of interest of surfaces that are electrically (i.e., relative to the wavelength) large. This paper seeks to examine MIMO communication via reflection off a smooth planar surface of infinite extent.
To this end, one possibility would be to apply ray-tracing tools [8], but the accuracy to which the environment should be characterized to prevent artifacts is not known a priori. Also, ray tracing does not provide analytical insights into the underlying propagation mechanisms, which are essential to array optimization. Instead, we derive a deterministic physics-based scalar channel model that is valid irrespective of the communication range and embodies other models as particular cases.

### _Contributions_

Although an actual reflecting surface is necessarily finite and with some degree of roughness, at sufficiently high frequencies it may be reasonably regarded as infinitely large, as the impact of diffraction vanishes. Oppositely, the roughness is emphasized at high frequencies as irregularities on the surface become comparable to the small wavelengths. The latter aspect is not considered in this paper, left for future work. Motivated by the extensive physics literature on the interaction between a plane wave and an infinite smooth surface [9, 10], we start by expanding the 3D field generated by an arbitrary source in terms of plane waves [10, 11]. Fundamental principles describing the reflection and transmission phenomena at the surface can then be applied to each plane wave separately and combined to obtain the overall field at any point [10]. An LOS channel is seen to be the cascade of a low-pass filter that cuts off evanescent waves [12], and a reverse-bowl-shaped filter imposed by the wave equation [13]; a reflection off a surface adds an additional filtering stage that augments the model in [12, 13] with backward propagation. This paper can also be seen to complement the zero-mean stochastic model derived in [14], with their conjunction yielding a Rician fading model. After discretization through spatial sampling, a deterministic description of the channel is obtained.
This is finally used to numerically evaluate the eigenvalues, DOF, and spectral efficiency for the purpose of MIMO communication. Altogether, the contributions are:

* Starting from first principles, a channel model is developed that builds upon the physics of wave propagation. The analysis is as precise as tight are the unboundedness and smoothness assumptions on the surface.
* Progress is made, in the wake of [12, 13, 14], towards a comprehensive physics-based modeling of wireless propagation on which signal processing and communication theorists can test their algorithms. Propagation is described in terms of spatial Fourier transforms and linear system theory, notions central to both communities.
* Classical electromagnetic results such as the image theorem are revisited. These have fundamental implications on the optimization of antenna spacings as a function of the signal-to-noise ratio (SNR) and they allow extending results available for a pure LOS channel [15, 16] to a reflection channel.

### _Outline and Notation_

The manuscript is organized as follows. Sec. II revisits the physics behind plane-wave reflection off a smooth planar surface relying solely on linear system theory and the Fourier transform. In Sec. III, the Fourier spectral representations of the LOS and reflected transmissions are derived. The connection with the image theorem is established in Sec. IV, whereas the channel impulse response follows in Sec. V. After discretization, the channel response is used in Sec. VI to assess the MIMO performance via reflection. A comparison with ray-tracing is presented in Sec. VII. Final discussions and possible extensions are set forth in Sec. VIII. We use upper (lower) case letters for spatial-frequency (spatial) entities while \(J_{0}(\cdot)\) is the Bessel function of the first kind with order \(0\), \((x)^{+}=\max(x,0)\), and \(\delta(\cdot)\) is the Dirac delta function.
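To make the LOS-multiplexing premise of this introduction concrete, the sketch below (entirely our illustration; the carrier, ranges, and the Rayleigh-type spacing \(s=\sqrt{\lambda D/2}\) for two elements are assumed values, not taken from the paper) builds the \(2\times 2\) free-space channel between two facing two-element arrays from spherical-wave phases and compares the eigenvalue spread of \(\mathbf{H}^{\mathrm{H}}\mathbf{H}\) at short and long range:

```python
import cmath, math

wavelength = 0.01  # 30 GHz carrier, illustrative

def eig_ratio(D, s):
    """Ratio of the two eigenvalues of H^H H for a 2x2 LOS channel between
    facing two-element arrays with spacing s at range D (free-space phases)."""
    tx = [-s / 2, s / 2]
    rx = [-s / 2, s / 2]
    k = 2 * math.pi / wavelength
    H = [[cmath.exp(1j * k * math.hypot(D, r - t)) / math.hypot(D, r - t)
          for t in tx] for r in rx]
    # Gram matrix G = H^H H is 2x2 Hermitian: eigenvalues from trace and det.
    g00 = sum(abs(H[i][0]) ** 2 for i in range(2))
    g11 = sum(abs(H[i][1]) ** 2 for i in range(2))
    g01 = sum(H[i][0].conjugate() * H[i][1] for i in range(2))
    tr, det = g00 + g11, g00 * g11 - abs(g01) ** 2
    root = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr + root) / (tr - root)

D = 10.0
s_rayleigh = math.sqrt(wavelength * D / 2)  # classical spacing for N = 2

print(eig_ratio(D, s_rayleigh) < 1.5)        # True: two comparable eigenmodes
print(eig_ratio(1000.0, s_rayleigh) > 1e3)   # True: effectively rank one
```

At \(D=10\) m the spherical wavefront's phase curvature makes the two channel columns nearly orthogonal (two usable spatial DOF), while at \(D=1000\) m the same arrays see an essentially planar wavefront and the channel degenerates to rank one, which is the vulnerability that motivates studying reflections.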
## II Plane-Wave Interaction with Materials

Narrowband propagation is considered at angular frequency \(\omega\) in a 3D medium with an inhomogeneity created by a \(z\)-oriented planar object of infinite thickness, dividing the medium into a region 1 \(\{r_{z}<0\}\) (free space) and a region 2 \(\{r_{z}>0\}\) (material). The electromagnetic properties are constant in each of the two ensuing regions, characterized by the refractive indexes \(n_{1}=1\) and \(n_{2}\in\mathbb{C}\) with \(\mathrm{Re}(n_{2})\geq 1\) and \(\mathrm{Im}(n_{2})>0\) modeling the phase variations and absorption losses occurring inside the material [17, Sec. 4.2]. The wavenumbers in the two regions are \(\kappa_{1}=2\pi/\lambda\) and \[\kappa_{2}=n_{2}\kappa_{1}. \tag{1}\]

### _Dielectric Half-Space_

We first consider the \(xz\)-plane containing the direction of propagation and the surface normal, namely the _plane of incidence_. Footnote 1: This plane can always be obtained by rotating the Cartesian reference frame opportunely about the \(x\)-axis. A point in this plane has coordinates \((r_{x},r_{z})\). An upgoing _incident_ plane wave \[e_{\mathrm{i}}(r_{x},r_{z})=E_{\mathrm{i}}(\theta_{\mathrm{i}})\,e^{\mathrm{i}\kappa_{1}(r_{x}\sin\theta_{\mathrm{i}}+r_{z}\cos\theta_{\mathrm{i}})} \tag{2}\] with amplitude \(E_{\mathrm{i}}(\theta_{\mathrm{i}})\), traveling in region 1 from an angle \(\theta_{\mathrm{i}}\) relative to the surface normal, impinges thereon.
As a result of interaction with the surface, this field creates a downgoing _reflected_ plane wave in region 1, \[e_{\mathrm{r}}(r_{x},r_{z})=E_{\mathrm{r}}(\theta_{\mathrm{r}})\,e^{\mathrm{i}\kappa_{1}(r_{x}\sin\theta_{\mathrm{i}}-r_{z}\cos\theta_{\mathrm{i}})}, \tag{3}\] with amplitude \(E_{\mathrm{r}}(\theta_{\mathrm{r}})\) and angle \(\theta_{\mathrm{r}}\), and another upgoing _transmitted_ plane wave in region 2, \[e_{\mathrm{t}}(r_{x},r_{z})=E_{\mathrm{t}}(\theta_{\mathrm{t}})\,e^{\mathrm{i}\kappa_{2}(r_{x}\sin\theta_{\mathrm{t}}+r_{z}\cos\theta_{\mathrm{t}})}, \tag{4}\] with amplitude \(E_{\mathrm{t}}(\theta_{\mathrm{t}})\) and angle \(\theta_{\mathrm{t}}\). Derivable from the boundary conditions, Snell's law dictates that reflection occurs at the specular angle \(\theta_{\mathrm{r}}=\theta_{\mathrm{i}}\) while transmission is specified by \(\sin(\theta_{\mathrm{t}})=\sin(\theta_{\mathrm{i}})/n_{2}\)[9, Eq. 1.5.6]. The complex-valued plane-wave amplitudes can be written in terms of the _Fresnel coefficients_ \(R(\theta_{\mathrm{i}})=E_{\mathrm{r}}/E_{\mathrm{i}}\) and \(T(\theta_{\mathrm{i}})=E_{\mathrm{t}}/E_{\mathrm{i}}\), specifying the fraction of incident field reflected from or transmitted across the surface, for every incident angle. Their magnitude is always less than unity, and they satisfy the unitarity relation \(T(\theta_{\mathrm{i}})=1+R(\theta_{\mathrm{i}})\) due to conservation of energy. Multiple reflections that might arise inside an object of finite thickness would make the interaction with the surface more involved [10, Ch. 2.1.3]. However, these never occur at frequencies high enough such that the material thickness is much larger than the wavelength, making the reflection phenomenon highly predictable and suitable for array optimization, as will be seen.

Figure 1: Fresnel reflection coefficient (magnitude) as a function of \(\theta_{\mathrm{i}}\) for various refractive indices.
The complex-valued Fresnel reflection coefficient is given by [18, Eq. 7.4.2] Footnote 2: For every angle \(\theta_{\mathrm{i}}\) there are two linearly independent plane waves, being the solutions of the two scalar wave equations characterizing the transverse electric (TE) polarization, where the electric field is parallel to the surface, and the transverse magnetic (TM) polarization, where the magnetic field is parallel [10, Ch. 2.1]. We concentrate on the TE equation as the TM’s is obtainable by invoking the duality principle. \[R(\theta_{\mathrm{i}})=\frac{\cos(\theta_{\mathrm{i}})-\sqrt{n_{2}^{2}-\sin^{2}(\theta_{\mathrm{i}})}}{\cos(\theta_{\mathrm{i}})+\sqrt{n_{2}^{2}-\sin^{2}(\theta_{\mathrm{i}})}}, \tag{5}\] whose magnitude is plotted in Fig. 1 as a function of \(\theta_{\mathrm{i}}\) for various dielectric materials [19]. Total reflection is achieved by a perfect conductor, which behaves as a mirror. Other materials behave as perfect conductors only at a grazing incidence. In general, denser materials reflect energy better and, for a given material, close-to-grazing incidences experience higher reflections than those near the normal.

### _Linear-System-Theoretic Interpretation_

We now deviate from physics and provide a different viewpoint on the interaction mechanism with the surface; this perspective relies only on linear system theory and Fourier transforms, key results in the toolbox of communication theorists. The propagation directions of the incident, reflected, and transmitted plane waves may alternatively be specified by the wavenumber coordinates \[(\kappa_{x},\pm\kappa_{1z})=(\kappa_{1}\sin\theta_{\mathrm{i}},\pm\kappa_{1}\cos\theta_{\mathrm{i}}) \tag{6}\] \[(\kappa_{x},\kappa_{2z})=(\kappa_{2}\sin\theta_{\mathrm{t}},\kappa_{2}\cos\theta_{\mathrm{t}}) \tag{7}\] satisfying the dispersion relations \(\kappa_{x}^{2}+\kappa_{iz}^{2}=\kappa_{i}^{2}\) for \(i=1,2\).
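As a sanity check on eq.(5), the short script below (our illustration; the lossy index value is arbitrary) evaluates the TE Fresnel coefficient and confirms the trends of Fig. 1: \(|R|\leq 1\), and \(R\to-1\) both at grazing incidence and for a very dense, conductor-like material. It also verifies the unitarity relation \(T=1+R\) against the standard TE transmission form \(2\kappa_{1z}/(\kappa_{1z}+\kappa_{2z})\) (not written out in the text), and that eq.(5) coincides with its wavenumber-domain form \((\kappa_{1z}-\kappa_{2z})/(\kappa_{1z}+\kappa_{2z})\) under the change of variables (6)–(7):

```python
import cmath, math

def fresnel_R(theta_i, n2):
    """TE Fresnel reflection coefficient, eq.(5)."""
    c = math.cos(theta_i)
    root = cmath.sqrt(n2 ** 2 - math.sin(theta_i) ** 2)
    return (c - root) / (c + root)

def fresnel_R_wavenumber(theta_i, n2, k1=1.0):
    """Same coefficient in wavenumber coordinates: (k1z - k2z)/(k1z + k2z)."""
    kx = k1 * math.sin(theta_i)
    k1z = cmath.sqrt(k1 ** 2 - kx ** 2)
    k2z = cmath.sqrt((n2 * k1) ** 2 - kx ** 2)
    return (k1z - k2z) / (k1z + k2z)

n2 = 2.0 + 0.1j  # a generic lossy dielectric (illustrative value)
for deg in range(0, 90):
    th = math.radians(deg)
    R = fresnel_R(th, n2)
    T = 2 * math.cos(th) / (math.cos(th) + cmath.sqrt(n2 ** 2 - math.sin(th) ** 2))
    assert abs(R) <= 1.0                                  # passive surface
    assert abs(T - (1 + R)) < 1e-12                       # unitarity T = 1 + R
    assert abs(R - fresnel_R_wavenumber(th, n2)) < 1e-12  # eq.(5) in k-domain

print(abs(fresnel_R(math.radians(89.9), n2) + 1) < 1e-2)  # True: grazing
print(abs(fresnel_R(0.0, 1e6 * (1 + 1j)) + 1) < 1e-2)     # True: conductor-like
```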
By means of (6), the plane waves in (2) and (3) can be seen as the 2D Fourier harmonics \[e_{\mathrm{i}}(r_{x},r_{z})=E_{\mathrm{i}}(\kappa_{x})\,e^{\mathrm{i}(\kappa_{x}r_{x}+\kappa_{1z}r_{z})} \tag{8}\] \[e_{\mathrm{r}}(r_{x},r_{z})=E_{\mathrm{r}}(\kappa_{x})\,e^{\mathrm{i}(\kappa_{x}r_{x}-\kappa_{1z}r_{z})}, \tag{9}\] which are functions of the spatial-frequency variables \((\kappa_{x},\kappa_{1z})\). The same holds for (4), expressed as \[e_{\mathrm{t}}(r_{x},r_{z})=E_{\mathrm{t}}(\kappa_{x})\,e^{\mathrm{i}(\kappa_{x}r_{x}+\kappa_{2z}r_{z})} \tag{10}\] for \((\kappa_{x},\kappa_{2z})\). The connection with Fourier theory that the above change of variables establishes enables a linear-system-theoretic interpretation of the reflection and transmission phenomena, with the focus henceforth being on the reflection. The response to a harmonic input at spatial frequency \((\kappa_{x},\kappa_{1z})\) is another harmonic output at the same spatial frequency--up to a change of sign in \(\kappa_{1z}\) due to the reflected wave traveling in the opposite direction--whose complex amplitude is the product of the input's amplitude and the Fresnel spectrum, given by [10, Eq. 2.1.13] \[R(\kappa_{x})=\frac{\kappa_{1z}-\kappa_{2z}}{\kappa_{1z}+\kappa_{2z}} \tag{11}\] for dielectric materials; this follows from (5) after a change of variables to wavenumber coordinates while using (1). Remarkably, a behavior of this sort characterizes a linear and space-invariant (LSI) system, which is fully described by its wavenumber response \(R(\kappa_{x})\) for any \(\kappa_{x}\).

## III Plane Wave Spectral Representation

Consider now every possible vertical plane obtainable by rotating the \(xz\)-plane of incidence (i.e., \(\phi_{\mathrm{i}}=0\)) about the \(x\)-axis by an angle \(\phi_{\mathrm{i}}\in[0,2\pi)\).
This brings into play other variables in the spatial and wavenumber domains, which we embed into the vectors \(\mathbf{r}\) with coordinates \((r_{x},r_{y})\) and \(\mathbf{\kappa}\) with coordinates \((\kappa_{x},\kappa_{y})\). The field \(e_{\mathrm{i}}(\mathbf{r},r_{z})\) radiated by a source of electric current \(j(\mathbf{r},r_{z})\) is described exactly by an integral superposition of complex harmonics of different amplitudes and spatial frequencies via the Fourier (plane wave) spectral representation [10, 11]. Precisely, for a source enclosed within a sphere of radius \(0<R_{0}<D_{0}\) (see Fig. 2), \[e_{\mathrm{i}}(\mathbf{r},r_{z})=\left\{\begin{array}{ll}\iint_{-\infty}^{\infty}E_{\mathrm{i}}^{-}(\mathbf{\kappa})\,e^{\mathrm{i}\mathbf{\kappa}^{\mathrm{T}}\mathbf{r}}\frac{d\mathbf{\kappa}}{(2\pi)^{2}}&r_{z}<-R_{0}\\ \iint_{-\infty}^{\infty}E_{\mathrm{i}}^{+}(\mathbf{\kappa})\,e^{\mathrm{i}\mathbf{\kappa}^{\mathrm{T}}\mathbf{r}}\frac{d\mathbf{\kappa}}{(2\pi)^{2}}&r_{z}>R_{0}\end{array}\right. \tag{12}\] with complex-valued amplitudes \[E_{\mathrm{i}}^{\pm}(\mathbf{\kappa})=\frac{\kappa_{1}\eta_{1}}{2}\frac{J_{\pm}(\mathbf{\kappa})}{\kappa_{1z}}\,e^{\pm\mathrm{i}\kappa_{1z}r_{z}} \tag{13}\] specified by the source's spectrum \(J_{\pm}(\mathbf{\kappa})\) obtained via a 3D Fourier transform of \(j(\mathbf{r},r_{z})\) evaluated at \(\kappa_{z}=\pm\kappa_{1z}\), \(\kappa_{iz}\) being defined as \[\kappa_{iz}=\sqrt{\kappa_{i}^{2}-\|\mathbf{\kappa}\|^{2}}, \tag{14}\] for \(i=1,2\). Thus, \[J_{\pm}(\mathbf{\kappa})=\int\!\!\!\int\!\!\!\int_{-\infty}^{\infty}j(\mathbf{s},s_{z})\,e^{-\mathrm{j}\left(\mathbf{\kappa}^{\mathsf{T}}\mathbf{s}\pm\kappa_{1z}s_{z}\right)}\,d\mathbf{s}\,ds_{z} \tag{15}\] given \(\eta_{1}\approx 120\pi\) as the wave impedance of free-space.

Figure 2: Scalar wave propagation in a 3D isotropic and inhomogeneous medium. View from the plane of incidence.
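Equation (14) is where the low-pass behaviour mentioned in Sec. I originates: for \(\|\mathbf{\kappa}\|>\kappa_{1}\) the longitudinal wavenumber \(\kappa_{1z}\) is imaginary and the corresponding plane-wave component decays exponentially with depth instead of propagating. A minimal numerical illustration (the wavelength and observation depth are arbitrary choices of ours):

```python
import cmath, math

wavelength = 0.01              # 10 mm carrier, arbitrary
k1 = 2 * math.pi / wavelength  # free-space wavenumber kappa_1

def kz(k_transverse, k=k1):
    """Longitudinal wavenumber kappa_1z = sqrt(kappa_1^2 - |kappa|^2), eq.(14).
    Real for propagating components, imaginary for evanescent ones."""
    return cmath.sqrt(k ** 2 - k_transverse ** 2)

rz = 0.05  # observation depth along z (five wavelengths)

# Propagating component (|kappa| < kappa_1): pure phase rotation, no decay.
prop = cmath.exp(1j * kz(0.5 * k1) * rz)
# Evanescent component (|kappa| > kappa_1): exponential amplitude decay.
evan = cmath.exp(1j * kz(2.0 * k1) * rz)

print(abs(prop))         # 1.0 (up to rounding): carried at full strength
print(abs(evan) < 1e-6)  # True: filtered out after a few wavelengths
```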
The reflected field \(e_{\text{r}}(\mathbf{r},r_{z})\) follows from the linearity of the spatial filtering operation applied by the surface and the delay property of the Fourier transform, as the surface is placed at an arbitrary distance \(D_{0}\) from the source, along the \(z\)-axis; see Fig. 2. The Fourier spectral representation of \(e_{\text{r}}(\mathbf{r},r_{z})\) is therefore \[e_{\text{r}}(\mathbf{r},r_{z})=\!\int\!\!\!\int_{-\infty}^{\infty}E_{\text{i}}^{+}(\mathbf{\kappa})R(\mathbf{\kappa})\,e^{-\mathrm{j}\kappa_{1z}(r_{z}-2D_{0})}e^{\mathrm{j}\mathbf{\kappa}^{\mathsf{T}}\mathbf{r}}\frac{d\mathbf{\kappa}}{(2\pi)^{2}} \tag{16}\] with \(R(\mathbf{\kappa})\) the Fresnel spectrum in (11) and \(\kappa_{1z}\) as defined in (14). Physically, the reflected field is created by superimposing the interactions with all possible incident contributions on the plane of incidence and for all possible vertical planes. With respect to an incident plane wave, a reflected plane wave exhibits an extra phase shift that accounts for the round-trip delay accumulated by the incident wave during the travel to the surface and back, along the \(z\)-axis. This effect can be regarded as a _migration_ of the incident field and is directly connected to the image theorem, as discussed in Sec. IV.

## IV Image Theorem

Plugging (13) into (12), the incident field in \(\{r_{z}>R_{0}\}\) is \[e_{\text{i}}(\mathbf{r},r_{z})\!=\!\frac{\kappa_{1}\eta_{1}}{2}\!\int\!\!\!\int_{-\infty}^{\infty}\!\frac{J_{+}(\mathbf{\kappa})}{\kappa_{1z}}\,e^{\mathrm{j}\left(\mathbf{\kappa}^{\mathsf{T}}\mathbf{r}+\kappa_{1z}r_{z}\right)}\frac{d\mathbf{\kappa}}{(2\pi)^{2}} \tag{17}\] where \(J_{+}(\mathbf{\kappa})\) is given by (15).
Similarly, the reflected field in (16) can be rewritten as \[e_{\text{r}}(\mathbf{r},r_{z})=\frac{\kappa_{1}\eta_{1}}{2}\iint_{-\infty}^{\infty}\frac{J_{\text{r}}(\mathbf{\kappa})}{\kappa_{1z}}\,e^{\mathrm{j}\left(\mathbf{\kappa}^{\mathsf{T}}\mathbf{r}-\kappa_{1z}r_{z}\right)}\frac{d\mathbf{\kappa}}{(2\pi)^{2}} \tag{18}\] where \[J_{\text{r}}(\mathbf{\kappa})=J_{+}(\mathbf{\kappa})\,e^{\mathrm{j}\kappa_{1z}2D_{0}}\,R(\mathbf{\kappa}). \tag{19}\] Notice that (18) and (17) have the same form. Hence, \(J_{\text{r}}(\mathbf{\kappa})\) may be regarded as the Fourier spectrum of a fictitious source \(j_{\text{r}}(\mathbf{r},r_{z})\). For \(R(\mathbf{\kappa})=-1\), the reflected field in (18) may be reproduced by replicating the source at \(r_{z}=2D_{0}\), which accounts for the field migration to the surface and backward. This is the _image theorem_, whereby the reflection elicited by a perfect conductor is equivalent to a mirror image of the source [20, Sec. 4.7.1]. As an example, for a point source \(j(\mathbf{r},r_{z})=\delta(\mathbf{r})\delta(r_{z})\), i.e., for \(J_{+}(\mathbf{\kappa})=1\), applying Weyl's identity [10, Eq. 2.2.27] \[\frac{e^{\mathrm{j}\kappa_{1}\|(\mathbf{r},r_{z})\|}}{\|(\mathbf{r},r_{z})\|}=\frac{\mathrm{j}}{2\pi}\iint_{-\infty}^{\infty}\frac{e^{\mathrm{j}\left(\mathbf{\kappa}^{\mathsf{T}}\mathbf{r}+\kappa_{1z}|r_{z}|\right)}}{\kappa_{1z}}\,d\mathbf{\kappa}, \tag{20}\] from (18) we obtain \[e_{\text{r}}(\mathbf{r},r_{z})=\mathrm{j}\,\kappa_{1}\eta_{1}\,G(\mathbf{r},r_{z},\mathbf{0},2D_{0}) \tag{21}\] where \[G(\mathbf{r},r_{z},\mathbf{r}^{\prime},r_{z}^{\prime})=\frac{e^{\mathrm{j}\kappa_{1}\|(\mathbf{r}-\mathbf{r}^{\prime},r_{z}-r_{z}^{\prime})\|}}{4\pi\,\|(\mathbf{r}-\mathbf{r}^{\prime},r_{z}-r_{z}^{\prime})\|} \tag{22}\] is the Green's function describing a spherical wave generated at \((\mathbf{r}^{\prime},r_{z}^{\prime})\) and measured at \((\mathbf{r},r_{z})\).
Hence, \(j_{\text{r}}(\mathbf{r},r_{z})=\delta(\mathbf{r})\delta(r_{z}-2D_{0})\). For arbitrary materials, \(j_{\text{r}}(\mathbf{r},r_{z})\) is obtained from the spatial convolution \[j_{\text{r}}(\mathbf{r},r_{z})=\iint_{-\infty}^{\infty}j(\mathbf{u},r_{z}-2D_{0})\,r(\mathbf{r}-\mathbf{u})\,d\mathbf{u} \tag{23}\] of the image source and the impulse response of the surface, \[r(\mathbf{r})=\iint_{-\infty}^{\infty}R(\mathbf{\kappa})\,e^{\mathrm{j}\mathbf{\kappa}^{\mathsf{T}}\mathbf{r}}\frac{d\mathbf{\kappa}}{(2\pi)^{2}}, \tag{24}\] which is defined as the 2D inverse Fourier transform of \(R(\mathbf{\kappa})\) in (11). The azimuthal dependence of \(r(\mathbf{r})\) can be eliminated by evaluating (24) at \((\|\mathbf{r}\|,0)\), which is possible due to the circular symmetry of \(R(\mathbf{\kappa})\). From (23), we infer that the spatial filtering applied by the surface creates a _blurred image_ of the source. This effect vanishes in perfect conductors, recreating a perfect image. For a point source, \(j_{\text{r}}(\mathbf{r},r_{z})=r(\mathbf{r})\delta(r_{z}-2D_{0})\), \(\mathbf{r}\in\mathbb{R}^{2}\), modeling the impressed currents induced by the source on the entire surface. The spatial filtering simplifies when the surface is far enough from the source that the reflected propagation occurs in the _paraxial regime_. Then, \(R(\mathbf{\kappa})\) is roughly constant for all possible incident angles and given by the complex material reflectivity [9, Sec. 1.5.3]. Due to the impulsiveness of the reflection mechanism under the paraxial assumption, the image source becomes a weakened (and phase-shifted) version of the original one, which is the premise of ray-tracing algorithms. However, this need not be the case in wide-aperture MIMO, which rests on the range being short; this aspect is further expounded in Sec. VII. The implications for the optimization of antenna spacings in MIMO communication are discussed in Sec. VI.
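As a numerical sanity check of the image construction, for a perfect conductor (\(R(\mathbf{\kappa})=-1\)) the total field of a point source and its mirror image must vanish on the surface at \(r_{z}=D_{0}\). The sketch below uses the scalar Green's function of (22), with constant prefactors dropped and illustrative values for the wavenumber and geometry:

```python
import cmath

def green(r, rz, rp, rzp, k1):
    """Free-space scalar Green's function (22): spherical wave from (rp, rzp)."""
    dist = ((r[0] - rp[0])**2 + (r[1] - rp[1])**2 + (rz - rzp)**2) ** 0.5
    return cmath.exp(1j * k1 * dist) / (4 * cmath.pi * dist)

k1 = 2 * cmath.pi / 0.005   # illustrative wavenumber (lambda = 5 mm)
D0 = 15.0                   # surface at z = D0, point source at the origin

def total_field(r, rz):
    # Perfect conductor: the reflected field equals minus the field of the
    # image source at z = 2*D0, so e = G(source) - G(image).
    return green(r, rz, (0, 0), 0.0, k1) - green(r, rz, (0, 0), 2 * D0, k1)

on_surface = total_field((3.7, -1.2), D0)    # any point with z = D0
off_surface = total_field((3.7, -1.2), 10.0)
```

On the surface the distances to source and image coincide, so the two spherical waves cancel exactly; away from it the total field is nonzero.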
## V Channel Impulse Response

A complete description of what unfolds in region 1 is obtained by combining all contributions into \[e(\mathbf{r},r_{z})=e_{\text{i}}(\mathbf{r},r_{z})+e_{\text{r}}(\mathbf{r},r_{z}) \tag{25}\] whose expression is given by (27) after substituting (12) and (16). The input-output relationship between \(j(\mathbf{s},s_{z})\) and \(e(\mathbf{r},r_{z})\) is the spatial convolution [14] \[e(\mathbf{r},r_{z})=\iiint_{-\infty}^{\infty}j(\mathbf{s},s_{z})\,h(\mathbf{r},r_{z},\mathbf{s},s_{z})\,d\mathbf{s}ds_{z} \tag{28}\] where \(h(\mathbf{r},r_{z},\mathbf{s},s_{z})\) is the channel impulse response. Combining (27), (13), and (15), the channel response can be written as the 2D inverse Fourier transform \[h(\mathbf{r}-\mathbf{s};r_{z},s_{z})=\iint_{-\infty}^{\infty}H(\mathbf{\kappa};r_{z},s_{z})\,e^{\mathrm{j}\mathbf{\kappa}^{\mathrm{T}}(\mathbf{r}-\mathbf{s})}\,\frac{d\mathbf{\kappa}}{(2\pi)^{2}} \tag{29}\] of \(H(\mathbf{\kappa};r_{z},s_{z})\) in (30). Here, the integration domain is practically limited to a disk \(\mathcal{D}\) of radius \(\kappa_{1}=2\pi/\lambda\), correctly capturing the low-pass-filtering behavior of wireless propagation [12, 14]; this band limitation enters (30) through an indicator function. The reflected channel is space invariant over any pair of parallel \(z\)-planes. This extends to any pair of parallel planes, not necessarily \(z\), for an LOS channel. The space invariance is a direct consequence of the unboundedness and smoothness of the reflecting surface and enables a linear-system-theoretic interpretation of the reflection and transmission phenomena. Precisely, communications between any two different \(z\)-planes containing source and receiver can be regarded as a linear space-invariant (LSI) system with the wavenumber response in (30).
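The LSI filtering can be made concrete by evaluating a LOS-type wavenumber response at a few spatial frequencies. The form used below — disk indicator times migration phase times \(1/\kappa_{1z}\), with all constant prefactors dropped — is an assumed stand-in for the LOS term of (30), chosen to exhibit the band limitation and circular symmetry just described:

```python
import cmath

k1 = 2 * cmath.pi / 0.005   # illustrative wavenumber (lambda = 5 mm)

def H_los(kx, ky, dz):
    """Assumed LOS-type wavenumber response, up to constants: indicator on
    the disk ||kappa|| <= k1, times the migration phase, times 1/kappa_1z."""
    k2 = kx * kx + ky * ky
    if k2 > k1 * k1:
        return 0j               # band-limited: zero outside the disk D
    k1z = cmath.sqrt(k1 * k1 - k2)
    return cmath.exp(1j * k1z * dz) / k1z

dz = 10.0
a = H_los(0.3 * k1, 0.4 * k1, dz)
b = H_los(0.4 * k1, 0.3 * k1, dz)   # same ||kappa||: circular symmetry
c = H_los(0.5 * k1, 0.0,      dz)   # also ||kappa|| = 0.5*k1
outside = H_los(1.2 * k1, 0.0, dz)  # outside the disk: filtered out
```

The response depends on \(\mathbf{\kappa}\) only through \(\|\mathbf{\kappa}\|\), and vanishes identically outside the disk of radius \(\kappa_{1}\).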
There are three main terms in (30), plus a phase shift due to migration, that may be interpreted as the cascade of:
* First, \(\mathbb{1}_{\mathcal{D}}(\mathbf{\kappa})\), a low-pass filter introduced by the migration operation [12, 14].
* Then, \(1/\kappa_{1z}\), which confers a reverse-bowl behavior to \(H(\mathbf{\kappa};r_{z},s_{z})\) and is directly attributable to the wave equation [13, 14].
* Finally, \(R(\mathbf{\kappa})\), which models the reflection. This depends on \(\mathbf{\kappa}\) via \(\kappa_{iz}\) in (14), hence it is circularly symmetric in the wavenumber domain, which is instrumental in devising an efficient numerical procedure to generate channel samples (see Appendix).

The space-invariant channel in (29) generated by a specular reflection is obtainable as a particular instance of the double 2D Fourier transform [14, Sec. III] \[h(\mathbf{r},r_{z},\mathbf{s},s_{z})=\iiiint_{-\infty}^{\infty}H(\mathbf{k},\mathbf{\kappa};r_{z},s_{z})\,e^{\mathrm{j}\mathbf{k}^{\mathrm{T}}\mathbf{r}}e^{-\mathrm{j}\mathbf{\kappa}^{\mathrm{T}}\mathbf{s}}\,\frac{d\mathbf{k}}{(2\pi)^{2}}\frac{d\mathbf{\kappa}}{(2\pi)^{2}} \tag{31}\] of the wavenumber response \[H(\mathbf{k},\mathbf{\kappa};r_{z},s_{z})=\mathbf{\phi}^{\mathrm{H}}(\mathbf{k},r_{z})\mathbf{H}(\mathbf{k},\mathbf{\kappa})\mathbf{\phi}(\mathbf{\kappa},s_{z}), \tag{32}\] given \(\mathbf{\phi}(\mathbf{\kappa},s_{z})=\left(e^{-\mathrm{j}\kappa_{1z}s_{z}},e^{\mathrm{j}\kappa_{1z}s_{z}}\right)^{\mathrm{T}}\). The above is parametrized by the wavenumber matrix \[\mathbf{H}(\mathbf{k},\mathbf{\kappa})=\begin{pmatrix}H_{++}(\mathbf{k},\mathbf{\kappa})&H_{+-}(\mathbf{k},\mathbf{\kappa})\\ H_{-+}(\mathbf{k},\mathbf{\kappa})&H_{--}(\mathbf{k},\mathbf{\kappa})\end{pmatrix} \tag{33}\] that models the coupling between every input spatial frequency \(\mathbf{\kappa}\) and every other output spatial frequency \(\mathbf{k}\).
It can also be regarded as an angular response mapping every incident plane wave traveling along \((\mathbf{\kappa},\pm\kappa_{1z})\) into every other receive plane wave from \((\mathbf{k},\pm\kappa_{1z})\). The convention adopted for the entries of (33) is that the first and second subscripts refer, respectively, to received and incident plane waves (each one being associated with upgoing or downgoing waves). We next find the parameterization of \(\mathbf{H}(\mathbf{k},\mathbf{\kappa})\) that models the scenario in Sec. III. By inspection, comparing (29)-(30) against (31) yields (34). The entries of the angular matrix are impulsive because incident and received plane waves are in one-to-one correspondence: each incident wave turns into a received wave with specular direction, as specified by Snell's law. Generally, the surface of a material object may appear as either smooth or rough depending on the frequency. A rough surface at the microscopic level reflects every impinging plane wave off multiple directions, creating a diffuse reflection spectrum, typically centered around the specular direction. These surface irregularities are accounted for by a non-impulsive \(\mathbf{H}(\mathbf{k},\mathbf{\kappa})\) in (32), whose computation is left for future work.

## VI Application to MIMO Communication

Let us now apply the developed model to evaluate the channel eigenvalues, DOF, and spectral efficiency. With \(N_{\mathrm{t}}\) transmit and \(N_{\mathrm{r}}\) receive antennas, the channel matrix \(\mathbf{H}\in\mathbb{C}^{N_{\mathrm{r}}\times N_{\mathrm{t}}}\) is obtained by sampling the impulse response at the antenna locations, \([\mathbf{H}]_{m,n}=h(\mathbf{r}_{m},\mathbf{s}_{n})\) for \(m=0,\ldots,N_{\mathrm{r}}-1\) and \(n=0,\ldots,N_{\mathrm{t}}-1\). The transmit array is centered at the origin whereas the centroid of the receive array is at \(\mathbf{r}_{0}=(r_{0x},r_{0y},r_{0z})\).
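The sampling step \([\mathbf{H}]_{m,n}=h(\mathbf{r}_{m},\mathbf{s}_{n})\) can be sketched for the LOS case, with a scalar spherical-wave response standing in for the exact impulse response (constant prefactors dropped, illustrative parameters). With parallel \(x\)-oriented ULAs and antenna spacing \(\sqrt{\lambda D/N}\) — the choice (35) below — \(\mathbf{H}\mathbf{H}^{\mathrm{H}}\) comes out close to a scaled identity, the Fourier-matrix behavior exploited in Sec. VI:

```python
import cmath

f = 57.5e9
lam = 3e8 / f                  # wavelength
N = 8
D = 10.0                       # range between the parallel ULAs
d = (lam * D / N) ** 0.5       # spacing sqrt(lambda*D/N_max), cf. (35)

def h_los(x_r, x_t):
    """Scalar LOS response between antenna abscissas of two parallel
    x-oriented ULAs separated by D (constant prefactors omitted)."""
    dist = (D * D + (x_r - x_t) ** 2) ** 0.5
    return cmath.exp(1j * 2 * cmath.pi / lam * dist) / (4 * cmath.pi * dist)

# [H]_{m,n} = h(r_m, s_n): sample the response at the antenna locations.
H = [[h_los(m * d, n * d) for n in range(N)] for m in range(N)]

# G = H H^H; with this spacing H is close to a scaled Fourier matrix,
# so G should be nearly diagonal with nearly equal diagonal entries.
G = [[sum(H[m][k] * H[n][k].conjugate() for k in range(N)) for n in range(N)]
     for m in range(N)]

diag = sum(abs(G[m][m]) for m in range(N)) / N
off = max(abs(G[m][n]) for m in range(N) for n in range(N) if m != n)
```

The off-diagonal entries of \(\mathbf{H}\mathbf{H}^{\mathrm{H}}\) are small relative to the (nearly equal) diagonal ones, i.e., the channel eigenvalues are close to uniform.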
An efficient numerical generation procedure for \(\mathbf{H}\) is provided in the Appendix. Let \(N_{\mathrm{min}}=\min(N_{\mathrm{r}},N_{\mathrm{t}})\) and \(N_{\mathrm{max}}=\max(N_{\mathrm{r}},N_{\mathrm{t}})\). We consider uniform linear arrays (ULAs) at \(57.5\) GHz (see Fig. 3) under the proviso that those ULAs are substantially shorter than their separation range, the so-called _paraxial approximation_, so we can leverage results available for LOS channels [15, 16]. The transmitting and receiving ULAs have arbitrary orientations \(\vartheta_{\mathrm{t}}\) and \(\vartheta_{\mathrm{r}}\) with respect to the \(x\)-axis. We hasten to emphasize that the reliance on the paraxial approximation is confined to the production of benchmark results for LOS MIMO, with our channel model being valid regardless. The frequency, in turn, is motivated by mmWave applications [2] and by the availability of refractive indices for most common materials [19].

### _Parallel Arrays Optimized for LOS Transmission_

Consider parallel ULAs aligned with the \(x\)-axis, with \(N_{\mathrm{t}}=N_{\mathrm{r}}=8\) and antenna spacing \(\mathsf{d}\). The range is \(D=10\) m whereas the surface is at \(D_{0}=15\) m. First, we validate the model in LOS, for which the closed-form solution in (22) is available. The channel matrix obtained by sampling (22) is compared to the LOS component in our model, derivable after an inverse Fourier transform of the first term in (30), the LOS term, according to (29), followed by spatial sampling. Setting \[\mathsf{d}(D)=\sqrt{\lambda D/N_{\mathrm{max}}}, \tag{35}\] renders \(\mathbf{H}\) a Fourier matrix and is optimum at high SNR [5, 15]. The normalized eigenvalues of \(\mathbf{H}\mathbf{H}^{\mathrm{H}}\), \(\lambda_{n}(\mathbf{H})\), are plotted in Fig. 4. The perfect match validates the numerical procedure in the Appendix for this LOS setting. Then, we validate the model under perfect reflection.
To this end, the channel obtained by imaging the source is compared against the one associated with the perfect reflection in our model; the latter is obtained by plugging the second term in (30) with \(R(\mathbf{\kappa})=-1\) into (29) and sampling. The eigenvalues of the reflected channel matrix are further shown in Fig. 4 for different materials. These undergo two effects relative to their LOS brethren:
* _Power loss_ caused by the longer range and by the reflection of only a share of the incident power, with dense materials and shallow angles reflecting better.
* _Spatial selectivity_ due to the antenna spacing in (35) being suboptimally small for the longer range of the reflected channel.

We now gauge the capacity with channel-state information at the transmitter, which equals [21, 22] \[C(\mathbf{H},\mathsf{SNR})=\sum_{n=1}^{N_{\mathrm{min}}}\log_{2}\!\left(1+\left(\nu-\frac{1}{\lambda_{n}(\mathbf{H})}\right)^{\!\!+}\lambda_{n}(\mathbf{H})\right) \tag{36}\] where \(\nu\) is such that \(\sum_{n=1}^{N_{\mathrm{min}}}\left(\nu-1/\lambda_{n}(\mathbf{H})\right)^{\!\!+}=\mathsf{SNR}\) while \(\sum_{n=1}^{N_{\mathrm{min}}}\lambda_{n}(\mathbf{H})=N_{\mathrm{r}}N_{\mathrm{t}}\). At any given SNR, \(C(\mathsf{SNR})=\max_{\mathbf{H}}C(\mathbf{H},\mathsf{SNR})\) satisfies [15, 16] \[C(\mathsf{SNR})\leq\max_{\rho\in\{1,2,\ldots,N_{\mathrm{min}}\}}\rho\log_{2}\!\left(1+\frac{\mathsf{SNR}}{\rho}\frac{N_{\mathrm{r}}N_{\mathrm{t}}}{\rho}\right), \tag{37}\] with the upper bound corresponding to \(\rho\) nonzero eigenvalues equal to \(N_{\mathrm{r}}N_{\mathrm{t}}/\rho\) and to the SNR-dependent antenna spacing \[\mathsf{d}(D,\mathsf{SNR})=\sqrt{\eta(\mathsf{SNR})\lambda D/N_{\mathrm{max}}}, \tag{38}\]

Figure 3: ULAs separated by \(D\) and equipped with \(N_{\mathrm{t}}=N_{\mathrm{r}}=8\) antennas with spacing \(\mathsf{d}\). Arrays have arbitrary orientations \(\vartheta_{\mathrm{t}}\) and \(\vartheta_{\mathrm{r}}\) with respect to the \(x\)-axis.
The clear and solid circles at source and receiver indicate the antennas and their projections onto the \(x\)-axis, respectively. Antennas are connected either via a LOS or a reflected channel off a \(z\)-oriented surface. We denote by \(\theta_{0}\) the angle formed by the surface normal and the geometrical path connecting the centroids of the image source and receiver.

The spacing in (38) activates a fraction \(\eta(\mathsf{SNR})=\rho(\mathsf{SNR})/N_{\min}\) of the \(N_{\min}\) potential DOF. Thus, \(\eta\in[0,1]\) with \(\eta=1\) at high enough SNR. The capacity \(C(\boldsymbol{H},\mathsf{SNR})\) is reported in Fig. 5 for the antenna spacing, \(\mathsf{d}(D,\mathsf{SNR})\), that is optimum for the LOS channel at every SNR. With respect to the LOS case, the capacity of the reflected channel experiences an offset (power loss, due to the longer range) and a reduced slope (DOF loss, due to the spatial selectivity).

### _Parallel Arrays Optimized for the Reflected Transmission_

While the power loss is inevitable, because of the longer range, the spatial selectivity can be corrected by tailoring the antenna spacing to the equivalent LOS transmission from the image source. To this end, recall from the image theorem that the reflected channel can be regarded as an LOS channel with augmented distance \(D_{\mathsf{e}}>D\); in the setting of Figs. 4 and 5, \(D_{\mathsf{e}}=2D_{0}-D\). For a perfect conductor, this alone justifies the choice of an antenna spacing equal to \(\mathsf{d}(D_{\mathsf{e}})\). The argument is somewhat more involved for arbitrary materials, due to the distortion introduced by reflection, but it ultimately leads to the same observation, as illustrated in Fig. 6. Numerically, this is supported by the invariance of the curves for the materials in Fig. 4. Physically, it is explained by the paraxial approximation, whereby the field has an approximately constant wavenumber response in magnitude.
Hence, the reflection has an approximately multiplicative effect on the channel response in (30), and the whole interaction phenomenon with the surface is described by the reflectivity coefficient, \(R(\theta_{0})\), which is derivable from (5) after setting \(\theta_{\mathsf{i}}=\theta_{0}\) with \(\theta_{0}\) as per Fig. 3.

Figure 4: Normalized channel eigenvalues for various materials. Parallel ULAs separated by \(D=10\) m with spacing \(\mathsf{d}(D)\) in (35).

Figure 5: Spectral efficiency as a function of SNR for various materials. Parallel ULAs separated by \(D=10\) m with spacing \(\mathsf{d}(D,\mathsf{SNR})\) in (38).

Figure 6: Normalized channel eigenvalues for various materials. Parallel ULAs separated by \(D=10\) m with spacing \(\mathsf{d}(D)\) for the LOS channel and \(\mathsf{d}(D_{\mathsf{e}})\) for the reflected channel.

Figure 7: Spectral efficiency as a function of SNR for various materials. Parallel ULAs separated by \(D=10\) m with spacing \(\mathsf{d}(D,\mathsf{SNR})\) for the LOS channel and \(\mathsf{d}(D_{\mathsf{e}},\mathsf{SNR})\) for the reflected channel.

Similarly, the eigenvalues of the reflected MIMO channel \(\mathbf{H}\mathbf{H}^{\text{\tiny H}}\) are obtained by scaling the LOS eigenvalues uniformly by \(|R(\theta_{0})|^{2}\). From (5), for the chosen materials, setting \(\theta_{\text{i}}=\theta_{0}\) yields a scaling of \(7.19\) dB (concrete), \(9.63\) dB (floorboard), and \(13.98\) dB (plaster board). These values describe the gap in Fig. 6 between the eigenvalues of the reflected channel for various materials and those of a perfect conductor. The additional gap to the LOS channel is due to the enhanced range, a loss of \(6.02\) dB in our setting. For completeness, Fig. 7 shows the spectral efficiency corresponding to the eigenvalue distributions in Fig. 6. With respect to Fig.
5, the antenna spacing is \(\mathsf{d}(D,\mathsf{SNR})\) for the LOS channel and \(\mathsf{d}(D_{\text{e}},\mathsf{SNR})\) for the reflected channel, which lead to the same DOF.

### _Power Loss and Spatial Selectivity for Parallel Arrays_

We have seen that the power loss is determined by the additional range and by the share of incident power not reflected by the surface. This is constant over the arrays themselves, as amplitude variations thereon are negligible provided that propagation occurs in the paraxial regime. From the image theorem, \[\beta=|R(0)|^{2}\left(\frac{\lambda}{4\pi D_{\text{e}}}\right)^{2} \tag{39}\] where \(D_{\text{e}}=2D_{0}-D\) and \(R(0)=(1-n_{2})/(1+n_{2})\). In Fig. 8, \(\beta\) is plotted as a function of \((D_{0}-D)\) for different materials. The interface is at \(D_{0}=15\) m from the source, while the range between receiver and surface varies according to \((D_{0}-D)\). Receiver motion away from the surface, if unaccounted for, leads to a decreasing stepwise function of \((D_{0}-D)\in[0,D_{0}]\); this is shown in Fig. 9, where the DOF equal the number of eigenvalues that are at most \(40\) dB below the maximum. Correcting the antenna spacing as a function of \(D\) prevents this decrease.

### _Non-Parallel Arrays_

Non-parallel ULA configurations arise either when the receiver is shifted along the \(x\)-axis, creating an oblique incidence (\(\theta_{0}>0\)), or when arrays are oriented differently in elevation (\(\vartheta_{\text{t}}\neq\vartheta_{\text{r}}\)); see Fig. 3. The relative azimuth angle is set to zero, as it is immaterial to ULAs [16]. With the focus on oblique incidence and its impact on power loss and spatial selectivity, the ULAs are aligned with the \(x\)-axis (\(\vartheta_{\text{t}}=\vartheta_{\text{r}}=0\)). First, let us consider the power loss. Due to rotational symmetry about the \(x\)-axis, the \(xz\)-plane can be selected without loss of generality.
The pathloss in (39) generalizes to arbitrary receive positions when using \[D_{\text{e}}(\theta_{0})=\frac{2D_{0}-r_{0z}}{\cos(\theta_{0})}, \tag{40}\] and \(R(\theta_{0})\), which are parametrized by the incident angle \[\theta_{0}=\arccos\!\left(\frac{2D_{0}-r_{0z}}{\sqrt{D^{2}+4D_{0}(D_{0}-r_{0z})}}\right). \tag{41}\]

Figure 8: Pathloss as a function of \(D\) for different materials at normal incidence.

Figure 9: Number of DOF as a function of \((D_{0}-D)\) when the material is concrete. Parallel ULAs.

Figure 10: Pathloss as a function of \(\theta_{0}\) for various materials. Oblique incidence with the receiver at \(r_{0x}\in[0,100]\) m and \(r_{0z}=10\) m.

Fig. 10 depicts \(\beta\) for various materials. The receiver is shifted along the \(x\)-axis on the interval \(r_{0x}\in[0,100]\) m with \(r_{0z}=10\) m such that \(D=(r_{0x}^{2}+r_{0z}^{2})^{1/2}\). Second, we turn to spatial selectivity. Consider oblique incidence on a vertical plane, not necessarily the \(xz\)-plane. Its projected views on the \(yz\)-plane and on the \(xz\)-plane are illustrated in Figs. 11(a) and 11(b). For the side view in Fig. 11(b), we define \[\widehat{D} =D/\sqrt{1+\left(\frac{r_{0y}}{r_{0z}}\right)^{2}} \tag{42}\] \[\widehat{D}_{\text{e}} =D_{\text{e}}/\sqrt{1+\left(\frac{r_{0y}}{2D_{0}-r_{0z}}\right)^{2}}, \tag{43}\] which are obtained by projecting their counterparts \(D\) and \(D_{\text{e}}\) onto the \(xz\)-plane; see Fig. 11(a). As sketched in Fig. 11(c), shifting the receiver along the \(x\)-axis is equivalent to rotating the transmitting and receiving ULAs with respect to the \(x\)-axis by an angle \[\vartheta=\arccos\Bigl{(}r_{0z}/\widehat{D}\Bigr{)} \tag{44}\] for the LOS channel, and by another angle \[\vartheta_{\text{e}}=\arccos\biggl{(}\frac{2D_{0}-r_{0z}}{\widehat{D}_{\text{e}}}\biggr{)} \tag{45}\] for the reflected channel. Unlike the power loss, spatial selectivity can be corrected by tailoring the ULA spacing appropriately [16].
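The oblique-incidence bookkeeping of (40)-(41) can be sketched numerically. Here the normal-incidence reflectivity \(R(0)=(1-n_{2})/(1+n_{2})\) is used as a crude stand-in for \(R(\theta_{0})\), and the refractive index \(n_{2}=2.55\), the wavelength, and the geometry are assumed illustrative values:

```python
import math

D0, r0z = 15.0, 10.0    # surface at D0; receiver height r0z (illustrative)

def incident_angle(r0x):
    """theta_0 per (41), with D the source-receiver range."""
    D = math.hypot(r0x, r0z)
    return math.acos((2 * D0 - r0z) / math.sqrt(D * D + 4 * D0 * (D0 - r0z)))

def equivalent_range(theta0):
    """D_e per (40): distance from the image source to the receiver."""
    return (2 * D0 - r0z) / math.cos(theta0)

def pathloss_db(theta0, lam=0.005, n2=2.55):
    """beta per (39), generalized with D_e(theta_0); R(0) = (1-n2)/(1+n2)
    stands in for R(theta_0), a deliberate simplification."""
    R0 = (1 - n2) / (1 + n2)
    beta = (R0 ** 2) * (lam / (4 * math.pi * equivalent_range(theta0))) ** 2
    return 10 * math.log10(beta)
```

For a receiver on the \(z\)-axis (\(r_{0x}=0\)) the angle collapses to \(\theta_{0}=0\) and \(D_{\text{e}}=2D_{0}-D\) as in the parallel-array case; larger lateral shifts increase both \(\theta_{0}\) and the equivalent range, hence the pathloss.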
To this end, for the LOS channel, \[\mathsf{d}(D,\mathsf{SNR},\vartheta)=\frac{\mathsf{d}(D,\mathsf{SNR})}{\cos(\vartheta)}, \tag{46}\] with \(\mathsf{d}(D,\mathsf{SNR})\) the optimal antenna spacing for parallel ULAs in (38); for the reflected channel, the spacing is \(\mathsf{d}(D_{\text{e}},\mathsf{SNR},\vartheta_{\text{e}})\) with \(D_{\text{e}}\) in (40) and \(\vartheta_{\text{e}}\) in (45). Compared to parallel ULAs, non-parallel ULAs have antennas that are spaced further apart due to the division by \(\cos(\cdot)\) in (46). The potential DOF thereby shrink for ULAs tilted sideways. At high SNR, since \(\eta(\mathsf{SNR})=1\) in (38), \(\mathsf{d}(D,\mathsf{SNR})\) reduces to \(\mathsf{d}(D)\) in (35), leading to full-rank channel matrices for the LOS and reflected channels. This property is validated in Fig. 12 for an \(x\)-oriented ULA located at \(\mathbf{r}_{0}=(1,4,10)\) m, i.e., for \(\vartheta=5.3^{\circ}\) and \(\vartheta_{\text{e}}=2.8^{\circ}\). Finally, the spectral efficiency with ULA spacings optimized at every SNR for the LOS and reflected transmissions is also shown in Fig. 13 for different materials.

Figure 11: Non-parallel ULA configuration arising from an oblique incidence with ULAs oriented along the \(x\)-axis.

Figure 12: Normalized channel eigenvalues for various materials. Oblique incidence with the receiving ULA at \(\mathbf{r}_{0}=(1,4,10)\) m (hence, \(\vartheta=5.3^{\circ}\) and \(\vartheta_{\text{e}}=2.8^{\circ}\)). The antenna spacings are \(\mathsf{d}(D,\mathsf{SNR},\vartheta)\) for the LOS channel and \(\mathsf{d}(D_{\text{e}},\mathsf{SNR},\vartheta_{\text{e}})\) for the reflected channel.

Figure 13: Spectral efficiency as a function of SNR for different materials. Non-parallel ULAs with spacing \(\mathsf{d}(D,\mathsf{SNR},\vartheta)\) for the LOS channel and \(\mathsf{d}(D_{\text{e}},\mathsf{SNR},\vartheta_{\text{e}})\) for the reflected channel.
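The spectral efficiencies in Figs. 5, 7, and 13 evaluate the waterfilling capacity (36). A minimal generic sketch (eigenvalues and SNR are arbitrary inputs; the bisection on \(\nu\) and its iteration count are implementation choices):

```python
import math

def waterfill_capacity(eigs, snr, iters=200):
    """Capacity (36) via waterfilling: powers p_n = (nu - 1/lambda_n)^+ with
    sum_n p_n = SNR; the water level nu is found by bisection, exploiting
    the monotonicity of the total allocated power in nu."""
    lo, hi = 0.0, snr + max(1.0 / l for l in eigs)
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        if sum(max(nu - 1.0 / l, 0.0) for l in eigs) > snr:
            hi = nu
        else:
            lo = nu
    nu = 0.5 * (lo + hi)
    return sum(math.log2(1.0 + max(nu - 1.0 / l, 0.0) * l) for l in eigs)
```

With equal eigenvalues the power splits evenly across the modes, recovering the familiar \(N\log_{2}(1+\mathsf{SNR}\,\lambda/N)\) behavior; with very disparate eigenvalues the weak modes are switched off, which is the DOF reduction discussed above.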
## VII Implications for Ray Tracing Algorithms

NLOS connectivity is typically established via multiple reflections involving possibly distinct materials and orientations. Analysis becomes unwieldy in such general settings, and the recourse is to numerical algorithms such as ray tracing [17]. Our setup provides insights into the mechanisms involved at each stage of reflection. Our exact channel model describes the reflected propagation as an LSI filtering, whereas the ray-tracing model regards the convolving response as an impulse weighted by the reflectivity coefficient in (5) at \(\theta_{\text{i}}=\theta_{0}\). To appreciate the difference between the exact and the approximate method (ray tracing), one should increase the array apertures \(L_{\text{r}}\) and \(L_{\text{t}}\) for a given communication range \(D_{\text{e}}\), thus violating the sufficient condition for the reflected transmission to be paraxial. To this end, Fig. 14 depicts the spectral efficiency of the reflected channel between two ULAs of apertures \(L_{\text{r}}=L_{\text{t}}=L\) as a function of \(L/D_{\text{e}}\) at \(\mathsf{SNR}=0\) dB. The ULAs are separated by \(D=2\) m and the surface distance is \(D_{0}=3\) m so that \(D_{\text{e}}=4\) m. In turn, the antenna spacing is optimized for the reflected transmission, which implies that the array apertures increase linearly with the number of antennas. The ray-tracing curve yields a tight match with the exact one, except for the regime where the two arrays have an aperture \(L\) comparable to the range \(D_{\text{e}}\) of the reflected transmission. Hence, ray tracing algorithms leveraging the paraxial approximation offer a good fit to reality, as also supported by the robustness of the underlying approximation against changes in the propagation geometry.
## VIII Conclusion

Through a physics-based formulation, we have confirmed that reflection off a large and smooth planar surface, say a wall or ceiling, can serve as an alternative to LOS for wide-aperture MIMO communication. With respect to an LOS link, a reflected counterpart exhibits:
* A power loss determined by the additional range and by the share of incident power not reflected by the surface.
* A reduction in the number of DOF because of the antenna spacing tailored to the LOS link being smaller than the one that the reflected link would require at the same SNR.

If the arrays are outright configured for the reflected transmission, then the second effect is corrected. The above observations bode well for flexible LOS MIMO communication aided by reflections, with further work required to determine the impact of surface finiteness and roughness. This paper ignores mutual coupling effects among antenna elements, which are most impactful at sub-wavelength spacings [23]. This ought not to be the case for wide-aperture MIMO, which envisions electrically large antenna spacings, with follow-up studies needed to confirm this hypothesis. Connection with the image theorem that underlies ray tracing showed that, with non-planar wavefronts, the image of the transmitter is blurred by the convolution with a response modeling the imperfect reflectivity of the surface. Ray tracing ignores this blurring, which is to say it regards the convolving response as an impulse. However, our findings show that only for very large arrays does the response depart from an impulse, justifying the use of ray tracing algorithms [8] in most situations.

## Appendix A Generation of the MIMO channel matrix

For the sake of compactness, let us define the space-lag variable \(\boldsymbol{\delta}=\mathbf{r}-\mathbf{s}\) with coordinates \(\delta_{x}=r_{x}-s_{x}\) and \(\delta_{y}=r_{y}-s_{y}\), indicating the displacement between source and receiver on the \(z\)-plane.
Due to the circular symmetry of \(H(\mathbf{\kappa};r_{z},s_{z})\) in (30), we eliminate the azimuthal dependence of the channel impulse response by evaluating (29) at \((\delta_{\rho},0)\), \(\delta_{\rho}=\|\boldsymbol{\delta}\|\). The result is reported in (47) for any given \(s_{z}\) and \(r_{z}\), where we introduced \(\kappa_{\rho}=\|\mathbf{\kappa}\|\in[0,\kappa_{1}]\) for \(\mathbf{\kappa}\in\mathcal{D}\). Hence, the impulse response is invariant under any affine transformation that preserves the distance between source and receiver on the \(z\)-plane.

Figure 14: Spectral efficiency as a function of the ratio of array aperture to equivalent distance at \(\mathsf{SNR}=0\) dB when the material is concrete. The antenna spacing is optimized for the reflected transmission.

Eq. (47) is a Sommerfeld-type integral [10, Eq. 2.2.30]. It describes the received field as an integral superposition of cylindrical waves, each multiplied by an upgoing or downgoing plane wave in the \(z\)-direction. Analytical solutions of (47) are rarely available and are problem-specific [10, Ch. 2.7.3]. Hence, we resort to a numerical integration procedure that accounts for the singularities on the complex \(\kappa_{\rho}\)-plane.3 Footnote 3: The error of a numerical integration routine is proportional to the derivatives of the integrand, which are unbounded near a singularity [24]. Assuming the analyticity of the integrand, we can invoke Cauchy's integral theorem and deform the contour integration path to avoid singularities. The integral value is unchanged along this new integration path. This path should lie in the fourth quadrant, due to \(\mathrm{Re}(\kappa_{\rho})\geq 0\) and the Sommerfeld radiation conditions (i.e., \(\mathrm{Im}(\kappa_{1z})\geq 0\) and \(\mathrm{Re}(\kappa_{1z})\geq 0\)) that ensure convergence of the improper integral in (47) [10, Ch.
2.2.3].4 We follow [25] and choose a semi-elliptical integration path \(\mathcal{C}\) that goes around the pole singularities, with semi-axes of the ellipse chosen as [25] \[\kappa_{\rho}^{\text{maj}}=(\kappa_{1}+\kappa_{2})/2\qquad\kappa_{\rho}^{\text{min}}=\kappa_{\rho}^{\text{maj}}/10^{3} \tag{48}\] so that the contour of \(\mathcal{C}\) is sufficiently far from the singularity, while \(\kappa_{\rho}^{\text{min}}\) is small enough for the argument of the Bessel function in (47) to exhibit controlled oscillations. Footnote 4: The half-planes \(\mathrm{Im}(\kappa_{1z})=0\) and \(\mathrm{Re}(\kappa_{1z})=0\) map to the hyperbola [10, Eq. 2.2.33] in the complex \(\kappa_{\rho}\) plane; see [10, Fig. 2.2.8]. For complex integration, we parametrize the curve as \(\kappa_{\rho}(\theta):[\pi,2\pi)\to\mathcal{C}\) where \(\kappa_{\rho}(\theta)=\kappa_{\rho}^{\prime}(\theta)+\mathrm{j}\kappa_{\rho}^{\prime\prime}(\theta)\) with \[\kappa_{\rho}^{\prime}(\theta)=\frac{\kappa_{\rho}^{\text{maj}}}{2}(1+\cos(\theta))\quad\kappa_{\rho}^{\prime\prime}(\theta)=\frac{\kappa_{\rho}^{\text{min}}}{2}\sin(\theta), \tag{49}\] leading to [24, Ch. 10.5] \[\int_{\mathcal{C}}f(\kappa_{\rho})\,d\kappa_{\rho}=\int_{\pi}^{2\pi}f(\kappa_{\rho}(\theta))\,\frac{\partial\kappa_{\rho}(\theta)}{\partial\theta}\,d\theta \tag{50}\] where \(f(\kappa_{\rho})\) is the integrand of (47) and the Jacobian is \[\frac{\partial\kappa_{\rho}(\theta)}{\partial\theta}=\frac{1}{2}\left(-\kappa_{\rho}^{\text{maj}}\sin(\theta)+\mathrm{j}\kappa_{\rho}^{\text{min}}\cos(\theta)\right). \tag{51}\] The presented numerical generation procedure performs superbly as long as the transverse distance \(\delta_{\rho}\) is not too large compared to the wavelength \(\lambda\). Numerical simulations show no issue for \(\delta_{\rho}<18\) m at \(60\) GHz, i.e., \(\delta_{\rho}/\lambda<3600\).
For larger \(\delta_{\rho}\), the integrand in (47) becomes a rapidly oscillating function of \(\kappa_{\rho}\), due to the large argument of the Bessel function, and \(\mathcal{C}\) must be chosen according to the steepest descent path [10, Ch. 2.7.3].
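The semi-elliptical path (48)-(51) can be sketched and checked numerically: for an integrand that is analytic between \(\mathcal{C}\) and the real segment, Cauchy's theorem makes the contour integral endpoint-determined. In the sketch below \(\kappa_{\rho}^{\text{maj}}\) is normalized to 1 (in the paper it is \((\kappa_{1}+\kappa_{2})/2\)), and the parametrization runs over \(\theta\in[\pi,2\pi]\) with the complex Jacobian (51):

```python
import math

# Semi-elliptical path of (48)-(49); kappa_maj normalized to 1 for a
# units-free check, with the axis ratio of (48).
k_maj = 1.0
k_min = k_maj / 1e3

def path(theta):
    """kappa_rho(theta) per (49): semi-ellipse through the fourth quadrant
    for theta in [pi, 2*pi], running from 0 to k_maj."""
    return 0.5 * k_maj * (1 + math.cos(theta)) + 1j * 0.5 * k_min * math.sin(theta)

def d_path(theta):
    """Complex Jacobian (51)."""
    return 0.5 * (-k_maj * math.sin(theta) + 1j * k_min * math.cos(theta))

def contour_integral(f, n=4001):
    """Trapezoidal rule for int_C f(k) dk = int f(k(theta)) k'(theta) dtheta."""
    a, b = math.pi, 2 * math.pi
    h = (b - a) / (n - 1)
    total = 0j
    for i in range(n):
        w = 0.5 if i in (0, n - 1) else 1.0
        t = a + i * h
        total += w * f(path(t)) * d_path(t)
    return total * h

# Analytic integrands: the result depends only on the endpoints 0 and k_maj.
I0 = contour_integral(lambda k: 1.0)   # equals k_maj
I1 = contour_integral(lambda k: k)     # equals k_maj**2 / 2
```

The path dips below the real axis (negative imaginary part), consistent with the fourth-quadrant requirement, while the integrals of analytic test functions match their endpoint values, confirming that the deformation leaves the integral unchanged.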
2307.07841
An Exploration of Learning Processes as Process Maps in FLOSS Repositories
Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities both during their interaction with their peers and while making use of the tools available in these environments. A number of studies document the existence of learning processes in FLOSS through the analysis of surveys and questionnaires filled by FLOSS project participants. At the same time, the interest in understanding the dynamics of the FLOSS phenomenon, its popularity and success resulted in the development of tools and techniques for extracting and analyzing data from different FLOSS data sources. This new field is called Mining Software Repositories (MSR). In spite of these efforts, there is limited work aiming to provide empirical evidence of learning processes directly from FLOSS repositories. In this paper, we seek to trigger such an initiative by proposing an approach based on Process Mining to trace learning behaviors from FLOSS participants' trails of activities, as recorded in FLOSS repositories, and visualize them as process maps. Process maps provide a pictorial representation of real behavior as it is recorded in FLOSS data. Our aim is to provide critical evidence that boosts the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we propose an effective approach that comprises first the mining of FLOSS repositories in order to generate Event logs, and then the generation of process maps, equipped with relevant statistical data interpreting and indicating the value of process discovery from these repositories.
Patrick Mukala, Antonio Cerone, Franco Turini
2023-07-15T16:18:44Z
http://arxiv.org/abs/2307.07841v1
# An Exploration of Learning Processes as Process Maps in FLOSS Repositories ###### Abstract Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities both during their interaction with their peers and while making use of the tools available in these environments. A number of studies document the existence of learning processes in FLOSS through the analysis of surveys and questionnaires filled by FLOSS project participants. At the same time, the interest in understanding the dynamics of the FLOSS phenomenon, its popularity and success resulted in the development of tools and techniques for extracting and analyzing data from different FLOSS data sources. This new field is called Mining Software Repositories (MSR). In spite of these efforts, there is limited work aiming to provide empirical evidence of learning processes directly from FLOSS repositories. In this paper, we seek to trigger such an initiative by proposing an approach based on Process Mining to trace learning behaviors from FLOSS participants' trails of activities, as recorded in FLOSS repositories, and visualize them as process maps. Process maps provide a pictorial representation of real behavior as it is recorded in FLOSS data. Our aim is to provide critical evidence that boosts the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we propose an effective approach that comprises first the mining of FLOSS repositories in order to generate Event logs, and then the generation of process maps, equipped with relevant statistical data interpreting and indicating the value of process discovery from these repositories. **Keywords:** FLOSS learning processes, learning activities in Open Source, Mining Software Repositories, Process Mining, Process Mapping, and Semantic Search. 
## 1 Introduction Over the years, there has been increasing interest in exploiting learning opportunities in FLOSS communities. Numerous studies conducted in this regard suggest the existence of learning opportunities in FLOSS environments [1-9]. It has emerged that collaborative and participatory learning between participants successfully occurs in such environments [5, 7-8]. This has potentially fostered the levels of interest as well as the aura linked with the occurrence of learning within FLOSS, attracting practitioners from tertiary education and urging them to consider incorporating participation in FLOSS projects as a requirement for some Software Engineering courses [3, 7, 10-14]. A number of pilot studies have been conducted in order to evaluate the effectiveness of such an approach in traditional settings of learning [2-5, 14-15]. Furthermore, an emphasis on how learning occurs in terms of phases has been studied [18-19, 29]. To this end, it has been proposed that a typical learning process in FLOSS occurs in three main phases: Initiation, Progression and Maturation. In each phase, a number of activities are executed between Novices and Experts. A Novice is considered as any participant in quest of knowledge while the knowledge provider is referred to as the Expert. A number of activities are performed during the course of interactions between these participants across the three learning phases. These activities can range from formulating a question, observing a discussion, answering a question to developing code, committing changes and reporting bugs. FLOSS repositories such as CVS, Bug reports, mailing archives, Internet relay chats all contain traces of participants' activities. Sowe and Stamelos [1], Cerone and Sowe [4] as well as Cerone [18] argue that FLOSS environments typically include discussion forums or mailing lists where users can post questions and get help from developers or other users. 
Forums are unrestricted and act as learning environments for both Novices and Experts. While many studies have provided invaluable insights in this direction, their results are mostly based on either surveys or observation reports [10, 14-16, 21-22] or stochastic or probability studies conducted in FLOSS communities [29]. Singh _et al._[29] proposed a stochastic model that aims at capturing learning dynamics in FLOSS environments. The main realization of this study was the design of a Hidden Markov Model (HMM) used to investigate the extent to which individuals learn from their own experience and from interactions with peers. Moreover, the model aims to determine whether an individual's ability to learn from these activities varies as the individual evolves/learns over time, and to what extent the learning persists over time [29]. In spite of these few mentioned attempts and many more available in the literature, there has been no work, to our knowledge, conducted on how the traces of data in FLOSS repositories can be mined specifically in order to study learning patterns. Therefore, we propose to contribute in this context by studying learning activities from FLOSS repositories using process mining. We succinctly present a number of steps undertaken as part of our approach for mining and exploring these repositories in order to trace and produce a pictorial representation of learning processes as process maps. Such a representation plays a pivotal role in fostering the understanding of potential learning processes in FLOSS communities. The rest of the paper is structured as follows. Section 2 presents a short review of learning processes in FLOSS communities. Section 3 briefly introduces process mapping and explains it in relation to learning processes. In Section 4 we discuss the data collection and analysis techniques pertaining to the construction of the relevant Event logs used for process mapping. 
In Section 5 we describe the results of our analysis based on the process maps. In Section 6 we further discuss our approach and outcomes with respect to related work and highlight their contribution to the understanding of learning processes in FLOSS communities. ## 2 Learning processes in FLOSS communities: An overview The bulk of reports on FLOSS members' profiling, such as the ones by Glott _et al._[21-22] and Gosh _et al._[37-38], and the analysis performed by Krishnamurthy [39] have found that OSS members in these communities hold different roles that define their responsibilities and participation in the community activities. These include testers, debuggers, project managers, co-developers and the core developers that make up the core development team. Among these roles, project initiators and the core development team remain at the heart of any development project in the community. This is made up of a small number of developers while the rest of contributors, referred to as the enhanced team, perform additional tasks such as feature suggestions, testing and query handling [39]. Apart from FLOSS participants who play roles with direct impact on FLOSS project, we can also distinguish between passive and active users of FLOSS products. Passive users are observers whose only active role is the mere use of the products. Active users are members of the community who do not necessarily contribute to the project in terms of coding, but whose support is made through testing and bug reporting [21-22, 37-38]. Figure 1 suggests a progressive skills development process that can be observed in FLOSS communities [31]. As highlighted by Aberdour [30], participants increase their involvement in the project through a process of role meritocracy. This implies that passive users could move from their state of passiveness to active users, bug reporters until they possibly become part of the core team [21, 30]. 
All these roles represent crucial contributions required for the overall project quality. However, in FLOSS environments, moving to a higher state is regarded as a reward and recognition of members' abilities and contributions [30]. This is illustrated in Figure 1 where, upon attaining a certain level of expertise and contributive role, users move to roles with more responsibilities. Such role migration is also seen as moving to a higher skill level [30-31]. To this end, it has been proposed that a typical learning process in FLOSS occurs in three main phases: Initiation, Progression and Maturation [20, 30]. In every phase, a number of activities are executed between Novices and Experts. A Novice is considered as any participant in quest of knowledge while the knowledge provider is referred to as the Expert. In Figure 2, we observe a summary of the deliverables expected from contributors' activities as executed in each phase of the development process as well as the main learning activities as shown on the x axis [18]. Moreover, Figure 2 shows how the different activities as carried out by FLOSS contributors are linked to their maturation through learning stages [18]. Each phase in Figure 2 corresponds to a phase in the learning process. The Understanding, Practicing and Developing phases in Figure 2 respectively correspond to the Initiation, Progression and Maturation phases of the learning process. In describing the learning phases we make use of action identifiers (activities) that correspond to events recorded in FLOSS data. The vocabulary and terminology for learning phases in FLOSS are expressed through the OntoLiFLOSS ontology, which was defined in our previous work [42]. The main activities that may be identified through the three phases are described in Sections 2.1 - 2.3. ### Initiation In this phase, both Novice and Expert perform a number of activities that can trigger and perpetuate a learning process. 
A Novice seeking help can perform the following activities before making contact with an Expert: _FormulateQuestion_ and/or _IdentifyExpert_, and lastly _PostQuestion_, _CommentPost_ or _PostMessage_. Then, the Novice can simply perform _ContactExpert_. If the Expert responds positively, the Novice can perform _SendDetailedRequest_; otherwise the cycle for identifying an Expert is restarted [20]. The Expert can provide help after performing one of the following activities: _ReadMessages_ on mailing lists/chat messages, _ReadPost_ from forums, _ReadSourceCode_ as any participant commits code to the project, or _CommentPost_. Just like the Novice seeking to be part of some form of knowledge channel, the Expert can perform _ContactNovice_ and show interest in helping, or simply perform _CommentPost_. Figure 2: Learning stages and Activity contents (From Cerone [18]) ### Progression In this phase, both Novice and Expert execute a series of new activities building up from the previous phase. After accepting a request from the Novice, the Expert performs _ReviewThreadPosts_, to be fully aware of the questions and needs for clarification raised by the Novice, and _ReviewThreadCode_, for the purpose of critiquing and fixing the code, if needed. The Expert may also perform _SendReply_ in an attempt to answer any direct questions and help requests or just to react to a discussion in a forum. Furthermore, the Expert performs _SendFeedback_ and _ReplyPostedQuestion_, to directly or indirectly address doubts or questions from the Novice, _PostQuestions_, to enquire about possible further needs of the Novice, and _ReportBugs_, as a response to the Novice's needs, such as understanding why a piece of code does not run properly. Moreover, the Expert may monitor the Novice through a set of activities for the purpose of evaluating the level of skill acquisition.
These activities include _RunSourceCode_ and _AnalyseSourceCode_, to identify flaws in the Novice's work, and, if necessary, _ReportBugs_, _CommentOnCode_ and _ReplyToPost_ [20]. The Novice can only react to the Expert's help or feedback by providing insights on the extent to which such help or feedback was useful through _ProvideFeedback_, or by simply posing more questions through _PostQuestions_. The Novice also performs a number of activities in the context of posting. These activities may include _PostQuestions_, _ReplyPostedQuestions_ and possibly _SendFeedback_. Furthermore, the Novice can start exercising the newly acquired skills through activities such as _AnalyseSourceCode_, when looking at new commits, new pieces of code being posted by community members. Thus, the Novice is able to comment on commits and code through _CommentOnCode_ and to report bugs through _ReportBugs_ [20]. ### Maturation In the last phase, Maturation, the learning process intensifies as the knowledge exchanged has matured and allows the Novice to perform more advanced activities according to the progress made in skills accumulation, as depicted in Figure 1. The Novice is assumed to have acquired enough skills to be able to undertake a set of activities such as _AnalyseDiscussions_ in order to actively engage and contribute to comments and posts about topics in the sphere of the skills acquired and possibly become an Expert. Activity _AnalyseSourceCode_ entails looking at the code (when applied) in order to understand and critique that piece of software. Activity _AnalyzeThreadProgression_ entails being part of a discussion and exchange channel that engages on a topic related to a new skill learnt. Having developed skills in a specific area, the Novice can now commit some deliverables that can be evaluated and criticized by the community.
This can be summarized through the following activities: _SubmitBugReport_, consisting in committing any fix or bug report for the interest of the entire community; _SubmitCode_, consisting in committing code; _SubmitDocumentation_, consisting in committing documentation in terms of requirements elicitation documents, help documents, user manuals, tutorials etc. The activity _SubmitCode_ is essential in building reputation for a possible role transition. The Novice also carries out a number of activities that demonstrate core software engineering skills such as developing software, refactoring code, testing and optimizing pieces of code. The activities include: _FixBugs_, consisting in fixing any reported bugs; _GiveSuggestion_, as part of reviewing peers' works and providing alternatives when needed; _PostCommentOnCode_, in order to make sure that appropriate indicative comments that appear in the code are also posted; _ReplyToSuggestion_, to reply and critique suggestions from other Experts or Novices in an active fashion; _WriteSourceCode_, in order to commit pieces of software; _ModifySourceCode_, to modify code and implement suggestions. The Novice can undertake a number of review activities: _ReviewCommentContents_, in order to contribute to comments and posts about topics in the sphere of the skills acquired and possibly become an Expert; _ReviewPosts_ on mailing lists and forums so as to react as needed to comments and posts related to a particular content that is the subject of learning; _ReviewSourceCode_, to demonstrate the ability to analyze the code and identify flaws, which can be reported through activity _ReportBugs_. The Expert will perform exactly the same activities while tracking the Novice's progress. These include activities such as _AnalyzeThreadProgression_, _AnalyzeSourceCode_ and _AnalyzeDiscussions_. Moreover, the Expert conducts reviews on these activities for monitoring purposes and gives feedback.
These activities include _ReviewDocumentation_, _ReviewCode_, _ReviewReport_ and _SendFeedback_. Furthermore, the Expert carries out a number of additional activities as part of the learning process: _RunSourceCode_, _AnalyzeSourceCode_, _CommentOnCode_ and _ReportBugs_, and contributes to the Novice's learning process by performing: _ReviewThreadPosts_, to become aware of questions and requests for clarification; _ReviewThreadCode_, for the purpose of critiquing and fixing as required; _SendReply_, as an attempt to answer any direct questions and help requests or just react to a discussion in a forum. Finally, it is important to note that in this last phase of the learning process, the Expert sometimes performs the same activities as the Novice, but on the Novice's progress: _ReviewPosts_, to react to comments and posts by the Novice, as well as _ReviewSourceCode_, _ReportBugs_ and _ProvideFeedback_. The description herein of the three learning phases provides a broad and generic representation of how communication between FLOSS members also involves learning processes. Such description is critical to understand learning processes in FLOSS environments as it lays out the details pertaining to activities and tasks performed by both the Novice and Expert as part of these processes. As pointed out, our aim is to further explore these learning processes by providing empirical evidence directly from data. The key to such empirical analysis is to make sure that we present a systematic way of mining the FLOSS repositories we have identified and producing the evidence of learning processes in these repositories. Such evidence, in the form of traces accounting for users' activities in FLOSS communities, can be represented through process maps. ## 3 Process Mapping Process mapping can be considered in simple terms as the step-by-step description of the actions taken by workers (users) as they use a specific set of inputs to produce a defined set of outputs [40].
Literally, a process map is a pictorial representation of the sequence of actions that comprise a process. It provides an opportunity to learn about work that is being performed and a reference to discuss how things get done [32-36]. Process maps can be produced to serve a number of purposes. They can help in outlining processes in various settings: manufacturing processes, corporate structures and management tasks [40]. Process maps provide valuable information about a process to help management find ways to make the process better. The detailed step-by-step descriptions included in process maps provide a clear and concise blueprint for content. Furthermore, process mapping offers numerous advantages related to managing processes, evaluating and communicating the processes' performance and guiding decision making [32-36]. However, for the purpose of this paper, we consider process mapping as a viable means to model and communicate learning processes found in FLOSS repositories. We believe that the visual depiction process mapping offers will reinforce the understanding of learning patterns as described through simple surveys or narrative reports. Marrelli [40] argues that process mapping provides a rich and straightforward depiction of processes that requires little or no interpretation. Moreover, process maps include additional elements such as the elapsed time required to perform each step as well as the relative frequency of occurrence for each step (task). This provides critical insights on the scale at which learning occurs in FLOSS communities. ## 4 Data description and analysis requirements ### FLOSS repositories for Analysis The FLOSS platform used for our analysis is OpenStack [23]. According to Wikipedia, "OpenStack is a free and open-source cloud computing software platform. Users primarily deploy it as an infrastructure as a service (IaaS) solution.
The technology consists of a series of interrelated projects that control pools of processing, storage, and networking resources throughout a data center--which users manage through a web-based dashboard, command-line tools, or a RESTful API that is released under the terms of the Apache License" [23]. We considered this platform mainly due to the availability of data about chat message archives, Source Code and bug reports that we need to track possible learning activities for all three phases of the learning process. Principal activities in the Initiation phase of the learning process are generally about observing and making contacts. Ideally, this phase constitutes an opportunity for the Novice to ask questions and get some help depending on the requests while the Expert intervenes to respond to such requests. In the Progression phase, both participants step up their engagement and commitment during their interactions. The Novice applies the guidelines provided by the Expert, who executes a number of activities to ensure such guidelines are implemented. Hence, the Internet Relay Chat (IRC) messages data set is appropriate to trace these activities in the first two phases. In the Maturation phase, as someone's skillset has matured, we identify a Novice as anybody who is still not acquainted with some topics or parts of the code and requests information about them, while the Expert responds to such requests. In order to trace these advanced skills, we look at activities performed by participants who tend towards the last layer of contributors, the core developers as described in Section 2. Such activities can be identified by looking at repositories that store data about developing, creating code, examining and reviewing the code, identifying and fixing possible bugs. Therefore, there are three potential repositories from Openstack that can provide such information: Source Code, Bug reports and Reviews datasets.
However, for the purpose of our analysis, we provide insights considering solely the Source Code dataset. In Source Code, the emphasis is on understanding how participants react to pieces of code as these are committed, how far they personally contribute in terms of code submission, reviewing code that has been submitted as well as providing feedback with regards to any comment or question asked in the process. ### Data description The Internet Relay Chat (IRC) messages repository contains more than 5 million chat messages, exactly 5603302, exchanged between 19247 people on a combined total of 30 channels/forums. From these chat messages, we eliminated those that could not be linked to senders. The final dataset was made up of 2142690 chat messages that we analyzed. These chat messages were exchanged over a period of three and a half years. The first message was sent on 2010-07-28 at 05:09:11 and the last message we considered was exchanged on 2014-04-09 at 18:07:19. In this dataset, the average length of a message sent was 60 characters; the longest chat message reached 502 characters while the shortest message was of one character length. The Source Code repository contains 93584 source code files that are reported to be committed. These files were committed by 2677 people, who performed a total of 425744 actions on about 210 projects. The files submitted are of 75441 types. The different file types were used to identify whether a file committed was documentation, a user manual, a patch, or a general source file. These files were submitted during a period of time spanning from 2010 to 2014. About 131556 messages can be identified as related to the committed files. The first message recorded was at 23:05:26 on the 27th of May 2010 while the last message included in this analysis was sent at 12:27:48 on the 6th of May 2014.
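The descriptive statistics reported above (message counts, average and extreme message lengths) can be reproduced with a short helper. This is a minimal sketch of our own; the function name and the toy input are illustrative assumptions, not part of the paper's mining pipeline:

```python
# Sketch: descriptive statistics like those reported for the IRC and
# Source Code repositories (count, average/longest/shortest message length).
def message_stats(messages):
    """messages: iterable of message strings extracted from a repository."""
    lengths = [len(m) for m in messages]
    return {
        "count": len(lengths),
        "avg_length": sum(lengths) / len(lengths),
        "max_length": max(lengths),
        "min_length": min(lengths),
    }

# Toy usage on three fabricated messages.
stats = message_stats(["hi", "how do I commit a patch?", "k"])
```

Applied to the full IRC dump, the same computation yields the figures quoted in the text (average 60 characters, maximum 502, minimum 1).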
Furthermore, the messages considered have an average length of 182 characters; the longest description/report was of 8628 characters and the shortest message was just one character. ### Constructing Event logs Constructing the Event logs entails retrieving data from all of our repositories as they relate to users' activities, the commit time, the user's details and all elements needed to constitute a full event. In order to be able to deduce that a certain learning activity took place after looking at an annotation or comment posted online, we need to be able to gauge the semantic and contextual meaning of all messages. Therefore, we make use of semantic search with MSSQL as a proper means to achieve this goal. Semantic search improves search by understanding the contextual meaning of the terms and tries to provide the most accurate answer for a given text document. However, this also requires the use of key phrases to steer the search [24]. Our choice of key phrases is based on a number of studies conducted in FLOSS with regards to the kinds of questions and answers that are asked in FLOSS communication environments [25-29]. We start from this categorization, following question and response categories; then we derive a number of key phrases. Considering previous findings [25-28], a formal model of learning activities in FLOSS communities [19], as well as lexical semantics, guided by our ontology for learning in FLOSS communities, OntoLiFLOSS [42], we design a set of three catalogs of key phrases that clearly summarize all identified key phrases, the corresponding learning activities as well as the learning phase. Lexical semantics builds from synonyms of terms and their homonyms to derive the meaning of words in specific contexts. The use thereof is paramount and promises to capture the meaning of annotations and messages found in our datasets. It is also critical in identifying the learning activities as we described them in Section 2.
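As a rough illustration of how key phrases steer the classification of messages into learning activities, the following sketch uses plain substring matching. The actual approach relies on MSSQL semantic search; the sample phrases and their mapping to activities are assumptions for demonstration only, not the paper's catalogs:

```python
# Simplified stand-in for key-phrase-driven classification of messages.
# The phrase-to-activity mapping below is illustrative, not the real catalog.
KEY_PHRASES = {
    "how do i": "FormulateQuestion",
    "can anyone help": "PostQuestion",
    "have you tried": "ReplyPostedQuestion",
}

def classify_message(text):
    """Return the learning activities whose key phrases occur in the text."""
    lowered = text.lower()
    return [activity for phrase, activity in KEY_PHRASES.items()
            if phrase in lowered]
```

A message matching no catalog phrase simply yields no activity, mirroring the elimination of messages that cannot be interpreted.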
Due to space constraints, we show in Figure 3 only the catalog that contains the key phrases that semantically identify activities as categorized according to the participants' roles in the Initiation phase of the learning process. In order to produce process maps, we need to identify events that will make up our Event log. An event is a tuple made up essentially of an event identifier, the performer, the activity and any other attributes that might be needed. In our case, we include the following attributes: state, date as well as the role (Novice, Expert). An event \(E\) is a sextuple (_t, a, p, d, s, r_) such that \(t\) is the case in the event and can be either a topic on emails or an issue number on code and bug reports; \(a\) is the activity; \(p\) is the participant; \(d\) is the relevant date of occurrence; \(s\) is the state of the learning process phase; and \(r\) is the participant's role in the process. Making use of our key phrase catalogs, we then construct the relevant Event logs by retrieving the mappings between key phrases, activities, states and participants. Let \(c_{1}\), \(c_{2}\) and \(c_{3}\) denote respectively Initiation, Progression and Maturation. We distinguish between key phrases for activities and states. Therefore, we refer to key phrases for states as _gl_key_ (global keys) while the key phrases that help distinguish activities are referred to as _lc_key_ (local keys). Let \(C\) be the set of all our catalogs. A catalog entry is then a sextuple (_c_i_, _gl_key_, _state_, _lc_key_, _activity_, _role_) such that \(c_{i}\in C\) is a single catalog; _gl_key_ is the key phrase for the identification of a state; _state_ is the state as it appears in the catalog; _lc_key_ is the key phrase used to identify an activity; _activity_ is the corresponding activity in the catalog; and, finally, _role_ is the role as it appears in the catalog. Hence, our final Event logs are constructed using this information as described by the pseudocode shown in Figure 4.
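Under the definitions above, an Event-log builder can be sketched as follows. The catalog contents and the matching rule (a message yields an event when both its gl_key and lc_key occur in the text) are simplified assumptions of ours, standing in for the pseudocode of Figure 4:

```python
from collections import namedtuple

# An event as the sextuple (t, a, p, d, s, r) defined in the text:
# case/topic, activity, participant, date, state, role.
Event = namedtuple("Event", ["t", "a", "p", "d", "s", "r"])

# A catalog entry pairs a state-identifying global key (gl_key) with an
# activity-identifying local key (lc_key); these entries are illustrative.
CATALOG = [
    # (gl_key, state, lc_key, activity, role)
    ("?", "Initiation", "how do i", "FormulateQuestion", "Novice"),
    ("?", "Initiation", "anyone help", "PostQuestion", "Novice"),
]

def build_event_log(records):
    """records: (topic, participant, date, text) tuples mined from a repository."""
    log = []
    for topic, participant, date, text in records:
        lowered = text.lower()
        for gl_key, state, lc_key, activity, role in CATALOG:
            if gl_key in lowered and lc_key in lowered:
                log.append(Event(topic, activity, participant, date, state, role))
    return log
```

Each matched record contributes one fully attributed event, ready to be exported in the tabular form shown in Figure 5.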
A snapshot depicting an Event log is shown in Figure 5. This Event log has 6 attributes including the name of the contributor, the executed activity, the state of the learning phase, the channel topic, the corresponding event date and the role, which in this case is Novice. The channel topic in this log might represent a file number, a question thread or any relevant topic upon which a discussion or debate is based. ### Tool for Process Mapping In order to process mine these Event logs, we choose an appropriate tool that can analyze the identified events and provide efficient visualizations demonstrating the workflow of occurrence of activities in these processes. We consider Disco [41]. Disco stands for Discover Your Processes. It is a toolkit for Process Mining that enables the user to provide a preprocessed log specifying case, activities, originator and any other attributes. The tool performs automatic process discovery from the log and outputs process maps as well as relevant statistical data. In essence, Disco applies Process Mining techniques in order to construct process models based on available logging data that are turned into an Event log. The logging data contain all the details about transactions that can be found in Event logs. Based on these logs, Disco provides process maps which can be represented with two different sets of metrics on the basis of which the flow of events is explained. These metrics include frequency and performance. The main objective of the frequency metrics is the depiction of how often certain parts of the processes have been executed. We can distinguish three levels of frequency: absolute frequency, case frequency and maximum repetitions.
The roles of such levels of frequency can be summarized as follows: * Absolute frequency: When this value is associated with an activity, it indicates how many times in total that activity has been performed, while it also gives an indication of how often activities moved from one point to another on the edge (path), or how often that particular path has been "travelled" on throughout the whole process; * Case frequency: This allows one to ignore repetitions that might have occurred with activities and only show relative numbers of how many cases passed through which activities and along which path (regardless of whether they came by there just once or multiple times); * Max. Repetitions: This provides the maximum number of repetitions within a case. Unlike frequency, the performance metrics provide details about the time at different levels and paths during the execution process. This can also be aggregated to three different levels: * Total duration: This is the default metric when one needs to display processes according to their performance. It represents the accumulated durations (summed up over all cases) for the execution of each activity and for the delays on each path; * Mean duration: This is the average time spent within and between activities; * Max. Duration: This is an indication of the largest execution times and delays that were measured during the process execution. Figure 3: Catalog of key phrases for the Initiation phase of the learning process. These metrics can also be provided at once for a path or an activity if needed. It is crucial to note at this point that the process maps we consider in our work represent the activity flows for Novice's and Expert's roles.
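The three frequency levels can be illustrated on a toy event log. This sketch is our own, not Disco's implementation: it computes absolute frequency, case frequency and maximum repetitions per activity from simple (case, activity) pairs:

```python
from collections import Counter, defaultdict

def frequency_metrics(events):
    """events: list of (case, activity) pairs from an Event log."""
    absolute = Counter(a for _, a in events)  # total executions per activity
    per_case = defaultdict(Counter)
    for case, activity in events:
        per_case[case][activity] += 1
    case_freq = Counter()  # number of distinct cases touching each activity
    max_reps = Counter()   # maximum repetitions of an activity within a case
    for case, counts in per_case.items():
        for activity, n in counts.items():
            case_freq[activity] += 1
            max_reps[activity] = max(max_reps[activity], n)
    return absolute, case_freq, max_reps

# Toy log: CommentPost occurs twice in case c1, once in c2.
events = [("c1", "CommentPost"), ("c1", "CommentPost"),
          ("c2", "CommentPost"), ("c2", "PostQuestion")]
absolute, case_freq, max_reps = frequency_metrics(events)
```

Here _CommentPost_ has absolute frequency 3 but case frequency 2, showing how case frequency discounts repetitions within a case.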
Figure 4: **Step by step Pseudo Code Rules for Constructing Event logs** ## 5 Empirical Results Taking a FLOSS environment called Openstack as a data source, we apply our proposed techniques to identify learning activities based on key phrase catalogs and classification rules expressed through pseudocode, as well as the appropriate Process Mining tool. We thus produce Event logs that are based on the semantic content of messages in Openstack Internet Relay Chat (IRC) messages and Source Code to retrieve the corresponding activities. Considering these repositories in line with the three learning process phases (Initiation, Progression and Maturation), we produce an Event log for each category of participant (Novice or Expert) in every phase on the corresponding dataset. Hence, we produce 6 Event logs that help build 6 corresponding process maps, which are visual representations of the flow of learning activities in FLOSS for each category of participant. We describe these process maps for each category of participant by considering case frequency for the first array of metrics as well as the average duration in order to provide insights within the bounds of our experiments. ### Process Maps for the Initiation Phase After mining the IRC data set, we produce the resultant process maps depicting graphical representations of the occurrence of learning activities in these IRC forums by category of participant. Figure 5: Snapshot of Event log for Novice during Initiation phase. Figure 6 shows the time frame for the occurrence of our process events. The right hand side of the figure gives some details with regards to the input data used in the plot. The log timeline on the horizontal axis represents the total timeframe covered by our dataset (from earliest to latest timestamp observed). We can note that the Initiation Phase of the learning process started on the 28th of July 2010 and ended on the 9th of April 2014.
We note that, during this time, a total of 605965 events were generated. An event represents a tuple made up of the case (in this context, the discussion topic), the chat message sender, and the relevant learning activity. Across the 28 cases, a total of 14 activities are executed, with an average time per case of 35.7 weeks and a median duration of 22.4 weeks. Finally, Figure 7 shows how participants in quest of knowledge justifiably claim the majority of activities, with a total of 524332 amounting to 86.53% of all executed activities at this point, in contrast with Experts who intervene at a lower rate of 6.25% with 37898 activities, slightly behind activities related to doing something other than exchanging knowledge.

Figure 6: **Events over time on IRC for Initiation Phase**

Figure 7: **Details about learning Participants**

We justify this discrepancy in the next paragraphs as we unpack the activity flows for Novices and Experts respectively. As indicated earlier, we shall consider only case frequency for the first array of metrics, as well as the average duration, in order to provide insights within the bounds of our experiments. The process map depicted in Figure 8 represents a workflow for all the activities performed by the Novice during the Initiation Phase. The numbers, the thickness of the arcs or edges, and the coloring in the model illustrate how frequently each activity or path has been performed. One can notice that the thickness of the edges is the same. We considered case frequency to identify how many cases were involved in constructing the workflow; in this instance, all 28 cases are considered. The degree to which people engage in sending instant messages and the motivation therein can justify the presence of the workflow in almost every channel. Cerone [18] hints that an _observe_ activity may lead to communication between FLOSS members and that a _post_ activity may trigger a learning experience.
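The per-case duration statistics quoted above (average and median time per case) can be derived from the earliest and latest timestamp of each case. A sketch with invented timestamps:

```python
from datetime import datetime
from statistics import mean, median

# Toy event log of (case_id, ISO timestamp) rows; the values are invented.
log = [
    ("c1", "2010-07-28"), ("c1", "2010-09-01"),
    ("c2", "2010-08-02"), ("c2", "2011-01-15"),
]

# Duration of a case = time between its first and last event.
spans = {}
for case, ts in log:
    t = datetime.fromisoformat(ts)
    lo, hi = spans.get(case, (t, t))
    spans[case] = (min(lo, t), max(hi, t))

weeks = [(hi - lo).days / 7 for lo, hi in spans.values()]
print(round(mean(weeks), 1), round(median(weeks), 1))  # 14.4 14.4
```

Applying the same computation to the full IRC Event log yields the reported mean of 35.7 weeks and median of 22.4 weeks.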
In Figure 8, the process map depicts the activity flow in terms of learning for a Novice in IRC chats. We note that in 18 cases, the learning process for the Novice starts with commenting on a post, while in the remaining cases, the Novice triggers this phase by formulating a question. The map demonstrates that the activity _CommentPost_ occurs in all 28 cases, although it is the starting point in only 18 of those. From commenting on a post, the Novice would post a message and then identify an Expert. In some cases, the Novice would then formulate an appropriate question to express a request and post the question. In other cases, the Novice would either go back to post further comments on posts or make direct contact with an Expert by sending a detailed request through an appropriately formulated question. From formulating a question, the Novice can follow two paths: either return to _CommentPost_ and follow the initial paths, or contact the Expert again and continue along the indicated paths accordingly. The existence of such process maps in IRC messages is also supported by additional performance indicators. Figure 7 shows that the Novice was active in a total of 524332 events, as indicated in the main model, performing a total of 7 activities over an average period of 35.7 weeks. The results also indicate that the 7 activities were executed on average 74904.57 times each, with the most executed activity occurring 131888 times, although this is not shown in Figure 7. Figures 10 and 11 provide insights on the Expert role. The process map in Figure 10 depicts the level of involvement of the Expert during the Initiation Phase. Again, we considered case frequency to identify how many cases were involved in constructing the workflow. The Expert process map in Figure 10 indicates that the Expert gets involved in the learning process by contacting a Novice, commenting on a post, and possibly contacting the Novice again.
Then, the Expert reads the source code, especially if the Novice's request pertains to coding, makes another contact with the Novice, probably for further clarification or a follow-up comment, and follows the same path. Thereafter, the Expert reads messages and posts, and then comments on posts before going back to running the code again. Figure 11 shows that the Expert was active in a total of 37898 events, performing a total of 6 activities over an average period of 35.3 weeks in 28 processes. Our experiment also indicates that these activities were executed on average 6316.33 times each, with the most executed activity occurring 19506 times. Therefore, one can observe that learning activities do occur at a significant rate during message exchanges on these forums, and the results from mining IRC messages provide insights on the starting activities that trigger these processes. While the dataset boasts over 5 million messages, only 2142690 of these were usable for our analysis. It has emerged that 14 of the learning activities could be identified through 605965 events across 28 cases, where the Novice accounts for the biggest share of traffic during these exchanges. As illustrated in Figure 7, the Novice executes about 86.53% of the total number of activities, compared to the Expert who performs only 6.25%. Before we discuss the second phase, it is crucial to point out that existing literature on learning in FLOSS has not provided insights that would indicate how the communication occurs, the sequential occurrence of learning activities and, more importantly, the level of occurrence of learning activities in FLOSS communities. This level of occurrence could be expressed by looking at the number of learning activities in proportion to the total number of activities recorded in repositories.
Figure 8: Process map for Novice–Per frequency (Initiation Phase) in Internet Relay Chats

Figure 9: Events over time for Novice

Glott _et al._[22] extensively describe the background of people involved in knowledge exchange and the types of software engineering skills that can be acquired in FLOSS communities, but fall short of explaining the possible sequence or pattern of occurrence. Moreover, Glott _et al._[21] as well as Dillon _et al._[14] discuss the potential of using FLOSS principles to improve formal ICT education with limited or no direct empirical evidence from FLOSS repositories. Many others, including Fernandes _et al._[13], critically provide insights that FLOSS members are aware that learning is taking place but are hardly aware of how this occurs.

### Process Maps for the Progression Phase

In modeling this second phase of the learning process, we make use of the same dataset, given that the activities belonging to this phase can also be traced in IRC messages. Through the process maps, we represent the major statistical details that are most representative of the presence, impact and occurrence of learning activities in FLOSS over the chosen period of time. The Novice's learning process map in Figure 12 indicates that, in IRC messages, the Novice starts in 14 cases with reporting bugs as a result of the Expert's guidelines. Then, the Novice analyzes source code, comments on that code, and then either goes back to reporting bugs, or replies to posted questions, or provides/sends feedback, or posts questions. If posting questions, the Novice also posts further questions as part of reviewing posts or committed code, replies to posted questions, and then goes back to analyze any relevant Source Code. Regarding the Expert, we note in Figure 13 that the activities are organised in a number of intersecting cycles.
The main path comprises two cycles and shows that, in some cases, the Expert starts by sending feedback to a request from the Novice, then reports a bug accordingly, replies to a post, reviews thread code, comments on code, analyzes Source Code, runs the code and then reviews the code, possibly reports further bugs and posts messages, and, finally, restarts the reviewing-code cycle once again. Along the way, the Expert moves back and forth from cycle to cycle as needed. Finally, it is important to note the increase in the involvement of the Expert in this phase compared to the Initiation phase. Performing 12 activities out of a total of 21 activities, the Expert exemplifies the commitment and willingness to assist and help as demonstrated and reported in previous narrative findings [24-28].

Figure 10: Process map for Expert–Per frequency (Initiation Phase) in IRC messages

Figure 11: Events over time for Expert

Figure 12: **Process Map for Novice–Per frequency (Progression Phase) in IRC messages**

Figure 13: Process Map for Expert–Per frequency (Progression Phase) in IRC messages

### Process Maps for the Maturation Phase

The Maturation phase represents the stage at which the learning process has matured. This can evidently be seen through more advanced actions, spanning from developing and committing pieces of code to reporting and fixing bugs. Hence, in this phase the number of activities sharply increases for both participants. This phase continues to exemplify the extent to which people are eager to learn, with the Novice executing most activities, a total of 506814 amounting to 46.19% of all executed activities at this point, in contrast with the Expert who intervenes at a lower rate of 42.04% with 461278 activities, ahead of people doing something other than exchanging knowledge with 129189 activities.
The increase in the number of activities also justifies the increase in the size of the corresponding process maps that represent the patterns executed by the Novice and Expert, respectively, in Figures 14 and 15. Figure 14 traces the paths a Novice would follow while executing learning activities during the Maturation phase. We note that, on Source Code, the Novice performs a number of activities either in parallel or sequentially, depending on the level of involvement in the process. The main paths we observe show that the Novice starts reviewing Source Code, then reports bugs as a result of the review, before performing a range of other activities spanning from analyzing discussions and posting comments on code to submitting code. Then the Novice can modify Source Code, if needed, thus fixing bugs, and, finally, provide the resultant feedback. The involvement of the Expert is shown in Figure 15: the Expert starts analyzing Source Code as part of monitoring the Novice's progress. Then, the Expert comments on the code and subsequently either sends a reply or further reviews the Source Code and reports bugs. In this phase, we argue that the Expert follows the Novice's activities for monitoring purposes and, therefore, the execution of activities comprises several alternative sequences.

Figure 14: Process map for Novice–Per frequency (Maturation Phase) on Source Code

Figure 15: Process map for Expert–Per frequency (Maturation Phase) on Source Code

## 6 Conclusion

FLOSS communities can nowadays be considered as potential learning environments. Numerous studies conducted in this regard suggest that these environments present learning opportunities to users and members of these online platforms. While such studies provide invaluable insights, their results are mostly based on surveys, observation reports [10, 14-16, 21-22], or stochastic and probability studies [29].
Moreover, the learning potentials offered by FLOSS environments open new possibilities for teaching Software Engineering courses in tertiary formal institutions [3, 7, 10-14]. In this regard, we can highlight a number of pilot studies aimed at evaluating the effectiveness of such an approach in traditional settings of learning [2-5, 14-15]. Furthermore, while specific insights regarding learning in FLOSS are provided in these studies, little is shown in terms of the manner in which learning takes place. To our knowledge, existing literature on learning in FLOSS has not provided insights that would indicate how the communication occurs, the sequential occurrence of learning activities and, more importantly, the level of occurrence of learning activities in FLOSS communities. This level of occurrence could be expressed by looking at the number of learning activities in proportion to the total number of activities recorded in repositories. Glott _et al._[22] extensively describe the background of people involved in knowledge exchange and the types of software engineering skills that can be acquired in FLOSS communities, but fall short of explaining the possible sequence or pattern of occurrence. Moreover, Glott _et al._[21] as well as Dillon _et al._[14] discuss the potential of applying FLOSS principles to improve formal ICT education with limited or no direct empirical evidence from FLOSS repositories. Many others, including Fernandes _et al._[13], critically provide insights that FLOSS members are aware that learning takes place but with limited indication of how this occurs. In this paper, we have proposed an approach based on Process Mining. Using a combination of semantic search, ontology and lexical linguistics, we construct Event logs that capture the learning behavior of both the Novice and the Expert in FLOSS.
In order to show how communication occurs, as well as the inherent execution of learning activities, we introduce an exploration of learning processes as process maps. Process maps provide valuable information about a process and help find ways to improve it. The detailed step-by-step descriptions included in process maps provide a clear and concise blueprint of the process content. The accomplishments of our results are twofold. Firstly, we provide a simplified description of learning processes through process mapping. The process maps depict a pictorial representation of the occurrence of learning patterns in FLOSS environments with no need for detailed explanations. Secondly, the quantitative results provide critical insights with regard to the level of occurrence of learning activities in FLOSS repositories. We note that learning activities do occur at a significant rate during message exchanges for both participants across the three phases. On Source Code as well, as the learning process matures, the involvement of both the Novice and the Expert significantly increases across the Progression and Maturation phases. Although existing literature is critical in guiding the study of learning behavior in FLOSS, in particular with regard to the phases of the learning processes [18-19, 29], our results provide more tangible evidence of how these phases can be traced from actual recorded data. The details that our experiments have uncovered provide richer insights regarding both roles in the learning process. The process maps executed by both the Novice and the Expert provide a richer description of learning activities than the current literature offers. This constitutes a critical step towards explicating and detailing the steps FLOSS members go through as part of learning and, more importantly, the extent to which learning in FLOSS communities is a reality.
Finally, we would like to point out that there are many more FLOSS repositories that could be mined to undertake similar experiments; however, we believe that the datasets from Internet Relay Chat messages and Source Code include enough details to provide global insights on recorded activities in FLOSS environments. Specifically, IRC messages appeared to be the right choice for the detection of process maps, as they represent the learning process through its first two phases. In these phases, the role of the Novice is the most active, reflecting the high intensity of the quest for knowledge. Analogously, the Expert becomes progressively more involved from the Initiation to the Progression phase. Regarding the last phase of the learning process, the choice of the Source Code dataset largely explains how learning occurs at this stage. Evidence suggests the commitment of the Novice to seek answers and to interact as much as possible in strengthening the acquired skills.
2308.09031
New Properties of Intrinsic Information and Their Relation to Bound Secrecy
The secret-key rate measures the rate at which Alice and Bob can extract secret bits from sampling a joint probability distribution, unknown to an eavesdropper Eve. The secret-key rate has been bounded above by the intrinsic information and reduced intrinsic information. However, we prove that the reduced intrinsic information is 0 if and only if the intrinsic information is 0. This result implies that at least one of the following two conjectures is false: bound secrecy exists, or the reduced intrinsic information equals the secret-key rate. We give an explicit construction of an information-erasing binarization for a candidate for bound secrecy. We then introduce some approaches for proving the existence of bound secrecy, such as reducing the channel space, linearly transforming Bob's map, and perturbing a channel for Eve.
Andrey Boris Khesin, Andrew Tung, Karthik Vedula
2023-08-17T15:10:45Z
http://arxiv.org/abs/2308.09031v2
# New Properties of Intrinsic Information and Their Relation to Bound Secrecy

###### Abstract

The secret-key rate measures the rate at which Alice and Bob can extract secret bits from sampling a joint probability distribution, unknown to an eavesdropper Eve. The secret-key rate has been bounded above by the intrinsic information and reduced intrinsic information. However, we prove that the reduced intrinsic information is 0 if and only if the intrinsic information is 0. This result implies that at least one of the following two conjectures is false: bound secrecy exists, or the reduced intrinsic information equals the secret-key rate. We give an explicit construction of an information-erasing binarization for a candidate for bound secrecy. We then introduce some approaches for proving the existence of bound secrecy, such as reducing the channel space, linearly transforming Bob's map, and perturbing a channel for Eve.

**Keywords**: information theory, bound secrecy, intrinsic information, reduced intrinsic information, secret-key rate, binarization

## 1 Introduction

A common problem in classical information theory is achieving secure communication over a public channel. Most modern-day cryptographic protocols rely on computational security, a type of security based on the computational difficulty of solving a certain problem. For example, the RSA protocol, widely used today, is based on the problem of factoring large integers [3; 18]. Unfortunately, the security of these types of protocols is always conditional because it relies on the fact that certain problems are computationally difficult, and that the adversary has limited computational power [13]. Protocols based on information theory avoid this problem because the secrecy that they obtain is impossible for the eavesdropper to pierce, simply due to the laws of probability [6; 19].
To achieve information-theoretic secure communication, most protocols begin with a procedure by which the two parties, call them Alice and Bob, agree on a secret key unknown to an eavesdropper Eve. Once this secret key is established, Alice and Bob can then encode an arbitrary message with the key completely securely. For example, suppose the secret key is composed of a string of bits. Then the message, in the form of another string of bits, can be perfectly secretly encoded by a one-time pad, which in this case can be performed by bitwise XOR [21]. (Note that although it is perfectly secure, using a secret key as a one-time pad is not very efficient, and one often uses a cryptographic key expansion in cases where the secret key is expensive to generate [12].) Unfortunately for Alice and Bob, agreeing on an unconditionally secret key is impossible without a source of secrecy to start with [14; 20]. An example of such secrecy is if Alice and Bob could both observe the same random number generator, whose output is not available to an eavesdropper Eve. In this case, the amount of secrecy Alice and Bob share is simply the entropy of the random number generator, but in more complicated situations (e.g. if the output of the generator is partially known to Eve) secrecy is not as easy to quantify. Quantifying how much secrecy Alice and Bob share in a given situation has been attempted by introducing a number of quantities, such as the _intrinsic information_ and the _reduced intrinsic information_[6; 17]. A number of properties of these quantities have been discovered [16; 17], suggesting that they are connected with the original problem of determining whether or not Alice and Bob can agree on a secret key (and if so, how long the key can be). For example, it has been proven that the intrinsic information is an upper bound on Alice and Bob's secret-key rate [15]. 
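The bitwise-XOR one-time pad mentioned above can be sketched in a few lines. The point of the example is that XOR with a uniformly random key is its own inverse, so Bob recovers the message exactly while the ciphertext alone carries no information about it (assuming the key is as long as the message and used only once):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the shared secret key, uniform

ciphertext = xor_bytes(message, key)      # encrypt
recovered = xor_bytes(ciphertext, key)    # decrypt: XOR is its own inverse
assert recovered == message
```

Because every key is equally likely, every plaintext of the same length is equally consistent with a given ciphertext, which is why the scheme is perfectly (not merely computationally) secure.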
However, some surprising results have shown that there is a gap between these information-theoretic quantities and Alice and Bob's ability to generate a secret key [17]. A number of conjectures of this nature are currently unresolved, including the long-standing conjecture of the existence of bound secrecy. This conjecture has its origins in the analogous quantum phenomenon of bound entanglement, discovered in the late 1990s [5; 6; 7; 9; 17]; the existence of bound secrecy was conjectured in the early 2000s. Bound secrecy refers to secrecy (i.e. positive intrinsic information) which cannot be extracted (i.e. the secret-key rate is 0). If bound secrecy exists, it would suggest that classical information theory has surprising connections to quantum information theory, which was, in general, thought to be of a different nature. In this paper, we make an important step toward proving the existence of bound secrecy by showing that, in the crucial case where either intrinsic information or reduced intrinsic information is 0, there is no gap between the two quantities. This is significant because the original purpose of introducing the reduced intrinsic information was to provide a stronger upper bound on the secret-key rate, and one of the prevailing approaches for constructing an example that has bound secrecy was showing the example has a positive intrinsic information but a reduced intrinsic information of 0 (implying that no secrecy can be extracted). This paper shows that this approach cannot work. On the other hand, we suggest an alternative approach for establishing the existence of bound secrecy, first mentioned in [6], based on the idea of binarizations. Binarizations are ways of processing a random variable stochastically such that the new random variable has two outputs. We show that the existence of bound secrecy can be reduced to a simple statement about binarizations and probability that must hold for \(N\) copies of the distribution. 
We provide a proof of this statement for the case \(N=1\) for a distribution introduced in [6], and we suggest approaches to generalize the proof for larger values of \(N\). Additionally, we focus on a second family of distributions introduced in [17] which are conjectured to be bound secret. Although the approach for the previous distribution does not completely carry over, we illustrate some possible approaches which promise to extend the methods used in the \(N=1\) case of the previous distribution. The outline of this paper is as follows. In Section 2, we formally define the secret-key rate, the intrinsic information, and the reduced intrinsic information, which will be important in the rest of the paper. We also give context for our result by summarizing the properties of these quantities which have been established previously. Additionally, we provide the formal statements of a number of important conjectures, such as the problem of bound secrecy, which are addressed in this paper. In Section 3, we state and prove our results, which require a number of intermediate lemmas. In Section 4, we discuss how our results relate to prior work, showing that given the existence of bound secrecy (which is widely believed to be true), another long-standing conjecture is false. In Section 5, we discuss another approach to establish the existence of bound secrecy using binarizations of Alice's and Bob's random variables. In Section 6, we improve on previous results by giving an explicit construction of a binarization which erases intrinsic information, which appears easier to generalize than previous non-constructive solutions. Finally, in Section 7, we provide multiple approaches and simplifications to prove the bound secrecy of another family of distributions, including reduction to Z-shaped channels, row-column-type transformations for weighted average target values, and isolated perturbations to show independence for a family of binarizations. 
## 2 Background The setup of the bound secrecy problem is as follows. Let \(P_{XYZ}\) be a joint probability distribution of three countable (but possibly infinite) random variables \(X\), \(Y\), and \(Z\), with Alice receiving \(X\), Bob \(Y\), and Eve \(Z\). Throughout this paper we assume that any probability distribution is countable and has finite entropy. Entropy is denoted by \(H\) and is assumed to be Shannon entropy; as such, all logs are assumed to be base 2. The _secret-key rate_\(S(X:Y||Z)\) is, informally, the rate at which Alice and Bob can extract secret bits from many copies of \(P_{XYZ}\). The notation suggests the interpretation that the secret-key rate is the amount of information between \(X\) and \(Y\) given the information in \(Z\). We are interested in the secret-key rate because if it is non-zero, Alice and Bob can extract their secret bits and thereby communicate securely. A formal definition of the secret-key rate, first introduced in [14], is as follows. **Definition 2.1**.: Suppose Alice and Bob are given \(N\) independent realizations of a countable joint probability distribution \(P_{XYZ}\). Call a protocol \(\epsilon\)_-safe_ if, at the end of the protocol, Alice and Bob can compute secret, correlated random variables \(S_{A}\) and \(S_{B}\) such that there exists another random variable \(S\) so that \[P[S_{A}=S_{B}=S]>1-\epsilon\text{ and }I(S:CZ^{N})<\epsilon.\] Here, \(C\) stands for any communications that took place during the protocol. The first condition ensures that Alice and Bob's variables must agree with probability very close to 1, so that they share some information. The second condition ensures that this information is not accessible to Eve. This is defined formally using the _mutual information_\(I(X:Y):=H(X)+H(Y)-H(X,Y)\), a measure of the amount of information two random variables share. 
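As a small worked example of the mutual information just defined, consider two perfectly correlated fair bits; a direct computation of \(I(X:Y)=H(X)+H(Y)-H(X,Y)\):

```python
from math import log2

def H(probs):
    """Shannon entropy in bits of a distribution given as probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Joint distribution P(X, Y) on {0,1} x {0,1}: X = Y = one fair coin flip.
joint = {(0, 0): 0.5, (1, 1): 0.5}

px = [sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)]
py = [sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)]

I = H(px) + H(py) - H(list(joint.values()))
print(I)  # 1.0: X and Y share exactly one bit
```

Here \(H(X)=H(Y)=H(X,Y)=1\), so \(I(X:Y)=1\): learning \(X\) determines \(Y\) completely, and the two variables share exactly one bit of information.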
The condition requires that the mutual information between the secret variable \(S\) and the pieces of data Eve has, namely \(Z^{N}\) and the communications \(C\), must be low. Using the definition of an \(\epsilon\)-safe protocol, we define the secret-key rate asymptotically. **Definition 2.2**.: The _secret-key rate_\(S(X:Y||Z)\) is the largest number \(R\) such that for all \(\epsilon>0\), there exists an \(N\) such that for all \(n>N\), there exists an \(\epsilon\)-safe protocol using \(n\) copies of \(P_{XYZ}\) and producing the random variable \(S\) with \(\frac{H(S)}{n}\geq R\). Although the secret-key rate is the quantity we are interested in, as it captures the true number of bits Alice and Bob can extract, it has been hard to deal with because it allows any arbitrarily long communication string \(C\). Ideally, one would express the secret-key rate \(S(X:Y||Z)\) as a simple function of the distribution \(P_{XYZ}\), but this problem is still open [17]. Instead, a number of upper bounds have been found. One of the first upper bounds on the secret-key rate was the conditional mutual information \(I(X:Y|Z)\)[1, 15], defined as follows. **Definition 2.3**.: Given a probability distribution \(P_{XYZ}\), the _conditional mutual information_\(I(X:Y|Z)\) is defined as \(H(X|Z)+H(Y|Z)-H(XY|Z)\), where each term is a conditional entropy conditioned on \(Z\). One strategy for Eve to extract information about \(X\) and \(Y\) is to pass her variable \(Z\) through a channel \(P_{\overline{Z}|Z}\)[16], which in this case takes the form of a stochastic matrix acting on the vector of probabilities for \(Z\). So we define the intrinsic conditional mutual information, first introduced in [15].
**Definition 2.4**.: Given a probability distribution \(P_{XYZ}\), the _intrinsic conditional mutual information_\(I(X:Y\downarrow Z)\), sometimes called the _intrinsic information_, is defined as \[I(X:Y\downarrow Z):=\inf_{P_{\overline{Z}|Z}}I(X:Y|\overline{Z}).\] **Theorem 2.5** ([15, 17]).: _Given a distribution \(P_{XYZ}\), we have \(S(X:Y||Z)\leq I(X:Y\downarrow Z)\). However, there exist distributions with \(S(X:Y||Z)\neq I(X:Y\downarrow Z)\)._ Motivated by the fact that \(S(X:Y||ZU)\leq S(X:Y||Z)-H(U)\) holds but the corresponding inequality for the intrinsic information does not always hold, Renner and Wolf have introduced the _reduced intrinsic conditional mutual information_[16]. **Definition 2.6** ([17]).: Given a distribution \(P_{XYZ}\), the _reduced intrinsic conditional mutual information_\(I(X:Y\downarrow\downarrow Z)\), sometimes called the _reduced intrinsic information_, is defined as \[I(X:Y\downarrow\downarrow Z):=\inf_{P_{U|XYZ}}I(X:Y\downarrow ZU)+H(U).\] From the definition, we can see that the intrinsic information is an upper bound on the reduced intrinsic information, by setting \(U\) to be trivial. The reduced intrinsic information is bounded from below by the secret-key rate, informally because in the infimum we can let \(U\) be the secret bit that Alice and Bob can generate. **Theorem 2.7** ([17]).: _Given a probability distribution \(P_{XYZ}\), \(S(X:Y||Z)\leq I(X:Y\downarrow\downarrow Z)\)._ Intuitively, it is immediately obvious why the reduced intrinsic information should be different than the intrinsic information, because whatever information Eve receives through the variable \(U\) is already accounted for in the \(H(U)\) term. However it turns out that it is possible for the reduced intrinsic information to be strictly less than the intrinsic information because for some distributions Eve may have the additional disadvantage, beyond not knowing \(X\) and \(Y\), of not knowing how to process her variable \(Z\). 
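The gap between \(I(X:Y|Z)\) and \(I(X:Y\downarrow Z)\) can be seen in the standard example (not specific to this paper) where \(X\) and \(Y\) are independent uniform bits and \(Z=X\oplus Y\): conditioning on \(Z\) creates one bit of conditional mutual information, yet the channel that erases \(Z\) drives it to zero, so the intrinsic information is 0. A sketch of the computation:

```python
from math import log2
from collections import defaultdict

def H(probs):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

def cond_mutual_info(pxyz):
    """I(X:Y|Z) for a dict {(x, y, z): probability}."""
    pz = defaultdict(float)
    for (x, y, z), p in pxyz.items():
        pz[z] += p
    total = 0.0
    for z0, pz0 in pz.items():
        # conditional joint distribution of (X, Y) given Z = z0
        cond = {(x, y): p / pz0 for (x, y, z), p in pxyz.items() if z == z0}
        px = defaultdict(float); py = defaultdict(float)
        for (x, y), p in cond.items():
            px[x] += p; py[y] += p
        total += pz0 * (H(px.values()) + H(py.values()) - H(cond.values()))
    return total

# X, Y independent uniform bits, Z = X XOR Y.
pxyz = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
print(cond_mutual_info(pxyz))  # 1.0

# Eve applies the channel that erases Z (maps every z to the symbol 0):
erased = defaultdict(float)
for (x, y, z), p in pxyz.items():
    erased[(x, y, 0)] += p
print(cond_mutual_info(dict(erased)))  # 0.0
```

The erasing channel is one particular \(P_{\overline{Z}|Z}\); since the infimum in Definition 2.4 ranges over all channels, this already shows \(I(X:Y\downarrow Z)=0\) here even though \(I(X:Y|Z)=1\).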
Therefore, the knowledge of how to process \(Z\), as represented by \(U\), can reduce the shared information between \(X\) and \(Y\) by more than the amount of information in \(U\) itself. So the reduced intrinsic information is sometimes less than the intrinsic information. **Theorem 2.8** ([16]).: _There exists a countable distribution \(P_{XYZ}\) where \(I(X:Y\downarrow Z)\neq I(X:Y\downarrow\downarrow Z)\)._ As the reduced intrinsic information is a strictly stronger bound on the secret-key rate than the intrinsic information, it is natural to ask whether it in fact equals the secret-key rate. This open problem can be stated as follows. **Conjecture 2.9** ([17]).: Given a probability distribution \(P_{XYZ}\), we have \(S(X:Y||Z)=I(X:Y\downarrow\downarrow Z)\). Whereas previous bounds on \(S\), such as the intrinsic information, have been improved by finding properties that were not shared between those quantities and \(S\), so far the reduced intrinsic information appears to share many properties of the secret-key rate. If the conjecture is proven true (i.e. \(S(X:Y||Z)=I(X:Y\downarrow\downarrow Z)\) in all cases), then we would have a relatively simple description, based on only the distribution \(P_{XYZ}\), of the secret-key rate. This would fulfill one of the original objectives. If the conjecture is proven false, then it may reveal another potential strategy for Alice and Bob for secret-key extraction not related to intrinsic or reduced intrinsic information. Another significant conjecture is the problem of bound secrecy, namely secrecy between Alice and Bob that cannot be extracted. **Conjecture 2.10**.: [6] (Bound secrecy) There exists a distribution \(P_{XYZ}\) such that \(I(X:Y\downarrow Z)>0\) but \(S(X:Y||Z)=0\). This conjecture is inspired by the fact that a corresponding quantum phenomenon, bound entanglement, has been shown to exist [7; 8]. 
Bound entangled states are quantum entangled states, the quantum analogue of classically correlated random variables, whose entanglement cannot be distilled [7]. Relatively strong evidence suggesting the existence of bound secrecy has been found in [6; 17] by drawing connections between the classical and quantum problems. Numerical evidence for bound secrecy has been given in [9].
## 3 The Gap Between the Standard and Reduced Intrinsic Information
The main result of this paper is the following. **Theorem 3.1**.: _Given a probability distribution \(P_{XYZ}\), we have_ \[I(X:Y\downarrow\downarrow Z)=0\iff I(X:Y\downarrow Z)=0.\] We first observe that the reverse direction follows because the intrinsic information is an upper bound on the reduced intrinsic information, which is nonnegative. We focus on the forward direction, whose proof takes the remainder of this section. An important tool in the proof is the notion of the trace distance between two random variables. **Definition 3.2**.: Let two countable random variables \(A\) and \(B\) have probability distributions \(\{a_{i}\}\) and \(\{b_{i}\}\) with the same index set. Then the _trace distance_ between \(A\) and \(B\), denoted \(D(A,B)\), is defined to be \[D(A,B):=\frac{1}{2}\sum_{i}|a_{i}-b_{i}|.\] To prove the forward direction of Theorem 3.1, we reason as follows. If \(I(X:Y\downarrow\downarrow Z)=0\), then by definition \(\inf\limits_{P_{U|XYZ}}(I(X:Y\downarrow ZU)+H(U))=0\). First, suppose that this infimum is a minimum. This means that there exists a distribution \(XYZU\) such that \(I(X:Y\downarrow ZU)+H(U)=0\), so \(H(U)=0\) and \(I(X:Y\downarrow ZU)=0\). However, since \(U\) adds no information, we have \(0=I(X:Y\downarrow ZU)=I(X:Y\downarrow Z)\), which is the desired statement. From now on, assume that the infimum is not a minimum. This means that both quantities in the sum must approach \(0\) for a carefully chosen sequence of distributions. 
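Definition 3.2 is straightforward to compute directly. The following short sketch (our own illustration, not part of the original argument) realizes the trace distance for countable distributions stored as dictionaries and evaluates it on a near-deterministic variable against a point mass:

```python
def trace_distance(p, q):
    """D(A, B) = (1/2) * sum_i |a_i - b_i| over a shared index set (Definition 3.2)."""
    keys = set(p) | set(q)  # outcomes missing from one distribution have probability 0
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# A nearly constant variable against a point mass on the outcome 1:
U = {1: 0.95, 2: 0.03, 3: 0.02}
K = {1: 1.0}
print(trace_distance(U, K))  # equals 1 - P(U = 1), i.e. 0.05 up to float rounding
```

For a distribution whose largest probability sits on the same outcome as the point mass, the distance collapses to \(1-P(U=1)\), which is exactly the quantity that appears at the end of the proof of Lemma 3.3 below.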
More rigorously, there must exist a sequence of probability distributions \(\{XYZU_{i}\}\) such that \(\lim\limits_{i\to\infty}H(U_{i})=0\) and \(\lim\limits_{i\to\infty}I(X:Y\downarrow ZU_{i})=0\). Due to the definition of intrinsic information, there must also exist a sequence of channels \(\{C_{i}\}\) such that \(\lim\limits_{i\to\infty}I(X:Y|C_{i}(ZU_{i}))=0\). In order to prove that \(I(X:Y\downarrow Z)=0\), it suffices to show that there exists a sequence of channels \(\{c_{i}\}\) such that \(\lim\limits_{i\to\infty}I(X:Y|c_{i}(Z))=0\). We do so by showing that the choice \(\{c_{i}\}=\{C_{i}\}\) works. In order to do this, we incorporate the defining property of the sequence \(\{C_{i}\}\) by showing that \[\lim\limits_{i\to\infty}I(X:Y|C_{i}(Z))-I(X:Y|C_{i}(ZU_{i}))=0,\] starting from \(\lim\limits_{i\to\infty}H(U_{i})=0\). In the rest of the proof, the channels \(\{C_{i}\}\) are denoted using bars, and the value of \(i\) will be inferred from context; for example, we will write \(\overline{Z}\) instead of \(C_{i}(Z)\). We first prove a number of lemmas regarding trace distances, denoted \(D(A,B)\), and entropies. As a convention, let \(K\) denote a constant random variable, whose probability distribution is a unit vector with the first component equal to \(1\). The size of the range of \(K\) is taken to be contextual (i.e. equal to the size of the range of \(U_{i}\)). Also, we assume that the probabilities of the outcomes of each random variable \(U_{i}\) are indexed in descending order. Such an ordering exists because any countable set of nonnegative values with total \(1\) can be indexed in descending order: for any threshold \(x\in(0,1)\) there can only be finitely many probabilities above \(x\), so we can order those finitely many probabilities first, and then order all the probabilities by repeatedly lowering \(x\). To prove Theorem 3.1 we will prove the following sequence of implications. 
\[\lim\limits_{i\to\infty}H(U_{i}) =0\] \[\implies\lim\limits_{i\to\infty}D(U_{i},K) =0\] \[\implies\lim\limits_{i\to\infty}D(XYZU_{i},XYZK) =0\] \[\implies\lim\limits_{i\to\infty}D(XY\overline{ZU_{i}},XY\overline {ZK}) =0\] \[\implies\lim\limits_{i\to\infty}I(X:Y|\overline{ZU_{i}})-I(X:Y| \overline{ZK}) =0\] To establish these implications, we first prove some lemmas. **Lemma 3.3**.: _If \(\lim\limits_{i\to\infty}H(U_{i})=0\) for some sequence of countable random variables \(U_{i}\), then \(\lim\limits_{i\to\infty}D(U_{i},K)=0\)._ Proof.: Suppose the probabilities for each outcome of the random variables \(U_{i}\) are \(a_{1i}\), \(a_{2i}\),..., with \(a_{1i}\geq a_{2i}\geq a_{3i}\geq\dots\). Then \[H(U_{i}) =\sum\limits_{j}a_{ji}\log\frac{1}{a_{ji}}\] \[\geq\sum\limits_{j}a_{ji}\log\frac{1}{a_{1i}}\] \[=\log\frac{1}{a_{1i}}.\] Since \(\log\frac{1}{a_{1i}}\) is nonnegative, if \(H(U_{i})\to 0\), we must have \(\log\frac{1}{a_{1i}}\to 0\). Therefore \(a_{1i}\to 1\). If \(k_{1}\), \(k_{2}\),... are the probabilities that \(K=1\), \(K=2\), and so on, then we have \[D(U_{i},K)=\frac{1}{2}\sum_{j}\lvert a_{ji}-k_{j}\rvert=\frac{1}{2}(1-a_{1i}+1- a_{1i})=1-a_{1i}\] so \(D(U_{i},K)\to 0\). **Lemma 3.4**.: _Consider a sequence of countable random variables \(U_{i}\) and let \(Z\) be an arbitrary countable random variable. Then \(D(ZU_{i},ZK)=D(U_{i},K)\)._ Proof.: For the purposes of this proof, let "1" be the value that \(K\) attains with probability 1. It follows almost directly from the definition of trace distance that \[D(ZU_{i},ZK)=\sum_{(z,x)\in Z\times\mathcal{U}_{i}}\max(0,P(Z=z,K=x)-P(Z=z,U_{i }=x)).\] Since \(K\) is always 1, for any \(x\) other than 1, \(P(Z=z,K=x)=0\), so \(P(Z=z,K=x)-P(Z=z,U_{i}=x)\leq 0\) and \(\max(0,P(Z=z,K=x)-P(Z=z,U_{i}=x))=0\). Thus the trace distance can be reduced to the following sum. 
\[D(ZU_{i},ZK)=\sum_{z\in\mathcal{Z}}\max(0,P(Z=z,K=1)-P(Z=z,U_{i}=1)),\] where \(\mathcal{Z}\) denotes the range of \(Z\). But since \(K=1\) with probability 1, and \(P(Z=z,U_{i}=1)\leq P(Z=z)\), it suffices to take \[D(ZU_{i},ZK)=\sum_{z\in\mathcal{Z}}P(Z=z)-P(Z=z,U_{i}=1)\] which is just \[D(ZU_{i},ZK)=1-\sum_{z}P(Z=z,U_{i}=1)=1-P(U_{i}=1).\] As proven at the end of the proof of Lemma 3.3, \(D(U_{i},K)=1-P(U_{i}=1)\) and we are done. _Remark 3.5_.: The importance of \(K\) is demonstrated by the above lemma, as the lemma becomes false if \(U_{i}\) and \(K\) are replaced by arbitrary random variables. A counterexample to Lemma 3.4 in which \(K\) is replaced by an arbitrary random variable is when \(Z\) is a fair coin flip and \(A=Z\) while \(B\) is an independent fair coin flip. Then \(D(A,B)=0\) because these probability distributions are identical, but \(D(ZA,ZB)=\frac{1}{2}\) because \(ZA\) is either both heads or both tails with probability 0.5, while \(ZB\) can be each of the 4 possibilities with probability 0.25. _Remark 3.6_.: The above lemma also shows the importance of converting statements about entropy into statements about trace distance (through Lemma 3.3) rather than some other measure of distance, such as the Kullback-Leibler (KL) divergence [10]. The KL divergence is defined for two probability distributions \(P\) and \(Q\), both over the probability space \(\mathcal{X}\), as \[D_{KL}(P||Q):=\sum_{x\in\mathcal{X}}P(x)\log\left(\frac{P(x)}{Q(x)}\right).\] Lemma 3.4 does not make sense if the trace distances are replaced with KL divergences because there exists a \(Z\) with infinite range such that the KL divergence \(D_{KL}(ZK\,||\,ZU_{i})\) diverges. Consider \(P(Z=z_{n})=2^{-n}\) and \(P(Z=z_{n},U_{i}=1)=2^{-n-\frac{2^{n}}{i}}\) for all \(U_{i}\) (the rest of the \(ZU_{i}\) probability distribution can be filled in arbitrarily). 
Here, as \(i\) becomes larger, \(P(U_{i}=1)\) becomes closer to 1, but \[P(Z=z_{n})\log\left(\frac{P(Z=z_{n})}{P(Z=z_{n},U_{i}=1)}\right)=\frac{1}{i},\] so, summing over the countably infinite range of \(Z\), \[D_{KL}(ZK\,||\,ZU_{i})=\sum_{n}\frac{1}{i}=\infty.\] **Lemma 3.7**.: _Let \(U_{i}\) be a sequence of random variables and let \(Z\) be an arbitrary random variable. Suppose \(C_{i}\) is a sequence of channels whose actions are denoted by a bar. Then for all \(i\), \(D(\overline{ZU_{i}},\overline{ZK})\leq D(ZU_{i},ZK)\)._ Proof.: The proof is similar to that of the analogous quantum result, proven in [3], that trace-preserving quantum operations are contractive. For ease of writing let \(X=ZU_{i}\) and \(Y=ZK\). We can view the probability distributions \(X\) and \(Y\) as vectors of their probabilities (\(\vec{x}\) and \(\vec{y}\)) and view the channel \(C_{i}\) as a column-stochastic matrix (each column sums to \(1\)) which we denote \(A\). We also let a subscript \(i\) on a vector enclosed by parentheses (e.g. \((\vec{v})_{i}\)) denote the \(i\)th component of the vector. Using this notation, we have that \[D(X,Y)=\sum_{i\text{ with }(\vec{x})_{i}-(\vec{y})_{i}>0}(\vec{x})_{i}-(\vec{y})_{i}=\sum_{i\text{ with }(\vec{x}-\vec{y})_{i}>0}(\vec{x}-\vec{y})_{i}.\] Consider the vector \(\vec{x}-\vec{y}\). We decompose this vector into its positive and negative components as follows. Let \(\vec{a}\) be the vector defined by \(\left(\vec{a}\right)_{i}=\max(0,\left(\vec{x}\right)_{i}-(\vec{y})_{i})\). Similarly, let \(\vec{b}\) be the vector defined by \(\left(\vec{b}\right)_{i}=\max(0,\left(\vec{y}\right)_{i}-(\vec{x})_{i})\), so that \(\vec{x}-\vec{y}=\vec{a}-\vec{b}\). By definition, \((\vec{a})_{i}\geq 0\) for all \(i\) and \(\left(\vec{b}\right)_{i}\geq 0\) for all \(i\). 
Therefore \[D(X,Y)=\frac{1}{2}\left(\sum_{i}(\vec{a})_{i}+\sum_{i}(\vec{b})_{i}\right).\] We now prove the lemma: since \(A\vec{x}-A\vec{y}=A\vec{a}-A\vec{b}\), the triangle inequality gives \[D(\overline{X},\overline{Y})=\frac{1}{2}\sum_{i}|(A\vec{x})_{i}-(A\vec{y})_{i}|\leq\frac{1}{2}\sum_{i}\left(|(A\vec{a})_{i}|+|(A\vec{b})_{i}|\right)=\frac{1}{2}\sum_{i}(A\vec{a})_{i}+\frac{1}{2}\sum_{i}(A\vec{b})_{i}=\frac{1}{2}\sum_{i}(\vec{a})_{i}+\frac{1}{2}\sum_{i}(\vec{b})_{i}=D(X,Y),\] where the second-to-last step follows because the columns of \(A\) sum to \(1\) (as it is stochastic) and therefore \(A\) preserves the sum of the elements of a vector. **Lemma 3.8**.: _Given \(P_{XYZ}\), we have that_ \[\lim_{i\rightarrow\infty}D\left(XY\overline{ZU_{i}},XY\overline{ZK}\right)=0\implies\lim_{i\rightarrow\infty}I\left(X:Y|\overline{ZU_{i}}\right)-I\left(X:Y|\overline{ZK}\right)=0.\] Proof.: Define the following quantities: * Let \(\mathcal{X}\) and \(\mathcal{Y}\) denote the ranges of the random variables \(X\) and \(Y\), respectively. For any other variable \(V\), let \(\text{Range}(V)\) be the range of \(V\). * For all \(x\in\mathcal{X}\), \(y\in\mathcal{Y}\), \(z\in\text{Range}\left(\overline{ZU_{i}}\right)\), we have \(p_{i}(xyz):=p\left(X=x,Y=y,\overline{ZK}=z\right)\), and \(q_{i}\left(xyz\right):=p\left(X=x,Y=y,\overline{ZU_{i}}=z\right)\). * Let \(Z_{i}^{*}:=\text{Range}\left(\overline{ZU_{i}}\right)\backslash\text{Range}\left(\overline{ZK}\right)\), and let \(S_{i}:=\underset{XYZ_{i}^{*}}{\sum}q_{i}\left(xyz\right)\). Because \(S_{i}\leq 2\cdot D\left(XY\overline{ZU_{i}},XY\overline{ZK}\right)\), \(S_{i}\) tends to \(0\) as \(i\) tends to infinity. Also, if there is a group of random variables (e.g. \(X\), \(Y\)) or sets (e.g. \(Z_{i}^{*}\)) in the index of a summation, then the summation is iterated over all values in the range of each variable or element in the set, where the lowercase variables correspond to each of the uppercase random variables and sets (e.g. \(x\in\mathcal{X}\), \(y\in\mathcal{Y}\)). 
Expanding the conditional mutual information expressions gives \[I\left(X:Y|\overline{ZU_{i}}\right)-I\left(X:Y|\overline{ZK}\right)=\left(H\left(X\overline{ZU_{i}}\right)-H\left(X\overline{ZK}\right)\right)+\left(H\left(Y\overline{ZU_{i}}\right)-H\left(Y\overline{ZK}\right)\right)-\left(H\left(XY\overline{ZU_{i}}\right)-H\left(XY\overline{ZK}\right)\right)-\left(H\left(\overline{ZU_{i}}\right)-H\left(\overline{ZK}\right)\right)\] \[=\underset{XY\overline{ZU_{i}}}{\sum}(q_{i}\left(xyz\right)\log\left(q_{i}\left(z\right)\right)-p_{i}\left(xyz\right)\log\left(p_{i}\left(z\right)\right))+\underset{XY\overline{ZU_{i}}}{\sum}(q_{i}\left(xyz\right)\log\left(q_{i}\left(xyz\right)\right)-p_{i}\left(xyz\right)\log\left(p_{i}\left(xyz\right)\right))\] \[-\underset{XY\overline{ZU_{i}}}{\sum}(q_{i}\left(xyz\right)\log\left(q_{i}\left(xz\right)\right)-p_{i}\left(xyz\right)\log\left(p_{i}\left(xz\right)\right))-\underset{XY\overline{ZU_{i}}}{\sum}(q_{i}\left(xyz\right)\log\left(q_{i}\left(yz\right)\right)-p_{i}\left(xyz\right)\log\left(p_{i}\left(yz\right)\right))\,.\] Now, we split each summation into two parts: \(z\in\text{Range}(\overline{ZK})\) or \(z\in Z_{i}^{*}\). We now deal with the first part (\(z\in\text{Range}(\overline{ZK})\)). 
Note that \[-H\left(XY\overline{ZK}\right)=\underset{XY\overline{ZK}}{\sum}p_{i}\left(xyz\right)\log\left(p_{i}\left(xyz\right)\right).\] However, we also have that \[-H\left(XY\overline{ZK}\right)=-H\left(XY\overline{ZU_{i}}\,\middle|\,z\in\text{Range}\left(\overline{ZK}\right)\right)=\underset{XY\overline{ZK}}{\sum}\frac{q_{i}\left(xyz\right)}{1-S_{i}}\log\left(\frac{q_{i}\left(xyz\right)}{1-S_{i}}\right)=-\log\left(1-S_{i}\right)+\frac{1}{1-S_{i}}\underset{XY\overline{ZK}}{\sum}q_{i}\left(xyz\right)\log\left(q_{i}\left(xyz\right)\right)\implies\] \[\underset{XY\overline{ZK}}{\sum}q_{i}\left(xyz\right)\log\left(q_{i}\left(xyz\right)\right)=-\left(1-S_{i}\right)H\left(XY\overline{ZK}\right)+\left(1-S_{i}\right)\log\left(1-S_{i}\right).\] This means that \[\underset{XY\overline{ZK}}{\sum}\left(q_{i}\left(xyz\right)\log\left(q_{i}\left(xyz\right)\right)-p_{i}\left(xyz\right)\log\left(p_{i}\left(xyz\right)\right)\right)=S_{i}H\left(XY\overline{ZK}\right)+\left(1-S_{i}\right)\log\left(1-S_{i}\right).\] This approaches \(0\) as \(i\) goes to infinity because \(S_{i}\) tends to \(0\). For the other summations, we can repeat this logic with \(-H\left(X\overline{ZK}\right)\), \(-H\left(Y\overline{ZK}\right)\), and \(-H\left(\overline{ZK}\right)\). This will produce the expressions \[S_{i}H\left(X\overline{ZK}\right)+\left(1-S_{i}\right)\log\left(1-S_{i}\right),\quad S_{i}H\left(Y\overline{ZK}\right)+\left(1-S_{i}\right)\log\left(1-S_{i}\right),\quad\text{and}\quad S_{i}H\left(\overline{ZK}\right)+\left(1-S_{i}\right)\log\left(1-S_{i}\right),\] respectively. Therefore, for each of the four summations, the terms of the sum corresponding to \(z\in\text{Range}\left(\overline{ZK}\right)\) approach \(0\). This deals with the part \(z\in\text{Range}\left(\overline{ZK}\right)\). Now, consider all \(z\in Z_{i}^{*}\). Here, we have \(p_{i}\left(\cdot,\cdot,z\right)=0\) because of the definition of \(Z_{i}^{*}\). 
This leaves us with \[\sum_{XYZ_{i}^{*}}q_{i}\left(xyz\right)\left(\log\left(q_{i}\left(z\right)\right)+\log\left(q_{i}\left(xyz\right)\right)-\log\left(q_{i}\left(xz\right)\right)-\log\left(q_{i}\left(yz\right)\right)\right)\] \[=\sum_{XYZ_{i}^{*}}q_{i}\left(xyz\right)\log\left(\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right)-\sum_{XYZ_{i}^{*}}q_{i}\left(xyz\right)\log\left(\frac{q_{i}\left(yz\right)}{q_{i}\left(xyz\right)}\right)\] \[=\sum_{XZ_{i}^{*}}q_{i}\left(xz\right)\log\left(\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right)-\sum_{XYZ_{i}^{*}}q_{i}\left(xyz\right)\log\left(\frac{q_{i}\left(yz\right)}{q_{i}\left(xyz\right)}\right).\] We show that both of these summations tend to \(0\). For the first summation, for all \(x\in\mathcal{X}\), define \[f\left(x\right):=\sum_{Z_{i}^{*}}q_{i}\left(xz\right)\log\left(\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right).\] Note that by the concavity of \(\log\), we have \[f\left(x\right)=q_{i}\left(x\right)\sum_{Z_{i}^{*}}\frac{q_{i}\left(xz\right)}{q_{i}\left(x\right)}\log\left(\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right)\leq q_{i}(x)\log\left(\sum_{Z_{i}^{*}}\frac{q_{i}\left(xz\right)}{q_{i}\left(x\right)}\cdot\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right)=q_{i}\left(x\right)\log\left(\frac{S_{i}}{q_{i}\left(x\right)}\right).\] Here \(q_{i}\left(x\right):=\sum_{Z_{i}^{*}}q_{i}\left(xz\right)\) denotes the restricted marginal, so that \(\sum_{X}q_{i}\left(x\right)=S_{i}\). This means that \[0\leq\sum_{XZ_{i}^{*}}q_{i}\left(xz\right)\log\left(\frac{q_{i}\left(z\right)}{q_{i}\left(xz\right)}\right)=\sum_{X}f\left(x\right)\leq\sum_{X}q_{i}\left(x\right)\log\left(\frac{S_{i}}{q_{i}\left(x\right)}\right)=S_{i}\sum_{X}\frac{q_{i}\left(x\right)}{S_{i}}\log\left(\frac{S_{i}}{q_{i}\left(x\right)}\right)\] \[=S_{i}H\left(X|z\in Z_{i}^{*}\right)\leq S_{i}H\left(X\right).\] This means that the first summation tends to \(0\). The second summation also tends to \(0\) by replacing all instances of \(z\) in the above proof with \(yz\). 
Since all parts of the summations from the expanded conditional mutual information expressions tend to \(0\), we must have that \(I\left(X:Y|\overline{ZU_{i}}\right)-I\left(X:Y|\overline{ZK}\right)\) tends to \(0\) as well. We now prove Theorem 3.1. Proof.: We have the following sequence of implications, reproduced for clarity. \[\lim_{i\rightarrow\infty}H(U_{i}) =0\] \[\implies\lim_{i\rightarrow\infty}D(U_{i},K) =0\] \[\implies\lim_{i\rightarrow\infty}D(XYZU_{i},XYZK) =0\] \[\implies\lim_{i\rightarrow\infty}D(XY\overline{ZU_{i}},XY\overline{ZK}) =0\] \[\implies\lim_{i\rightarrow\infty}I(X:Y|\overline{ZU_{i}})-I(X:Y|\overline{ZK}) =0\] The first implication is a result of Lemma 3.3. Using Lemma 3.4 and replacing \(Z\) with \(XYZ\) gives the second implication. Then, applying Lemma 3.7 with modified channels \(\{C_{i}^{\prime}\}\) that act as \(\{C_{i}\}\) on \(ZU_{i}\) while leaving \(X\) and \(Y\) unchanged gives the third implication. Finally, using Lemma 3.8 gives us the final implication.
## 4 Implications and Extensions
Theorem 3.1 is a strengthening of a remark made by Christandl, Renner, and Wolf in [2]. In [2], the authors prove that the infimum in the definition of the intrinsic information is a minimum as long as the range of \(Z\) is finite. They remark that an argument analogous to that presented in their paper may prove that the infimum in the definition of the reduced intrinsic information is also a minimum, under certain conditions on the sizes of the ranges of \(X\), \(Y\), and \(Z\). This would imply a subcase of our theorem by the argument made briefly at the start of our proof. Unfortunately, it is unknown whether the arguments in [2] extend to the reduced intrinsic information measure. However, our present result is stronger than results that might be obtained through these means because we only require \(X\), \(Y\), \(Z\) to have finite entropy, whereas arguments analogous to those in [2] would require these variables to have finite ranges. 
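The chain of implications behind Theorem 3.1 can be illustrated numerically. The sketch below is our own construction (the particular sequence \(U_i\) and the random channel are illustrative assumptions): it checks that shrinking entropy forces shrinking trace distance to the constant variable \(K\) (Lemma 3.3), and that a stochastic channel never increases trace distance (Lemma 3.7).

```python
import math
import random

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(v * math.log2(v) for v in p if v > 0)

def trace_distance(p, q):
    """(1/2) * sum |p_i - q_i| (Definition 3.2)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def random_channel(n, rng):
    """A random column-stochastic n x n matrix A (each column is a distribution)."""
    cols = []
    for _ in range(n):
        w = [rng.random() for _ in range(n)]
        s = sum(w)
        cols.append([v / s for v in w])
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def apply_channel(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

rng = random.Random(0)
n = 4
K = [1.0] + [0.0] * (n - 1)  # the constant variable K from the proof
A = random_channel(n, rng)
for i in (10, 100, 1000):
    U = [1 - 1 / i] + [1 / (i * (n - 1))] * (n - 1)
    # Lemma 3.3: H(U_i) -> 0 forces D(U_i, K) -> 0.  Here D(U_i, K) = 1/i, while the
    # proof's bound H(U_i) >= log2(1/max prob) >= 1 - max prob already exceeds it.
    assert trace_distance(U, K) <= entropy(U)
    # Lemma 3.7: pushing both distributions through a channel contracts the distance.
    assert trace_distance(apply_channel(A, U), apply_channel(A, K)) \
        <= trace_distance(U, K) + 1e-12
    print(i, round(entropy(U), 4), trace_distance(U, K))
```

Both printed quantities shrink together as \(i\) grows, mirroring the first and second steps of the implication chain.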
Another application of Theorem 3.1 is demonstrated in the following statement, mentioned briefly at the end of Section 1. **Theorem 4.1**.: _If bound secrecy exists, then there exists a distribution \(P_{XYZ}\) such that_ \[S(X:Y||Z)\neq I(X:Y\downarrow\downarrow Z).\] Proof.: Let \(P_{XYZ}\) be a distribution that is bound secret, so that \(I(X:Y\downarrow Z)>0\) and \(S(X:Y||Z)=0\). However, by Theorem 3.1, we have \(I(X:Y\downarrow Z)>0\implies I(X:Y\downarrow\downarrow Z)>0\). This means that this distribution satisfies \(S(X:Y||Z)=0<I(X:Y\downarrow\downarrow Z)\), as desired. This theorem implies that at least one of Conjectures 2.9 and 2.10 is false. Since a significant amount of evidence suggesting the existence of bound secrecy has already been established, we believe that Conjecture 2.9 is false. Furthermore, the above theorem implies that the approach of showing that a certain distribution is bound secret by computing a nonzero intrinsic information and a reduced intrinsic information of \(0\) is guaranteed to fail. For such an approach to work, one must use a property that would make Theorem 3.1 false when substituted for the reduced intrinsic information. In particular, this property \(f(XYZ)\) should satisfy the following: * Given \(P_{XYZ}\), we have \(f(XYZ)\leq I(X:Y\downarrow Z)\), and equality does not always hold. * \(f(XYZ)=0\) does not imply \(I(X:Y\downarrow Z)=0\).
## 5 Binarizations and Bound Secrecy
One possible path for establishing the existence of bound secrecy has been suggested in [4; 6], which we now investigate. In [6], the authors suggest that a distribution with positive intrinsic information that vanishes upon binarization may be a candidate for bound secrecy. 
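A binarization is simply a channel with binary output. The following minimal sketch (our own illustration, using the \((r,s,t)\) parameterization that reappears in Section 6) pushes a variable with three outcomes through such a channel:

```python
def binarize(py, r, s, t):
    """Push a distribution on {1, 2, 3} through the binary-output channel
    that maps outcome k to 0 with probability (r, s, t)[k - 1]."""
    p0 = r * py[1] + s * py[2] + t * py[3]
    return {0: p0, 1: 1.0 - p0}

py = {1: 0.25, 2: 0.5, 3: 0.25}
print(binarize(py, 1.0, 0.0, 0.0))  # {0: 0.25, 1: 0.75}: "did Y equal 1?"
print(binarize(py, 0.5, 0.5, 0.5))  # {0: 0.5, 1: 0.5}: a coin flip independent of Y
```

Every binary-output channel on a ternary variable is described by exactly such a triple \((r,s,t)\in[0,1]^{3}\), which is why the constructions later in the paper can be parameterized this way.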
The authors provide an example of a distribution \(X_{0}Y_{0}Z_{0}\) such that for all binarizations of \(X_{0}\) and \(Y_{0}\), producing \(\overline{X_{0}}\) and \(\overline{Y_{0}}\) respectively, we have \(I(\overline{X_{0}}:\overline{Y_{0}}\downarrow Z_{0})=0\) (Proposition 4 of [6]). They also show that for _any_ distribution \(XYZ\), if the secret-key rate \(S(X:Y||Z)\) is positive, then for some \(N\) there exist binarizations of \(X^{N}\) and \(Y^{N}\) such that \(I(\overline{X^{N}}:\overline{Y^{N}}\downarrow Z^{N})>0\) (Proposition 5 of [6]). Therefore, the missing step for establishing bound secrecy for \(X_{0}Y_{0}Z_{0}\) is the following: **Conjecture 5.1** ([6]).: Let \(XYZ\) be a distribution. If, for all binary-output channels \(P_{\overline{X}|X}\) and \(P_{\overline{Y}|Y}\), we have \(I(\overline{X}:\overline{Y}\downarrow Z)=0\), then for all \(N\) and all binary-output channels \(P_{\overline{X^{N}}|X^{N}}\) and \(P_{\overline{Y^{N}}|Y^{N}}\), we must have \(I(\overline{X^{N}}:\overline{Y^{N}}\downarrow Z^{N})=0\). In fact, it is only necessary to prove Conjecture 5.1 for the specific distribution \(X_{0}Y_{0}Z_{0}\), which we will investigate in the next section. In this section, we reduce Conjecture 5.1 to a much simpler statement which, if proven, would establish Conjecture 5.1 and thereby prove the existence of bound secrecy. In the statement of the theorem below, the symbol \(\perp\!\!\!\perp\) is used to denote independence of random variables. 
**Theorem 5.2**.: _Conjecture 5.1 is equivalent to the following:_ \[\forall\overline{X},\overline{Y},\exists\overline{Z}\text{ such that }(\overline{X}\perp\!\!\!\perp\overline{Y})|\overline{Z}\implies\forall N,\forall\overline{X^{N}},\overline{Y^{N}},\ \exists\overline{Z^{N}}\text{ such that }(\overline{X^{N}}\perp\!\!\!\perp\overline{Y^{N}})|\overline{Z^{N}},\] _where the channels processing \(X,Y,X^{N},Y^{N}\) are assumed to be binarizations._ The statement of the theorem is notable because it makes no reference to information-theoretic quantities: it is purely a statement about probabilities. To prove Theorem 5.2, we need the following lemma linking information and probability. **Lemma 5.3**.: _Given random variables \(X\), \(Y\), \(Z\), we have \(I(X:Y|Z)=0\) if and only if \((X\perp\!\!\!\perp Y)|Z\)._ Proof.: In this proof, for ease of writing we let \(P(x)\) denote \(P(X=x)\) for any \(x\in\mathcal{X}\), and similarly for \(Y\) and \(Z\). To prove the forward direction, we have \[0=-I(X:Y|Z)=\sum_{xyz}P(z)P(x,y|z)\log\left(\frac{P(x|z)P(y|z)}{P(x,y|z)}\right)\leq\frac{1}{\ln 2}\sum_{xyz}P(z)P(x,y|z)\left(\frac{P(x|z)P(y|z)}{P(x,y|z)}-1\right)\qquad\text{since }\log_{2}x\leq\frac{x-1}{\ln 2}\] \[=\frac{1}{\ln 2}\sum_{xyz}P(z)\left(P(x|z)P(y|z)-P(x,y|z)\right)=\frac{1}{\ln 2}\sum_{z}P(z)\left(\left(\sum_{x}P(x|z)\right)\left(\sum_{y}P(y|z)\right)-\sum_{xy}P(x,y|z)\right).\] Observe that for any \(z\in\mathcal{Z}\), the sums \(\sum\limits_{x}P(x|z)\), \(\sum\limits_{y}P(y|z)\), and \(\sum\limits_{xy}P(x,y|z)\) simply sum all the values in the conditional distribution \((XY)|Z=z\). So these sums all equal \(1\), and the last line in the chain of expressions above is \(0\). Since both sides of the above chain are \(0\), the inequality must be an equality. Since \(\log_{2}x=\frac{x-1}{\ln 2}\) if and only if \(x=1\), the expression inside the logarithm must always be \(1\), which means \[P(X=x|Z=z)P(Y=y|Z=z)=P(X=x,Y=y|Z=z)\] for all \(x,y,z\). 
Thus \((X\perp\!\!\!\perp Y)|Z\). For the reverse direction, we simply note that \[I(X:Y|Z)=-\sum_{xyz}P(X=x,Y=y,Z=z)\log\left(\frac{P(X=x|Z=z)P(Y=y|Z=z)}{P(X=x,Y=y|Z=z)}\right),\] and if \((X\perp\!\!\!\perp Y)|Z\), then the expression inside the logarithm is always \(1\), so each term of the sum becomes \(0\), and \(I(X:Y|Z)=0\). We now prove Theorem 5.2. Proof.: As in the statement of the theorem, all channels that process \(X,Y,X^{N},Y^{N}\) are assumed to be binarizations. We observe that by the definition of the intrinsic information, \[\forall\overline{X},\overline{Y},\ I(\overline{X}:\overline{Y}\downarrow Z)=0\Longleftrightarrow\forall\overline{X},\overline{Y},\exists\overline{Z}\text{ such that }I(\overline{X}:\overline{Y}|\overline{Z})=0.\] Then using the lemma, we have \[\forall\overline{X},\overline{Y},\exists\overline{Z}\text{ such that }I(\overline{X}:\overline{Y}|\overline{Z})=0\Longleftrightarrow\forall\overline{X},\overline{Y},\exists\overline{Z}\text{ such that }(\overline{X}\perp\!\!\!\perp\overline{Y})|\overline{Z}.\] We can repeat the logic for \(X^{N}\), \(Y^{N}\), \(Z^{N}\). So \[\forall\overline{X},\overline{Y},\ I(\overline{X}:\overline{Y}\downarrow Z)=0\implies\forall N,\forall\overline{X^{N}},\overline{Y^{N}},\ I(\overline{X^{N}}:\overline{Y^{N}}\downarrow Z^{N})=0\] is equivalent to \[\forall\overline{X},\overline{Y},\exists\overline{Z}\text{ such that }(\overline{X}\perp\!\!\!\perp\overline{Y})|\overline{Z}\implies\forall N,\forall\overline{X^{N}},\overline{Y^{N}},\ \exists\overline{Z^{N}}\text{ such that }(\overline{X^{N}}\perp\!\!\!\perp\overline{Y^{N}})|\overline{Z^{N}},\] which is the desired result. Using Theorem 5.2, we can reduce the problem of bound secrecy to a statement simply about probability distributions and independence.
## 6 Independence-inducing Binarizations
We present some progress on proving Conjecture 5.1 for the specific distribution \(XYZ\), as introduced in [6]. 
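Lemma 5.3 can be sanity-checked numerically. The sketch below (our own illustration; the two example distributions are assumptions chosen for the demo) computes \(I(X:Y|Z)\) directly from a joint distribution and confirms that it vanishes exactly when \(X\) and \(Y\) are conditionally independent given \(Z\):

```python
import math
from collections import defaultdict

def cond_mutual_info(pxyz):
    """I(X:Y|Z) in bits for a joint distribution {(x, y, z): prob}."""
    pz, pxz, pyz = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x, y, z), p in pxyz.items():
        pz[z] += p
        pxz[(x, z)] += p
        pyz[(y, z)] += p
    return sum(p * math.log2(p * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), p in pxyz.items() if p > 0)

# Conditionally independent: given Z, X and Y are drawn independently.
indep = {(x, y, z): 0.5 * [0.3, 0.7][x] * [0.6, 0.4][y]
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(abs(cond_mutual_info(indep)) < 1e-12)  # True: I(X:Y|Z) = 0

# Conditionally dependent: X = Y once Z is fixed.
dep = {(b, b, z): 0.25 for b in (0, 1) for z in (0, 1)}
print(cond_mutual_info(dep))  # 1.0: one full bit of conditional dependence
```

The same routine, composed with a search over channels \(P_{\overline{Z}|Z}\), is what a numerical exploration of the intrinsic information would build on.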
This would be sufficient to establish bound secrecy for the distribution, as shown below.

| \(Y\backslash X\) | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 | 2 (0) | 4 (1) | 1 (2) |
| 2 | 1 (3) | 2 (0) | 4 (4) |
| 3 | 4 (5) | 1 (6) | 2 (0) |

For this distribution, the value of \(Z\) is determined by the values of \(X\) and \(Y\), and is indicated by the number in parentheses in each cell. The unnormalized probability for that \(xyz\) triplet is given by the number not in parentheses. One method for proving the statement in Theorem 5.2 for this distribution is to strengthen it by not allowing Alice to binarize, as in the following conjecture. **Conjecture 6.1**.: For the distribution \(XYZ\), for any \(N\geq 1\) we have \[\forall\overline{Y^{N}},\ \exists\overline{Z^{N}}\text{ such that }(X^{N}\perp\!\!\perp\overline{Y^{N}})|\overline{Z^{N}},\] where the channel processing \(Y^{N}\) is assumed to be a binarization. Note that this conjecture implies the statement in Theorem 5.2 for this distribution because if Alice is not allowed to binarize and Eve can still erase correlation by processing \(Z^{N}\), then there would still be no correlation even if Alice binarized her variable. To prove Conjecture 6.1, we must show that for any \(N\) and any binarization that Bob chooses, Eve is able to process her variable such that Alice's and Bob's variables are independent given Eve's information. Here, we primarily investigate the cases \(N=1\) and \(N=2\). In the case \(N=1\), it has been proven that for all binarizations \(\overline{Y}\) of \(Y\), Eve can always find a \(\overline{Z}\) such that \(X\perp\!\!\perp\overline{Y}|\overline{Z}\) (Proposition 4 of [6]). We have found an explicit construction of the map \(\overline{Z}\), based on the following value. 
**Definition 6.2**.: We define the _independence target value_ (ITV) \(\tau(x,y,z)\) for any \(x,y,z\in\mathbb{R}\) as the median of \(\frac{2x+1y+0z}{3}\), \(\frac{1x+0y+2z}{3}\), and \(\frac{0x+2y+1z}{3}\). Bob's map can be defined using the three numbers \(P_{\overline{Y}|Y}(\overline{0},1)=r\), \(P_{\overline{Y}|Y}(\overline{0},2)=s\), and \(P_{\overline{Y}|Y}(\overline{0},3)=t\). Since Bob's map is a binarization, we have that \(P_{\overline{Y}|Y}(\overline{1},1)=1-r\), \(P_{\overline{Y}|Y}(\overline{1},2)=1-s\), and \(P_{\overline{Y}|Y}(\overline{1},3)=1-t\). The probability distribution \(X\overline{Y}Z\) is as follows, using the same notation as before (each cell lists its three contributions, with the corresponding \(Z\) value in parentheses):

| \(\overline{Y}\backslash X\) | 1 | 2 | 3 |
| --- | --- | --- | --- |
| \(\overline{0}\) | \(2r\) (0), \(s\) (3), \(4t\) (5) | \(2s\) (0), \(4r\) (1), \(t\) (6) | \(2t\) (0), \(r\) (2), \(4s\) (4) |
| \(\overline{1}\) | \(2-2r\) (0), \(1-s\) (3), \(4-4t\) (5) | \(2-2s\) (0), \(4-4r\) (1), \(1-t\) (6) | \(2-2t\) (0), \(1-r\) (2), \(4-4s\) (4) |

As mentioned in [6], if Eve receives \(z\neq 0\), she knows what \(X\) is, meaning that \(X|Z=z\) is constant and \(X\perp\!\!\perp\overline{Y}|Z=z\). Therefore, we focus our attention on the case that \(Z=0\). We consider the same map \(P_{\overline{Z}|Z}\) as mentioned in the proof of Proposition 4 of [6]. In this map, the nonzero values for \(Z\), namely 1, 2, 3, 4, 5, 6, are mapped to \(\overline{0}\) with probabilities \(c\), \(e\), \(a\), \(f\), \(b\), and \(d\) respectively, and they are mapped to \(\overline{1},\ldots,\overline{6}\) with probabilities \(1-c\), \(1-e\), \(1-a\), \(1-f\), \(1-b\), \(1-d\) respectively. The value \(Z=0\) is mapped to \(\overline{0}\) with probability 1. 
Under this map, the (unnormalized) probability distribution \(P_{X\overline{Y}|\overline{Z}=\overline{0}}\) is as follows:

| \(\overline{Y}\backslash X\) | 1 | 2 | 3 |
| --- | --- | --- | --- |
| \(\overline{0}\) | \(2r+as+4bt\) | \(4cr+2s+dt\) | \(er+4fs+2t\) |
| \(\overline{1}\) | \((2+a+4b)-(2r+as+4bt)\) | \((4c+2+d)-(4cr+2s+dt)\) | \((e+4f+2)-(er+4fs+2t)\) |

In order to have \((X\perp\!\!\!\perp\overline{Y})|\overline{Z}=\overline{0}\), we must have the following: \[\frac{2r+as+4bt}{2+a+4b}=\frac{4cr+2s+dt}{4c+2+d}=\frac{er+4fs+2t}{e+4f+2}.\] If the denominator of one of these fractions is \(0\) (in which case its numerator is \(0\) as well, since the numerator is at most the denominator), that fraction can be ignored, as it imposes no additional conditions. In [6], it is proven that for any \(r\), \(s\), \(t\) there exist \(a,b,c,d,e,f\) satisfying the above equations using a topological argument, but here we demonstrate this in a constructive manner using the ITV: **Theorem 6.3**.: _For numbers \(r,s,t\in\mathbb{R}\), there exist \(a,b,c,d,e,f\in[0,1]\) such that_ \[\frac{2r+as+4bt}{2+a+4b}=\frac{4cr+2s+dt}{4c+2+d}=\frac{er+4fs+2t}{e+4f+2}=\tau(r,s,t).\] Proof.: Note that \(\tau(r,s,t)=\tau(s,t,r)=\tau(t,r,s)\). This means that if we can find satisfactory \(a,b\in[0,1]\) such that \[\frac{2r+as+4bt}{2+a+4b}=\tau(r,s,t)\] for any \(r,s,t\in\mathbb{R}\), then by symmetry we can find satisfactory \(c,d\in[0,1]\) such that \[\frac{2s+dt+4cr}{2+d+4c}=\tau(s,t,r)=\tau(r,s,t)\] for the same \(r,s,t\). Similarly, we can also find \(e,f\in[0,1]\) such that \[\frac{2t+er+4fs}{2+e+4f}=\tau(t,r,s)=\tau(r,s,t)\] for the same set of \(r\), \(s\), and \(t\). 
This means that we only need to show that for all \(r,s,t\in\mathbb{R}\), there exist \(a,b\in[0,1]\) such that \[\frac{2r+as+4bt}{2+a+4b}=\tau(r,s,t). \tag{1}\] If \(r=s=t\), then both sides of the equation above are equal to \(r\) regardless of the choice of \(a,b\). Now, assume that not all three of \(r,s,t\) are equal. Note that since the left and right sides of the equation above are computed from weighted averages of \(r\), \(s\), and \(t\), we can scale the variables by a nonzero constant or add a real number to them without changing the equation. We can subtract \(\min(r,s,t)\) from all of the variables, and since the variables are not all equal, we can divide the resulting variables by \(\max(r,s,t)-\min(r,s,t)\neq 0\). We have now transformed \((r,s,t)\) into \((r^{\prime},s^{\prime},t^{\prime})\), a permutation of \((0,x,1)\) for some \(x\in[0,1]\). Our new variables are thus one of the cyclic shifts of \((0,x,1)\) or of \((0,1,x)\) for some \(x\in[0,1]\). In the latter case, we can multiply the triple by \(-1\) and add \(1\) to get a cyclic shift of \((0,1-x,1)\). This means that we can assume without loss of generality that \((r^{\prime},s^{\prime},t^{\prime})\) is some cyclic shift of \((0,x,1)\) for some \(x\in[0,1]\). Since the ITV is invariant under cyclic shifts, we now have \(\tau(r^{\prime},s^{\prime},t^{\prime})=\tau(0,x,1)\), which is the median of \(\frac{x}{3}\), \(\frac{2}{3}\), and \(\frac{1+2x}{3}\). Note that \(\frac{x}{3}\) is the least of these three, so the median is \(\frac{\min(2,1+2x)}{3}\). If \(x<\frac{1}{2}\), then this is \(\frac{1+2x}{3}\), and if \(x\geq\frac{1}{2}\), then this is \(\frac{2}{3}\). We now explicitly write out \(a\) and \(b\) satisfying Equation (1), based on which of these cyclic shifts of \((0,x,1)\) that \((r,s,t)\) has been transformed into: * \((r^{\prime},s^{\prime},t^{\prime})=(0,x,1)\): If \(x<\frac{1}{2}\), then we can take \(a=0\) and \(b=\frac{1+2x}{4-4x}\). 
If \(x\geq\frac{1}{2}\), then we can take \(a=0\) and \(b=1\). * \((r^{\prime},s^{\prime},t^{\prime})=(1,0,x)\): If \(x<\frac{1}{2}\), then we can take \(a=0\) and \(b=1\). If \(x\geq\frac{1}{2}\), then we can take \(a=1\) and \(b=0\). * \((r^{\prime},s^{\prime},t^{\prime})=(x,1,0)\): If \(x<\frac{1}{2}\), then we can take \(a=1\) and \(b=0\). If \(x\geq\frac{1}{2}\), then we can take \(a=1\) and \(b=\frac{3}{8}(2x-1)\). This covers all of the cases, so we are done. This resolves the \(N=1\) case. We now attempt to extend the use of the ITV to \(N=2\). Bob's map may be parameterized by the values \(a_{ij}:=P_{\overline{Y^{2}}|Y^{2}}(0,ij)\) for \(i,j\in\{1,2,3\}\), where \(Y^{2}=ij\) represents its two components, \((Y_{1},Y_{2})=(i,j)\). In this case, we cannot focus on the \(Z^{2}=00\) case alone, because if \(Z^{2}=01\), for example, Eve is unsure of whether Alice has \(X^{2}=12\), \(X^{2}=22\), or \(X^{2}=32\). If neither component of \(Z^{2}\) is \(0\), Eve will be certain of what Alice has, so \(X^{2}\) and \(\overline{Y^{2}}\) are already independent in these cases and no processing is necessary. This means that we must repeat the above procedure for \(\overline{Z^{2}}=\overline{00},\overline{01},\overline{02},\ldots,\overline{06},\overline{10},\overline{20},\ldots,\overline{60}\).
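As a sanity check on the \(N=1\) construction above, the case analysis in the proof of Theorem 6.3 can be verified numerically. The sketch below assumes that the ITV \(\tau(r,s,t)\) is the median of the three weighted averages \(\frac{2r+s}{3}\), \(\frac{2s+t}{3}\), \(\frac{2t+r}{3}\), which is consistent with the computation of \(\tau(0,x,1)\) in the proof; `choose_ab` is a hypothetical helper encoding the case analysis.

```python
# Numerical check of the explicit (a, b) choices in the proof of Theorem 6.3.
# Assumption: tau(r, s, t) is the median of (2r+s)/3, (2s+t)/3, (2t+r)/3,
# matching the computation of tau(0, x, 1) in the proof.

def tau(r, s, t):
    return sorted([(2*r + s) / 3, (2*s + t) / 3, (2*t + r) / 3])[1]

def lhs(r, s, t, a, b):
    # Left-hand side of equation (1).
    return (2*r + a*s + 4*b*t) / (2 + a + 4*b)

def choose_ab(r, s, t, x):
    # The case analysis from the proof, for (r, s, t) a cyclic shift of (0, x, 1).
    if (r, s, t) == (0, x, 1):
        return (0, (1 + 2*x) / (4 - 4*x)) if x < 0.5 else (0, 1)
    if (r, s, t) == (1, 0, x):
        return (0, 1) if x < 0.5 else (1, 0)
    if (r, s, t) == (x, 1, 0):
        return (1, 0) if x < 0.5 else (1, 3/8 * (2*x - 1))

for k in range(100):
    x = k / 100.0   # x in [0, 1); x = 1 would make the first case's b blow up
    for triple in [(0, x, 1), (1, 0, x), (x, 1, 0)]:
        a, b = choose_ab(*triple, x)
        assert 0 <= a <= 1 and 0 <= b <= 1
        assert abs(lhs(*triple, a, b) - tau(*triple)) < 1e-9
print("Theorem 6.3 case analysis verified")
```

Each chosen pair \((a,b)\) indeed lies in \([0,1]^2\) and makes the weighted average hit the target value exactly.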
Given a \(\overline{Z^{2}}\) value of \(\overline{ij}\), the fractions will be of the form \[\frac{P\left(\overline{X^{2}}=0,Y^{2}=11,\overline{Z^{2}}=\overline{ij}\right)}{P\left(Y^{2}=11,\overline{Z^{2}}=\overline{ij}\right)}=\cdots=\frac{P\left(\overline{X^{2}}=0,Y^{2}=33,\overline{Z^{2}}=\overline{ij}\right)}{P\left(Y^{2}=33,\overline{Z^{2}}=\overline{ij}\right)}.\] One natural choice of target values for the fractions corresponding to these \(z^{2}\) values, which extends to \(N\geq 3\), is the following: **Conjecture 6.4**.: Define \(a_{ij}:=P_{\overline{Y^{2}}|Y^{2}}(0,ij)\) for \(i,j\in\{1,2,3\}\), and let the target values \(\tau_{2}:\{00,01,02,\ldots,06,10,20,\ldots,60\}\rightarrow[0,1]\) be defined as follows: * \(\tau_{2}(0i)=\tau(a_{1j},a_{2j},a_{3j})\), where \(j=\lceil\frac{i}{2}\rceil\) and \(1\leq i\leq 6\), * \(\tau_{2}(i0)=\tau(a_{j1},a_{j2},a_{j3})\), where \(j=\lceil\frac{i}{2}\rceil\) and \(1\leq i\leq 6\), * \(\tau_{2}(00)=\tau(\tau(a_{11},a_{12},a_{13}),\tau(a_{21},a_{22},a_{23}),\tau(a_{31},a_{32},a_{33}))\). Then, there exists a channel \(P_{\overline{Z^{2}}|Z^{2}}\) such that \(P\left(\overline{Y^{2}}=0|X^{2},\overline{Z^{2}}=\overline{z}\right)=\tau_{2}(z)\) for all \(z\) in the domain of \(\tau_{2}\). The choice of \(j=\lceil\frac{i}{2}\rceil\) is motivated by the fact that in this distribution, if Eve receives \(Z=i\), then she knows that Bob has \(Y=\lceil\frac{i}{2}\rceil=j\).
We observe that these target values give the correct values for a particular class of Bob's strategies, which we term product strategies: **Definition 6.5**.: A _product strategy_ for Bob (who has \(Y^{N}=Y_{1}Y_{2}\ldots Y_{N}\)) is a binarization of \(Y^{N}\) such that the \(N\)-dimensional matrix of probabilities \(P\left(\overline{Y^{N}}=0|Y^{N}=y\right)\) for \(y\in\mathcal{Y}^{N}\) is the tensor product of the \(N\) vectors \(P\left(\overline{Y^{N}}=0|Y_{1}=y_{1}\right)\),..., \(P\left(\overline{Y^{N}}=0|Y_{N}=y_{N}\right)\), where each \(y_{i}\) takes on every value in \(\mathcal{Y}\). The reason that this choice of target values works for product strategies is the following property of the ITV: **Theorem 6.6**.: _For real numbers \(b_{0},b_{1},b_{2},c_{0},c_{1},c_{2}\in[0,1]\),_ \[\tau(\tau(b_{0}c_{0},b_{1}c_{0},b_{2}c_{0}),\tau(b_{0}c_{1},b_{1}c_{1},b_{2}c_{1}),\tau(b_{0}c_{2},b_{1}c_{2},b_{2}c_{2}))=\tau(b_{0},b_{1},b_{2})\tau(c_{0},c_{1},c_{2}).\] Proof.: Note that \(\tau(b_{0}c_{0},b_{1}c_{0},b_{2}c_{0})=c_{0}\tau(b_{0},b_{1},b_{2})\), because we can factor the \(c_{0}\) out of the set of weighted averages considered when calculating the ITV. We can use this repeatedly to get that the left-hand side is equal to \[\tau(c_{0}\tau(b_{0},b_{1},b_{2}),c_{1}\tau(b_{0},b_{1},b_{2}),c_{2}\tau(b_{0},b_{1},b_{2}))=\tau(b_{0},b_{1},b_{2})\tau(c_{0},c_{1},c_{2}).\] Note that if the vectors of values \(P\left(\overline{Y^{2}}=0|Y_{1}=y_{1}\right)\) and \(P\left(\overline{Y^{2}}=0|Y_{2}=y_{2}\right)\) are \((b_{0},b_{1},b_{2})\) and \((c_{0},c_{1},c_{2})\), respectively, then the product strategy they correspond to is the map \(a_{ij}=b_{i}c_{j}\) for all \(i,j\in\{0,1,2\}\). Using Theorem 6.6, the target value for \(Z^{2}=00\) is equal to \(\tau(b_{0},b_{1},b_{2})\tau(c_{0},c_{1},c_{2})\). Therefore, we can use our results from the \(N=1\) case for each of the components of \(Y^{2}\), and then multiply them together to construct our map.
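Theorem 6.6 can be confirmed numerically. The sketch below assumes that \(\tau(r,s,t)\) is the median of the weighted averages \(\frac{2r+s}{3}\), \(\frac{2s+t}{3}\), \(\frac{2t+r}{3}\), consistent with the computation of \(\tau(0,x,1)\) in the proof of Theorem 6.3:

```python
# Numerical check of Theorem 6.6: for the ITV tau (assumed here to be the
# median of the weighted averages (2r+s)/3, (2s+t)/3, (2t+r)/3), nesting tau
# over a rank-one (product) table factors as tau(b) * tau(c).
import random

def tau(r, s, t):
    return sorted([(2*r + s) / 3, (2*s + t) / 3, (2*t + r) / 3])[1]

random.seed(0)
for _ in range(1000):
    b = [random.random() for _ in range(3)]
    c = [random.random() for _ in range(3)]
    nested = tau(tau(b[0]*c[0], b[1]*c[0], b[2]*c[0]),
                 tau(b[0]*c[1], b[1]*c[1], b[2]*c[1]),
                 tau(b[0]*c[2], b[1]*c[2], b[2]*c[2]))
    assert abs(nested - tau(*b) * tau(*c)) < 1e-9
print("Theorem 6.6 verified on 1000 random product tables")
```

The check relies on the values being nonnegative (here they lie in \([0,1]\)), since the median only commutes with scaling by nonnegative factors.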
However, when Bob does not use a product strategy, there exist strategies for Bob for which Eve cannot set each of the fractions to its desired target value: **Theorem 6.7**.: _For \(N=2\), if Bob's transition probabilities are \((a_{ij})=\left(\begin{smallmatrix}1&1&0\\ 0&0&0\\ 0&0&1\end{smallmatrix}\right)\) with an ITV of \(\tau_{2}\) defined in Conjecture 6.4, then there does not exist a channel \(P_{\overline{Z^{2}}|Z^{2}}\) such that \(P\left(\overline{Y^{2}}=0|X^{2},\overline{Z^{2}}=\overline{z}\right)=\tau_{2}(z)\) for all \(z\) in the domain of \(\tau_{2}\)._ Proof.: We create a function in _Mathematica_ which, given Bob's transition probabilities and the ITV target values, either outputs a channel \(P_{\overline{Z^{2}}|Z^{2}}\) that Eve can use or states that no possible channel exists. This is done by taking the equations corresponding to independence between \(\overline{X^{2}}\) and \(Y^{2}\) given \(\overline{Z^{2}}\) and manipulating them, reducing the problem to linear equations in the probabilities that comprise \(P_{\overline{Z^{2}}|Z^{2}}\), and solving these equations with the LinearProgramming command. Through this function, for this set of Bob's transition probabilities and this ITV, it was found that no satisfactory \(P_{\overline{Z^{2}}|Z^{2}}\) exists, as desired. In particular, by testing distributions of random numbers, we have found that such target values are achievable for Eve approximately \(80\%\) of the time. Due to this high percentage and generalizability, we hope to use similarly constructed target values (e.g. \(\tau_{3}(00i)=\tau(\tau(a_{11i},a_{12i},a_{13i}),\tau(a_{21i},a_{22i},a_{23i}),\tau(a_{31i},a_{32i},a_{33i}))\) and so on) for \(N=2\) and beyond. We now address the question of what the shape of Eve's channel should be in the case of \(N=2\).
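The feasibility computation in the proof of Theorem 6.7 was carried out in _Mathematica_; an analogous check can be set up with `scipy.optimize.linprog`, using a zero objective so that only feasibility matters. The matrix \(A\) and vector \(b\) below are illustrative stand-ins, not the actual constraints derived from the independence equations:

```python
# Sketch of the feasibility check behind Theorem 6.7: after the independence
# conditions are manipulated into linear equations A @ p = b in the channel
# probabilities p, existence of a valid channel is a linear feasibility
# problem.  A and b below are a toy stand-in, NOT the actual matrices derived
# from Bob's transition probabilities.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 0.0],        # hypothetical linear constraints
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 1.5])

res = linprog(c=np.zeros(3),          # feasibility only: zero objective
              A_eq=A, b_eq=b,
              bounds=[(0.0, 1.0)] * 3)  # transition probabilities lie in [0, 1]
if res.success:
    print("feasible channel found:", res.x)
else:
    print("no satisfactory channel exists")
```

With a zero objective, any returned point is simply a witness of feasibility, mirroring the use of the LinearProgramming command in the proof.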
Recall that in the case of \(N=1\), for any binarization that Bob chooses, it is possible for Eve to make \(X\perp\!\!\!\perp\overline{Y}|\overline{Z}\) using a channel in which \(Z=0\) is sent to \(\overline{Z}=\overline{0}\) with probability \(1\), and no nonzero \(Z=i\) is sent to \(\overline{Z}=\overline{j}\) with positive probability for any \(j\neq 0,i\). Since \(X\perp\!\!\!\perp\overline{Y}|Z=i\) for \(i\in\{1,2,\ldots,6\}\), we can freely change the values of the transition probabilities from \(Z=i\) to \(\overline{Z}=\overline{0}\) in order to make \(X\perp\!\!\!\perp\overline{Y}|\overline{Z}=\overline{0}\). We consider a natural extension of this idea to the case \(N=2\), namely channels \(P_{\overline{Z^{2}}|Z^{2}}\) such that the only nonzero transition values are the following: * \(P_{\overline{Z^{2}}|Z^{2}}(\overline{ij},ij)\) for all \(i\) and \(j\), * \(P_{\overline{Z^{2}}|Z^{2}}(\overline{i0},ij)\) for all \(i\) and \(j\), * \(P_{\overline{Z^{2}}|Z^{2}}(\overline{0j},ij)\) for all \(i\) and \(j\), * \(P_{\overline{Z^{2}}|Z^{2}}(\overline{00},00)=1\). In other words, if one writes out the \(Z^{2}\) values in a 2D grid, with corners \(00\), \(06\), \(60\), and \(66\), the only transitions allowed are within a column or row, or from an element to itself. We prove that this cannot work. **Theorem 6.8**.: _There exists a binarization \(P_{\overline{Y^{2}}|Y^{2}}\) such that no channel satisfying the properties above results in \(X^{2}\perp\!\!\!\perp\overline{Y^{2}}|\overline{Z^{2}}=\overline{00}\)._ Proof.: Consider the binarization \(P_{\overline{Y^{2}}|Y^{2}}\) defined by \(P_{\overline{Y^{2}}|Y^{2}}(0,i)=1\) for \(i=11,33\) and \(P_{\overline{Y^{2}}|Y^{2}}(0,i)=0\) otherwise. Observe that for the map above, the only transitions to \(\overline{Z^{2}}=\overline{00}\) are from \(n0\) or \(0n\) for \(n=1,2,\ldots,6\), or from \(00\).
We know that the transition probability from \(00\) to \(\overline{00}\) is \(1\). Let the transition probabilities \(P_{\overline{Z^{2}}|Z^{2}}(\overline{00},n0)\) for \(n=1,2,\ldots,6\) be \(a_{1},\ldots,f_{1}\) respectively, and similarly let the transition probabilities \(P_{\overline{Z^{2}}|Z^{2}}(\overline{00},0n)\) be \(a_{2},\ldots,f_{2}\) respectively. Then the equations \[\frac{P\left(\overline{X^{2}}=\overline{0},Y^{2}=11,\overline{Z^{2}}=\overline{00}\right)}{P\left(Y^{2}=11,\overline{Z^{2}}=\overline{00}\right)}=\cdots=\frac{P\left(\overline{X^{2}}=\overline{0},Y^{2}=33,\overline{Z^{2}}=\overline{00}\right)}{P\left(Y^{2}=33,\overline{Z^{2}}=\overline{00}\right)}\] become \[\frac{2}{2+a_{1}+a_{2}+4b_{1}+4b_{2}}=\frac{4c_{2}}{2+a_{1}+4b_{1}+4c_{2}+d_{2}}=\frac{4b_{1}+e_{2}}{2+a_{1}+4b_{1}+e_{2}+4f_{2}}=\frac{4c_{1}}{2+a_{2}+4b_{2}+4c_{1}+d_{1}}\] \[=0\] \[=\frac{d_{1}}{2+4c_{1}+d_{1}+e_{2}+4f_{2}}=\frac{4b_{2}+e_{1}}{2+a_{2}+4b_{2}+e_{1}+4f_{1}}=\frac{d_{2}}{2+4c_{2}+d_{2}+e_{1}+4f_{1}}=\frac{2}{2+e_{1}+e_{2}+4f_{1}+4f_{2}}\] The first fraction can never be equal to \(0\) (the fifth expression in the equality) for any values of \(a_{1},\ldots,f_{1}\) and \(a_{2},\ldots,f_{2}\), so we are done. By providing an explicit construction for the 1D case using the ITV, the results in this section suggest a possible approach for generalizing the existence proof first given in [6], which promises to generalize to \(X^{N}Y^{N}Z^{N}\). However, we have also illustrated some difficulties in performing a straightforward generalization to higher dimensions, in particular in the case where one of the parties, say Bob, does not use a product strategy. ## 7 A family of candidate distributions We now examine another distribution, given in [17], which is believed to be bound secret. The unnormalized probability table is shown below.
\[\begin{array}{|c||c|c|c|c|}\hline X&0&1&2&3\\ Y&&&&\\ \hline\hline 0&1/8&1/8&a&a\\ \hline 1&1/8&1/8&a&a\\ \hline 2&a&a&1/4&0\\ \hline 3&a&a&0&1/4\\ \hline\end{array}\] \[Z\equiv X+Y\pmod{2}\text{ if }X,Y\in\{0,1\},\] \[Z\equiv X\pmod{2}\text{ if }X,Y\in\{2,3\},\] \[Z=(X,Y)\text{ otherwise.}\] It has already been shown that \(I(X:Y\downarrow Z)>0\) for any \(a>0\) [17], so to prove that this distribution is bound secret, we only need to show that \(S(X:Y||Z)=0\) for some fixed \(a\). As with the previous distribution, we conjecture the following: **Conjecture 7.1**.: There exists a value \(a\) such that for the distribution \(XYZ\) above, for any \(N\geq 1\) we have \[\forall\overline{X^{N}},\overline{Y^{N}},\ \exists\overline{Z^{N}}\text{ such that }(\overline{X^{N}}\perp\overline{Y^{N}})|\overline{Z^{N}},\] where the channels processing \(X^{N}\) and \(Y^{N}\) are assumed to be binarizations. We have not been able to establish the above statement for the case \(N=1\), but we have made some progress in ruling out possible simplifications. For example, in the case of the previous distribution, it was possible to prove the above statement for \(N=1\) while only allowing binarizations of \(Y\), which is a strictly harder task than if we had allowed binarizations of both \(X\) and \(Y\). We show that the analogous way of strengthening the above claim in the case of the current distribution for \(N=1\) cannot work. In what follows, we restrict the channels \(P_{\overline{Z}|Z}\) under consideration as follows. Observe that \(X\perp\!\!\!\perp Y|Z=z\) (and therefore \(\overline{X}\perp\!\!\!\perp\overline{Y}|Z=z\) for all \(\overline{X}\), \(\overline{Y}\)) for all \(z\not\in\{0,1\}\), so there is no reason to have a transition to \(\overline{Z}=\overline{z}\) except from \(Z=z\) itself. We allow all other transitions except those from \(Z=0\) or \(Z=1\) to \(\overline{Z}=\overline{z}\) with \(\overline{z}\neq\overline{0},\overline{1}\).
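The structure of this distribution can be checked directly. The sketch below builds the (unnormalized) joint distribution for a sample value \(a=\frac{1}{16}\) and confirms that \(X\) and \(Y\) fail to be conditionally independent only given \(Z\in\{0,1\}\); `build_joint` and `independent_given` are our own helper names:

```python
# Construct the joint distribution P_XYZ from the table above (parameter a)
# and check where X and Y are conditionally independent given Z.  As claimed,
# the dependence is confined to Z in {0, 1}.
from fractions import Fraction
from itertools import product

def build_joint(a):
    pxy = {(0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8),
           (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 8),
           (2, 2): Fraction(1, 4), (3, 3): Fraction(1, 4),
           (2, 3): Fraction(0), (3, 2): Fraction(0)}
    joint = {}
    for x, y in product(range(4), repeat=2):
        p = pxy.get((x, y), a)          # all off-block cells have mass a
        if x in (0, 1) and y in (0, 1):
            z = (x + y) % 2
        elif x in (2, 3) and y in (2, 3):
            z = x % 2
        else:
            z = (x, y)                  # Z reveals both values
        joint[(x, y, z)] = joint.get((x, y, z), 0) + p
    return joint

def independent_given(joint, z):
    # Unnormalized test: P(x,y|z) * P(z) factorizes iff cell * total = px * py.
    cells = {(x, y): p for (x, y, zz), p in joint.items() if zz == z}
    tot = sum(cells.values())
    px = {x: sum(p for (xx, y), p in cells.items() if xx == x) for x in range(4)}
    py = {y: sum(p for (x, yy), p in cells.items() if yy == y) for y in range(4)}
    return all(cells.get((x, y), 0) * tot == px[x] * py[y]
               for x, y in product(range(4), repeat=2))

joint = build_joint(Fraction(1, 16))
for z in {z for (_, _, z) in joint}:
    print(z, independent_given(joint, z))   # False only for z = 0 and z = 1
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point tolerance in the independence test.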
These transitions would be counterproductive for Eve, because it is already the case that \(\overline{X}\perp\!\!\!\perp\overline{Y}|Z=z\), and after such a transition it might be the case that \(\overline{X}\) and \(\overline{Y}\) are dependent given \(\overline{Z}=\overline{z}\). With these restrictions, we parameterize the possible \(\overline{Z}\) channels as follows. Let \(P_{\overline{Z}|Z}(\overline{1},0)=\alpha\) and \(P_{\overline{Z}|Z}(\overline{0},1)=\beta\), so that \(P_{\overline{Z}|Z}(\overline{1},1)=1-\beta\) and \(P_{\overline{Z}|Z}(\overline{0},0)=1-\alpha\). Let the transition probabilities from \(Z=(0,2)\), \((1,2)\), \((0,3)\), \((1,3)\), \((2,0)\), \((3,0)\), \((2,1)\), and \((3,1)\) to \(\overline{Z}=\overline{0}\) be \(a_{0}\), \(b_{0}\), \(c_{0}\), \(d_{0}\), \(e_{0}\), \(f_{0}\), \(g_{0}\), and \(h_{0}\) respectively. Similarly, define \(a_{1},\ldots,h_{1}\) to be the transition probabilities from \(Z\)-values not equal to \(0\) or \(1\) to \(\overline{Z}=\overline{1}\). This is illustrated in Tables 1 and 2, where the number in each cell is the transition probability to \(\overline{Z}=\overline{0}\) (Table 1) or \(\overline{1}\) (Table 2) from the \(Z\) value corresponding to the \(XY\) value for that cell. **Theorem 7.2**.: _For all possible values of \(a>0\) in the distribution given in [17], there exists a binarization \(P_{\overline{Y}|Y}\) such that for all \(\overline{Z}\) in the form given by Tables 1 and 2, \(X\) and \(\overline{Y}\) are not conditionally independent given \(\overline{Z}\)._ Proof.: Fix \(a>0\).
In order for \(X\perp\!\!\!\perp\overline{Y}|\overline{Z}\), we need \[\frac{P\left(X=0,\overline{Y}=0,\overline{Z}=0\right)}{P\left(X=0,\overline{Z}=0\right)}=\cdots=\frac{P\left(X=3,\overline{Y}=0,\overline{Z}=0\right)}{P\left(X=3,\overline{Z}=0\right)}\] and \[\frac{P\left(X=0,\overline{Y}=0,\overline{Z}=1\right)}{P\left(X=0,\overline{Z}=1\right)}=\cdots=\frac{P\left(X=3,\overline{Y}=0,\overline{Z}=1\right)}{P\left(X=3,\overline{Z}=1\right)}.\] We claim that there exists a binarization \(P_{\overline{Y}|Y}\) such that for all \(P_{\overline{Z}|Z}\) in the form given by Tables 1 and 2, the two equations \(\frac{P\left(X=2,\overline{Y}=0,\overline{Z}=0\right)}{P\left(X=2,\overline{Z}=0\right)}=\frac{P\left(X=3,\overline{Y}=0,\overline{Z}=0\right)}{P\left(X=3,\overline{Z}=0\right)}\) and \(\frac{P\left(X=2,\overline{Y}=0,\overline{Z}=1\right)}{P\left(X=2,\overline{Z}=1\right)}=\frac{P\left(X=3,\overline{Y}=0,\overline{Z}=1\right)}{P\left(X=3,\overline{Z}=1\right)}\) cannot simultaneously be satisfied. Let this binarization be defined by \(P_{\overline{Y}|Y}(\overline{0},0)=w\), \(P_{\overline{Y}|Y}(\overline{0},1)=x\), \(P_{\overline{Y}|Y}(\overline{0},2)=y\), and \(P_{\overline{Y}|Y}(\overline{0},3)=z\) (we will specify \(w,x,y,z\) later). Expanding, the two equations are \[\frac{ae_{0}w+ag_{0}x+2y(1-\alpha)}{ae_{0}+ag_{0}+2(1-\alpha)}=\frac{af_{0}w+ah_{0}x+2z\beta}{af_{0}+ah_{0}+2\beta}\] \[\frac{ae_{1}w+ag_{1}x+2y\alpha}{ae_{1}+ag_{1}+2\alpha}=\frac{af_{1}w+ah_{1}x+2z(1-\beta)}{af_{1}+ah_{1}+2(1-\beta)}\] Observe that, since both of these equations are built from weighted averages of \(w\), \(x\), \(y\), and \(z\), the equations do not change if an affine transformation is applied to \(w\), \(x\), \(y\), and \(z\) simultaneously. So, in the case that \(w\neq x\) (which will hold for the particular binarization that makes \(X\) and \(\overline{Y}\) dependent regardless of \(P_{\overline{Z}|Z}\)), we can let \(w=0\) and \(x=1\), WLOG.
After a bit of simplification, the equations become \[\frac{g_{0}+2\left(\frac{y}{a}\right)(1-\alpha)}{e_{0}+g_{0}+\frac{2(1-\alpha)}{a}}=\frac{h_{0}+2\left(\frac{z}{a}\right)\beta}{f_{0}+h_{0}+\frac{2\beta}{a}}\] \[\frac{g_{1}+2\left(\frac{y}{a}\right)\alpha}{e_{1}+g_{1}+\frac{2\alpha}{a}}=\frac{h_{1}+2\left(\frac{z}{a}\right)(1-\beta)}{f_{1}+h_{1}+\frac{2(1-\beta)}{a}}\] We claim that the values \(y=2\) and \(z=3a+3\) work (i.e. there are no possible values \(0\leq e_{0},\ldots,h_{1},\alpha,\beta\leq 1\) that satisfy the equations). To prove this, we can again view each fraction as a weighted average. For example, the first fraction in the first equation is a weighted average of the values \(0\), \(1\), and \(y\) with weights \(e_{0}\), \(g_{0}\), and \(\frac{2(1-\alpha)}{a}\) respectively. Since \(y,z>1\), the minimum possible value of this fraction is achieved when \(e_{0}=g_{0}=1\) and the maximum is achieved when \(e_{0}=g_{0}=0\). Similarly, for the second fraction in the first equation, the minimum is achieved when \(f_{0}=h_{0}=1\) and the maximum is achieved when \(f_{0}=h_{0}=0\). Since the maximum of the second fraction is \(z\), which is always larger than \(y\) for the proposed values \(y=2\) and \(z=3a+3\), the first equation is solvable if and only if the minimum of the second fraction is less than or equal to the maximum of the first fraction, that is, \[y\geq\frac{1+2\left(\frac{z}{a}\right)\beta}{2+\frac{2\beta}{a}}.\] Solving for \(\beta\), we have \[\beta\leq\frac{a(2y-1)}{2z-2y}.\] Applying the same analysis to the second equation gives that \[1-\beta\leq\frac{a(2y-1)}{2z-2y},\] since the second equation is exactly the same as the first, except with different parameters \(e_{1},\ldots,h_{1}\) and with \(1-\alpha\) substituted for \(\alpha\), and \(\beta\) substituted for \(1-\beta\).
We now claim that \[\frac{a(2y-1)}{2z-2y}<\frac{1}{2}\] for our proposed values \(y=2\) and \(z=3a+3\), and therefore satisfying both of the above inequalities is impossible. We have that \(\frac{a(2y-1)}{2z-2y}=\frac{3a}{2(3a+1)}<\frac{3a}{2(3a)}=\frac{1}{2}\), as desired. Theorem 7.2 suggests that establishing Conjecture 7.1 would be significantly more involved than proving Conjecture 6.1, because we would need to consider \(8\) different variables in creating the channel \(P_{\overline{Z}|Z}\). We conjecture one possible minor simplification regarding \(P_{\overline{Z}|Z}\). **Definition 7.3** ([11]).: Consider a binary random variable \(B\). Call a channel \(P_{\overline{B}|B}\) a _Z-shaped channel_ if at least one of \(P_{\overline{B}|B}(\overline{0},0)\), \(P_{\overline{B}|B}(\overline{1},0)\), \(P_{\overline{B}|B}(\overline{0},1)\), and \(P_{\overline{B}|B}(\overline{1},1)\) is zero. We observe the following: **Theorem 7.4**.: _Let \(B\) be a binary random variable. Any channel \(P_{\overline{B}|B}\) is equivalent to using a \(Z\)-shaped channel with some probability, and performing a fair coin flip (with the outcome of the coin determining the outputted bit) otherwise._ Proof.: Let \(P_{\overline{B}|B}(\overline{0},0)=a\), \(P_{\overline{B}|B}(\overline{1},0)=c\), \(P_{\overline{B}|B}(\overline{0},1)=b\), and \(P_{\overline{B}|B}(\overline{1},1)=d\). Suppose that \(\min(a,b,c,d)=c\) (the other cases are similar). Then it can be verified that \(P_{\overline{B}|B}\) is equivalent to performing a fair coin flip with probability \(2c\), and using the channel \(P_{\overline{B}^{\prime}|B}\) defined by \(P_{\overline{B}^{\prime}|B}(\overline{0},0)=1\), \(P_{\overline{B}^{\prime}|B}(\overline{0},1)=\frac{b-c}{a-c}\), and \(P_{\overline{B}^{\prime}|B}(\overline{1},1)=\frac{d-c}{a-c}\) otherwise.
If \(a=b=1-c=1-d\neq\frac{1}{2}\), then \(P_{\overline{B}|B}\) is a biased coin, which is a weighted average of a fair coin and a Z-shaped channel always outputting \(\overline{0}\), if \(a>\frac{1}{2}\), or a Z-shaped channel always outputting \(\overline{1}\), if \(a<\frac{1}{2}\). Finally, if \(a=b=c=d=\frac{1}{2}\), then \(P_{\overline{B}|B}\) is already a fair coin, so we are done. In order to make \(\overline{X}\perp\overline{Y}|\overline{Z}\), it seems counterproductive for Eve to throw away her information (by using a coin flip) with some probability. For this reason, we conjecture that we can restrict the space of channels \(P_{\overline{Z}|Z}\) to those such that the transitions between \(0\) and \(1\) and \(\overline{0}\) and \(\overline{1}\) form a Z-shaped channel. **Conjecture 7.5**.: There exists an \(a>0\) in the above distribution such that for all binarizations \(P_{\overline{X}|X}\) and \(P_{\overline{Y}|Y}\), there exists \(P_{\overline{Z}|Z}\) such that at least one of \(P_{\overline{Z}|Z}(\overline{0},0)\), \(P_{\overline{Z}|Z}(\overline{1},0)\), \(P_{\overline{Z}|Z}(\overline{0},1)\), and \(P_{\overline{Z}|Z}(\overline{1},1)\) is \(0\) and \[(\overline{X}\perp\overline{Y})|\overline{Z}.\] Another natural approach for the \(N=1\) case of this distribution is to try to find a suitable ITV, similar to how the \(N=1\) case was resolved for the distribution introduced in Section 6. In particular, one would need to find a function \(\upsilon:\mathbb{R}^{4}\to\mathbb{R}\) which maps the four values corresponding to Bob's transition map (\(P_{\overline{Y}|Y}(\overline{0},r-1)\) for \(r\in\{1,2,3,4\}\)) to the target value for each of the four fractions \[\frac{P\left(X=0,\overline{Y}=0,\overline{Z}=\overline{0}\right)}{P\left(X=0,\overline{Z}=\overline{0}\right)}=\cdots=\frac{P\left(X=3,\overline{Y}=0,\overline{Z}=\overline{0}\right)}{P\left(X=3,\overline{Z}=\overline{0}\right)},\] which correspond to \(X\perp\overline{Y}|\overline{Z}=\overline{0}\).
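Returning briefly to Theorem 7.4, the decomposition is easy to confirm numerically. The sketch below verifies it generically, by subtracting the smallest entry of the channel matrix rather than following the proof's case analysis; `decompose` is our own helper name:

```python
# Numerical illustration of Theorem 7.4: any binary channel M (rows = inputs
# 0, 1; columns = outputs 0, 1) is a mixture of a fair coin (weight 2m, where
# m is the smallest entry of M) and a Z-shaped channel Q.
import random

def decompose(a, b):
    # Channel [[a, 1-a], [b, 1-b]]; returns the coin half-weight m and Q.
    M = [[a, 1 - a], [b, 1 - b]]
    m = min(min(row) for row in M)
    Q = [[(p - m) / (1 - 2 * m) for p in row] for row in M]
    return m, Q

random.seed(1)
for _ in range(1000):
    a, b = random.random(), random.random()
    m, Q = decompose(a, b)
    M = [[a, 1 - a], [b, 1 - b]]
    # Q is a valid channel with a zero entry (i.e. it is Z-shaped) ...
    assert all(-1e-9 <= p <= 1 + 1e-9 for row in Q for p in row)
    assert all(abs(sum(row) - 1) < 1e-9 for row in Q)
    assert min(min(row) for row in Q) < 1e-9
    # ... and mixing it with a fair coin reconstructs M.
    for i in range(2):
        for j in range(2):
            assert abs(2 * m * 0.5 + (1 - 2 * m) * Q[i][j] - M[i][j]) < 1e-9
print("Z-shaped decomposition verified on 1000 random channels")
```

The degenerate case where every entry equals \(\frac{1}{2}\) (a fair coin, where the construction divides by zero) essentially never occurs for random draws and is handled separately in the proof.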
Furthermore, in order for this approach to generalize for \(N\geq 2\), an ITV function \(\upsilon_{2}:\mathbb{R}^{16}\to\mathbb{R}\) must be chosen similarly to Conjecture 6.4. In particular, given the transition values \(b_{rs}:=P_{\overline{Y}^{2}|Y^{2}}(0,(r-1)(s-1))\) for \(r,s\in\{1,2,3,4\}\) (where \((r-1)(s-1)\) indicates the concatenation of \(Y_{1}=r-1\) and \(Y_{2}=s-1\)), one possible candidate for \(\upsilon_{2}\) is as follows: \[\upsilon_{2}(b_{11},b_{12},b_{13},\ldots,b_{44}):=\upsilon(\upsilon(b_{11},b_{12},b_{13},b_{14}),\upsilon(b_{21},b_{22},b_{23},b_{24}),\ldots,\upsilon(b_{41},b_{42},b_{43},b_{44})).\] One drawback of such an ITV for the \(N=2\) case is that the two components of \(Y^{2}:=Y_{1}Y_{2}\) are treated differently: if the \(b_{ij}\) are placed in a \(4\) by \(4\) table, the \(\upsilon\) is taken over the rows first rather than the columns first, giving priority to \(Y_{1}\). However, for a special class of ITV \(\upsilon\), this issue is not present: **Definition 7.6**.: Call an ITV \(\upsilon:\mathbb{R}^{4}\to\mathbb{R}\) _row-column equivalent_ if the following statement is true for all \(b_{ij}\in\mathbb{R}\) (\(i,j\in\{1,2,3,4\}\)): \[\upsilon(\upsilon(b_{11},b_{12},b_{13},b_{14}),\upsilon(b_{21},b_{22},b_{23},b_{24}),\ldots,\upsilon(b_{41},b_{42},b_{43},b_{44}))=\upsilon(\upsilon(b_{11},b_{21},b_{31},b_{41}),\upsilon(b_{12},b_{22},b_{32},b_{42}),\ldots,\upsilon(b_{14},b_{24},b_{34},b_{44})).\] **Theorem 7.7**.: _All ITV of the form \(\upsilon(r,s,t,u):=w_{1}r+w_{2}s+w_{3}t+w_{4}u\) with \(w_{1},w_{2},w_{3},w_{4}\in\mathbb{R}\) are row-column equivalent._ Proof.: Note that \[\upsilon(\upsilon(b_{11},b_{12},b_{13},b_{14}),\upsilon(b_{21},b_{22},b_{23},b_{24}),\ldots,\upsilon(b_{41},b_{42},b_{43},b_{44}))=\upsilon\left(\sum_{i=1}^{4}w_{i}b_{1i},\sum_{i=1}^{4}w_{i}b_{2i},\ldots,\sum_{i=1}^{4}w_{i}b_{4i}\right)\] \[=\sum_{j=1}^{4}\sum_{i=1}^{4}w_{j}w_{i}b_{ij}.\] Since this quantity is symmetric in \(i,j\), the equation
in the definition of row-column equivalent is satisfied. We now consider ITV of the form \(\upsilon(r,s,t,u):=w_{1}r+w_{2}s+w_{3}t+w_{4}u\) due to the above property. Note that the fractions of the form \(\frac{P\left(X=0,\overline{Y}=0,\overline{Z}=\overline{0}\right)}{P\left(X=0,\overline{Z}=\overline{0}\right)}\) are weighted averages of the values corresponding to Bob's transition map. As a result, the ITV should also be a weighted average of these values. This means that the condition \(w_{1}+w_{2}+w_{3}+w_{4}=1\) should be imposed on this class of ITV. Now, we consider the \(N=2\) case, with the \(\upsilon\) and \(\upsilon_{2}\) defined above. We can classify Bob's transition values \(\{b_{ij}\}\) based on their \(\upsilon_{2}\) value. In particular, since \(\upsilon\) is a weighted average, we can explicitly construct transformations that preserve the \(\upsilon_{2}\) value: **Definition 7.8**.: For a set of Bob transition values \(\{b_{ij}\}\), a _row transformation_ is defined as choosing an \(i\in\{1,2,3,4\}\) and transforming the four variables \((b_{i1},b_{i2},b_{i3},b_{i4})\) in the following manner (\(d_{i}\in\mathbb{R}\)): \[x\mapsto\upsilon(b_{i1},b_{i2},b_{i3},b_{i4})+d_{i}(x-\upsilon(b_{i1},b_{i2},b_{i3},b_{i4})).\] A _column transformation_ is defined similarly, but the variables \((b_{1i},b_{2i},b_{3i},b_{4i})\) are used instead. **Theorem 7.9**.: _Both row and column transformations preserve the \(\upsilon_{2}\) value._ Proof.: Consider an arbitrary row transformation on \((b_{i1},b_{i2},b_{i3},b_{i4})\). Note that one of the terms in \(\upsilon_{2}\) is \(\upsilon(b_{i1},b_{i2},b_{i3},b_{i4})\), so if we can prove that this does not change, then we are done. Since \(\upsilon\) is a weighted average, the transformation \(x\mapsto x-\upsilon(b_{i1},b_{i2},b_{i3},b_{i4})\) turns the \(\upsilon\) value of these 4 numbers to 0.
Furthermore, the transformation \(x\mapsto d_{i}x\) multiplies the \(\upsilon\) value by \(d_{i}\), still leaving it at 0. Finally, the transformation \(x\mapsto x+\upsilon(b_{i1},b_{i2},b_{i3},b_{i4})\) returns the \(\upsilon\) value to its original value, as desired. For column transformations, apply this same logic to the column-based version (using \(\upsilon(b_{1i},b_{2i},b_{3i},b_{4i})\)) of \(\upsilon_{2}\), since weighted-average \(\upsilon\) are row-column equivalent by Theorem 7.7. By the above theorem, we can change Bob's transition values for the \(N=2\) case in 8 ways: applying row transformations on \((b_{i1},b_{i2},b_{i3},b_{i4})\) and applying column transformations on \((b_{1i},b_{2i},b_{3i},b_{4i})\) for \(i\in\{1,2,3,4\}\). Now, consider the scenario for a general \(N\) and a set of Bob transition values \(\{b_{i_{1}i_{2}\ldots i_{N}}\}\). Additionally, define \(\upsilon_{N}\) in a recursive manner based on \(\upsilon_{N-1}\) (with \(\upsilon_{1}:=\upsilon\)): \[\upsilon_{N}(\{b_{i_{1}i_{2}\ldots i_{N}}\}):=\upsilon(\upsilon_{N-1}(\{b_{1i_{2}\ldots i_{N}}\}),\upsilon_{N-1}(\{b_{2i_{2}\ldots i_{N}}\}),\upsilon_{N-1}(\{b_{3i_{2}\ldots i_{N}}\}),\upsilon_{N-1}(\{b_{4i_{2}\ldots i_{N}}\})).\] Note that the \(N\)-dimensional analogue of Theorem 7.7 is true, and so we can construct row-column-type transformations for each of the \(N\) dimensions: \[x\mapsto\upsilon(b_{1i_{2}\ldots i_{N}},b_{2i_{2}\ldots i_{N}},b_{3i_{2}\ldots i_{N}},b_{4i_{2}\ldots i_{N}})+d_{i}(x-\upsilon(b_{1i_{2}\ldots i_{N}},b_{2i_{2}\ldots i_{N}},b_{3i_{2}\ldots i_{N}},b_{4i_{2}\ldots i_{N}})).\] The number of transformations of this type is \(N\cdot 4^{N-1}\) (\(N\) choices for which coordinate to vary from 1 to 4, and 4 options for each of the \(N-1\) fixed coordinates).
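For a weighted-average ITV, both Theorem 7.7 and the invariance in Theorem 7.9 are straightforward to confirm numerically (the weights below are an arbitrary example):

```python
# Numerical check of Theorems 7.7 and 7.9 for a weighted-average ITV
# upsilon(r, s, t, u) = w1*r + w2*s + w3*t + w4*u with weights summing to 1:
# nesting over rows agrees with nesting over columns, and a row transformation
# leaves the upsilon_2 value unchanged.
import random

w = [0.1, 0.2, 0.3, 0.4]                 # example weights with w1+w2+w3+w4 = 1

def ups(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def ups2_rows(B):
    return ups([ups(row) for row in B])

def ups2_cols(B):
    return ups([ups(col) for col in zip(*B)])

random.seed(2)
for _ in range(200):
    B = [[random.random() for _ in range(4)] for _ in range(4)]
    # Theorem 7.7: row-column equivalence.
    assert abs(ups2_rows(B) - ups2_cols(B)) < 1e-9
    # Theorem 7.9: a row transformation (any d) preserves upsilon_2.
    i, d = random.randrange(4), random.uniform(-2, 2)
    r = ups(B[i])
    B2 = [row[:] for row in B]
    B2[i] = [r + d * (x - r) for x in B[i]]
    assert abs(ups2_rows(B2) - ups2_rows(B)) < 1e-9
print("Theorems 7.7 and 7.9 verified for a weighted-average ITV")
```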
However, note that for \(N\geq 4\), we have \(N\cdot 4^{N-1}\geq 4^{N}\), the number of variables in Bob's transition map, giving us the following theorem: **Theorem 7.10**.: _For \(N\geq 4\), if there exists a family of weighted-average ITV \(\upsilon,\upsilon_{2},\upsilon_{3},\dots\) as defined previously, then the row-column-type transformations are linearly dependent on each other (i.e. any non-trivial row-column-type transformation affecting the row/column \(T\) can be constructed from a composition of row-column-type transformations that do not directly act on \(T\))._ Due to the arbitrary nature of the \(N\geq 4\) constraint in the statement of the theorem, we conjecture the following: **Conjecture 7.11**.: Theorem 7.10 holds true for \(N=2\) and \(N=3\). Finally, an alternative approach to resolving the case \(N=1\) is to fix \(P_{\overline{Z}|Z}(\overline{1},0)=\alpha\) and \(P_{\overline{Z}|Z}(\overline{0},1)=\beta\), as well as \(a_{0},b_{0},c_{0},\ldots,h_{0},a_{1},b_{1},c_{1},\ldots,h_{1}\) (Eve's transition probabilities defined in Tables 1 and 2), at \(\frac{1}{2}\), and then vary some subset of \(\{a_{0},b_{0},c_{0},\ldots,g_{0},h_{0}\}\) and \(\{a_{1},b_{1},c_{1},\ldots,g_{1},h_{1}\}\) to allow Eve to satisfy the independence condition. We focus on the case \(\overline{Z}=\overline{0}\). Suppose that the channels Bob uses to binarize are \(P_{\overline{Y}|Y}(\overline{0},0)=y_{0}\), \(P_{\overline{Y}|Y}(\overline{0},1)=y_{1}\), \(P_{\overline{Y}|Y}(\overline{0},2)=y_{2}\), and \(P_{\overline{Y}|Y}(\overline{0},3)=y_{3}\), and define \(x_{0}\), \(x_{1}\), \(x_{2}\), and \(x_{3}\) for Alice's binarization similarly.
Note that \[\frac{P\left(\overline{X}=\overline{0},\overline{Y}=\overline{0},\overline{Z}=\overline{0}\right)}{P\left(\overline{X}=\overline{0},\overline{Y}=\overline{1},\overline{Z}=\overline{0}\right)}=\frac{(x_{0}+x_{1})(y_{0}+y_{1})+(x_{0}+x_{1})(y_{2}+y_{3})+(x_{2}+x_{3})(y_{0}+y_{1})+2(x_{2}y_{2}+x_{3}y_{3})}{(\text{same as above with all }y_{i}\text{ replaced with }1-y_{i})}\] \[=\frac{(x_{0}+x_{1}+x_{2}+x_{3})(y_{0}+y_{1}+y_{2}+y_{3})+(x_{2}-x_{3})(y_{2}-y_{3})}{(x_{0}+x_{1}+x_{2}+x_{3})(4-y_{0}-y_{1}-y_{2}-y_{3})-(x_{2}-x_{3})(y_{2}-y_{3})}.\] Note that in the original formulation of the problem, the desired equality of the independence equation is a collection of weighted averages of the \(\{x_{i}\}\) and \(\{y_{i}\}\). As a result, we can assume WLOG that \(x_{0}+x_{1}+x_{2}+x_{3}=y_{0}+y_{1}+y_{2}+y_{3}=0\) (while in turn losing the probabilistic meaning behind these variables). This simplifies the above ratio to \(-1\). Similarly, note that \[\frac{P\left(\overline{X}=\overline{1},\overline{Y}=\overline{0},\overline{Z}=\overline{0}\right)}{P\left(\overline{X}=\overline{1},\overline{Y}=\overline{1},\overline{Z}=\overline{0}\right)}=\frac{(4-x_{0}-x_{1}-x_{2}-x_{3})(y_{0}+y_{1}+y_{2}+y_{3})-(x_{2}-x_{3})(y_{2}-y_{3})}{(4-x_{0}-x_{1}-x_{2}-x_{3})(4-y_{0}-y_{1}-y_{2}-y_{3})+(x_{2}-x_{3})(y_{2}-y_{3})}.\] Using the above simplifications, this ratio reduces to \(\frac{-\zeta}{16+\zeta}\), where \(\zeta:=(x_{2}-x_{3})(y_{2}-y_{3})\). In order for \(\overline{X}\perp\!\!\!\perp\overline{Y}|\overline{Z}=\overline{0}\), we must have these two fractions be equal. However, the equality \(-1=\frac{-\zeta}{16+\zeta}\) fails to hold for any \(\{x_{i}\}\) and \(\{y_{i}\}\). As a result, some isolated perturbations must be made to some of the variables. One possibility is to take \(a_{0}=\frac{1}{2}\) and alter it slightly by a small value.
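Before carrying out the perturbation, the two simplified ratios can be sanity-checked numerically. In the sketch below, `x` and `y` denote the already-normalized zero-sum variables, and `expr` is the weighted sum appearing in the first numerator above:

```python
# Sanity check of the two ratio simplifications above, under the WLOG
# zero-sum normalization sum(x) = sum(y) = 0.
import random

def expr(x, y):
    # The weighted sum appearing in the first numerator above.
    return ((x[0] + x[1]) * (y[0] + y[1]) + (x[0] + x[1]) * (y[2] + y[3])
            + (x[2] + x[3]) * (y[0] + y[1]) + 2 * (x[2] * y[2] + x[3] * y[3]))

random.seed(3)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(4)]
    y = [random.uniform(-1, 1) for _ in range(4)]
    x = [xi - sum(x) / 4 for xi in x]          # enforce the zero-sum normalization
    y = [yi - sum(y) / 4 for yi in y]
    zeta = (x[2] - x[3]) * (y[2] - y[3])
    if abs(zeta) < 1e-3:                       # skip numerically degenerate samples
        continue
    xm = [1 - xi for xi in x]                  # the "x_i replaced with 1-x_i" version
    ym = [1 - yi for yi in y]
    assert abs(expr(x, y) / expr(x, ym) + 1) < 1e-9
    assert abs(expr(xm, y) / expr(xm, ym) + zeta / (16 + zeta)) < 1e-9
print("ratio simplifications verified")
```

The guard on small \(\zeta\) only avoids numerically degenerate samples; the identities themselves are exact.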
For some small \(\epsilon\), the new values of the two ratios would be \[\frac{P\left(\overline{X}=\overline{0},\overline{Y}=\overline{0},\overline{Z}=\overline{0}\right)}{P\left(\overline{X}=\overline{0},\overline{Y}=\overline{1},\overline{Z}=\overline{0}\right)}=\frac{\zeta+\epsilon x_{0}y_{2}}{-\zeta+\epsilon x_{0}(1-y_{2})},\] \[\frac{P\left(\overline{X}=\overline{1},\overline{Y}=\overline{0},\overline{Z}=\overline{0}\right)}{P\left(\overline{X}=\overline{1},\overline{Y}=\overline{1},\overline{Z}=\overline{0}\right)}=\frac{-\zeta+\epsilon(1-x_{0})y_{2}}{16+\zeta+\epsilon(1-x_{0})(1-y_{2})}.\] In order for these two fractions to be equal, we must have \[(16+\zeta+\epsilon(1-x_{0})(1-y_{2}))(\zeta+\epsilon x_{0}y_{2})=(-\zeta+\epsilon x_{0}(1-y_{2}))(-\zeta+\epsilon(1-x_{0})y_{2})\implies\] \[16\zeta+16\epsilon x_{0}y_{2}+\epsilon\zeta(x_{0}y_{2}+(1-x_{0})(1-y_{2}))=-\zeta\epsilon(x_{0}(1-y_{2})+(1-x_{0})y_{2})\implies\] \[16\zeta+16\epsilon x_{0}y_{2}+\epsilon\zeta=0.\] If \(\zeta\) is sufficiently small, then an \(\epsilon\) can be chosen to satisfy the given equation, giving us the following theorem: **Theorem 7.12**.: _For \(\overline{X}\) and \(\overline{Y}\) with \(\zeta:=(x_{2}-x_{3})(y_{2}-y_{3})\) sufficiently small, there exists a channel \(P_{\overline{Z}|Z}\) such that \((\overline{X}\perp\!\!\!\perp\overline{Y})|\overline{Z}\)._ In future work, we hope to generalize this perturbation technique to solve the problem for more classes of binarizations \((\overline{X},\overline{Y})\).

## VIII Conclusion

In this paper, we have shown a new relation between two well-known information-theoretic quantities: the intrinsic information and the reduced intrinsic information. Namely, for a given \(P_{XYZ}\), if the reduced intrinsic information of this distribution is \(0\), then so is the intrinsic information. This relation has many important ramifications for significant conjectures in information theory.
For example, out of the two long-standing conjectures of the secret-key rate being equal to the reduced intrinsic information and the conjecture of bound secrecy[17], at least one of them must be incorrect. Another implication is that the reduced intrinsic information cannot be used to prove that a distribution is bound secret. Future work in this direction would be to develop an information-theoretic quantity which has the property that it is not necessarily equal to \(0\) if the intrinsic information is equal to \(0\), and use this property to demonstrate that a particular distribution is bound secret. We have also made progress on a possible approach for showing that a bound secret distribution does exist, using the idea of binarization of random variables [6]. In particular, we have reduced bound secrecy to a problem that does not require the use of information-theoretic quantities to formulate, instead using only basic ideas from probability. We have made progress on proving this statement for the candidate distribution introduced in [6], by creating an explicit construction for an information-erasing binarization. The construction makes generalizing the information-erasing binarization much easier compared to the previous non-constructive results. Furthermore, we have also made progress on proving bound secrecy for a family of distributions introduced in [17]. In particular, we show that binarizing \(Y\) alone is not sufficient to create independence between Alice and Bob given Eve, suggesting the underlying difference between proving bound secrecy for this distribution and the candidate distribution introduced in [6]. Additionally, we provide evidence that only Z-shaped channels need to be considered when binarizing. 
We also provide additional promising approaches for proving bound secrecy for this family of distributions, such as considering a particular class of weighted average target values and the row-column-type transformations they induce, and perturbing a single variable in Eve maps to solve bound secrecy in the \(N=1\) case for a particular class of binarizations. ## IX Acknowledgements We would like to thank the MIT PRIMES-USA program for the opportunity to conduct this research. We would also like to thank Peter Shor for suggesting this problem to us. We also acknowledge Stefan Wolf, Matthias Christandl, and Renato Renner for their helpful answers to our questions regarding their papers.
2302.05888
Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of dialogue history. However, such a mechanism forces knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses.
Hsuan Su, Shachi H Kumar, Sahisnu Mazumder, Wenda Chen, Ramesh Manuvinakurike, Eda Okur, Saurav Sahay, Lama Nachman, Shang-Tse Chen, Hung-yi Lee
2023-02-12T10:13:00Z
http://arxiv.org/abs/2302.05888v1
# Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue ###### Abstract With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of dialogue history. However, such a mechanism forces knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses. ## 1 Introduction Transformer-based (Vaswani et al., 2017) pretrained language models are widely used to build dialogue systems (Zhang et al., 2020; Xu et al., 2021; Komeili et al., 2021; Roller et al., 2020; Thoppilan et al., 2022; Rae et al., 2021; Chen et al., 2021; Ham et al., 2020; Hosseini-Asl et al., 2020; Bao et al., 2021). In addition to general-purpose dialogue systems, many specialized dialogue systems have been proposed. Representative examples include personalized dialogue systems (Wolf et al., 2019; Zhang et al., 2018; Wu et al., 2021; Cao et al., 2022; Song et al., 2020), knowledge-grounded dialogue systems (Dinan et al., 2019; Kim et al., 2021; Tao et al., 2021; Cai et al., 2020; Liu et al., 2021), and prompting dialogue systems (Su et al., 2022). 
To build specialized dialogue systems, it is necessary to integrate additional information into the input sequence. Wolf et al. (2019) prepend persona sentences to the dialogue history to personalize responses, while Su et al. (2022); Dinan et al. (2020); Keskar et al. (2019); Xu et al. (2020) prepend task-specific signals to prompt and control the model. These methods prepend additional information in front of the history as a single sequence for the models' input. However, this approach imposes an unnecessary order on knowledge sets that should be treated as equal, since the knowledge statements are concatenated in sequence; models might thus be influenced by the order and generate imbalanced responses. Previous works focus on how perturbations in dialog history affect models' responses (Sankar et al., 2019; O'Connor and Andreas, 2021; Sinha et al., 2021; Lampinen et al., 2022; Webson and Pavlick, 2021; Xu et al., 2020; Khandelwal et al., 2018). They conduct many experiments and measure the effect of perturbations in terms of response quality and information theory, showing that these language models are robust and not sensitive to perturbations in the input history. However, dialog history and knowledge are inherently different aspects of a conversation. Dialog history has a temporal property, i.e., the topic and specificity of conversation change as the dialog progresses, whereas knowledge facts are information referenced to generate a response. Although perturbations in the history do not influence the results generated by the model (Sankar et al., 2019; O'Connor and Andreas, 2021), in our early observations we found that prepended knowledge does influence models' responses. For example, Figure 1 demonstrates a case where the model exhibits imbalanced attention to the input knowledge, and the order of the knowledge influences the generated responses. 

Figure 1: The order effect illustration. Models' responses are influenced by the order of the input knowledge set.
This might cause the model to generate inappropriate responses since it attends to knowledge that might not be relevant to a dialog context. The contributions of this work are as follows: * We conduct experiments across two typical methods and two models on multiple datasets to show that the order of knowledge sentences does affect generated responses. * We propose a simple approach to alleviate this sentence-level order effect by manipulating the position embedding layers. ## 2 Knowledge-grounded Dialogue Methods In this work, we study the order effect in TransferTransfo (Wolf et al., 2019), which is a state-of-the-art knowledge-grounded method. We train TransferTransfo on two datasets and measure the sentence-level order effect on the test datasets. ### TransferTransfo The TransferTransfo architecture is built on top of GPT-series models, which simply concatenates the knowledge sets and context in a single sequence, putting the reply at the end. To help models distinguish speakers and position of input tokens, it builds three parallel input sequences for word, position, and segments, and fuses them into a single sequence. For the loss function, in addition to a language modeling loss, a next sentence prediction loss is added. The total loss is the weighted sum of the 1) language modeling loss, which is computed as the cross-entropy loss between the predicted logits and the ground truth response and 2) the next-sentence prediction loss, which is a classification loss to distinguish the ground truth response from distractors that are randomly sampled from the dataset. In the original TransferTransfo implementation, the authors have already pointed out that the order of the knowledge set influences the model's performance. To this end, they augment training data by permuting the knowledge sets several times. 
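The difference between the standard concatenated input and the per-statement position ids proposed later (Section 4, Figure 2) can be sketched as follows. This is an illustrative sketch with hypothetical helper names and a toy tokenization, not the paper's code:

```python
# Standard GPT-style ids run sequentially over the whole input, so the
# knowledge order leaks into the position signal; the proposed scheme
# restarts ids at 0 for every knowledge statement (each with its own
# embedding layer), so no statement is privileged by its slot.

def sequential_position_ids(knowledge, history):
    segments = knowledge + [history]
    length = sum(len(seg) for seg in segments)
    return list(range(length))  # order-dependent ids

def restarted_position_ids(knowledge, history):
    ids = []
    for seg in knowledge:            # each statement restarts at 0 and is
        ids.extend(range(len(seg)))  # looked up in its own embedding table
    ids.extend(range(len(history)))  # history keeps its own positions
    return ids

k = [["k1_a", "k1_b"], ["k2_a", "k2_b", "k2_c"]]  # two tokenized facts
h = ["hist_a", "hist_b"]
print(sequential_position_ids(k, h))  # [0, 1, 2, 3, 4, 5, 6]
print(restarted_position_ids(k, h))   # [0, 1, 0, 1, 2, 0, 1]
```

Because the restarted ids are identical for any permutation of the knowledge statements, the position signal no longer distinguishes their order.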
### Experimental Setups We conduct experiments on two datasets: **Persona-Chat** (Zhang et al., 2018): This persona-grounded dialogue dataset consists of crowd-sourced dialogues between a pair of annotators provided with 4-5 persona statements each. **Topical-Chat** (Gopalakrishnan et al., 2019): This is a knowledge-grounded dialogue dataset, where the dialogs are constructed by a pair of annotators conversing about specific topics. The annotators are provided with wiki data with 4-5 facts as knowledge sources. In our experimental setup, we shuffle the knowledge set's order 50 times during testing and implement TransferTransfo on GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) models. 

Figure 2: Input format for GPT-series models. The position ids do not treat knowledge equally but as a sequence. The updated position embeddings show our proposed method, where each knowledge statement is encoded with its own position embeddings; hence, models can treat each input sentence equally during training. The same color of blocks indicates using the same layer to generate embeddings. 

## 3 The Order Effect of the Knowledge Set Models are said to exhibit an order effect if the generated responses are sensitive to and influenced by the order of the input sequence. Previous works (Sankar et al., 2019; O'Connor and Andreas, 2021; Sinha et al., 2021; Lampinen et al., 2022; Webson and Pavlick, 2021; Xu et al., 2020; Khandelwal et al., 2018) focus on whether perturbations in dialog history affect models' responses. In this work, to be more specific, we investigate whether sentence-level changes in the order of the input knowledge set result in substantial semantic differences in the generated responses. ### The Order Effect Measurement To address the sentence-level order effect of the input knowledge set in models, we aim to measure the semantic difference given different orders of knowledge sentences.
It is intuitive to measure whether the response content is influenced by the order of the knowledge set. In other words, we measure the distribution of the response-knowledge relationship over different positions. We build a Natural Language Inference (NLI) classifier to evaluate the degree of entailment between responses and each knowledge statement in the set. The NLI classifier is built with the BERT model (Devlin et al., 2019), trained on the Dialogue NLI dataset (Welleck et al., 2019), which is built on top of the Persona-Chat dataset (Zhang et al., 2018). The annotators label the relationship between persona and response in Persona-Chat with entail, neutral, and contradict classes. ### Results and Discussions for Order Effect Figures 3 and 4 show the entailment scores of the response with each position of knowledge. Figure 3 presents the experiments of TransferTransfo with GPT and GPT-2 models across the Persona-Chat and Topical-Chat datasets. Figure 4 shows the results with the "LM loss only" method, which refers to TransferTransfo without the next sentence prediction. We observe that the distribution for data containing only four knowledge statements is very different from that for data containing five knowledge statements; hence we show them separately. The NLI classification results are shown with BLUE lines. We can see that the distribution of entailment scores over the different positions is imbalanced. 

Figure 3: Experimental results under the TransferTransfo method; the lines indicate the average of 50 shuffling runs with the standard deviation represented by the shaded area. The data with 4 and 5 knowledge sets are displayed separately. 

Figure 4: Experimental results under the LM loss only method; the lines indicate the average of 50 shuffling runs with the standard deviation represented by the shaded area. The data with 4 and 5 knowledge sets are displayed separately.
In the experiments on the GPT model (the GPT panels of Figures 3 and 4), it can be observed that under both the TransferTransfo and LM loss only methods, the entailment score at the last position is always the highest. In fact, there is a huge gap between the entailment scores for the first and the last knowledge statements. This indicates that the GPT model focuses more on the last position of the knowledge. However, the behavior of GPT-2 is very different from that of GPT. From the GPT-2 panels of Figures 3 and 4, we can see that GPT-2 models focus more on the earlier knowledge statements in the sequence rather than the later ones. These results show that the order effect exists across GPT and GPT-2 models (although it differs between them), influences the models' responses, and needs to be addressed. ## 4 Alleviate the Order Effect In this section, we analyse the reason for the order effect in the GPT-series models and propose a method to alleviate the phenomenon. Figure 2 shows the input format of the classic GPT-series. There are three types of embeddings in the model: a word embedding to capture the semantic meaning of each word, a token embedding to represent the speaker, and an absolute position embedding that encodes the position information of the input sequence. Figure 2 also shows our proposed scheme, in which the position ids for each knowledge statement start from zero, with different positional embedding layers. In this simple setting, the knowledge statements in the set are treated equally, and no order is imposed on them during training. ### Results and Discussion In the same Figures 3 and 4, the RED lines show the entailment results after applying multiple position embeddings. We observe that all the red lines, i.e., the GPT-series models with multiple position embeddings applied, are much smoother compared to the BLUE lines in both figures. Furthermore, we report the difference between the maximum and minimum entailment across the positions in Table 1. 
It shows that the difference is negligible after applying multiple position embeddings. This indicates that we can alleviate the order effect in models trained with multiple position embeddings. However, we also observed that in Figure 4 some red lines are still as steep as before, which means the order effect still exists there. We think that a model trained only with the LM loss treats knowledge like history and is not grounded on the knowledge sets; under this scenario, the multiple position embeddings do not work well. For the measurement of quality, Table 1 shows the perplexity, coherence, and diversity. The details are included in Appendix A.2. We found only tiny drops between the original and the multiple position embedding variants. More specifically, our proposed method does not break the models, which can still generate plausible responses. ## 5 Conclusions In this paper, we investigate whether the order of the knowledge set influences dialogue models' responses. Our experiments across several datasets show that the GPT-series models pay unfair attention to the knowledge set and are influenced by the order of the knowledge. 
To solve this problem, we study \begin{table} \begin{tabular}{l|l|c c|c c} \hline Model & Method & \multicolumn{2}{c|}{Persona} & \multicolumn{2}{c}{Topical} \\ \hline \multicolumn{5}{c}{} & \multicolumn{2}{c}{TM} & \multicolumn{2}{c}{TM} & \multicolumn{2}{c}{TM} & \multicolumn{2}{c}{LM} \\ \multicolumn{5}{c}{} & \multicolumn{2}{c}{Entailment} & Max - Min \\ \hline \multirow{3}{*}{GPT} & Origin & 0.487 / 0.37 & 0.522 / 0.305 & 0.377 / 0.22 &.046 / 0.041 \\ & Multi Pos & **0.023** / **0.028** & 0.51 / 0.41 & **0.311** / **0.106** &.058 / 0.044 \\ \cline{2-5} & Origin & 0.627 / 0.662 &.075 / 0.085 & 0.052 / 0.306 &.052 / 0.027 \\ \cline{2-5} & Multi Pos & **0.039** / **0.044** & **0.383** / **0.045** & **0.207** / **0.183** & **0.35** / **0.021** \\ \hline \multicolumn{5}{c}{} & \multicolumn{2}{c}{Perplexity \(\downarrow\)} \\ \hline \multirow{3}{*}{GPT} & Origin & 52.29 & 54.31 & 39.31 & 36.80 \\ & Multi Pos & 55.47 & 58.43 & 42.37 & 42.98 \\ \cline{2-5} & Origin & 61.69 & 61.80 & 20.50 & 18.84 \\ \cline{2-5} & Multi Pos & 60.18 & 58.91 & 17.40 & 17.30 \\ \hline \multicolumn{5}{c}{} & \multicolumn{5}{c}{Coherence} \\ \hline \multirow{3}{*}{GPT} & Origin & 0.633 & 0.636 & 0.793 & 0.770 \\ & Multi Pos & 0.644 & 0.621 & 0.732 & 0.744 \\ \cline{2-5} & Origin & 0.661 & 0.667 & 0.840 & 0.843 \\ \cline{2-5} & Multi Pos & 0.648 & 0.662 & 0.830 & 0.831 \\ \hline \multicolumn{5}{c}{} & \multicolumn{5}{c}{Diversity \(\downarrow\)} \\ \hline \multirow{3}{*}{GPT} & Origin & 0.815 & 0.822 & 0.844 & 0.846 \\ & Multi Pos & 0.821 & 0.833 & 0.870 & 0.862 \\ \cline{2-5} & Origin & 0.808 & 0.811 & 0.833 & 0.833 \\ \cline{2-5} & Multi Pos & 0.816 & 0.817 & 0.843 & 0.845 \\ \hline \end{tabular} \end{table} Table 1: The results of measurements. The Max-Min of entailment are reported in 4 knowledge / 5 knowledge. The mean of quality across 50 runs are reported and standard deviation are reported in Appendix A.3. 
the reason for the phenomenon and propose a simple method to alleviate the order effect in models. The experimental results show that our approach reduces the order effect and makes the model select the knowledge uniformly. ## Limitations This work has potential limitations: * We found that in Figures 3 and 4, the entailment of the methods after applying multiple position embeddings (RED lines) is sometimes lower than that of the original methods (BLUE lines). This does not meet our expectations, since we do not want our method to decrease performance. In our opinion, the reason might be that this embedding scheme has never been seen during the pretraining of the models, which requires additional effort from the model to adapt to the embeddings and thus hurts performance. We leave this as future work. * We also found that the multiple position embeddings do not work very well to alleviate the order effect in the LM loss only settings, as discussed in the previous sections. Since the LM loss alone does not help the model distinguish which parts of the input sequence are the knowledge set, the model treats them the same as history, and the multiple position embeddings are not trained well enough to help the model distinguish them. We also leave this as future work.
2303.17155
Discriminative Class Tokens for Text-to-Image Diffusion Models
Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. While impressive, the images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This approach has two disadvantages: (i) supervised datasets are generally small compared to large-scale scraped text-image datasets on which text-to-image models are trained, affecting the quality and diversity of the generated images, or (ii) the input is a hard-coded label, as opposed to free-form text, limiting the control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier. This is done by iteratively modifying the embedding of an added input token of a text-to-image diffusion model, by steering generated images toward a given target class according to a classifier. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at \url{https://github.com/idansc/discriminative_class_tokens}.
Idan Schwartz, Vésteinn Snæbjarnarson, Hila Chefer, Ryan Cotterell, Serge Belongie, Lior Wolf, Sagie Benaim
2023-03-30T05:25:20Z
http://arxiv.org/abs/2303.17155v3
# Discriminative Class Tokens for Text-to-Image Diffusion Models ###### Abstract Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. However, generated images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This comes with a downside, doing so limits their expressive power: (i) supervised datasets are generally small compared to large-scale scraped text-image datasets on which text-to-image models are trained, and so the quality and diversity of generated images are severely affected, or (ii) the input is a hard-coded label, as opposed to free-form text, which limits the control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier, which guides the generation. This is done by iteratively modifying the embedding of a single input token of a text-to-image diffusion model, using the classifier, by steering generated images toward a given target class. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at [https://github.com/idansc/discriminative_class_tokens](https://github.com/idansc/discriminative_class_tokens). 
## 1 Introduction Text-to-image diffusion models [32, 9] have shown remarkable success in creating diverse and high-quality imagery conditioned on input text. However, they fall short when the input text contains lexical ambiguity or when generating fine-grained details. For instance, a user might wish to render an image of an 'iron' for clothes, but could instead be presented with an image of the elemental metal. One way to alleviate these issues is to use a pretrained classifier to guide the denoising process. One such method mixes the score estimate of a diffusion model with the gradient of the log probability of a pre-trained classifier [9]. However, this approach has the downside of needing a classifier that works on both real and noisy data. It is also possible to condition diffusion on a class label [20]. Unfortunately, training the models on curated datasets prevents them from fully utilizing the expressive power of a diffusion model trained on web-scale image-text pairs. A different line of work fine-tunes a diffusion model, or some of its input tokens, using a small (\(\sim\)5) group of images [13, 22, 36]. These methods have the following drawbacks: (i) training new concepts is usually slow and takes upwards of a few hours, (ii) the result may change the generated images (as compared to the original diffusion model) to fit only the new label or concept, and (iii) generated images are based on features from a small group of images and may not capture the diversity of the entire class. This work introduces a method that accurately captures the desired class, avoiding lexical ambiguity and faithfully portraying fine-grained details. It does so while retaining the full expressive power of the pretrained diffusion model and avoiding the above-mentioned drawbacks. 
Instead of guiding the diffusion process or updating the entire model with the classifier, we only update the representation of a single added token, one corresponding to each class of interest, without tuning the model on labeled images. When learning the token representation corresponding to a given target class, we iteratively generate new images with a higher class probability according to the pretrained classifier. At each iteration, feedback from the classifier steers the designated discriminative class token. Our optimization process uses a new technique, namely gradient skipping, which only propagates the gradient through the final stage of the diffusion process. The optimized token can then be utilized to generate additional images using the original diffusion model. Our method has several advantages. First, unlike other class conditional methods such as [9], it only requires an off-the-shelf classifier and does not require a classifier trained on noisy data. Second, our method is fast and allows for "plug-and-play" improvements of generated images by making use of a pre-trained token. This is in comparison to other methods, such as Textual Inversion [13], which can take a few hours to converge. Third, our method employs a classifier trained on an extensive collection of images without needing access to those images. This is beneficial as (i) the token is generated from the full set of class-discriminative features as opposed to features from a small set of images, and (ii) in some cases, such as when privacy concerns are involved, it is desirable to share only the classifier and not the data on which it is trained. We evaluate our method both in fine-grained and coarse-grained settings. In the fine-grained setting, we investigate the ability of our method to generate details of species in the CUB [40] and iNat21 [38] datasets. In the coarse setting, we consider the ImageNet [8] dataset. 
Our primary metric is the accuracy of the generated samples as measured in two ways: (i) we show that our generated images are more often correctly classified using pre-trained classifiers, in comparison to baselines, and (ii) we show that classification models trained on generated samples, either on their own or in combination with a limited amount of real training data, result in improved accuracy. We also measure the quality and diversity of the generated images compared to SD and another class-conditioned technique, showing that our method is superior in terms of the commonly used Fréchet inception distance (FID) [17]. Finally, we include many qualitative examples demonstrating the effectiveness of our approach. In Fig. 1, we resolve ambiguity in the input text and add discriminative features for a given class. In the ambiguous category, besides the _iron_ example, the image of a _tiger cat_ becomes the cat species instead of a tiger, and _Appenzeller_ moves from depicting a group of people from the Appenzeller area to the dog species. In the fine-grained category, the bird's throat color is corrected, the shape features of the _coastal woodfern_ are corrected, and the _south american gray fox_ shows distinctive features that closely resemble those of the species. 

Figure 1: We propose a technique that introduces a token (\(S_{c}\)) corresponding to an external classifier label class \(c\). This improves text-to-image alignment when there is lexical ambiguity and enhances the depiction of intricate details. 

## 2 Related work The field of text-based image generation has been studied extensively, both for GANs and, more recently, for diffusion models [10, 18, 39, 23, 24, 27, 28, 31, 42, 7, 12, 33, 21]. The use of diffusion models has, in particular, enabled an unprecedented capability in generating high-quality diverse images from natural language input with models such as DALL-E2 [30], Imagen [37], Parti [41], Make-A-Scene [12], Stable Diffusion (SD) [34], and CogView2 [11]. A recent line of work extends models of this kind by tuning the input embeddings to personalize image generation. In particular, some contributions generate images based on a small group of images: Textual inversion (TI) [13] optimizes the embedding of a new textual token that represents a concept found in a handful of images. DreamBooth [36] proposes fine-tuning the _full_ image generation model where a unique identifier represents the concept. Both works require 3-5 training images to learn the identity of the target concept. A related line of work enables editing of a given image based on input text or another image [26, 14, 6, 3, 2, 4]. More recently, some have suggested methods to leverage large text-based image generators for image editing. Prompt-to-prompt [16] edits the input prompt directly via manipulation of cross-attention maps, and Imagic [22] optimizes the corresponding textual prompt and fine-tunes the model such that the image is accurately reconstructed. When only a few images are used for training, though, there is always the inherent risk that a concept can become too similar to the original images. In contrast, we aim to steer existing diffusion models toward a more general understanding of a class via the informative characteristics that a classifier maintains to discriminate between classes, while still taking full advantage of the diversity of the underlying generative model. Furthermore, our method is significantly faster than methods such as TI and can effectively utilize an off-the-shelf classifier to refine an image within minutes. Manipulating an image using classifier conditioning can also provide a counterfactual explanation for classifiers [5, 15, 1]. In that sense, our method might also be used to reveal hidden factors of the classifier used. 
Since semantic differences are relatively small during each iteration of the image generation, they can easily be detected during the process. ## 3 Method We now describe how discriminative token embeddings are learned. We first introduce conditional diffusion models in general, including the more traditional _classifier guidance_ (not to be confused with our method), and then describe our conditioning approach and the gradient skipping. An overview of our method is provided in Fig. 2. Conditional diffusion models. Diffusion models [19, 9] estimate a process that generates data \(x\sim p(x)\) from noise. During training, an iterative denoising process predicts the step-wise added noise, starting from \(x_{T}\sim\mathcal{N}(0,\text{I})\). More specifically, given an input image (or a latent encoding) \(x_{0}\sim p(x)\), one first produces samples \(x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\cdot\epsilon_{t}\), with \(0<\alpha_{T}<\alpha_{T-1}<\dots<\alpha_{0}=1\) being hyperparameters and \(\epsilon_{t}\sim\mathcal{N}(0,\text{I})\). One then trains a neural network to predict the added noise \(\epsilon_{t}\) via an objective function of the form: \[\mathbb{E}_{x,\epsilon_{t},t}[||\hat{\epsilon}_{\theta}(x_{t},t)-\epsilon_{t}||_{2}^{2}]. \tag{1}\] A conditional denoising process, where each denoising step depends on a conditioning input (e.g., a class identifier or a text prompt \(y\)), can be defined similarly: \[\mathbb{E}_{x,\epsilon_{t},t}[||\hat{\epsilon}_{\theta}(x_{t},t,y)-\epsilon_{t}||_{2}^{2}]. \tag{2}\] To condition the diffusion process on a class, gradients calculated with trained classifiers can be used in the denoising process [9]. In particular, gradients of text-image matching models, like CLIP [29], allow text conditioning. Utilizing classifier guidance improves the sample quality and enables a trade-off between sample quality and diversity. 
There are two main drawbacks to using classifier guidance within the diffusion process: (i) the classifier must be retrained to deal with noised images, as every noisy sample generated along the iterative denoising process must be passed through the classifier, and (ii) the classifier needs to be present throughout the generative process.

Figure 2: An overview of our method for optimizing a new discriminative token representation (\(v_{c}\)) using a pre-trained classifier. For the prompt ‘A photo of a \(S_{c}\) tiger cat,’ we expect the output generated with the class \(c\) to be ‘tiger cat’. The classifier, however, indicates that the class of the generated image is ‘tiger’. We generate images iteratively and optimize the token representation using cross-entropy. Once \(v_{c}\) has been trained, more images of the target class can be generated by including it in the context of the input text.

To deal with this, a _classifier-free_ approach has been proposed [9]. Instead of relying on gradients from an image classifier, this approach approximates the gradient of an implicit classifier by modeling the difference between conditional, \(p_{\theta}(x|y)\), and unconditional, \(p_{\theta}(x)\), denoising modules. The conditional and unconditional modules are parameterized using the same network \(\epsilon_{\theta}(x_{t},y)\), and the conditional network becomes unconditional by using an empty sentence, i.e., \(\epsilon_{\theta}(x_{t})=\epsilon_{\theta}(x_{t},y=\text{``\,''})\). The final denoising network is formally expressed as follows:

\[\bar{\epsilon}_{\theta}(x_{t},y)=(1+w)\epsilon_{\theta}(x_{t},y)-w\,\epsilon_{\theta}(x_{t}), \tag{3}\]

where \(w\) is a hyperparameter determining the strength of the conditioning guidance. Our method is complementary to both _classifier-based_ and _classifier-free_ guidance, and can be used in conjunction with either.
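The guidance combination of Eq. 3 reduces to one line per noise component; a minimal sketch with the two noise predictions represented as plain lists (`cfg_combine` is a hypothetical helper name):

```python
def cfg_combine(eps_cond, eps_uncond, w):
    # Classifier-free guidance (Eq. 3):
    # eps_bar = (1 + w) * eps_theta(x_t, y) - w * eps_theta(x_t)
    return [(1.0 + w) * c - w * u for c, u in zip(eps_cond, eps_uncond)]
```

With `w = 0` the result is the plain conditional prediction; larger `w` extrapolates away from the unconditional prediction, trading diversity for conditioning strength.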
While our method can be deployed using any diffusion model, we consider Stable Diffusion (SD) [33], where the denoising process is applied not directly to the pixel values of the images but in the lower-dimensional latent space of a neural network. The latent representation is obtained using a Variational Autoencoder (VAE) model that maps an image into a latent \(z=V(x)\) and decodes the latent representation back to an image \(x\approx D(z)\).

**Discriminative Token Embeddings.** Classifiers capture discriminative signals that are useful for discerning between classes. In that sense, pretrained classifiers can be seen as experts in different domains. For example, a bird classifier can provide a compact source of discriminative details that separate one species from another. To avoid relying on a classifier at inference time, and to reduce the need to fine-tune the classifier on noised images (as in earlier work), our method fine-tunes a token added to the text encoder's vocabulary that the SD model relies on. Our technique iteratively generates images and refines this added synthetic token (as opposed to a word or subword found in the input language) to associate the generated images with a target class of the pre-trained classifier. The only weights being updated are those of the new class token. The process starts with a discriminative class token \(S_{c}\) and a generic prompt \(p=\) "A photo of a \(S_{c}\)_class_name_", where _class_name_ is the name of the class. We include the class name as part of the prompt to take advantage of existing knowledge in the pre-trained diffusion model (but only update the embedding for \(S_{c}\) during training). Looking at the images in, e.g., Fig. 1, it is evident that SD has gained knowledge across various domains, including expertise in generating specific bird species, albeit with some limitations.
Our approach aims to enhance the precision of the generated image by introducing minor semantic modifications that leverage the model's existing knowledge. We now describe how we associate the target class's characteristics with the class token's representation. We denote the embedding of the class token \(S_{c}\) as \(v_{c}\) and learn it by utilizing an image classifier \(C\). To speed up training, \(v_{c}\) is initialized to the embedding of a related initializer token. For instance, with a bird classifier, we initialize the embedding to that of the 'bird' token in the input encoder. For more general classifiers, such as those trained on ImageNet, we use the indefinite article token 'a' as a base for more generalized concepts. Training starts by generating an image \(x(p)\) conditioned on \(p\), including the \(v_{c}\) representation. We feed the resulting image into the classifier and use the cross-entropy loss over the classifier labels, i.e.,

\[\min_{v_{c}}\operatorname{CE}\left(C\left(\psi_{C}(x(p))\right),\mathds{1}_{c}\right), \tag{4}\]

where \(\psi_{C}\) transforms the image to align it with the classifier's expected input (e.g., resizing), and \(\mathds{1}_{c}\) is the one-hot vector of the target class. Our method does not rely on a given set of images. Instead, it generates images iteratively, starting with the output from SD, where each optimization step shifts the generated images closer to the target class distribution by updating the class token. A single image can be optimized directly and generally converges in a relatively small number of steps.

**Gradient skipping.** Fig. 3 illustrates the generation of a single image using a diffusion process and the flow of the learning signal. Propagating gradients through all diffusion steps requires a significant amount of memory.
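The optimization of Eq. 4 can be illustrated end to end under strong simplifications: `toy_logits` below is a hypothetical linear stand-in for the frozen generate-then-classify pipeline \(C(\psi_C(x(p)))\), and finite differences replace backpropagation. As in the method, only the token embedding `v_c` is updated.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(logits, target):
    # CE(C(.), 1_c) for a target class index (Eq. 4)
    return -math.log(softmax(logits)[target])

def toy_logits(v_c, W):
    # Hypothetical stand-in for classifier(generate(prompt with v_c)):
    # a fixed linear map from the token embedding to class logits.
    return [sum(w_i * v_i for w_i, v_i in zip(row, v_c)) for row in W]

def optimize_token(v_c, W, target, lr=0.5, steps=50, h=1e-4):
    # Gradient descent on the token embedding only, via finite differences.
    for _ in range(steps):
        base = cross_entropy(toy_logits(v_c, W), target)
        grad = []
        for i in range(len(v_c)):
            vp = list(v_c)
            vp[i] += h
            grad.append((cross_entropy(toy_logits(vp, W), target) - base) / h)
        v_c = [v - lr * g for v, g in zip(v_c, grad)]
    return v_c
```

Starting from an ambiguous embedding, the loop drives the classifier's predicted class toward the target, mirroring how each optimization step shifts the generated images closer to the target class distribution.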
In our experiments, propagating gradients solely through the final denoising step (i.e., step \(T\)) produces high-quality images representing the intended class while needing fewer resources. While deeper backpropagation could lead to further enhancements, we do not explore this direction further due to memory constraints.

Figure 3: An illustration of the gradient skipping technique (indicated by the red line). During backpropagation, the gradient is propagated only through the final denoising step of the diffusion procedure.

**Design Choices.** Our approach involves several design choices. (i) _Batch size_: Our goal is not to refine a single image but to find a broad token representation that can generate new images without incurring extra costs. By generating images with different seeds, we obtain diverse images. A larger batch size picks up more generic discriminative features, but training takes slightly longer to converge. We set the batch size to 5 after experimenting with values of 1-6. More details and examples of generations are shown in the supplementary. (ii) _Number of prompts_: Increasing the number of prompts can introduce additional variability. However, we find too much variability during training harmful to convergence. Thus, we limited the number of prompts used in the training phase to two: \(p_{1}=\)"A high-resolution realistic image of a \(S_{c}\)_label_" and \(p_{2}=\)"A photo of \(S_{c}\)_label_". It is worth noting that one can still utilize the discriminative token across various prompts, as shown in Fig. 7. (iii) _Updated tokens_: Our experiments focus on optimizing only the embedding of \(S_{c}\). While it is possible to update other pre-existing tokens, doing so would modify the model and prevent it from being used, e.g., in cases of lexical overlap between classes, such as 'iron'. (iv) _Early stopping strategy_: Please refer to the supplementary for additional details.

the ImageNet dataset [8].
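The gradient skipping described above (propagating gradients only through the final denoising step) can be mimicked numerically with a toy scalar chain: `denoise_step` is a hypothetical stand-in for one denoising iteration, all earlier steps are treated as constants (as if detached), and only the last step is differentiated with respect to the parameter.

```python
def denoise_step(x, theta):
    # Hypothetical scalar stand-in for one denoising iteration.
    return x - 0.1 * theta * x

def grad_with_skipping(x_T, theta, steps, h=1e-6):
    # Run all but the last step without tracking gradients ("detached") ...
    x = x_T
    for _ in range(steps - 1):
        x = denoise_step(x, theta)
    # ... then differentiate only the final step w.r.t. theta
    # (central finite difference standing in for autograd).
    return (denoise_step(x, theta + h) - denoise_step(x, theta - h)) / (2 * h)
```

Because the chain up to the last step is frozen, memory grows with one step rather than with the full trajectory, which is the point of the technique.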
### Quantitative evaluation

**Evaluation using pre-trained classifiers.** We first evaluate the generated images with classifiers trained on real data. We generate 100 images for each class and calculate the accuracy of each method: one employing our class token guidance and another with only SD. In Tab. 1, we show that for ImageNet, which mainly consists of classes at a coarse level of granularity, the vanilla SD can generate most classes accurately (70.5%). By utilizing class token guidance, we obtain better results in complex cases, such as those with ambiguity, resulting in an improved accuracy of 74.5%. For our method and SD, the seed and textual context were held constant, so the images correspond to each other, with the differences being due to the use of the token. We next assess fine-grained classes. Testing the model on images generated with labels found in the CUB dataset, the accuracy of SD-generated images is only 39.7%. This emphasizes the inherent limitations of the SD model in generating highly detailed and specific categories, such as bird species. Our approach adds the fine details necessary to improve accuracy to 57.9%. We also assess the accuracy of the generated images using the iNat classifier (trained on 10k species) on the same images generated with the guidance of the CUB classifier. Our approach yields a noteworthy improvement in performance from 28.5% to 32.8%, indicating its potential to enhance fine-grained classes beyond the specific classifier-selected characteristics. Finally, we look at a diverse set of 50 classes from the iNat dataset. SD only generates images that are classified accurately 14.1% of the time. Our method significantly improves accuracy to 25.8%, although there is still room for further improvement.

**Evaluation by training classifiers.** In Tab. 2, we train classifiers on 100 generated images and 0-15 real images.

Figure 4: Images generated based on ImageNet classes, using SD or our method.
Real images are shown for comparison.

Figure 5: A selection of images based on iNat classes generated with Stable Diffusion (SD) and our method. A real image is shown for comparison. (a) Yellow pine chipmunk, (b) Jelly antler, (c) Salamander, (d) Pacific lions mane jelly, (e) Leaf mite, (f) Red sea urchin, (g) Seashore manlow, (h) Sheepshead minnow.

When we incorporate the classifier-guided generated images, the real evaluation accuracy is better than when augmenting only with SD-generated images. With only 9 real images per class, we already reach 76.3% accuracy, compared to 35.1% with no generated images and 68.3% with SD-only augmentations. A high accuracy indicates that generated images capture a large part of the image distribution necessary to classify images correctly. Our findings also show that utilizing the CUB classifier for generating images can enhance performance when evaluated in the iNat179 setting. Specifically, incorporating our proposed image augmentation method improves accuracy, reaching 58.3% with only 15 real images, compared to the significantly lower accuracies of 49.8% with SD and 28% with no augmentation. These results indicate that our approach shows potential for augmenting data in low-resource settings by transferring knowledge from diverse classifiers.

**FID evaluation.** Class-conditioned image generation models may enjoy the benefit of eliminating ambiguity in generated images. However, datasets used to train these models are limited only to specific classes and so do not capture the wide variety of images depicting free-form text. In Tab. 3, we utilize the FID [17] to assess the quality of generated images with respect to the real datasets. Our evaluation shows that the text-conditioned method generates higher-quality images compared to a prior class-conditioned method (23.0 vs. 47.6). The text-conditioned method of SD, on the other hand, is limited by ambiguity issues and has difficulty depicting fine details.
Our proposed method provides a balance between generating high-quality images accurately and avoiding ambiguity issues.

### Qualitative assessment

In Fig. 4, we show various generated samples based on ImageNet classes. From left to right, our method reinforces distinctive features of the dog species, in particular the face. The distinct characteristics of the custard apple and slot images are highlighted with our method. For the mailbag, the 'mail' term appears to confuse SD into generating a mail-related image rather than a mailbag. More ambiguities arise from the term wheel in potter's wheel and from the kite class. In some cases, we only partially resolve ambiguity. For instance, beach wagon cars still appear on the beach, and it is still possible to see a kite bird that resembles a kite. Another interesting case is when the method adds another instance of the target class object, as is the case in the pug image.

Figure 6: Images using class-conditioned LDM, text-conditioned (SD), and our method with ImageNet classifier guidance.

Figure 7: Results with different prompts for three classes: (i) tiger cat, (ii) Japanese spaniel, and (iii) beach wagon.

In Fig. 5, we show images generated using labels from the iNat dataset. We sample 50 species from each and compare them using SD and our method. Our method corrects attributes such as patterns (e.g., (a) and (h)) and anatomical issues (e.g., (b), (c), (f)), and resolves lexical ambiguity (e.g., (b), (d), (g)). In Fig. 6, we show that images generated with class conditioning alone appear less natural despite being highly relevant to the class. Our approach allows us to benefit from the advantages of both worlds, producing high-quality images that are both precise and devoid of ambiguity. Another advantage of our approach over simple class conditioning [35] is the flexibility to use trained tokens with various prompts. In Fig. 7, we demonstrate that our discriminative tokens can be employed in different prompts, resulting in minimal semantic changes that primarily affect the object of interest that is relevant to the class.

**Face attributes.** Our method is not only capable of enhancing objects and animals. For example, we demonstrate that a classifier based on CelebFaces attributes [25] can be utilized to learn a token representing a facial attribute. We generate a facial image using the prompt "An image of a \(S_{c}\) person's face.". We optimized \(S_{c}\) using a classifier consisting of six convolutional layers followed by two fully connected layers. In Fig. 8, we present our results obtained by training with the guidance of the baldness and gender attributes. During training, we observed that the hair feature is more dominant for the 'Not bald' class. Interestingly, for the 'bald' class, the generated images depict old men, and identity is lost. This finding suggests that age may be a hidden factor in the training data. We explore the idea of revealing concepts in the training data further in the next paragraph.

**Classifier inversion.** Our method has the ability to invert the action of a classifier without access to its training data. For example, we often observe changes in the background when optimizing for an object's class. As Fig. 9 shows, applying our method with an ImageNet-trained classifier results in an image of a lobster on a plate given the phrase 'American lobster'. Another example is the 'horizontal bar' class, for which our method predominantly generates images containing athletes and a gym environment. We manually assessed ImageNet's training data by classifying 100 images from the 'American lobster' and 'horizontal bar' classes and determining whether they exhibit these features in the training data.
For the 'American lobster' class, 55% of the images featured a plate and the lobster in an edible form, and for the 'horizontal bar' class, 95% of the images included an athlete performer. In Fig. 9, we also present some instances from the training data that illustrate these characteristics. Nevertheless, interpreting the results needs to be done with caution. It is possible that this bias toward a certain type of image reflects one local minimum of our optimization process.

Figure 8: Demonstration of using discriminative tokens for the gender and bald attributes; ‘SD’ shows the initial generation.

Figure 9: Examples of revealed features from the training data when using an ImageNet classifier for guidance.

## 5 Conclusion

In this paper, we introduced a "plug-and-play" approach for rapidly fine-tuning text-to-image diffusion models using a discriminative signal. Our approach trains a new token without additional images, enhancing fine-grained details for classifiers pre-trained on datasets such as CUB and iNat and resolving lexical ambiguity. We have also demonstrated how our method can be used to distill generative image models to supplement datasets lacking imagery, edit faces based on an attribute classifier, and analyze hidden factors in the training data. Going forward, we aim to extend our approach to other model types beyond classification.

## 6 Acknowledgements

This project was supported by a grant from the Tel Aviv University Center for AI and Data Science (TAD). VS, SB and SB are supported by the Pioneer Centre for AI, DNRF grant number P1.
2309.02265
PESTO: Pitch Estimation with Self-supervised Transposition-equivariant Objective
In this paper, we address the problem of pitch estimation using Self Supervised Learning (SSL). The SSL paradigm we use is equivariance to pitch transposition, which enables our model to accurately perform pitch estimation on monophonic audio after being trained only on a small unlabeled dataset. We use a lightweight ($<$ 30k parameters) Siamese neural network that takes as inputs two different pitch-shifted versions of the same audio represented by its Constant-Q Transform. To prevent the model from collapsing in an encoder-only setting, we propose a novel class-based transposition-equivariant objective which captures pitch information. Furthermore, we design the architecture of our network to be transposition-preserving by introducing learnable Toeplitz matrices. We evaluate our model for the two tasks of singing voice and musical instrument pitch estimation and show that our model is able to generalize across tasks and datasets while being lightweight, hence remaining compatible with low-resource devices and suitable for real-time applications. In particular, our results surpass self-supervised baselines and narrow the performance gap between self-supervised and supervised methods for pitch estimation.
Alain Riou, Stefan Lattner, Gaëtan Hadjeres, Geoffroy Peeters
2023-09-05T14:20:08Z
http://arxiv.org/abs/2309.02265v1
# PESTO: PITCH ESTIMATION WITH SELF-SUPERVISED TRANSPOSITION-EQUIVARIANT OBJECTIVE

###### Abstract

In this paper, we address the problem of pitch estimation using Self-Supervised Learning (SSL). The SSL paradigm we use is equivariance to pitch transposition, which enables our model to accurately perform pitch estimation on monophonic audio after being trained only on a small unlabeled dataset. We use a lightweight (\(<\) 30k parameters) Siamese neural network that takes as inputs two different pitch-shifted versions of the same audio represented by its Constant-Q Transform. To prevent the model from collapsing in an encoder-only setting, we propose a novel class-based transposition-equivariant objective which captures pitch information. Furthermore, we design the architecture of our network to be transposition-preserving by introducing learnable Toeplitz matrices. We evaluate our model for the two tasks of singing voice and musical instrument pitch estimation and show that our model is able to generalize across tasks and datasets while being lightweight, hence remaining compatible with low-resource devices and suitable for real-time applications. In particular, our results surpass self-supervised baselines and narrow the performance gap between self-supervised and supervised methods for pitch estimation.

Alain Riou\({}^{1,2}\), Stefan Lattner\({}^{2}\), Gaetan Hadjeres\({}^{3}\), Geoffroy Peeters\({}^{1}\)

\({}^{1}\) LTCI, Telecom-Paris, Institut Polytechnique de Paris, France; \({}^{2}\) Sony Computer Science Laboratories - Paris, France; \({}^{3}\) Sony AI

[email protected]

## 1 Introduction

Pitch estimation is a fundamental task in audio analysis, with numerous applications, e.g. in Music Information Retrieval (MIR) and speech processing. It involves estimating the fundamental frequency of a sound, which allows estimating its perceived pitch.
Over the years, various techniques have been developed for pitch estimation, ranging from classical methods (based on signal processing) [1, 2, 3, 4] to machine learning approaches [5, 6]. In recent years, deep learning has emerged as a powerful tool for a wide range of applications, outperforming classical methods in many domains. This is notably true in MIR, where deep learning has led to significant advances in tasks such as music transcription [7, 8, 9], genre classification [10, 11, 12], and instrument recognition [13, 14, 15]. Pitch estimation has also benefited greatly from deep learning techniques [16, 17]. However, these deep learning models often require a large amount of labelled data to be trained and can be computationally expensive, hindering their practical application in devices with limited computing power and memory capabilities. Additionally, these models are often task-specific and may not generalize well to different datasets or tasks [18]. Therefore, there is a need for a lightweight and generic model that does not require labelled data to be trained. We address this here. We take inspiration from the equivariant pitch estimation [19] and equivariant tempo estimation [20] algorithms, which we describe in part 2. Like those, we use an SSL paradigm based on Siamese networks and equivariance to pitch transpositions (comparing two versions of the same sound that have been transposed by a random but known pitch shift). We introduce a new equivariance loss that enforces the model to capture pitch information specifically. This work has the following **contributions**:

* we formulate pitch estimation as a multi-class problem (part 3.1), while [19, 20] model pitch/tempo estimation as a regression problem;
* we propose a novel class-based equivariance loss (part 3.1) which prevents collapse, while [19] necessitates a decoder;
* the architecture of our model is lightweight and transposition-equivariant by design.
For this, we introduce Toeplitz fully-connected layers (part 3.4). We evaluate our method on several datasets and show that it outperforms self-supervised baselines on single pitch estimation (part 4.4.1). We demonstrate the robustness of our method to domain shift and background music, highlighting its potential for real-world applications (part 4.4.2). Our proposed method requires minimal computation resources and is thus accessible to a wide range of users for both research and musical applications. In consideration of accessibility and reproducibility, we make our code and pretrained models publicly available 1.

Footnote 1: [https://github.com/SonyCSLParis/pesto](https://github.com/SonyCSLParis/pesto)

## 2 Related Works

### SSL to learn invariant representations.

**Siamese networks.** Most common techniques for SSL representation learning involve Siamese networks [21]. The underlying idea is to generate two views of an input, feed them to a neural network, and train the network by applying a criterion between the output embeddings. Various techniques have been developed for generating views 2.

Footnote 2: The most common technique involves randomly applying data augmentations to inputs to create pairs of inputs that share semantic content.

**Collapse.** However, a major issue with these methods is "collapse", when all inputs are mapped to the same embedding. To address this, various techniques have been proposed. One of the most common is SimCLR [22], which also uses negative samples to ensure that embeddings are far apart through a contrastive loss. Additionally, several regularization techniques have been developed that minimize a loss over the whole batch. Barlow Twins [23] forces the cross-correlation between embeddings to be the identity, while VICReg [24] adds loss terms on the statistics of a batch to ensure that dimensions of the embeddings have high enough variance while remaining independent of each other.
On the other hand, [25] explicitly minimizes a loss over the hypersphere to distribute embeddings uniformly. Furthermore, incorporating asymmetry between inputs has been shown to improve performance. [26, 27] use a momentum encoder, while [28] and [29] add a projection head and a stop-gradient operator on top of the network, with [28] also using a teacher network. Finally, [30] incorporates asymmetry into contrastive- and clustering-based representation learning.

**Application to audio.** While originally proposed for computer vision, these methods have been successfully adapted to audio and music as well. For example, [31, 32], and [33] respectively adapted [22, 23], and [28] to the audio domain. By training their large models on AudioSet [34], they aim at learning general audio representations that are suited for many downstream tasks. More specifically, [35] successfully adapts contrastive learning to the task of music tagging by proposing more musically-relevant data augmentations.

### SSL to learn equivariant representations.

The purpose of the methods described above is to learn a mapping \(f:\mathcal{X}\rightarrow\mathcal{Y}\) that is _invariant_ to a set of transforms \(\mathcal{T}_{\mathcal{X}}\), i.e. such that for any input \(\mathbf{x}\in\mathcal{X}\) and transform \(t\in\mathcal{T}_{\mathcal{X}}\),

\[f(t(\mathbf{x}))\approx f(\mathbf{x}) \tag{1}\]

However, recent approaches [36, 37, 38] try instead to learn a mapping \(f\) that is _equivariant_ to \(\mathcal{T}_{\mathcal{X}}\), i.e. that satisfies

\[f(t(\mathbf{x}))\approx t^{\prime}(f(\mathbf{x})) \tag{2}\]

where \(t^{\prime}\in\mathcal{T}_{\mathcal{Y}}\), with \(\mathcal{T}_{\mathcal{Y}}\) a set of transforms that acts on the output space \(\mathcal{Y}\). In other words, if the input is transformed, the output should be transformed accordingly. Representation collapse is hence prevented by design.
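The distinction between Eqs. 1 and 2 can be made concrete with toy maps (both functions are illustrative, not from the paper): `f_inv` is invariant to permutations of its input, while `f_equiv` is equivariant to additive shifts, with the input transform \(t(x)=x+c\) corresponding to the output transform \(t'(y)=y+2c\).

```python
def f_inv(x):
    # Invariant map (Eq. 1): permuting the input leaves the output unchanged.
    return sorted(x)

def f_equiv(x):
    # Equivariant map (Eq. 2): shifting the input by c shifts the output by 2c,
    # i.e. t(x) = x + c corresponds to t'(y) = y + 2c.
    return [2.0 * v for v in x]
```

An invariant map discards the transform entirely, whereas an equivariant map preserves it in a predictable form, which is what makes the output usable for estimating the quantity the transform acts on.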
Equivariant representation learning has mostly been applied to computer vision and usually combines an invariance and an equivariance criterion. E-SSL [36] trains two projection heads on top of an encoder: one returns projections invariant to data augmentations, while the other predicts the parameters of the applied data augmentations. [37] predicts separately a semantic representation and a rotation angle of a given input and optimizes the network with a reconstruction loss applied to the decoded content representation rotated by the predicted angle. Finally, SIE [38] creates a pair of inputs by augmenting an input and learns equivariant representations by training a hypernetwork, conditioned on the parameters of the augmentation, to predict one embedding of the pair from the other.

**Application to audio.** Finally, a few successful examples of equivariant learning for solving MIR tasks emerged recently [19, 20]. In particular, [20] introduces a simple yet effective equivariance criterion for tempo estimation while preventing collapse without any decoder or regularization: pairs are created by time-stretching an input with two different ratios; the output embeddings are then linearly projected onto scalars, and the network is optimized to make the ratio of the scalar projections match the time-stretching ratio within a pair.

### Pitch estimation.

Monophonic pitch estimation has been a subject of interest for over fifty years [39]. The earlier methods typically obtain a pitch curve by processing a candidate-generating function such as the cepstrum [39], the autocorrelation function (ACF) [40], or the average magnitude difference function (AMDF) [41]. Other functions, such as the normalized cross-correlation function (NCCF) [1, 2] and the cumulative mean normalized difference function [3, 42], have also been proposed. On the other hand, [4] performs pitch estimation by predicting the pitch of the sawtooth waveform whose spectrum best matches the one of the input signal.
Recently, methods involving machine learning techniques have been proposed [5, 6]. In particular, CREPE [16] is a deep convolutional network trained on a large corpus to predict pitch from raw audio waveforms. SPICE [19] is a self-supervised method that takes as inputs individual Constant-Q Transform (CQT) frames of pitch-shifted inputs and learns the transposition between these inputs. It achieves quite decent results thanks to a decoder that takes the predicted pitch as input and tries to reconstruct the original CQT frame from it. Finally, some works [43, 44] aim at disentangling the pitch and timbre of an input audio, thus predicting pitch as a side effect. In particular, DDSP-inv [45] is a DDSP-based approach [46] that relies on inverse synthesis to infer pitch in a self-supervised way.

## 3 Self-supervised pitch estimation

### Transposition-equivariant objective

We focus on the problem of monophonic pitch estimation and model it as a classification task. Our model is composed of a neural network \(f_{\theta}\) that takes as input an audio signal \(\mathbf{x}\) and returns a vector \(\mathbf{y}=(y_{0},\ldots,y_{i},\ldots,y_{d-1})\in[0,1]^{d}\), which represents a probability distribution over pitches: \(y_{i}\) represents the probability that \(i\) is the pitch of \(\mathbf{x}\). We propose here to train \(f_{\theta}\) in an SSL way. For this, similarly to [22, 24, 28, 29, 26], we use data augmentations and Siamese networks. Given \(\mathbf{x}\), we first generate \(\mathbf{x}^{(k)}\) by pitch-shifting \(\mathbf{x}\) by a _known_ number \(k\) of semitones. Then, both \(\mathbf{x}\) and \(\mathbf{x}^{(k)}\) are fed to \(f_{\theta}\), which is trained to minimize a loss function between \(\mathbf{y}\!=\!f_{\theta}(\mathbf{x})\) and \(\mathbf{y}^{(k)}\!=\!f_{\theta}(\mathbf{x}^{(k)})\).
**Definition**.: _For two vectors \(\mathbf{y},\mathbf{y}^{\prime}\in\mathbb{R}^{d}\) and \(0\leq k<d\), \(\mathbf{y}^{\prime}\) is a \(k\)-transposition of \(\mathbf{y}\) if and only if for all \(0\leq i<d\)_

\[\begin{cases}y^{\prime}_{i+k}=y_{i}&\text{when }\;0\leq i<d-k\\ y^{\prime}_{i}=0&\text{when }\;i<k\\ y_{i}=0&\text{when }\;i\geq d-k\end{cases} \tag{3}\]

_Similarly, for \(-d<k\leq 0\), \(\mathbf{y}^{\prime}\) is a \(k\)-transposition of \(\mathbf{y}\) if and only if \(\mathbf{y}\) is a \(-k\)-transposition of \(\mathbf{y}^{\prime}\)._

The concept of \(k\)-transposition is illustrated in Figure 1. Note also that for a vector \(\mathbf{y}\in\mathbb{R}^{d}\), there exists at most one vector \(\mathbf{y}^{\prime}\in\mathbb{R}^{d}\) that is a \(k\)-transposition of \(\mathbf{y}\). We can therefore refer to \(\mathbf{y}^{\prime}\) as _the_ \(k\)-transposition of this vector \(\mathbf{y}\).

**Equivariance loss.** We then design our criterion based on the following assumption: the probability of \(\mathbf{x}\) having pitch \(i\) is equal to the probability of \(\mathbf{x}^{(k)}\) having pitch \(i+k\), i.e. \(y_{i}\) should be equal to \(y^{(k)}_{i+k}\)3. In other words, if \(\mathbf{x}^{(k)}\) is a pitch-shifted version of \(\mathbf{x}\), their respective pitch probability distributions should be shifted accordingly, i.e. \(\mathbf{y}^{(k)}\) should be the \(k\)-transposition of \(\mathbf{y}\).

Footnote 3: For example, if \(k=2\) semitones, the probability of \(\mathbf{x}\) being C4 is exactly the probability of \(\mathbf{x}^{(k)}\) being D4, and the same holds for any pitch, independently of the actual pitch of \(\mathbf{x}\).

We take inspiration from [20] to design our equivariance loss. However, in our case, the output of our network \(f_{\theta}\) is not a generic representation but a probability distribution.
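The definition translates directly into a shift-with-zero-padding operation on probability vectors; a sketch (`k_transposition` is an illustrative name):

```python
def k_transposition(y, k):
    # Shift the vector y by k bins (Eq. 3), zero-filling the vacated entries.
    # Mass shifted out of bounds is dropped, so the result is a valid
    # k-transposition only when y carries no mass in those bins.
    d = len(y)
    out = [0.0] * d
    for i in range(d):
        if 0 <= i + k < d:
            out[i + k] = y[i]
    return out
```

For negative `k` the same code shifts toward lower bins, matching the symmetric case of the definition.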
We therefore adapt our criterion by replacing the learnable linear projection head from [20] by the following _deterministic_ linear form:

\[\begin{array}{cccc}\phi&:&\mathbb{R}^{d}&\rightarrow&\mathbb{R}\\ &\mathbf{y}&\mapsto&(\alpha,\alpha^{2},\ldots,\alpha^{d})\mathbf{y}\end{array} \tag{4}\]

where \(\alpha\) is a fixed hyperparameter4.

Footnote 4: We found \(\alpha=2^{1/36}\) to work well in practice.

Indeed, with this formulation, for any \(k\), if \(\mathbf{y}^{\prime}\) is a \(k\)-transposition of \(\mathbf{y}\), then \(\phi(\mathbf{y}^{\prime})=\alpha^{k}\phi(\mathbf{y})\). Hence we define our loss as

\[\mathcal{L}_{\text{equiv}}(\mathbf{y},\mathbf{y}^{(k)},k)=h_{\tau}\left(\frac{\phi(\mathbf{y}^{(k)})}{\phi(\mathbf{y})}-\alpha^{k}\right) \tag{5}\]

where \(h_{\tau}\) is the Huber loss function [47], defined by

\[h_{\tau}(x)=\begin{cases}\frac{x^{2}}{2}&\text{if }|x|\leq\tau\\ \frac{\tau^{2}}{2}+\tau(|x|-\tau)&\text{otherwise}\end{cases} \tag{6}\]

**Regularization loss.** Note that if \(\mathbf{y}^{(k)}\) is the \(k\)-transposition of \(\mathbf{y}\), then \(\mathcal{L}_{\text{equiv}}(\mathbf{y},\mathbf{y}^{(k)},k)\) is minimal. However, the converse is not always true. In order to actually enforce pitch-shifted pairs of inputs to lead to \(k\)-transpositions, we further add a regularization term, which is simply the shifted cross-entropy (SCE) between \(\mathbf{y}\) and \(\mathbf{y}^{(k)}\), i.e. the cross-entropy between the \(k\)-transposition of \(\mathbf{y}\) and \(\mathbf{y}^{(k)}\):

\[\mathcal{L}_{\text{SCE}}(\mathbf{y},\mathbf{y}^{(k)},k)=-\sum_{i=0}^{d-1}y_{i}\log\left(y^{(k)}_{i+k}\right) \tag{7}\]

with the out-of-bounds indices replaced by 0. The respective contributions of \(\mathcal{L}_{\text{equiv}}\) and \(\mathcal{L}_{\text{SCE}}\) are studied in part 4.4.3.
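Eqs. 4-6 can be checked numerically on a one-hot toy vector, using the paper's \(\alpha=2^{1/36}\) (the function names are illustrative): shifting the vector by \(k\) bins multiplies \(\phi\) by exactly \(\alpha^{k}\), so the equivariance loss vanishes.

```python
ALPHA = 2 ** (1 / 36)  # the paper's choice of alpha (footnote 4)

def phi(y):
    # Deterministic projection (Eq. 4): phi(y) = sum_i alpha^(i+1) * y_i
    return sum(ALPHA ** (i + 1) * v for i, v in enumerate(y))

def huber(x, tau=1.0):
    # Huber loss h_tau (Eq. 6)
    if abs(x) <= tau:
        return 0.5 * x * x
    return 0.5 * tau * tau + tau * (abs(x) - tau)

def equiv_loss(y, y_k, k, tau=1.0):
    # L_equiv (Eq. 5): phi(y_k) / phi(y) should equal alpha^k
    return huber(phi(y_k) / phi(y) - ALPHA ** k, tau)
```

Because \(\phi\) weights bin \(i\) by \(\alpha^{i+1}\), a translation of the distribution by \(k\) bins scales the projection geometrically, which is what makes the scalar ratio a clean supervision signal.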
**Invariance loss.**\(\mathcal{L}_{\text{equiv}}\) and \(\mathcal{L}_{\text{SCE}}\) allow our model to learn the relative transpositions between inputs and to output probability distributions \(\mathbf{y}\) and \(\mathbf{y}^{(k)}\) that satisfy the equivariance constraints. However, these distributions may still depend on the timbre of the signal. This is because, during training, our model never observes two different samples with the same pitch at the same time. To circumvent this, we rely on a set \(\mathcal{T}\) of data augmentations that preserve pitch (such as gain or additive white noise). We create augmented views \(\tilde{\mathbf{x}}=t(\mathbf{x})\) of our inputs \(\mathbf{x}\) by applying random transforms \(t\sim\mathcal{T}\). Similarly to [35], we then train our model to be invariant to those transforms by minimizing the cross-entropy between \(\mathbf{y}=f_{\theta}(\mathbf{x})\) and \(\tilde{\mathbf{y}}=f_{\theta}(\tilde{\mathbf{x}})\): \[\mathcal{L}_{\text{inv}}(\mathbf{y},\tilde{\mathbf{y}})=\text{CrossEntropy}(\mathbf{y},\tilde{\mathbf{y}}) \tag{8}\] **Combining the losses.** For a given input sample \(\mathbf{x}\) and a given set of augmentations \(\mathcal{T}\), * we first compute \(\mathbf{x}^{(k)}\) by pitch-shifting \(\mathbf{x}\) by a random number of bins \(k\) (the precise procedure is described in section 3.2); * we then generate two augmented views \(\tilde{\mathbf{x}}=t_{1}(\mathbf{x})\) and \(\tilde{\mathbf{x}}^{(k)}=t_{2}(\mathbf{x}^{(k)})\), where \(t_{1},t_{2}\sim\mathcal{T}\); * we compute \(\mathbf{y}\!=\!f_{\theta}(\mathbf{x})\), \(\tilde{\mathbf{y}}\!=\!f_{\theta}(\tilde{\mathbf{x}})\) and \(\tilde{\mathbf{y}}^{(k)}\!=\!f_{\theta}(\tilde{\mathbf{x}}^{(k)})\).
Our final objective loss is then: \[\begin{split}\mathcal{L}(\mathbf{y},\tilde{\mathbf{y}},\tilde{\mathbf{y}}^{(k)},k)&=\lambda_{\text{inv}}\;\;\mathcal{L}_{\text{inv}}(\mathbf{y},\tilde{\mathbf{y}})\\ &+\lambda_{\text{equiv}}\;\;\mathcal{L}_{\text{equiv}}(\tilde{\mathbf{y}},\tilde{\mathbf{y}}^{(k)},k)\\ &+\lambda_{\text{SCE}}\;\;\mathcal{L}_{\text{SCE}}(\tilde{\mathbf{y}},\tilde{\mathbf{y}}^{(k)},k)\end{split} \tag{9}\] We illustrate this in Figure 2. To set the weights \(\lambda_{*}\) we use the gradient-based method proposed by [48, 49, 50]. Figure 1: Example of \(k\)-transpositions. Visually, \(\mathbf{y}\) and \(\mathbf{y}^{\prime}\) are just translated versions of each other. The sign of \(k\) and its absolute value respectively indicate the direction and the distance of the translation. ### Audio-frontend The inputs \(\mathbf{x}\) are the individual frames of the CQT. We have chosen the CQT as input since its logarithmic frequency scale, in which each bin corresponds to a fixed fraction \(1/b\) of a semitone, naturally leads to pitch-shifting by translation. The CQT is also a common choice for pitch estimation [17, 19, 51]. To compute the CQT, we use the implementation provided in the nnAudio library [52] since it supports parallel GPU computation. We choose \(f_{\min}=27.5\) Hz, which is the frequency of A0, the lowest key of the piano, and select a resolution of \(b=3\) bins per semitone. Our CQT has in total \(K=99b\) log-frequency bins, which corresponds to the maximal number of bins for a 16kHz signal. ### Simulating translations To avoid any boundary effects, we perform pitch-shifting by cropping shifted slices of the original CQT input frame, as in [19]5. From a computational point of view, this is significantly faster than applying classical pitch-shift algorithms based on phase vocoder and resampling.
Footnote 5: Specifically, we sample an integer \(k\) uniformly from the range \(\{-k_{\max},\ldots,k_{\max}\}\), then generate two CQT outputs, denoted as \(x\) and \(x^{(k)}\), where \(x\) is obtained by cropping the input CQT at indices \([k_{\max},K-k_{\max}-1]\), and \(x^{(k)}\) is obtained by cropping the input CQT at indices \([k_{\max}-k,K-k_{\max}+k-1]\), with \(K\) the total number of bins of the original CQT frame and \(k_{\max}=16\) in practice (see Figure 2). ### Transposition-preserving architecture The architecture of \(f_{\theta}\) is illustrated in Figure 3. It is inspired by [17]. Each input CQT frame is processed independently: it is first layer-normed [53], then preprocessed by two 1D-Conv layers (convolutions in the log-frequency dimension) with skip-connections [54], followed by four 1D-Conv layers. As in [17], we apply a leaky-ReLU non-linearity (slope 0.3) [55] and dropout (rate 0.2) [56] between consecutive convolutional layers. Importantly, the kernel size and padding of each of these layers are chosen so that the frequency resolution is never reduced. We found in practice that this helps the model distinguish close but different pitches. The output is then flattened, fed to a final fully-connected layer and normalized by a softmax layer to become a probability distribution of the desired shape. Note that all layers (convolutions6, elementwise nonlinearities, layer-norm and softmax), except the final fully-connected layer, preserve transpositions. To make the final fully-connected layer also transposition-equivariant, we propose to use **Toeplitz fully-connected layers**. Such a layer simply consists of a standard linear layer without bias whose weight matrix \(A\) is a Toeplitz matrix, i.e. each of its diagonals is constant.
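As an illustration, the Toeplitz weight matrix of such a layer can be built from its \(m+n-1\) free parameters as follows (a minimal numpy sketch; in the actual model this is a learnable PyTorch layer, and function and variable names here are ours):

```python
import numpy as np

def toeplitz_matrix(params, m, n):
    """Build the m-by-n Toeplitz matrix of eq. (10) from its m + n - 1
    parameters (a_{-n+1}, ..., a_0, ..., a_{m-1}), stored in that order."""
    assert len(params) == m + n - 1
    A = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            A[i, j] = params[(i - j) + n - 1]  # entry a_{i-j}: diagonals are constant
    return A
```

Since \(A_{ij}=a_{i-j}\), applying \(A\) is the same as convolving with the parameter vector (keeping \(m\) central outputs), which is why such layers preserve transpositions.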
Footnote 6: Convolutions roughly preserve transpositions since the kernels are applied locally, meaning that if two transposed inputs are convolved with the same kernel, then the outputs will be almost transpositions of each other as well. \[A=\left(\begin{array}{cccccc}a_{0}&a_{-1}&a_{-2}&\cdots&a_{-n+2}&a_{-n+1}\\ a_{1}&a_{0}&a_{-1}&\ddots&\ddots&a_{-n+2}\\ a_{2}&a_{1}&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ a_{m-1}&\cdots&\cdots&\cdots&\cdots&a_{m-n}\end{array}\right) \tag{10}\] Contrary to arbitrary fully-connected layers, Toeplitz matrices are transposition-preserving operations and only have \(m+n-1\) parameters instead of \(mn\). Furthermore, they are mathematically equivalent to convolutions, making them straightforward to implement. Figure 3: Architecture of our network \(f_{\theta}\). The number of channels varies between the intermediate layers; however, the frequency resolution remains unchanged until the final Toeplitz fully-connected layer. Figure 2: Overview of the PESTO method. The input CQT frame (log-frequencies) is first cropped to produce a pair of pitch-shifted inputs \((\mathbf{x},\mathbf{x}^{(k)})\). Then we compute \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{(k)}\) by randomly applying pitch-preserving transforms to the pair. We finally pass \(\mathbf{x}\), \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{(k)}\) through the network \(f_{\theta}\) and optimize the loss between the predicted probability distributions. ### Absolute pitch inference from \(\mathbf{y}\) Our encoder \(f_{\theta}\) returns a probability distribution over (quantized) pitches.
From an input CQT frame \(\mathbf{x}\), we first compute the probability distribution \(f_{\theta}(\mathbf{x})\), then we infer the absolute pitch \(\hat{p}\) by applying the affine mapping: \[\hat{p}(\mathbf{x})=\frac{1}{b}\left(\arg\max f_{\theta}(\mathbf{x})+p_{0}\right) \tag{11}\] where \(b=3\) is the number of bins per semitone in the CQT and \(p_{0}\) is a fixed integer shift that only depends on \(f_{\theta}\). As in [19], we set the integer shift \(p_{0}\) by relying on a set of synthetic data7 with known pitch. Footnote 7: synthetic harmonic signals with random amplitudes and pitches ## 4 Experiments ### Datasets To evaluate the performance of our approach, we consider the two following datasets: 1. _MIR-1K_ [57] contains 1000 tracks (about two hours) of people singing Chinese pop songs, with separate vocal and background music tracks provided. 2. _MDB-stem-synth_ [58] contains re-synthesized monophonic music played by various instruments. The pitch range of _MDB-stem-synth_ is wider than that of _MIR-1K_. The two datasets have different sampling rates and annotation granularities. We conduct separate model training and evaluation on both datasets to measure overfitting and generalization performance. In fact, given that our model is lightweight and does not require labelled data, overfitting performance is particularly relevant for real-world scenarios, as it is easy for someone to train on their own dataset, e.g. their own voice. However, we also examine generalization performance through cross-evaluation to ensure that the model truly captures the underlying concept of pitch and does not merely memorize the training data. ### Training details From an input CQT (see part 3.2), we first compute the pitch-shifted CQT (see part 3.3). Then two random data augmentations \(t_{1},t_{2}\sim\mathcal{T}\) are applied with a probability of 0.7.
We use white noise with a random standard deviation between 0.1 and 2, and gain with a random value picked uniformly between -6 and 3 dB. The overall architecture of \(f_{\theta}\) (see part 3.4) is implemented in PyTorch [59]. For training, we use a batch size of 256 and the Adam optimizer [60] with a learning rate of \(10^{-4}\) and default parameters. The model is trained for 50 epochs using a cosine annealing learning rate scheduler. Our architecture being extremely lightweight, training requires only 545MB of GPU memory and can be performed on a single GTX 1080Ti. ### Performance metrics We measure performance using the following metrics. 1. _Raw Pitch Accuracy_ (RPA): the percentage of voiced frames whose pitch error8 is less than 0.5 semitone [61]. 2. _Raw Chroma Accuracy_ (RCA): same as RPA but considering the mapping to chroma (hence allowing octave errors) [61]. Footnote 8: i.e. the distance between the predicted pitch and the actual one RCA is only used in our ablation studies. ### Results and discussions #### 4.4.1 Clean signals We compare our results with three baselines: CREPE [16], SPICE [19] and DDSP-inv [45]. CREPE is fully-supervised while SPICE and DDSP-inv are two SSL approaches. To measure the influence of the training set, we train PESTO on the two datasets (_MIR-1K_ and _MDB-stem-synth_) and also evaluate on both. This allows us to test model generalization. We indicate the results in Table 1. We see that PESTO significantly outperforms the two SSL baselines (SPICE and DDSP-inv) even in the cross-dataset scenario (93.5% and 94.6%). Moreover, it is competitive with CREPE (-1.7% and -1.2%), which has 750 times more parameters and is trained in a supervised way on the same datasets.
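For reference, the RPA and RCA metrics above can be sketched as follows (a minimal sketch under our own conventions: inputs are the pitches of voiced frames, expressed in fractional semitones; reference implementations, operating in cents and handling voicing decisions, are provided by the mir_eval library):

```python
import numpy as np

def raw_pitch_accuracy(pred, true, tol=0.5):
    """RPA: fraction of voiced frames whose pitch error is below tol semitones."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.abs(pred - true) < tol))

def raw_chroma_accuracy(pred, true, tol=0.5):
    """RCA: like RPA, but errors are mapped to chroma so that octave
    (12-semitone) errors are forgiven."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = np.abs(pred - true) % 12.0
    err = np.minimum(err, 12.0 - err)  # distance on the chroma circle
    return float(np.mean(err < tol))
```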
#### 4.4.2 Robustness to background music Background noise and music can severely impact pitch estimation algorithms, making it imperative to develop robust methods that can handle real-world scenarios where background noise is often unavoidable. We therefore test the robustness of PESTO to background music. For this, we use the _MIR-1K_ dataset, which contains separated vocal and background tracks and allows testing various signal-to-noise (here vocal-to-background) ratios (SNRs). \begin{table} \begin{tabular}{l c c c c} \hline \hline & & & \multicolumn{2}{c}{Raw Pitch Accuracy} \\ Model & \# params & Trained on & _MIR-1K_ & _MDB-stem-synth_ \\ \hline SPICE [19] & 2.38M & private data & 90.6\% & 89.1\% \\ DDSP-inv [45] & - & _MIR-1K_ / _MDB-stem-synth_ & 91.8\% & 88.5\% \\ \hline PESTO (ours) & 28.9k & _MIR-1K_ & **96.1\%** & 94.6\% \\ PESTO (ours) & 28.9k & _MDB-stem-synth_ & 93.5\% & **95.5\%** \\ \hline CREPE [16] & 22.2M & many (supervised) & 97.8\% & 96.7\% \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results of PESTO compared to supervised and self-supervised baselines. CREPE has been trained in a supervised way on a huge dataset containing in particular _MIR-1K_ and _MDB-stem-synth_. It is grayed out as a reference. For DDSP-inv, we report the results when training and evaluating on the same dataset. We indicate the results in Table 2. As expected, the performance of PESTO when trained on clean vocals (row \(\beta=0\)) and applied to vocals with background drops considerably: from 94.8% (clean) to 50.0% (SNR = 0 dB)9. Footnote 9: It should be noted that the difference between the \(96.1\%\) of Table 1 and the 94.8% of Table 2 is due to the fact that we do not apply any data augmentation (gain or additive white noise) when \(\beta=0\). To improve the robustness to background music, we slightly modify our method to train our model on mixed sources.
Instead of using gain and white noise as data augmentations, we create an augmented view of our original vocal signal \(\mathbf{x}_{\text{vocals}}\) by mixing it (in the complex-CQT domain) with its corresponding background track \(\mathbf{x}_{\text{background}}\): \[\mathbf{x}=\mathbf{x}_{\text{vocals}}+\beta\mathbf{x}_{\text{background}} \tag{12}\] Then, thanks to \(\mathcal{L}_{\text{inv}}\), the model is trained to ignore the background music when making its predictions. The background level \(\beta\) is randomly sampled for each CQT frame. The influence of the distribution we sample \(\beta\) from is depicted in Table 2. This method significantly limits the drop in performance observed previously and also makes PESTO outperform SPICE in noisy conditions. #### 4.4.3 Ablation study Table 3 depicts the influence of our different design choices. First, we observe that the equivariance loss \(\mathcal{L}_{\text{equiv}}\) and the final Toeplitz fully-connected layer (eq. (10)) are absolutely essential for our model not to collapse. Moreover, data augmentations seem to have a negligible influence on out-of-domain RPA (-0.2%) but slightly help when training and evaluating on the same dataset (+1.2%). On the other hand, it appears that \(\mathcal{L}_{\text{inv}}\) and \(\mathcal{L}_{\text{SCE}}\) do not improve in-domain performance but help the model generalize better. This is especially true for \(\mathcal{L}_{\text{SCE}}\), whose addition improves RPA from 86.9% to 94.6% on _MDB-stem-synth_. Finally, judging from the drop in RPA and RCA when removing \(\mathcal{L}_{\text{inv}}\), it seems that the invariance loss prevents octave errors on the out-of-domain dataset. ## 5 Conclusion In this paper, we presented a novel self-supervised learning method for pitch estimation that leverages equivariance to musical transpositions.
We propose a class-based equivariant objective that enables Siamese networks to capture pitch information from pairs of transposed inputs accurately. We also introduce a Toeplitz fully-connected layer to the architecture of our model to facilitate the optimization of this objective. Our method is evaluated on two standard benchmarks, and the results show that it outperforms self-supervised baselines and is robust to background music and domain shift. From a musical perspective, our lightweight model is well-suited for real-world scenarios, as it can run on resource-limited devices without sacrificing performance. Moreover, its SSL training procedure makes it convenient to fine-tune on a small unlabeled dataset, such as a specific voice or instrument. Additionally, the resolution of the model is a sixth of a tone but could eventually be increased by changing the resolution of the CQT. Moreover, despite modelling pitch estimation as a classification problem, we make no assumption about scale or temperament. These features make our method still a viable solution, e.g. for instruments that use quartertones and/or for which no annotated dataset exists. We therefore believe that it has many applications even beyond the limitations of Western music. Overall, the idea of using equivariance to solve a classification problem is a novel and promising approach that enables the direct return of a probability distribution over the classes with a single, potentially synthetic, labelled element. While our paper applies this approach to pitch estimation, there are other applications where this technique could be useful, such as tempo estimation. Moreover, modelling a regression task as a classification problem can offer greater interpretability as the output of the network is not a single scalar but a whole probability distribution. Finally, it can generalize better to multi-label scenarios. 
Our proposed method hence demonstrates the potential of using equivariance to solve problems that are beyond the scope of our current work. In particular, it paves the way towards self-supervised multi-pitch estimation. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Raw Pitch Accuracy (_MIR-1K_)} \\ Model & clean & 20 dB & 10 dB & 0 dB \\ \hline SPICE [19] & 91.4\% & 91.2\% & 90.0\% & 81.6\% \\ \hline PESTO & & & & \\ \(\beta=0\) & **94.8\%** & 90.7\% & 79.2\% & 50.0\% \\ \(\beta=1\) & 94.5\% & 94.2\% & 92.9\% & **83.1\%** \\ \(\beta\sim\mathcal{U}(0,1)\) & 94.7\% & 94.4\% & 92.9\% & 81.7\% \\ \(\beta\sim\mathcal{N}(0,1)\) & **94.8\%** & **94.5\%** & **93.0\%** & 82.6\% \\ \(\beta\sim\mathcal{N}(0,\frac{1}{2})\) & **94.8\%** & **94.5\%** & 92.9\% & 81.0\% \\ \hline CREPE [16] & 97.8\% & 97.3\% & 95.3\% & 84.8\% \\ \hline \hline \end{tabular} \end{table} Table 2: Robustness of PESTO and other baselines to background music with various signal-to-noise ratios. Adding background music to training samples significantly improves the robustness of PESTO (see section 4.4.2). \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{MIR-1K} & \multicolumn{2}{c}{MDB} \\ & RPA & RCA & RPA & RCA \\ \hline PESTO baseline & 96.1\% & 96.4\% & 94.6\% & 95.0\% \\ \hline \multicolumn{5}{l}{_Loss ablations_} \\ w/o \(\mathcal{L}_{\text{equiv}}\) & 5.8\% & 8.6\% & 1.3\% & 6.1\% \\ w/o \(\mathcal{L}_{\text{inv}}\) & 96.1\% & 96.4\% & 92.5\% & 94.5\% \\ w/o \(\mathcal{L}_{\text{SCE}}\) & 96.1\% & 96.5\% & 86.9\% & 93.8\% \\ \hline \multicolumn{5}{l}{_Miscellaneous_} \\ no augmentations & 94.8\% & 95.4\% & 94.8\% & 95.2\% \\ non-Toeplitz fc & 5.7\% & 8.7\% & 1.2\% & 6.1\% \\ \hline \hline \end{tabular} \end{table} Table 3: Respective contribution of various design choices of PESTO for a model trained on _MIR-1K_. ## 6 Acknowledgements This work has been funded by the ANRT CIFRE convention n\({}^{\circ}\)2021/1537 and Sony France.
This work was granted access to the HPC/AI resources of IDRIS under the allocation 2022-AD011013842 made by GENCI. We would like to thank the reviewers and meta-reviewer for their valuable and insightful comments.
arXiv:2303.07346 (http://arxiv.org/abs/2303.07346v2)
Helene Wetter, Michael Fleischhauer, Stefan Linden, Julian Schmitt
Published 2023-03-13
# Observation of a topological edge state stabilized by dissipation

###### Abstract

Robust states emerging at the boundary of a system constitute a hallmark for topological band structures. Other than in closed systems, topologically protected states can occur even in systems with a trivial band structure, if exposed to suitably modulated losses. Here, we study the dissipation-induced emergence of a topological band structure in a non-Hermitian one-dimensional lattice system, realized by arrays of plasmonic waveguides with tailored loss. We obtain direct evidence for a topological edge state that resides in the center of the band gap. By tuning dissipation and hopping, the formation and breakdown of an interface state between topologically distinct regions is demonstrated.

Topology is an important paradigm for our understanding of phases of matter [1], with the quantum Hall effect constituting a prominent example of a topological system isolated from the environment [2]. Interfacing materials with distinct topological properties has remarkable implications, leading to localized edge states at the boundary, which, due to their robustness against disorder, are considered as valuable resource states for quantum technologies [3]. Conceptually, the robustness results from the existence of global integer-valued invariants, which can only change in a phase transition associated with the closing of a gap. Inspired by solid-state systems, topological states in closed Hermitian systems have been experimentally realized in a wide range of platforms, such as ultracold atoms or photonics [4, 5, 6]. Exploring topological phenomena in open systems presents a complementary approach to realize robust edge states, where the coupling between the system and the environment (_e.g._ by pumping or dissipation of particles) acts as a resource rather than a limitation.
Starting from the prediction of topological transitions in non-Hermitian quantum walks [7], conceptual questions about the classification of open-system topological phases for non-Hermitian and Lindbladian settings [8, 9, 10, 11, 12], the role of topological invariants and edge states [13, 14, 15, 16, 17], and the band theory [18, 19] have been addressed theoretically. Experimentally, non-Hermitian systems have been realized in photonics, where driven-dissipative effects can be engineered [20]. Combining topologically nontrivial photonic crystals with gain or loss has allowed for the observation of topological (lasing) states in waveguides [21, 22], resonator arrays [23, 24] and exciton-polaritons [25, 26]. Topological protection is here, however, inherited from the photonic band structure, and not from a coupling to reservoirs. The implementation of topological phases that solely arise from non-Hermiticity and lack a Hermitian counterpart, as proposed in refs. [8, 9, 27, 28] and realized with mechanical metamaterials [29], acoustic cavities [30], and electrical circuits [31], has so far remained elusive for optical systems. In this Letter, we report measurements of light-matter states with nontrivial topological properties, solely induced by tailored dissipation. Using surface plasmon polaritons (SPPs) confined in waveguide arrays, we obtain signatures for open-system topological edge states by identifying zero-energy modes localized at the boundary of the sample. The underlying one-dimensional (1D) lattice with a 4-site unit cell realizes a non-Hermitian extension of the paradigmatic SSH model [32], despite uniform hopping throughout the lattice. By tuning dissipation and hopping, the birth and death of a non-Hermitian topologically protected edge state is demonstrated.

Fig. 1: Experimental scheme.
(a) Lattice system with nontrivial topology induced solely by dissipation (top) and experimental realization with DLSPP waveguides spaced by distance \(d\), where losses are induced by Chromium stripes of width \(w\) (bottom). (b) Complex-valued energy spectrum for 40 lattice sites with \(g_{1}=|g_{2}|\). For \(g_{2}<0\), the system is topologically trivial and the probability density \(|\Psi|^{2}\) is concentrated in the first two lattice sites (top left). In the topologically nontrivial regime (\(g_{2}>0\)), \(|\Psi|^{2}\) is exponentially localized at the edges (top right) and associated with midgap states at zero energy (red lines). (c) _Left_: Waveguide sample with Chromium stripes (dark gray) arranged to realize trivial (topological) domains in the left (right) sample half. The Chromium-free region on top is used to excite the waveguides by grating coupling of laser light. _Right_: Measured dissipation in a single waveguide \(\mathrm{Im}\beta\) versus \(w\) and hopping \(J\) between two waveguides versus \(d\), along with fits (lines).

The basic principle of our topological system, see Fig. 1(a), relies on a 1D lattice with spatially uniform nearest-neighbor hopping \(J\) and spatially varying dissipation at the lattice sites [27, 28]. The unit cell consists of 4 sites spaced by \(d\), which are subject to a gain-loss pattern \((ig_{1},-ig_{2},-ig_{1},ig_{2})\) with real-valued dimensionless amplitudes \(g_{1,2}\). The Bloch Hamiltonian of the open system at wave vector \(k_{x}\), \[\hat{H}_{k_{x}}=J\begin{pmatrix}ig_{1}&1&0&e^{-4ik_{x}d}\\ 1&-ig_{2}&1&0\\ 0&1&-ig_{1}&1\\ e^{4ik_{x}d}&0&1&ig_{2}\end{pmatrix}-iJg_{0}\,\mathbb{1}, \tag{1}\] is a non-Hermitian matrix, _i.e._, \(\hat{H}\neq\hat{H}^{\dagger}\), with complex energy eigenvalues \(E\). Here, \(g_{0}>0\) accounts for a loss at all lattice sites, which governs our implementation using purely dissipative waveguides, see Fig. 1(a) (bottom).
Due to the global loss, the steady state here is the trivial vacuum. Nevertheless, as we will demonstrate in our experiments, the system possesses a nontrivial _non-equilibrium_ topology, protected by the symmetries of the Liouvillian \(\mathcal{L}\) that governs the dynamics of the density matrix \(\rho\), according to \(\dot{\rho}=\mathcal{L}\rho\) [12]. The dynamical generators fulfill time-reversal (T), charge-conjugation (C), and chiral (S) symmetries [33], characterizing the class BDI [1], which can have a nontrivial topology in 1D. The topological character of the 4-site model becomes apparent when considering the energy band structure and eigenstates in a finite-length lattice for different loss patterns, as shown in Fig. 1(b); for simplicity, we consider the symmetric case \(g_{1}=|g_{2}|\) and \(g_{0}=|g_{2}|\) with \(g_{2}\) as the control parameter. In the different parameter regimes, \(\mathrm{diag}(\hat{H})=(0,0,0,0)\) for \(g_{2}=0\) (lossless trivial, I), \(\mathrm{diag}(\hat{H})=-2iJ|g_{2}|(0,0,1,1)\) for \(g_{2}<0\) (dissipative trivial, II), and \(\mathrm{diag}(\hat{H})=-2iJg_{2}(0,1,1,0)\) for \(g_{2}>0\) (topologically nontrivial, III), respectively (blue and red circles in Fig. 1(b)). Phase (I) exhibits a metal-like gapless band structure with probability densities \(|\Psi|^{2}\) delocalized in the bulk, while the band structure in phase (II) is gapped. In striking contrast, phase (III) features two midgap states at zero energy in \(\mathrm{Re}E\) (red lines), which are localized at the boundary of the system and decay exponentially into the bulk. Due to the dissipative nature of the system, \(\mathrm{Im}E<0\) for all states.
Note that while there is not yet a general understanding of non-equilibrium invariants for density matrices, a topological number for the non-Hermitian system has been identified theoretically as a global Berry phase, which takes discrete values \(W=0\) or \(1\) for \(g_{1}g_{2}<0\) or \(g_{1}g_{2}>0\) in phases (II) or (III), respectively [27, 14]. Based on the non-equilibrium symmetries and the above mentioned spectral and spatial signatures, topological states solely induced by dissipation rather than Hermitian band engineering are theoretically expected in our system. To experimentally investigate the topological properties of the non-Hermitian lattice system, we utilize SPPs confined in evanescently coupled arrays of dielectric loaded SPP waveguides (DLSPPW) with tailored losses. The samples are fabricated by two-step electron beam lithography [33, 34]. Figure 1(a) (bottom) outlines a typical waveguide structure, realized by depositing PMMA ridges of about 200nm width and center-to-center spacing \(d\) on top of a glass substrate, previously coated with a low-absorption 60nm-thin Gold layer to host the plasmonic part of the polaritons. Losses at individual lattice sites are introduced and controlled by adding Chromium stripes of variable width \(w\) below the ridges. An exemplary sample containing two different loss patterns in the unit cells, corresponding to the topologically trivial (II) and nontrivial (III) region, is shown in Fig. 1(c). The SPP evolution in the array is excited with a 980nm laser at \(15\,\mathrm{\SIUnitSymbolMicro W}\) optical power and characterized by leakage radiation microscopy [33, 6]. By varying \(w\) and \(d\), both the additional absorption \(\mathrm{Im}\beta\) from Chromium and the hopping \(J\) (both in units \(\mathrm{\SIUnitSymbolMicro m}^{-1}\)) can be accurately controlled, see Fig. 1(c) (right); here \(\beta\) denotes the complex-valued propagation constant in a single waveguide [6]. In terms of eq. 
(1), the dissipation parameters follow as \(g_{2}=\mathrm{Im}\beta/(2J)\). The shown data has been obtained by measuring the decay of the intensity in a single waveguide and by recording the coherent oscillation between two adjacent waveguides in separate samples, respectively.

Figure 2: SPP evolution for metal-like (left, I), dissipative topologically trivial (middle, II) and dissipation-induced topological domains (right, III). (a) Real-space intensity distribution for injection at the edge (top) and at the first site of a unit cell in the bulk (bottom), for a sample of 48 waveguides spaced by \(d=1.4\,\mu\mathrm{m}\), corresponding to \(J=0.045(3)\,\mu\mathrm{m}^{-1}\). Solid and dashed lines indicate sample boundaries and unit cells. For the used \(w=0.7\,\mu\mathrm{m}\) in (II) and (III), \(\mathrm{Im}\beta=0.1\,\mu\mathrm{m}^{-1}\) and \(g_{1}=|g_{2}|=1.1\). (b) Simulated intensity evolution in the arrays from (a) with \(J=0.045\,\mu\mathrm{m}^{-1}\) and \(\mathrm{Im}\beta=0.1\,\mu\mathrm{m}^{-1}\) (\(0.01\,\mu\mathrm{m}^{-1}\)) for the lossy [red in (a)] and low-loss [blue in (a)] waveguides, in good agreement with the experimental results. (c) Momentum-resolved energy spectra with \(\cos\)-shaped band for bulk excitation, band gap in the trivial and flat band in the topological domains, respectively, for edge excitation, along with theory prediction (top).

First, we study the evolution of the SPPs upon injecting a wave packet at the edge and in the bulk of the waveguide arrays, respectively, for three loss patterns according to phases (I) (metal-like), (II) (dissipative trivial), and (III) (dissipative topological). Figure 2(a) shows the real-space SPP intensity distributions obtained by imaging the leakage radiation. For phase (I) (Fig. 2(a), left column), the SPP evolution mimics a two-state quantum walk of a particle in a periodic potential.
This metal-like behaviour is highlighted by a conical diffusion of the signal along with a characteristic interference pattern when injecting the wave packet in the bulk; upon injection at the edge, the wave packet simply propagates in the \(-x\) direction. With losses as in the topologically trivial phase (II) (Fig. 2(a), middle), the diffusion is inhibited and an oscillation of the intensity between two neighboring low-loss waveguides is observed for both excitation protocols. In contrast, with losses as in the topologically nontrivial phase (III) (Fig. 2(a), right), the edge excitation reveals, apart from the overall damping, a quasi-stationary intensity evolution that remains locked to the outermost waveguide. This localization occurs at the edge only, as understood from the corresponding probability density \(|\Psi|^{2}\) shown in Fig. 1(b) (top), while excitation in the bulk reveals the same oscillatory dynamics as in phase (II), except for a phase shift due to the neighboring low-loss waveguide now lying above the excited one. The phenomenology observed in the measured real-space intensity is in good agreement with numerical simulations based on coupled mode theory [33], see Fig. 2(b), giving conclusive evidence that the non-Hermitian model is well-captured by our DLSPPW platform. Figure 2(c) shows the momentum-resolved occupation of the energy bands within the first two Brillouin zones from \(k_{x}=-2\pi/d\) to \(2\pi/d\), as obtained by recording the leakage radiation in the back-focal Fourier plane of the microscope objective. In the metal-like phase (I) and for bulk excitation, the spectrum matches the expected \(\cos\)-shaped energy band that complements the independently observed ballistic transport in real space. The spectrum agrees with the simulated one in Fig.
2(c), except for a circular segment at \(k_{z}\lesssim 6.5\,\upmu\mathrm{m}^{-1}\) that is well understood to arise from unconfined SPP propagation outside of the array, also visible in the gray shaded area in Fig. 2(a). In the dissipative topologically trivial phase (II) and for edge excitation, the momentum distribution considerably changes. Two energy bands separated by a gap near \(k_{z}=6.56(2)\)\(\upmu\mathrm{m}^{-1}\) and visible at \(k_{x}\approx-0.5\pi/d\) are observed. In the topologically nontrivial phase (III), on the other hand, the momentum distribution exhibits only a single flat energy band centered at \(k_{z}=6.59(2)\)\(\upmu\mathrm{m}^{-1}\), with a spectral width determined by losses and residual transport in the bulk. By comparing with Fig. 1(b), the data gives evidence for a topological zero-state in the band gap. A unique feature of the investigated system lies in the fact that topological properties emerge as a consequence of dissipation alone, in a lattice which would be otherwise topologically trivial. To systematically test this dissipation-induced birth of topological order, we focus on an interface between two distinct domains prepared in phases (II) and (III), respectively, and gradually increase the loss \(\mathrm{Im}\beta\). Figure 3(a) shows the real-space SPP evolution for increasing Chromium widths \(w\), after consistently exciting the same low-loss waveguide, which is located at the interface. The conical intensity spread into the bulk for \(w=0\), previously seen in Fig. 2, is gradually transformed into a quasi-stationary, _i.e._, transversely localized occupation of the interface waveguide. Remarkably, despite larger absorption \(\mathrm{Im}\beta\), the SPP propagation length is significantly enhanced. This is understood from the increase of \(\mathrm{Im}E\) for the topological zero-states for large enough \(g_{2}\gtrsim 0.7\) [see Fig. 
1(b) (bottom panel, red line)], marking a clear distinction point from its more lossy topologically trivial counterpart in a phase-II system for \(g_{2}<0\) [Fig. 1(b) (bottom panel, gray line)]. Thus, the intensity becomes more localized and long-lived at the interface due to the presence of topologically distinct domains. The extended propagation distance is visually more striking in the line profiles at \(z=50\,\upmu\)m in Fig. 3(b). Quantitatively, Fig. 3(c) shows the fitted \(1/e\) decay length of the intensity at the interface as a function of \(w\), along with the theoretically expected decay length at the interface waveguide and in the bulk, confirming the genuine dissipation-enhanced topological robustness of the interface state.

Fig. 3: Dissipation-induced emergence of interface state between distinct topological domains. (a) Interface between phase-(II) and phase-(III) domains realized by different loss patterns, and real-space intensity distribution \(I(x,z)\) upon excitation of the interface waveguide (red) for increasing losses \(\mathrm{Im}\beta=\{0,0.06,0.09,0.1\}\,\upmu\mathrm{m}^{-1}\) from larger Chromium widths \(w\). Losses correspond to \(g_{1}=|g_{2}|=\{0,0.7,1,1.1\}\) and, as before, \(J=0.045(3)\,\upmu\mathrm{m}^{-1}\). (b) Line profiles of the intensity averaged around \(z=50\,\upmu\)m for low (top), intermediate (center) and high losses (bottom). (c) Decay length \(\ell\) of intensity in interface waveguide for cases in (a). Despite increased Chromium absorption, an enhanced propagation distance is observed, providing evidence for a topologically robust state. Shaded areas show simulations for the interface (red) and bulk (gray), where the enhanced \(\ell\) indicates the robustness of the interface beyond \(g_{2}\approx 0.7\).
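The \(1/e\) decay length \(\ell\) in Fig. 3(c) is obtained by fitting an exponential to the measured intensity in the interface waveguide. A minimal log-linear fit on synthetic data (the numbers below are hypothetical, not the measured ones) could look as follows:

```python
import numpy as np

ell_true = 30.0                        # hypothetical decay length in µm
z = np.linspace(0.0, 60.0, 200)
I = 0.8 * np.exp(-z / ell_true)        # synthetic stand-in for the measured I(z)

# least-squares fit of ln I = ln I0 - z/ell
slope, intercept = np.polyfit(z, np.log(I), 1)
ell_fit = -1.0 / slope
print(ell_fit)
```

On real data one would rather fit the raw trace with `scipy.optimize.curve_fit`, which also accommodates a background offset and noise weighting.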
Conversely to the discussed formation of an interface state upon introducing dissipation, we next focus on _breaking_ the topological protection at the interface by increasing the hopping \(J\) in the presence of dissipation. For this, we have reduced the waveguide spacing \(d\), while keeping the losses in the interfaced phases (II) and (III) fixed at \(\mathrm{Im}\beta=0.1\,\upmu\mathrm{m}^{-1}\) [with \(w=0.7\,\upmu\mathrm{m}\) as in Fig. 2]. Figure 4(a) shows the real-space SPP evolution after injection of a wave packet at the boundary between a trivial phase-(II) domain and the vacuum (left). Not surprisingly, the population oscillates between the two low-loss waveguides as before, but now with a larger wave vector \(k_{z}\) as \(J\) is increased; see the blue data in Fig. 4(b). At the phase (II)-(III) interface shown in Fig. 4(a) (right), on the contrary, the SPP population stays localized at the interface waveguide without any appreciable transverse transport or oscillation; see the red data in Fig. 4(b). Eventually, at \(J\approx 0.09\,\upmu\mathrm{m}^{-1}\), the tunnel coupling becomes so large that the SPPs diffuse away from the interface into the bulk, lifting the topological protection of the interface state. The diminished topological robustness is understood to result from a delocalization of the interface mode with increased \(J\), as shown in Fig. 4(c). The calculated probability density \(|\Psi|^{2}\) is shown for the trivial edge (label A) and interface (B) modes, along with bulk modes. For larger \(J\), we find the transverse localization length of the interface mode to extend further, revealing an exponential decay of \(|\Psi|^{2}\) into the bulk with a maximum probability that consistently occurs at the interface waveguide. For the trivial edge, on the other hand, the maximum of the probability soon shifts away from the boundary of the sample and generally does not exhibit an exponential decay towards the bulk region. 
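The probability densities and complex energies discussed here follow from diagonalizing a non-Hermitian tight-binding Hamiltonian \(H=H_{0}-i\,\mathrm{diag}(\gamma_{n})\) with uniform hopping \(J\) and on-site losses \(\gamma_{n}\). The sketch below is a generic illustration, not the authors' code: the specific four-site loss pattern realizing phases (II) and (III) follows their Ref. [27], and the arrangement used here is an assumption. It also checks the generic bound \(-\max_{n}\gamma_{n}\leq\mathrm{Im}\,E\leq-\min_{n}\gamma_{n}\), i.e., all modes decay and the least lossy ones dominate at large \(z\).

```python
import numpy as np

N, J = 48, 0.045                                  # sites and hopping (1/µm), from the text
g_hi, g_lo = 0.1, 0.01                            # lossy / low-loss Im(beta) values (1/µm)
gamma = np.tile([g_hi, g_lo, g_lo, g_hi], N // 4) # assumed period-4 loss arrangement

H0 = J * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
H = H0 - 1j * np.diag(gamma)                      # loss-only non-Hermitian Hamiltonian

E = np.linalg.eigvals(H)
# For H = H0 - i D with D >= 0, eigenvalues lie in the field of values,
# so -max(gamma) <= Im E <= -min(gamma) <= 0: every mode is damped.
print(E.imag.min(), E.imag.max())
```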
This phenomenology shares a close analogy with the topological edge states encountered in the SSH model [32], where--despite being a Hermitian system with a two-site unit cell and alternating hopping --the probability density of the dimerized edge modes decays exponentially. In the non-Hermitian 4-site model, the alternating hopping is replaced by an effective (de-)coupling of pairs of gain-gain or loss-loss (mixed gain-loss) lattice sites [27], and a dimerization with two- and four-site periodicity occurs in \(\mathrm{Re}\Psi\) and \(\mathrm{Im}\Psi\), respectively. This notion of exponential localization supports our claim that a topologically protected state is indeed observed. Finally, the spectrum of the complex energy eigenvalues in Fig. 4(c) provides a physical picture about the broken topological protection of the interface state (colored circles; energies of bulk modes are shown in gray) near \(J\approx 0.095\,\upmu\mathrm{m}^{-1}\). For small \(J\), the eigenvalues of the zero-states with \(\mathrm{Re}E=0\) fall in the band gap and are separated in \(\mathrm{Im}E\) (blue, green circles); note that due to \(\mathrm{Im}E\neq 0\) the non-Hermitian system is not \(\mathcal{PT}\)-symmetric [20]. As \(J\) increases, the imaginary gap closes and the eigenvalues coalesce at an exceptional point (yellow circle), followed by an opening of a real energy gap, which lifts the edge state degeneracy and merges both with the bulk bands, visible in Fig. 4(c) (red circles). At the exceptional point, the system has lost its topological character. In conclusion, we have experimentally demonstrated open-system topological states induced by dissipation alone, using SPP waveguide arrays with uniform hopping and spatially-distributed loss. Evidence for the topological nature of the non-Hermitian system is obtained from a localized midgap edge state between distinct topological domains. 
By independently tuning dissipation and hopping, both the emergence and the breaking of topological order are observed. For the future, lowering the SPP losses may enable direct measurements of the topological invariant by interferometry [35] and give access to non-Hermitian Floquet engineering by modulated loss and hopping [36; 34]. An intriguing perspective lies in the implementation of open-system topological states with optical quantum gases within optically-active microcavities [37; 38], opening ways to study the fate of topological order in the presence of fluctuations in one and two dimensions [39].

Fig. 4: Breaking the topological protection at an exceptional point. (a) Real-space intensity distribution for increasing hopping \(J\), realized by reduced waveguide spacings \(d=\{1.8,1.6,1.4,1.2,1.0\}\,\upmu\mathrm{m}\) at \(w=0.7\,\upmu\mathrm{m}\) (\(\mathrm{Im}\beta=0.1\,\upmu\mathrm{m}^{-1}\)). For excitation at the trivial edge (left), the population oscillates with increasing frequency as \(J\) becomes larger. Upon injecting light into the boundary between domains (III) and (II) (middle), a quasi-stationary interface state is observed, which at \(J\approx 0.09\,\upmu\mathrm{m}^{-1}\) becomes delocalized in the bulk, in agreement with simulations (right). (b) Frequency \(k_{z}\) fitted to the evolution at the excited waveguide with \(f(z)=a_{1}\cos(k_{z}z+\phi)e^{-z/\ell}+a_{0}\). Solid line and vertical bars give numerical results, and error bars denote uncertainties of fit parameter. (c) _Left_: Calculated probability density \(|\Psi|^{2}\) at the boundaries in (a), labelled by A and B, along with bulk modes (gray). A delocalization of the interface mode (red) is visible for too large \(J\), indicating the loss of topological character. _Right_: Complex eigenenergies of the two topological (colored) and 38 bulk (gray) states, as \(J\) is varied.
For \(J\approx 0.095\,\upmu\mathrm{m}^{-1}\), the eigenvalues coalesce at an exceptional point (yellow), closing the imaginary and opening the real band gap.

We thank V. Zimmermann and Z. Fedorova for discussions. S.L., M.F. and J.S. acknowledge support from the DFG within SFB/TR 185 (277625399). J.S. acknowledges support by the EU (ERC, TopoGrand, 101040409), and by DFG within the Cluster of Excellence ML4Q (EXC 2004/1-390534769).
2302.10513
Dynamic Euclidean Bottleneck Matching
A fundamental question in computational geometry is for a set of input points in the Euclidean space, that is subject to discrete changes (insertion/deletion of points at each time step), whether it is possible to maintain an approximate bottleneck matching in sublinear update time. In this work, we answer this question in the affirmative for points on a real line and for points in the plane with a bounded geometric spread. For a set $P$ of $n$ points on a line, we show that there exists a dynamic algorithm that maintains a bottleneck matching of $P$ and supports insertion and deletion in $O(\log n)$ time. Moreover, we show that a modified version of this algorithm maintains a minimum-weight matching with $O(\log n)$ update (insertion and deletion) time. Next, for a set $P$ of $n$ points in the plane, we show that a ($6\sqrt{2}$)-factor approximate bottleneck matching of $P_k$, at each time step $k$, can be maintained in $O(\log{\Delta})$ amortized time per insertion and $O(\log{\Delta} + |P_k|)$ amortized time per deletion, where $\Delta$ is the geometric spread of $P$.
A. Karim Abu-Affash, Sujoy Bhore, Paz Carmi
2023-02-21T08:35:45Z
http://arxiv.org/abs/2302.10513v1
# Dynamic Euclidean Bottleneck Matching

###### Abstract

A fundamental question in computational geometry is for a set of input points in the Euclidean space, that is subject to discrete changes (insertion/deletion of points at each time step), whether it is possible to maintain an approximate bottleneck matching in sublinear update time. In this work, we answer this question in the affirmative for points on a real line and for points in the plane with a bounded geometric spread. For a set \(P\) of \(n\) points on a line, we show that there exists a dynamic algorithm that maintains a bottleneck matching of \(P\) and supports insertion and deletion in \(O(\log n)\) time. Moreover, we show that a modified version of this algorithm maintains a minimum-weight matching with \(O(\log n)\) update (insertion and deletion) time. Next, for a set \(P\) of \(n\) points in the plane, we show that a \((6\sqrt{2})\)-factor approximate bottleneck matching of \(P_{k}\), at each time step \(k\), can be maintained in \(O(\log\Delta)\) amortized time per insertion and \(O(\log\Delta+|P_{k}|)\) amortized time per deletion, where \(\Delta\) is the geometric spread of \(P\).

Keywords: Bottleneck matching; Minimum-weight matching; Dynamic matching.

## 1 Introduction

Let \(P\) be a set of \(n\) points in the plane. Let \(G=(P,E)\) denote the complete graph over \(P\), which is an undirected weighted graph with \(P\) as the set of vertices, in which the weight of every edge \((p,q)\in E\) is the Euclidean distance \(|pq|\) between \(p\) and \(q\). For a perfect matching \(M\) in \(G\), let \(bn(M)\) be the length of the longest edge. A perfect matching \(M^{*}\) is called a bottleneck matching of \(P\) if, for any other perfect matching \(M\), \(bn(M)\geq bn(M^{*})\). Computing Euclidean bottleneck matchings was studied by Chang et al. [13].
They proved that such a matching is a subset of the 17-RNG (relative neighborhood graph) and presented an \(O(n^{3/2}\log^{1/2}n)\)-time algorithm to compute a bottleneck matching. In fact, a major caveat of the Euclidean bottleneck matching algorithms was that they relied on Gabow and Tarjan [17] as an initial step (as also noted by Katz and Sharir [21]). In recent work, Katz and Sharir [21] showed that the Euclidean bottleneck matching for a set of \(n\) points in the plane can be computed in \(O(n^{\omega/2}\log n)\) deterministic time, where \(\omega\approx 2.37\) is the exponent of matrix multiplication. For general graphs of \(n\) vertices and \(m\) edges, Gabow and Tarjan [17] gave an algorithm for maximum bottleneck matching that runs in \(O(n^{5/2}\sqrt{\log n})\) time. Bottleneck matchings were also studied for points in higher dimensions and in other metric spaces [16], with non-crossing constraints [3, 4], and on multichromatic instances [2]. In many applications, the input instance changes over a period of time, and the typical objective is to build dynamic data structures that can update solutions efficiently rather than computing everything from scratch. In recent years, several dynamic algorithms were designed for geometric optimization problems; see [5, 9, 10, 11, 12]. Motivated by this, we study bottleneck matchings for a dynamic point set in the Euclidean plane. In our setting, the input is a set of points in the Euclidean plane, and the goal is to devise a dynamic algorithm that maintains a bottleneck matching of the points and supports dynamic changes of the input due to insertions and deletions of points. Upon a modification to the input, the dynamic algorithm should efficiently update the bottleneck matching of the new set.
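To make the problem definition concrete, the following is a small brute-force computation of a bottleneck matching (our illustration, not part of the paper's algorithms); its exponential running time is exactly what the algorithms discussed in this paper are designed to avoid.

```python
import math

def perfect_matchings(idx):
    # enumerate all perfect matchings of an even-size list of point indices
    if not idx:
        yield []
        return
    a, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        for m in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield [(a, rest[k])] + m

def bottleneck_matching(pts):
    # bottleneck matching M*: minimizes bn(M), the length of the longest edge
    def bn(m):
        return max(math.dist(pts[i], pts[j]) for i, j in m)
    return min(perfect_matchings(list(range(len(pts)))), key=bn)

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5), (6, 5)]
M = bottleneck_matching(pts)
print(M, max(math.dist(pts[i], pts[j]) for i, j in M))
```

Here the two clusters force three unit-length edges, so the optimal bottleneck is 1.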
### Related Work

Euclidean matchings have been a major subject of investigation for several decades due to their wide range of applications in operations research, pattern recognition, statistics, robotics, and VLSI; see [14, 23]. The Euclidean minimum-weight matching problem, where the objective is to compute a perfect matching with the minimum total weight, was studied by Vaidya [29], who gave the first sub-cubic algorithm (\(O(n^{5/2}\log^{4}n)\)) by exploiting geometric structures. Varadarajan [30] presented an \(O(n^{3/2}\log^{5}n)\)-time algorithm for computing a minimum-weight matching in the plane, which is the best-known running time for Euclidean minimum-weight matching to date. Agarwal et al. [7] gave a near-quadratic-time algorithm for the bipartite version of the problem, improving upon the sub-cubic algorithm of Vaidya [29]. Several recent approximation algorithms were developed with improved running times for bipartite and non-bipartite versions; see [6, 8, 25].

**Dynamic Graph Matching.** In this problem, the objective is to maintain a maximal cardinality matching as the input graph is subject to discrete changes, i.e., at each time step, either a vertex (or edge) is added or deleted. Dynamic graph matching algorithms have been extensively studied over the past few decades. However, most of these algorithms consider dynamic graphs which are subject to discrete edge updates, as also noted by Grandoni et al. [27]. Sankowski [26] showed how to maintain the size of the maximum matching with \(O(n^{1.495})\) worst-case update time. Moreover, it is known that maintaining an exact matching requires polynomial update time under complexity conjectures [1]. Therefore, most of the research has been focused on maintaining an approximate solution. It is possible to maintain a 2-approximate matching with constant amortized update time [27].
However, one can maintain a \((1+\varepsilon)\)-approximate solution in the fully-dynamic setting with update time \(O(\sqrt{m}/\varepsilon^{2})\) [19].

**Online Matching.** Karp, Vazirani, and Vazirani studied the bipartite vertex-arrival model in their seminal work [20]. Most of the classical online matching algorithms are on the server-client paradigm, where one side of a bipartite graph is revealed at the beginning. Raghvendra [24] studied the online bipartite matching problem for a set of points on a line (see also [22]). Gamlath et al. [18] studied the online matching problem in the edge-arrival model. Despite the remarkable progress on the online matching problem over the decades, the online minimum matching with vertex arrivals (where no side is revealed at the beginning) has not been studied.

### Our contribution

In Section 2, we present a dynamic algorithm that maintains a bottleneck matching of a set \(P\) of \(n\) points on a line with \(O(\log n)\) update (insertion or deletion) time. Then, in Section 3, we generalize this algorithm to maintain a minimum-weight matching of \(P\) with \(O(\log n)\) update time. For a set \(P\) of points in the plane with bounded geometric spread \(\Delta\), in Section 4, we present a dynamic algorithm that maintains a \((6\sqrt{2})\)-approximate bottleneck matching of \(P_{k}\), at each time step \(k\), and supports insertion in \(O(\log\Delta)\) amortized time and deletion in \(O(\log\Delta+|P_{k}|)\) amortized time.

## 2 Dynamic Bottleneck Matching in 1D

Let \(P=\{p_{1},p_{2},\ldots,p_{n}\}\) be a set of \(n\) points located on a horizontal line, such that \(p_{i}\) is to the left of \(p_{i+1}\), for every \(1\leq i<n\). In this section, we present a dynamic algorithm that maintains a bottleneck matching of \(P\) with logarithmic update time. Throughout this section, we assume that \(n\) is even and that two points are added or deleted in each step.
However, our algorithm can be generalized for every \(n\) and every constant number of points added or deleted in each step, regardless of the parity of \(n\); see Section 3.

**Observation 1**: _There exists a bottleneck matching \(M\) of \(P\), such that each point \(p_{i}\in P\) is matched to a point from \(\{p_{i-1},p_{i+1}\}\)._

Proof: Let \(M^{\prime}\) be a bottleneck matching of \(P\) in which there exists at least one point \(p_{i}\) that is not matched to \(p_{i-1}\) or to \(p_{i+1}\). We do the following for each such point \(p_{i}\). Let \(p_{i}\) be the leftmost point in \(P\) that is matched in \(M^{\prime}\) to a point \(p_{j}\), where \(j>i+1\). Let \(p_{j^{\prime}}\) be the point that is matched to \(p_{i+1}\), and notice that \(j^{\prime}>i+1\). Let \(M^{\prime\prime}\) be the matching obtained by replacing the edges \((p_{i},p_{j})\) and \((p_{i+1},p_{j^{\prime}})\) in \(M^{\prime}\) by the edges \((p_{i},p_{i+1})\) and \((p_{j},p_{j^{\prime}})\); see Figure 1. Clearly, \(|p_{i}p_{i+1}|\leq|p_{i}p_{j}|\) and \(|p_{j}p_{j^{\prime}}|\leq\max\{|p_{i}p_{j}|,|p_{i+1}p_{j^{\prime}}|\}\). Therefore, \(M^{\prime\prime}\) is also a bottleneck matching in which \(p_{i}\) is matched to \(p_{i+1}\).

Throughout the rest of this section, we refer to the bottleneck matching that satisfies Observation 1 as the optimal matching, and notice that this matching is unique.

### Preprocessing

Let \(M\) be the optimal matching of \(P\) and let \(bn(M)\) denote its bottleneck. Clearly, \(M\) can be computed in \(O(n)\) time. We maintain \(M\) in a full AVL tree \(\mathcal{T}\), such that the leaves of \(\mathcal{T}\) are the points of \(P\), and each intermediate node has exactly two children and contains some extra information, propagated from its children. For a node \(v\) in \(\mathcal{T}\), let \(T_{v}\) be the sub-tree of \(\mathcal{T}\) rooted at \(v\), and let \(P_{v}\) be the subset of \(P\) containing the points in the leaves of \(T_{v}\).
For each node \(v\) in \(\mathcal{T}\), let \(lc(v)\) and \(rc(v)\) be the left and the right children of \(v\), respectively, and let \(p(v)\) be the parent of \(v\). Each node \(v\) in \(\mathcal{T}\) contains the following seven attributes about the optimal matching of the points in \(P_{v}\):

1. \(\textsc{LeftMost}(v)\) - the leftmost point in \(P_{v}\).
2. \(\textsc{RightMost}(v)\) - the rightmost point in \(P_{v}\).
3. \(\pi(v)=|\textsc{RightMost}(lc(v))\,\textsc{LeftMost}(rc(v))|\) - the Euclidean distance between \(\textsc{RightMost}(lc(v))\) and \(\textsc{LeftMost}(rc(v))\).
4. \(\textsc{All}(v)\) - cost of the matching of the points in \(P_{v}\).
5. \(\textsc{All-L}(v)\) - cost of the matching of the points in \(P_{v}\setminus\{\textsc{LeftMost}(v)\}\).
6. \(\textsc{All-R}(v)\) - cost of the matching of the points in \(P_{v}\setminus\{\textsc{RightMost}(v)\}\).
7. \(\textsc{All-LR}(v)\) - cost of the matching of the points in \(P_{v}\setminus\{\textsc{LeftMost}(v),\textsc{RightMost}(v)\}\).

Now, we describe how to compute the values of the attributes in each node \(v\). The computation is bottom-up. That is, we first initialize the attributes of the leaves and then, for each intermediate node \(v\), we compute its attributes from the attributes of its children \(lc(v)\) and \(rc(v)\). For each leaf \(v\) in \(\mathcal{T}\), we set \(\textsc{All}(v)\) and \(\textsc{All-LR}(v)\) to be \(\infty\), \(\textsc{All-L}(v)\) and \(\textsc{All-R}(v)\) to be \(0\), and \(\textsc{LeftMost}(v)\) and \(\textsc{RightMost}(v)\) to be \(v\). For each intermediate node \(v\) in \(\mathcal{T}\), we compute its attributes as follows.
\[\textsc{All}(v)\leftarrow\min\Big\{\max\big\{\textsc{All}(lc(v))\,,\textsc{All}(rc(v))\big\}\,,\max\big\{\textsc{All-R}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\big\}\Big\}\,.\]
\[\textsc{All-L}(v)\leftarrow\min\Big\{\max\big\{\textsc{All-L}(lc(v))\,,\textsc{All}(rc(v))\big\}\,,\max\big\{\textsc{All-LR}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\big\}\Big\}\,.\]
\[\textsc{All-R}(v)\leftarrow\min\Big\{\max\big\{\textsc{All}(lc(v))\,,\textsc{All-R}(rc(v))\big\}\,,\max\big\{\textsc{All-R}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\big\}\Big\}\,.\]
\[\textsc{All-LR}(v)\leftarrow\min\Big\{\max\big\{\textsc{All-L}(lc(v))\,,\textsc{All-R}(rc(v))\big\}\,,\max\big\{\textsc{All-LR}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\big\}\Big\}\,.\]

Clearly, these values can be computed in constant time for each node \(v\) in \(\mathcal{T}\), given the attributes of its children. Therefore, the preprocessing time is \(O(n)\).

Lemma 1: _Let \(r^{*}\) be the root of \(\mathcal{T}\). Then, \(\textsc{All}(r^{*})=bn(M)\)._

Proof: For a node \(v\) in \(\mathcal{T}\) where \(|P_{v}|\) is even, let \(M_{v}\) denote the optimal matching of the points in \(P_{v}\), and let \(\textsc{Mlr}_{v}\) denote the optimal matching of the points in \(P_{v}\setminus\{\textsc{LeftMost}(v),\textsc{RightMost}(v)\}\). For a node \(v\) in \(\mathcal{T}\) where \(|P_{v}|\) is odd, let \(\textsc{Ml}_{v}\) denote the optimal matching of the points in \(P_{v}\setminus\{\textsc{LeftMost}(v)\}\), and let \(\textsc{Mr}_{v}\) denote the optimal matching of the points in \(P_{v}\setminus\{\textsc{RightMost}(v)\}\). To prove the lemma, we prove a stronger claim. For each node \(v\) in \(\mathcal{T}\), we prove that

* if \(|P_{v}|\) is even, then \(\textsc{All}(v)=bn(M_{v})\), \(\textsc{All-L}(v)=\textsc{All-R}(v)=\infty\), and \(\textsc{All-LR}(v)=bn(\textsc{Mlr}_{v})\).
* if \(|P_{v}|\) is odd, then \(\textsc{All}(v)=\textsc{All-LR}(v)=\infty\), \(\textsc{All-L}(v)=bn(\textsc{Ml}_{v})\), and \(\textsc{All-R}(v)=bn(\textsc{Mr}_{v})\).

The proof is by induction on the height of \(v\) in \(\mathcal{T}\).

**Base case:** The claim holds for each leaf \(v\) in \(\mathcal{T}\), since \(|P_{v}|=1\) and we initialize the attributes of \(v\) by the values \(\textsc{All}(v)=\textsc{All-LR}(v)=\infty\) and \(\textsc{All-L}(v)=\textsc{All-R}(v)=0\). Moreover, for each node \(v\) at height one, we have \(|P_{v}|=2\) and \(v\) has two leaves \(l\) and \(r\) at height zero. Therefore,

\[\textsc{All}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,\infty\big\}\,,\max\big\{0\,,0\,,|lr|\big\}\Big\}=|lr|\,.\]
\[\textsc{All-L}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{0\,,\infty\big\}\,,\max\big\{\infty\,,0\,,|lr|\big\}\Big\}=\infty\,.\]
\[\textsc{All-R}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,0\big\}\,,\max\big\{0\,,\infty\,,|lr|\big\}\Big\}=\infty\,.\]
\[\textsc{All-LR}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{0\,,0\big\}\,,\max\big\{\infty\,,\infty\,,|lr|\big\}\Big\}=0\,.\]

**Induction step:** We prove the claim for each node \(v\) at height \(h>1\). Let \(l=lc(v)\) and \(r=rc(v)\). Let \(p\) and \(q\) be the rightmost and the leftmost points in \(P_{l}\) and \(P_{r}\), respectively. Thus, \(\pi(v)=|pq|\). We distinguish between four cases.
**Case 1: \(|P_{v}|\) is even and both \(|P_{l}|\) and \(|P_{r}|\) are even.** Since \(|P_{v}|\) is even, \(M_{v}\) consists of the optimal matching \(M_{l}\) of \(P_{l}\) and the optimal matching \(M_{r}\) of \(P_{r}\), and \(bn(M_{v})=\max\{bn(M_{l}),bn(M_{r})\}\). Moreover, \(\textsc{Mlr}_{v}\) consists of the optimal matching \(\textsc{Mlr}_{l}\) of \(P_{l}\setminus\{\textsc{LeftMost}(l),\textsc{RightMost}(l)\}\), the optimal matching \(\textsc{Mlr}_{r}\) of \(P_{r}\setminus\{\textsc{LeftMost}(r),\textsc{RightMost}(r)\}\), and the edge \((p,q)\). Thus, \(bn(\textsc{Mlr}_{v})=\max\{bn(\textsc{Mlr}_{l}),bn(\textsc{Mlr}_{r}),|pq|\}\). By the induction hypothesis, \(\textsc{All}(l)=bn(M_{l})\), \(\textsc{All}(r)=bn(M_{r})\), \(\textsc{All-LR}(l)=bn(\textsc{Mlr}_{l})\), \(\textsc{All-LR}(r)=bn(\textsc{Mlr}_{r})\), and \(\textsc{All-L}(l)=\textsc{All-R}(l)=\textsc{All-L}(r)=\textsc{All-R}(r)=\infty\). Therefore, we have

\[\textsc{All}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(M_{l})\,,bn(M_{r})\big\}\,,\max\big\{\infty\,,\infty\,,|pq|\big\}\Big\}=\max\big\{bn(M_{l})\,,bn(M_{r})\big\}=bn(M_{v})\,.\]
\[\textsc{All-L}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,bn(M_{r})\big\}\,,\max\big\{bn(\textsc{Mlr}_{l})\,,\infty\,,|pq|\big\}\Big\}=\infty\,.\]
\[\textsc{All-R}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(M_{l})\,,\infty\big\}\,,\max\big\{\infty\,,bn(\textsc{Mlr}_{r})\,,|pq|\big\}\Big\}=\infty\,.\]
\[\textsc{All-LR}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,\infty\big\}\,,\max\big\{bn(\textsc{Mlr}_{l})\,,bn(\textsc{Mlr}_{r})\,,|pq|\big\}\Big\}=\max\big\{bn(\textsc{Mlr}_{l})\,,bn(\textsc{Mlr}_{r})\,,|pq|\big\}=bn(\textsc{Mlr}_{v})\,.\]

**Case 2: \(|P_{v}|\) is even and both \(|P_{l}|\) and \(|P_{r}|\) are odd.** Since \(|P_{v}|\) is even, \(M_{v}\) consists of the optimal matching \(\textsc{Ml}_{r}\) of \(P_{r}\setminus\{\textsc{LeftMost}(r)\}\), the optimal matching \(\textsc{Mr}_{l}\) of \(P_{l}\setminus\{\textsc{RightMost}(l)\}\), and the edge \((p,q)\). Thus, \(bn(M_{v})=\max\{bn(\textsc{Mr}_{l}),bn(\textsc{Ml}_{r}),|pq|\}\). Moreover, \(\textsc{Mlr}_{v}\) consists of the optimal matching \(\textsc{Ml}_{l}\) of \(P_{l}\setminus\{\textsc{LeftMost}(l)\}\) and the optimal matching \(\textsc{Mr}_{r}\) of \(P_{r}\setminus\{\textsc{RightMost}(r)\}\), and \(bn(\textsc{Mlr}_{v})=\max\{bn(\textsc{Ml}_{l}),bn(\textsc{Mr}_{r})\}\). By the induction hypothesis, \(\textsc{All}(l)=\textsc{All}(r)=\textsc{All-LR}(l)=\textsc{All-LR}(r)=\infty\), \(\textsc{All-L}(l)=bn(\textsc{Ml}_{l})\), \(\textsc{All-R}(l)=bn(\textsc{Mr}_{l})\), \(\textsc{All-L}(r)=bn(\textsc{Ml}_{r})\), and \(\textsc{All-R}(r)=bn(\textsc{Mr}_{r})\).
Therefore, we have

\[\textsc{All}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,\infty\big\}\,,\max\big\{bn(\textsc{Mr}_{l})\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}\Big\}=\max\big\{bn(\textsc{Mr}_{l})\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}=bn(M_{v})\,.\]
\[\textsc{All-L}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(\textsc{Ml}_{l})\,,\infty\big\}\,,\max\big\{\infty\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}\Big\}=\infty\,.\]
\[\textsc{All-R}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,bn(\textsc{Mr}_{r})\big\}\,,\max\big\{bn(\textsc{Mr}_{l})\,,\infty\,,|pq|\big\}\Big\}=\infty\,.\]
\[\textsc{All-LR}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(\textsc{Ml}_{l})\,,bn(\textsc{Mr}_{r})\big\}\,,\max\big\{\infty\,,\infty\,,|pq|\big\}\Big\}=\max\big\{bn(\textsc{Ml}_{l})\,,bn(\textsc{Mr}_{r})\big\}=bn(\textsc{Mlr}_{v})\,.\]

**Case 3: \(|P_{v}|\) is odd, \(|P_{l}|\) is even, and \(|P_{r}|\) is odd.** Since \(|P_{v}|\) is odd, there is no optimal matching \(M_{v}\) of \(P_{v}\), and thus \(bn(M_{v})=\infty\). Moreover, \(\textsc{Mr}_{v}\) consists of the optimal matching \(M_{l}\) of \(P_{l}\) and the optimal matching \(\textsc{Mr}_{r}\) of \(P_{r}\setminus\{\textsc{RightMost}(r)\}\), and \(\textsc{Ml}_{v}\) consists of the optimal matching \(\textsc{Mlr}_{l}\) of \(P_{l}\setminus\{\textsc{LeftMost}(l),\textsc{RightMost}(l)\}\), the optimal matching \(\textsc{Ml}_{r}\) of \(P_{r}\setminus\{\textsc{LeftMost}(r)\}\), and the edge \((p,q)\). Thus, \(bn(\textsc{Mr}_{v})=\max\{bn(M_{l}),bn(\textsc{Mr}_{r})\}\) and \(bn(\textsc{Ml}_{v})=\max\{bn(\textsc{Mlr}_{l}),bn(\textsc{Ml}_{r}),|pq|\}\). By the induction hypothesis, \(\textsc{All}(r)=\textsc{All-LR}(r)=\textsc{All-L}(l)=\textsc{All-R}(l)=\infty\), \(\textsc{All}(l)=bn(M_{l})\), \(\textsc{All-LR}(l)=bn(\textsc{Mlr}_{l})\), \(\textsc{All-R}(r)=bn(\textsc{Mr}_{r})\), and \(\textsc{All-L}(r)=bn(\textsc{Ml}_{r})\). Therefore, we have

\[\textsc{All}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(M_{l})\,,\infty\big\}\,,\max\big\{\infty\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}\Big\}=\infty\,.\]
\[\textsc{All-L}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-L}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,\infty\big\}\,,\max\big\{bn(\textsc{Mlr}_{l})\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}\Big\}=\max\big\{bn(\textsc{Mlr}_{l})\,,bn(\textsc{Ml}_{r})\,,|pq|\big\}=bn(\textsc{Ml}_{v})\,.\]
\[\textsc{All-R}(v)=\min\Big\{\max\big\{\textsc{All}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-R}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{bn(M_{l})\,,bn(\textsc{Mr}_{r})\big\}\,,\max\big\{\infty\,,\infty\,,|pq|\big\}\Big\}=\max\big\{bn(M_{l})\,,bn(\textsc{Mr}_{r})\big\}=bn(\textsc{Mr}_{v})\,.\]
\[\textsc{All-LR}(v)=\min\Big\{\max\big\{\textsc{All-L}(l)\,,\textsc{All-R}(r)\big\}\,,\max\big\{\textsc{All-LR}(l)\,,\textsc{All-LR}(r)\,,\pi(v)\big\}\Big\}=\min\Big\{\max\big\{\infty\,,bn(\textsc{Mr}_{r})\big\}\,,\max\big\{bn(\textsc{Mlr}_{l})\,,\infty\,,|pq|\big\}\Big\}=\infty\,.\]

**Case 4: \(|P_{v}|\) is odd, \(|P_{l}|\) is odd, and \(|P_{r}|\) is even.** This case is symmetric to Case 3.

### Dynamization

Let \(P=\{p_{1},p_{2},\ldots,p_{n}\}\) be the set of points at some time step and let \(\mathcal{T}\) be the AVL tree maintaining the optimal matching \(M\) of \(P\). Let \(r\) denote the root of \(\mathcal{T}\). In the following, we describe how to update \(\mathcal{T}\) when inserting two points to \(P\) or deleting two points from \(P\).

#### 2.2.1 Insertion

Let \(q\) and \(q^{\prime}\) be the two points inserted to \(P\). We describe the procedure for inserting \(q\); the same procedure is applied for inserting \(q^{\prime}\). We initialize a leaf node corresponding to \(q\) and insert it to \(\mathcal{T}\).
Then, we update the attributes of the intermediate nodes along the path from \(q\) to the root of \(\mathcal{T}\). Let \(M^{\prime}\) be the optimal matching of \(P\cup\{q,q^{\prime}\}\). Then, by Lemma 1, after inserting \(q\) and \(q^{\prime}\) to \(P\), \(\textsc{All}(r)=bn(M^{\prime})\).

#### 2.2.2 Deletion

Let \(q\) and \(q^{\prime}\) be the two points deleted from \(P\). We describe the procedure for deleting \(q\). The same procedure is applied for deleting \(q^{\prime}\). Assume w.l.o.g. that \(q\) is the right child of \(p(q)\). If the left child \(t\) of \(p(q)\) is a leaf, then we set the attributes of \(t\) to \(p(q)\), remove \(q\) and \(t\) from \(\mathcal{T}\), and update the attributes of the intermediate nodes along the path from \(p(q)\) to the root of \(\mathcal{T}\); see Figure 2(top). Otherwise, the left child \(t\) of \(p(q)\) is an intermediate node with left leaf \(l\) and right leaf \(r\). We set the attributes of \(l\) to \(t\) and the attributes of \(r\) to \(q\), remove \(l\) and \(r\) from \(\mathcal{T}\), and update the attributes of the intermediate nodes along the path from \(p(q)\) to the root of \(\mathcal{T}\); see Figure 2(bottom). Let \(M^{\prime}\) be the optimal matching of \(P\setminus\{q,q^{\prime}\}\). Then, by Lemma 1, after deleting \(q\) and \(q^{\prime}\) from \(P\), \(\textsc{All}(r)=bn(M^{\prime})\). Finally, since we use an AVL tree, we may need to make some rotations after an insertion or a deletion. For each rotation performed on \(\mathcal{T}\), we also update the attributes of the (constant number of) intermediate nodes involved in the rotation. Lemma 2: _The running time of an update operation (insertion or deletion) is \(O(\log n)\)._ Proof: Since \(\mathcal{T}\) is an AVL tree, the height of \(\mathcal{T}\) is \(O(\log n)\)[15]. Each operation requires updating the attributes of the nodes along the path from a leaf to the root, and each such update takes \(O(1)\) time.
Moreover, each rotation also requires updating the attributes of the nodes involved in the rotation, and each such update also takes \(O(1)\) time. Since in insertion there is at most one rotation and in deletion there are at most \(O(\log n)\) rotations, the total running time of each insertion and each deletion is \(O(\log n)\). The following theorem summarizes the result of this section. Theorem 3.1: _Let \(P\) be a set of \(n\) points on a line. There exists a dynamic algorithm that maintains a bottleneck matching of \(P\) and supports insertion and deletion in \(O(\log n)\) time._

## 3 Extensions for 1D

In this section, we extend our algorithm to maintain a minimum-weight matching of \(P\) (instead of bottleneck matching). Moreover, we extend the algorithm to allow inserting/deleting a constant (even or odd) number of points to/from \(P\).

### Minimum-weight matching

We modify our algorithm to maintain a minimum-weight matching and support insertion and deletion, without affecting the running time. The difference lies in the way we compute the attributes of the intermediate nodes from their children. That is, for each intermediate node \(v\), we compute its attributes as follows:
\[\textsc{All}(v)\leftarrow\min\big\{\textsc{All}(lc(v))+\textsc{All}(rc(v))\,,\ \textsc{All-R}(lc(v))+\textsc{All-L}(rc(v))+\pi(v)\big\}\,.\]
\[\textsc{All-L}(v)\leftarrow\min\big\{\textsc{All-L}(lc(v))+\textsc{All}(rc(v))\,,\ \textsc{All-LR}(lc(v))+\textsc{All-L}(rc(v))+\pi(v)\big\}\,.\]
\[\textsc{All-R}(v)\leftarrow\min\big\{\textsc{All}(lc(v))+\textsc{All-R}(rc(v))\,,\ \textsc{All-R}(lc(v))+\textsc{All-LR}(rc(v))+\pi(v)\big\}\,.\]
\[\textsc{All-LR}(v)\leftarrow\min\big\{\textsc{All-L}(lc(v))+\textsc{All-R}(rc(v))\,,\ \textsc{All-LR}(lc(v))+\textsc{All-LR}(rc(v))+\pi(v)\big\}\,.\]
Notice that the running time of an update operation (\(O(\log n)\) per insertion or deletion) is as in the bottleneck matching.
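As a concrete illustration, the minimum-weight recurrences can be evaluated bottom-up over a static balanced tree of sorted points, as in the following Python sketch (ours, not the paper's implementation; the dynamic AVL maintenance, rotations, and further bookkeeping are omitted). At a leaf we use the natural base values: \(\textsc{All}=\infty\) (a single point admits no perfect matching), \(\textsc{All-L}=\textsc{All-R}=0\) (the empty matching), and \(\textsc{All-LR}=\infty\).

```python
import math

INF = math.inf

class Node:
    """Node of a static balanced tree over a sorted list of points on a line."""
    def __init__(self, pts):
        if len(pts) == 1:
            # Leaf base case: one point cannot be perfectly matched (inf);
            # removing it leaves the empty set, whose matching costs 0.
            self.leftmost = self.rightmost = pts[0]
            self.all, self.all_l, self.all_r, self.all_lr = INF, 0.0, 0.0, INF
        else:
            mid = len(pts) // 2
            l, r = Node(pts[:mid]), Node(pts[mid:])
            self.leftmost, self.rightmost = l.leftmost, r.rightmost
            pi = r.leftmost - l.rightmost  # length of the edge crossing the split
            # The four minimum-weight recurrences from the text.
            self.all    = min(l.all    + r.all,   l.all_r  + r.all_l  + pi)
            self.all_l  = min(l.all_l  + r.all,   l.all_lr + r.all_l  + pi)
            self.all_r  = min(l.all    + r.all_r, l.all_r  + r.all_lr + pi)
            self.all_lr = min(l.all_l  + r.all_r, l.all_lr + r.all_lr + pi)

pts = sorted([1.0, 2.5, 4.0, 7.0, 7.5, 10.0])
root = Node(pts)
# On a line, some optimal min-weight matching pairs consecutive points,
# so the optimal weight is the sum of every other gap.
expected = sum(pts[i + 1] - pts[i] for i in range(0, len(pts), 2))
assert root.all == expected
```

Maintaining exactly these attributes inside an AVL tree, as described above, turns this static bottom-up computation into \(O(\log n)\)-time updates.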
The proof of the correctness of this algorithm for the minimum-weight matching is similar to the proof of the correctness of the bottleneck matching.

### Insertion and deletion of \(k\) points

Let \(P\) be a set of \(n\) points on a line. In this section, we extend our algorithm to support insertion/deletion of \(k\) points to/from \(P\) at each time step. Notice that since we allow \(k\) to be odd, \(n\) can be odd and the matching should skip one point. Even though there is a linear number of candidate points that could be skipped, we can still maintain a bottleneck matching with \(O(k\log n)\) time per \(k\) insertions or deletions, by adding some more attributes for each node. Each node \(v\) in \(\mathcal{T}\) contains the following four attributes, in addition to the seven attributes that are described in Section 2.1.

8. \(\textsc{All-1}(v)\) - cost of the matching of \(|P_{v}|-1\) points of \(P_{v}\).
9. \(\textsc{All-1-L}(v)\) - cost of the matching of \(|P_{v}|-2\) points of \(P_{v}\setminus\{\textsc{LeftMost}(v)\}\).
10. \(\textsc{All-1-R}(v)\) - cost of the matching of \(|P_{v}|-2\) points of \(P_{v}\setminus\{\textsc{RightMost}(v)\}\).
11. \(\textsc{All-1-LR}(v)\) - cost of the matching of \(|P_{v}|-3\) points of \(P_{v}\setminus\{\textsc{LeftMost}(v),\textsc{RightMost}(v)\}\).

For each leaf \(v\) in \(\mathcal{T}\), we initialize \(\textsc{All-1}(v)\) to be \(0\), and \(\textsc{All-1-L}(v)\), \(\textsc{All-1-R}(v)\), and \(\textsc{All-1-LR}(v)\) to be \(\infty\). For each intermediate node \(v\) in \(\mathcal{T}\), we compute its attributes as follows.
\[\textsc{All}(v)\leftarrow\min\Big\{\max\{\textsc{All}(lc(v))\,,\textsc{All}(rc(v))\}\,,\ \max\{\textsc{All-R}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-L}(v)\leftarrow\min\Big\{\max\{\textsc{All-L}(lc(v))\,,\textsc{All}(rc(v))\}\,,\ \max\{\textsc{All-LR}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-R}(v)\leftarrow\min\Big\{\max\{\textsc{All}(lc(v))\,,\textsc{All-R}(rc(v))\}\,,\ \max\{\textsc{All-R}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-LR}(v)\leftarrow\min\Big\{\max\{\textsc{All-L}(lc(v))\,,\textsc{All-R}(rc(v))\}\,,\ \max\{\textsc{All-LR}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-1}(v)\leftarrow\min\Big\{\max\{\textsc{All-1}(lc(v))\,,\textsc{All}(rc(v))\}\,,\ \max\{\textsc{All}(lc(v))\,,\textsc{All-1}(rc(v))\}\,,\ \max\{\textsc{All-1-R}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\}\,,\ \max\{\textsc{All-R}(lc(v))\,,\textsc{All-1-L}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-1-L}(v)\leftarrow\min\Big\{\max\{\textsc{All-1-L}(lc(v))\,,\textsc{All}(rc(v))\}\,,\ \max\{\textsc{All-L}(lc(v))\,,\textsc{All-1}(rc(v))\}\,,\ \max\{\textsc{All-1-LR}(lc(v))\,,\textsc{All-L}(rc(v))\,,\pi(v)\}\,,\ \max\{\textsc{All-LR}(lc(v))\,,\textsc{All-1-L}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-1-R}(v)\leftarrow\min\Big\{\max\{\textsc{All-1}(lc(v))\,,\textsc{All-R}(rc(v))\}\,,\ \max\{\textsc{All}(lc(v))\,,\textsc{All-1-R}(rc(v))\}\,,\ \max\{\textsc{All-1-R}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\}\,,\ \max\{\textsc{All-R}(lc(v))\,,\textsc{All-1-LR}(rc(v))\,,\pi(v)\}\Big\}\,.\]
\[\textsc{All-1-LR}(v)\leftarrow\min\Big\{\max\{\textsc{All-1-L}(lc(v))\,,\textsc{All-R}(rc(v))\}\,,\ \max\{\textsc{All-L}(lc(v))\,,\textsc{All-1-R}(rc(v))\}\,,\ \max\{\textsc{All-1-LR}(lc(v))\,,\textsc{All-LR}(rc(v))\,,\pi(v)\}\,,\ \max\{\textsc{All-LR}(lc(v))\,,\textsc{All-1-LR}(rc(v))\,,\pi(v)\}\Big\}\,.\]
Let \(r^{*}\) be the root of \(\mathcal{T}\). In the case that \(n\) is even, let \(M\) be the bottleneck matching for \(P_{r^{*}}\) satisfying Observation 1. In the case that \(n\) is odd, let \(M_{q}\) be the bottleneck matching for \(P_{r^{*}}\setminus\{q\}\) satisfying Observation 1. Let \(M^{\prime}\) be the bottleneck matching such that \(bn(M^{\prime})=\min_{q\in P_{r^{*}}}\{bn(M_{q})\}\). Lemma 3: _Let \(r^{*}\) be the root of \(\mathcal{T}\)._

* _If \(n\) is even, then \(\textsc{All}(r^{*})=bn(M)\)._
* _If \(n\) is odd, then \(\textsc{All-1}(r^{*})=bn(M^{\prime})\)._

The proof of Lemma 3 is similar to the proof of Lemma 1. Moreover, the insertion and the deletion operations are done as in Section 2.2. After each operation, we update the attributes (including the new attributes) of the intermediate nodes along the path from a leaf to the root. The running time of \(O(k\log n)\) is obtained by performing the operation (insertion or deletion) \(k\) times. That is, when we are requested to insert/delete \(k\) points we add/remove them one by one. Thus, the \(O(\log n)\) time per update operation is performed \(k\) times.

## 4 Dynamic Bottleneck Matching in 2D

Let \(\mathcal{P}=P_{1}\cup P_{2}\cup\dots\) be a set of \(n\) points in the plane, such that each set \(P_{k+1}\) is obtained by adding a pair of points to \(P_{k}\) or by removing a pair of points from \(P_{k}\). Let \(\lambda_{k}\) be the distance between the closest pair of points in \(P_{k}\). In our setting, we assume that we are given a bounding box \(\mathcal{B}\) of side length \(\Lambda\) and a constant \(\lambda>0\), such that \(P_{k}\) is contained in \(\mathcal{B}\) and \(\lambda\leq\lambda_{k}\), for each \(k\geq 1\), and \(\Delta=\frac{\Lambda}{\lambda}\) is polynomially bounded in \(n\), i.e., \(\log\Delta=O(\log n)\).
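For concreteness, the grids \(\Pi_{i}\) can be represented implicitly: a cell is identified by its integer coordinates, and two cells are adjacent exactly when they share a side or a corner, i.e., when their Chebyshev distance is one. The following Python sketch illustrates this (the function names and tuple-based cell ids are ours, not the paper's):

```python
import math

def cell(p, i, lam):
    """Cell of point p = (x, y) in the grid Pi_i, whose cells have side 2^i * lam."""
    side = (2 ** i) * lam
    return (math.floor(p[0] / side), math.floor(p[1] / side))

def adjacent(c1, c2):
    """Two distinct cells are adjacent if they share a side or a corner."""
    return c1 != c2 and abs(c1[0] - c2[0]) <= 1 and abs(c1[1] - c2[1]) <= 1
```

With this encoding, a cell has at most eight adjacent cells, which is the constant used repeatedly in the analysis below.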
At each time step \(k\in\mathbb{N}\), either a pair of points of \(P\) is inserted or deleted. Let \(P_{k}\) be the set of points at time step \(k\) and let \(M_{k}^{*}\) be a bottleneck matching of \(P_{k}\) of bottleneck \(bn(M_{k}^{*})\). In this section, we present a dynamic data structure supporting insertion in \(O(\log\Delta)\) time and deletion in \(O(\log\Delta+|P_{k}|)\) time, such that a perfect matching \(M_{k}\) of \(P_{k}\) of bottleneck at most \(6\sqrt{2}\cdot bn(M_{k}^{*})\) can be computed in \(O(\log\Delta+|P_{k}|)\) time. Let \(\mathcal{B}\) be the bounding box containing the points of \(\mathcal{P}\). Set \(c=\lceil\log\Delta\rceil\). For each integer \(0\leq i\leq c\), let \(\Pi_{i}\) be the grid obtained by dividing \(\mathcal{B}\) into cells of side length \(2^{i}\cdot\lambda\). We say that two cells are adjacent in \(\Pi_{i}\) if they share a side or a corner in \(\Pi_{i}\). Let \(P=P_{k}\) be the set of points at some time step \(k\). For each grid \(\Pi_{i}\), we define an undirected graph \(G_{i}\), such that the vertices of \(G_{i}\) are the non-empty cells of \(\Pi_{i}\), and there is an edge between two non-empty cells in \(G_{i}\) if these cells are adjacent in \(\Pi_{i}\). For a vertex \(v\) in \(G_{i}\), let \(P_{v}\) be the set of points of \(P\) that are contained in the cell in \(\Pi_{i}\) corresponding to \(v\). For a connected component \(C\) in \(G_{i}\), let \(P(C)=\bigcup_{v\in C}P_{v}\), i.e., the set of the points contained in the cells corresponding to the vertices of \(C\). Moreover, we assume that each graph \(G_{i}\) has a parity bit that indicates whether all the connected components of \(G_{i}\) contain an even number of points or not. Lemma 4: _Let \(C\) be a connected component in \(G_{i}\). If \(|P(C)|\) is even, then there exists a perfect matching of the points of \(P(C)\) of bottleneck at most \(3\sqrt{2}\cdot 2^{i}\cdot\lambda\). 
Moreover, this matching can be computed in \(O(|P(C)|)\) time._ Proof: Let \(G_{C}\) be the subgraph of \(G_{i}\) induced by \(C\). Let \(T\) be a spanning tree of \(G_{C}\) and assume that \(T\) is rooted at a vertex \(r\). We construct a perfect matching of the points of \(P(C)\) iteratively by considering \(T\) bottom-up as follows. Let \(v\) be the deepest vertex in \(T\) which is not a leaf, and let \(v_{1},v_{2},\ldots,v_{j}\) be its children in \(T\). Notice that \(v_{1},v_{2},\ldots,v_{j}\) are leaves. Let \(P^{\prime}=\bigcup_{1\leq i\leq j}P_{v_{i}}\) be the set of the points contained in the cells corresponding to \(v_{1},v_{2},\ldots,v_{j}\). If \(|P^{\prime}|\) is even, then we greedily match the points in \(P^{\prime}\) and remove the vertices \(v_{1},v_{2},\ldots,v_{j}\) from \(T\). Otherwise, \(|P^{\prime}|\) is odd. In this case, we select an arbitrary point \(p\) from the cell corresponding to \(v\) and greedily match the points in \(P^{\prime}\cup\{p\}\). Moreover, we remove \(p\) from the cell corresponding to \(v\) and remove \(v_{1},v_{2},\ldots,v_{j}\) from \(T\). We continue this procedure until the root \(r\) is encountered, i.e., until \(v=r\). Since \(|P(C)|\) is even and in each iteration, we match an even number of points, the number of the points in the last iteration is even and we get a perfect matching of the points of \(P(C)\). Moreover, since in each iteration we match points from the cell corresponding to \(v\) and its at most eight neighbors in \(\Pi_{i}\), and these cells are contained in \(3\times 3\) cells-block, the length of each edge in the matching is at most \(3\sqrt{2}\cdot 2^{i}\cdot\lambda\). Since the degree of each vertex in \(G_{C}\) is at most eight, computing \(T\) takes \(O(|C|)\), and matching the points of \(P^{\prime}\) in each iteration takes \(O(|P^{\prime}|)\). Therefore, computing the matching of the points of \(P(C)\) takes \(O(|P(C)|)\) time. 
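The iterative pairing in the proof above can be sketched in Python as follows. This is an illustration under our own encoding (the cell ids, the child-list representation of the spanning tree, and the point lists are hypothetical inputs, not the paper's data structures). Instead of borrowing a point from the parent cell, this variant equivalently hands at most one unmatched point up to the parent; every matching edge still joins two points inside the \(3\times 3\) block of cells around some cell.

```python
def match_component(tree, root, pts):
    """Greedily match all points of one even-sized component, bottom-up over a
    rooted spanning tree of its non-empty cells.

    `tree` maps each cell id to the list of its children; `pts` maps each cell
    id to the list of points it contains.  Returns the matching as point pairs.
    """
    matching = []

    def visit(cell):
        leftover = list(pts[cell])
        for child in tree.get(cell, []):
            leftover += visit(child)      # each child hands up at most one point
        while len(leftover) >= 2:         # pair points near this cell
            matching.append((leftover.pop(), leftover.pop()))
        return leftover                   # 0 or 1 unmatched point moves up

    rest = visit(root)
    assert not rest, "the component must contain an even number of points"
    return matching

# Tiny example: root cell "A" with child cells "B" and "C" (ids illustrative).
tree = {"A": ["B", "C"]}
pts = {"A": [(0, 0)], "B": [(0, 1), (1, 1)], "C": [(1, 0)]}
matching = match_component(tree, "A", pts)
```

Each point is touched a constant number of times, matching the \(O(|P(C)|)\) bound of the lemma.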
Let \(M^{*}\) be a bottleneck matching of \(P\) and let \(bn(M^{*})\) be its bottleneck. Lemma 5: _If \(bn(M^{*})\leq 2^{i}\cdot\lambda\), then, for every connected component \(C\) in \(G_{i}\), \(|P(C)|\) is even._ Proof: Assume by contradiction that there is a connected component \(C\) in \(G_{i}\), such that \(|P(C)|\) is odd. Thus, at least one point \(p\in P(C)\) is matched in \(M^{*}\) to a point \(q\notin P(C)\). Therefore, \(|pq|>2^{i}\cdot\lambda\), which contradicts that \(bn(M^{*})\leq 2^{i}\cdot\lambda\). Theorem 4.1: _In \(O(\log\Delta)\) time we can compute a value \(t\), such that \(t<bn(M^{*})\leq 6\sqrt{2}\cdot t\). Moreover, we can compute a perfect matching \(M\) of \(P\) of bottleneck at most \(6\sqrt{2}\cdot bn(M^{*})\) in \(O(\log\Delta+|P|)\) time._ Proof: Let \(i\) be the smallest integer such that all the connected components in \(G_{i}\) have an even number of points. Thus, by Lemma 5, \(bn(M^{*})>2^{i-1}\cdot\lambda\), and, by Lemma 4, there exists a perfect matching of \(P\) of bottleneck at most \(3\sqrt{2}\cdot 2^{i}\cdot\lambda\). Therefore, by taking \(t=2^{i-1}\cdot\lambda\), we have \(t<bn(M^{*})\leq 6\sqrt{2}\cdot t\). Since each graph \(G_{i}\) has a parity bit, we can compute \(t\) in \(O(\log\Delta)\) time. Moreover, by Lemma 4, we can compute a perfect matching \(M\) of \(P\) of bottleneck at most \(3\sqrt{2}\cdot 2^{i}\cdot\lambda\) in \(O(|P|)\) time. Therefore, \(bn(M)\leq 3\sqrt{2}\cdot 2^{i}\cdot\lambda\leq 6\sqrt{2}\cdot bn(M^{*})\). ### Preprocessing We first introduce a data structure that will be used in the preprocessing. #### 4.1.1 Disjoint-set data structure A _disjoint-set data structure_ is a data structure that maintains a collection \(D\) of disjoint dynamic sets of objects and each set in \(D\) has a representative, which is some member of the set (see [15] for more details). 
Disjoint-set data structures support the following operations: * Make-Set\((x)\) creates a new set whose only member (and thus representative) is the object \(x\). * Union\((S_{i},S_{j})\) merges the sets \(S_{i}\) and \(S_{j}\) and choose either the representative of \(S_{i}\) or the representative of \(S_{j}\) to be the representative of the resulting set. * Find-Set\((x)\) returns the representative of the (unique) set containing \(x\). It has been proven in [28] that performing a sequence of \(m\) Make-Set, Union, or Find-Set operations on a disjoint-set data structures with \(n\) objects requires total time \(O(m\cdot\alpha(n))\), where \(\alpha(n)\) is the extremely slow-growing _inverse Ackermann function_. More precisely, it has been shown that the amortized time of each one of the operations Make-Set, Union, and Find-Set is \(O(1)\). We associate each set \(S\) in \(D\) with a variable \(v_{S}\) that represents the parity of \(S\) depending on the number of points in \(S\). We also modify the operations Make-Set\((x)\) to initialize the parity variable of the created set to be odd, and Union\((S_{i},S_{j})\) to update the parity variable of the joined set according to the parities of \(S_{i}\) and \(S_{j}\). Moreover, we define a new operation Change-Parity\((S)\) that inverses the parity of the set \(S\). Notice that these changes do not affect the performance of the data structure. We now describe how to initialize our data structure, given the bounding box \(\mathcal{B}\), the constant \(\lambda\), and an initial set \(P_{1}\). Set \(c=\lceil\log\Delta\rceil\). For each integer \(0\leq i\leq c\), let \(\Pi_{i}\) be the grid obtained by dividing \(\mathcal{B}\) into cells of side length \(2^{i}\cdot\lambda\). For each grid \(\Pi_{i}\), we use a disjoint-set data structure \(DSS_{i}\) to maintain the connected components of \(G_{i}\) that is defined on \(\Pi_{i}\) and \(P_{1}\). 
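A minimal Python sketch of this parity-augmented disjoint-set structure follows (union by rank with path compression; the class name and internal representation are ours, not the paper's implementation):

```python
class ParityDSS:
    """Disjoint sets of cells, each set tracking the parity of its point count."""

    def __init__(self):
        self.parent, self.rank, self.odd = {}, {}, {}

    def make_set(self, cell):
        # A newly non-empty cell holds exactly one point, so its set is odd.
        self.parent[cell] = cell
        self.rank[cell] = 0
        self.odd[cell] = True

    def find_set(self, cell):
        # Iterative find with path halving.
        while self.parent[cell] != cell:
            self.parent[cell] = self.parent[self.parent[cell]]
            cell = self.parent[cell]
        return cell

    def union(self, a, b):
        ra, rb = self.find_set(a), self.find_set(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.odd[ra] = self.odd[ra] != self.odd[rb]  # parities add mod 2
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

    def change_parity(self, cell):
        # A point enters or leaves an already non-empty cell.
        r = self.find_set(cell)
        self.odd[r] = not self.odd[r]

# Two singleton cells merge into one even component; a further point flips it.
dss = ParityDSS()
for c in ("a", "b"):
    dss.make_set(c)
dss.union("a", "b")        # component has 2 points: even
dss.change_parity("b")     # a second point appears in cell "b": odd again
```

The per-grid parity bit of \(G_{i}\) can then be maintained alongside these set parities.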
That is, the objects of \(DSS_{i}\) are the non-empty cells of \(\Pi_{i}\), and if two non-empty cells share a side or a corner in \(\Pi_{i}\), then they are in the same set in \(DSS_{i}\). This data structure guarantees that each connected component in \(G_{i}\) is a set in \(DSS_{i}\). As mentioned above, constructing each \(DSS_{i}\) can be done in \(O(|P_{1}|)\) time. Therefore, the preprocessing time is \(O(\log\Delta\cdot|P_{1}|)\). ### Dynamization Let \(P\) be the set of points at some time step. In the following, we describe how to update each structure \(DSS_{i}\) when inserting two points to \(P\) or deleting two points from \(P\). #### 4.2.1 Insertion Let \(p\) and \(q\) be the two points inserted to set \(P\). We describe the procedure for inserting \(p\). The same procedure is applied for inserting \(q\). For each grid \(\Pi_{i}\), we do the following; see Procedure 1. Let \(Cell_{i}(p)\) be the cell containing \(p\) in \(\Pi_{i}\). If \(Cell_{i}(p)\) contains points of \(P\), then we find the set containing \(Cell_{i}(p)\) in \(DSS_{i}\) and change its parity. Otherwise, we make a new set in \(DSS_{i}\) containing the cell \(Cell_{i}(p)\) and merge (union) it with all the sets in \(DSS_{i}\) that contain a non-empty adjacent cell of \(Cell_{i}(p)\), and update the parity of the joined set. 
```
1:  for each \(0\leq i\leq c\) do
2:      \(Cell_{i}(p)\leftarrow\) the cell containing \(p\) in \(\Pi_{i}\)
3:      if \(Cell_{i}(p)\cap P=\emptyset\) then    /* \(Cell_{i}(p)\) contains only \(p\) */
4:          Make-Set\((Cell_{i}(p))\)
5:          for each non-empty adjacent cell \(C\) of \(Cell_{i}(p)\) do
6:              \(S_{C}\leftarrow\) Find-Set\((C)\)
7:              \(S_{p}\leftarrow\) Find-Set\((Cell_{i}(p))\)
8:              Union\((S_{C},S_{p})\)
9:      else    /* \(Cell_{i}(p)\) contains points other than \(p\) */
10:         \(S_{i}(p)\leftarrow\) Find-Set\((Cell_{i}(p))\)
11:         Change-Parity\((S_{i}(p))\)
```
**Procedure 1** Insert\((p)\)

Lemma 6: Insert\((p)\) _takes amortized \(O(\log\Delta)\) time._ Proof: Finding the cell containing \(p\) in each grid \(\Pi_{i}\) can be done in constant time. If \(Cell_{i}(p)\) contains points of \(P\), then we change the parity of the set containing \(Cell_{i}(p)\) in \(DSS_{i}\) in constant time. Otherwise, making a new set in \(DSS_{i}\) and merging it with at most eight sets in \(DSS_{i}\) that contain non-empty adjacent cells of \(Cell_{i}(p)\) can also be done in amortized constant time. Since \(c=\lceil\log\Delta\rceil\), Insert\((p)\) takes amortized \(O(\log\Delta)\) time.

#### 4.2.2 Deletion

Let \(p\) and \(q\) be the two points deleted from \(P\). We describe the procedure for deleting \(p\). The same procedure is applied for deleting \(q\). Let \(Cell_{i}(p)\) be the cell containing \(p\) in \(\Pi_{i}\) and let \(S_{i}(p)\) be the set containing \(Cell_{i}(p)\) in \(DSS_{i}\). For each grid \(\Pi_{i}\), we change the parity of \(S_{i}(p)\) in \(DSS_{i}\). Then, we find the smallest \(i\) such that, in \(\Pi_{i}\), \(Cell_{i}(p)\) contains no other points of \(P\) than \(p\). If no such \(\Pi_{i}\) exists, then we do not make any change. If all the adjacent cells of \(Cell_{i}(p)\) are empty, then we just remove \(S_{i}(p)\) from \(DSS_{i}\). Otherwise, we check whether removing \(Cell_{i}(p)\) disconnects the component containing it.
That is, we check whether there are two non-empty adjacent cells of \(Cell_{i}(p)\) that were in the same set \(S_{i}(p)\) together with \(Cell_{i}(p)\) in \(DSS_{i}\) and after removing \(Cell_{i}(p)\) they should be in different sets. If there are two such cells, then we remove the set \(S_{i}(p)\) from \(DSS_{i}\) and reconstruct new sets for the cells in \(S_{i}(p)\setminus\{Cell_{i}(p)\}\). Lemma 7: _There is at most one grid \(\Pi_{i}\), such that removing \(Cell_{i}(p)\) disconnects the component containing it in \(DSS_{i}\)._ Proof: Assume by contradiction that there are two grids \(\Pi_{i}\) and \(\Pi_{j}\), such that \(i<j\) and removing \(Cell_{i}(p)\) and \(Cell_{j}(p)\) disconnect the component containing it in \(DSS_{i}\) and in \(DSS_{j}\), respectively. Let \(\sigma_{1}\) and \(\sigma_{2}\) be two non-empty adjacent cells of \(Cell_{i}(p)\) in \(\Pi_{i}\) that were in the same set \(S_{i}(p)\) together with \(Cell_{i}(p)\) in \(DSS_{i}\). Notice that \(\sigma_{1}\) and \(\sigma_{2}\) are contained in the \(3\times 3\) cells-block around \(Cell_{i}(p)\) in \(\Pi_{i}\); see Figure 3. Moreover, one of the corners of \(Cell_{i}(p)\) is a grid-vertex in \(\Pi_{i+1}\), as depicted in Figure 3. Therefore, \(\sigma_{1}\) and \(\sigma_{2}\) are either in the same cell or in adjacent cells in \(\Pi_{i+1}\), and in \(\Pi_{j}\), for each \(j\geq i+1\). This contradicts that \(Cell_{j}(p)\) disconnects the component containing it in \(DSS_{j}\). Lemma 8: _Deleting \(p\) from \(P\) takes amortized \(O(\log\Delta+|P|)\) time._ Proof: Changing the parity of \(S_{i}(p)\) in \(DSS_{i}\) can be done in constant time, for each \(1\leq i\leq c\). Finding the smallest \(i\) such that \(Cell_{i}(p)\) contains no other points of \(P\) than \(p\) takes \(O(\log\Delta)\) time. If all the adjacent cells of \(Cell_{i}(p)\) are empty, then we just remove \(S_{i}(p)\) from \(DSS_{i}\) in constant time. 
Otherwise, we reconstruct new sets for the cells in \(S_{i}(p)\setminus\{Cell_{i}(p)\}\) in amortized \(O(|S_{i}(p)|)=O(|P|)\) time. Since \(c=\lceil\log\Delta\rceil\), deleting \(p\) from \(P\) takes amortized \(O(\log\Delta+|P|)\) time. The following theorem summarizes the result of this section. Theorem 4.2: _Let \(P\) be a set of points in the plane and let \(\Delta\) be the geometric spread of \(P\). There exists a dynamic algorithm that maintains a \((6\sqrt{2})\)-approximate bottleneck matching of \(P_{k}\), at each time step \(k\), and supports insertion in \(O(\log\Delta)\) amortized time and deletion in \(O(\log\Delta+|P_{k}|)\) amortized time._ Figure 3: \(\Pi_{i}\) (in black) and \(\Pi_{i+1}\) (in red). The \(3\times 3\) cells-block (in blue) around \(Cell_{i}(p)\) in \(\Pi_{i}\). One of the corners of \(Cell_{i}(p)\) is a grid-vertex in \(\Pi_{i+1}\).
arXiv:2308.05623 — Thomas Morley, Sivakumar Namasivayam, Elizabeth Winstanley (2023-08-10)
http://arxiv.org/abs/2308.05623v2
Renormalized stress-energy tensor on global anti-de Sitter space-time with Robin boundary conditions ###### Abstract We study the renormalized stress-energy tensor (RSET) for a massless, conformally coupled scalar field on global anti-de Sitter space-time in four dimensions. Robin (mixed) boundary conditions are applied to the scalar field. We compute both the vacuum and thermal expectation values of the RSET. The vacuum RSET is a multiple of the space-time metric when either Dirichlet or Neumann boundary conditions are applied. Imposing Robin boundary conditions breaks the maximal symmetry of the vacuum state and results in an RSET whose components with mixed indices have their maximum (or maximum magnitude) at the space-time origin. The value of this maximum depends on the boundary conditions. We find similar behaviour for thermal states. As the temperature decreases, thermal expectation values of the RSET approach those for vacuum states and their values depend strongly on the boundary conditions. As the temperature increases, the values of the RSET components tend to profiles which are the same for all boundary conditions. We also find, for both vacuum and thermal states, that the RSET on the space-time boundary is independent of the boundary conditions and determined entirely by the trace anomaly. ## 1 Introduction In the absence of a full theory of quantum gravity, quantum field theory in curved space-time (QFTCS) provides us with an effective theory in which we study quantum fields propagating on a background classical curved space-time. In QFTCS, the renormalized expectation value of the stress-energy tensor operator (RSET) \(\langle\hat{T}_{\mu\nu}\rangle\) plays a pivotal role. 
The expectation value of this operator is used as the source term in the semiclassical version of Einstein's field equations (1) (here and through this paper we use units in which \(\hbar=c=G=1\)): \[R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}+g_{\mu\nu}\Lambda=8\pi\langle\hat{T}_{\mu \nu}\rangle, \tag{1}\] and therefore governs the backreaction effect of the quantum field on the space-time geometry. In this paper we consider the RSET for a quantum scalar field on global anti-de Sitter (adS) space-time. Although this is a maximally symmetric space-time, quantum fields on this background have rich properties, not least because of the need to impose boundary conditions on the field due to the fact that adS is not a globally hyperbolic space-time. The study of quantum fields on adS was initiated many years ago [1], where a massless, conformally coupled scalar field was studied, subject to either "transparent" or "reflective" boundary conditions. The latter correspond to either Dirichlet (the field vanishes on the boundary) or Neumann (the normal derivative of the field vanishes on the boundary) boundary conditions. The vacuum state retains the maximal symmetry of the underlying geometry when either Dirichlet or Neumann boundary conditions are applied and the vacuum expectation value of the RSET is a constant multiple of the space-time metric [2; 3; 4]. The introduction of a nonzero temperature breaks this symmetry but, nonetheless, the thermal expectation value of the RSET for a massless, conformally coupled scalar field can be found using an elegant method based on time-periodicity properties of the thermal Green's function [3]. The simplest boundary conditions, as studied in [1; 2; 3; 4; 5], are by no means the only possibilities [1; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. The wide range of valid boundary conditions gives rise to an extensive set of possible quantum states that can be studied. 
Amongst the various possible boundary conditions, in this paper we focus on Robin (mixed) boundary conditions (see, for example, [7; 9; 14] for more general boundary conditions that can be applied). For a massless, conformally coupled scalar field, Robin boundary conditions correspond to the vanishing of a linear combination of the field and its normal derivative on the boundary. Such boundary conditions break the maximal symmetry of the vacuum state [6; 8; 14; 16; 17]. The renormalized vacuum polarization (VP, the square of the scalar field) was computed in [18] for a massless, conformally coupled scalar field on four-dimensional adS with Robin boundary conditions applied to all field modes. For both vacuum expectation values (v.e.v.s) and thermal expectation values (t.e.v.s) it was found that, on the space-time boundary, the VP has the same value for all boundary conditions except for Dirichlet, where the value was different. The same conclusion was reached recently [19] on three-dimensional adS for a scalar field with nonzero mass and values of the coupling to the space-time curvature for which Robin boundary conditions can be applied. As a result, while Dirichlet boundary conditions are the most widely considered in the literature due to their simplicity, it is the Neumann boundary conditions which give the generic behaviour of the VP on the space-time boundary. In contrast, if Robin boundary conditions are applied only to a subset of the scalar field modes corresponding to \(s\)-wave perturbations, then the VP for a massless, conformally coupled scalar field on four-dimensional adS always takes the Dirichlet value on the boundary [6]. In this paper we explore whether this result extends to the v.e.v.s and t.e.v.s of the RSET for a massless, conformally coupled scalar field on four-dimensional adS. 
In [6], applying Robin boundary conditions just to the \(s\)-wave modes, it is found that the RSET on the space-time boundary again takes the same value as for Dirichlet boundary conditions. Here we follow [18] and apply Robin boundary conditions to _all_ field modes. As in [18], we employ Euclidean methods to find the v.e.v.s and t.e.v.s of the RSET, paying particular attention to how these depend on the parameter describing the Robin boundary conditions. Our paper is structured as follows. In Section 2 we outline the construction of the vacuum and thermal Green's functions for a massless, conformally coupled, scalar field on four-dimensional adS. This is followed, in Section 3, with the calculation of the expectation values for the RSET in both vacuum and thermal states, including a brief discussion of the numerical methods employed. The results for the v.e.v.s and t.e.v.s for the RSET are given in Sections 4 and 5 respectively. The behaviour of these quantities approaching the space-time boundary is explored further in Section 6. Finally we present our conclusions in Section 7. ## 2 Euclidean Green's functions AdS space-time is a maximally symmetric solution of Einstein's field equations of general relativity, with constant negative curvature. In global coordinates \((t,\rho,\theta,\phi)\) the metric is \[ds^{2}=L^{2}\sec^{2}\rho\,[-dt^{2}+d\rho^{2}+\sin^{2}\rho\,(d\theta^{2}+\sin^{ 2}\theta\,d\phi^{2})], \tag{2}\] where \(\,0\leq\rho<\pi/2\), \(0\leq\theta<\pi\), and \(0\leq\phi<2\pi\). In four dimensions, the cosmological constant (\(\Lambda<0\)) is related to the adS radius of curvature, \(L\), via \(\Lambda=-3/L^{2}\). In adS the time coordinate is periodic with \(t\in(-\pi,\pi]\) and the end points identified. This results in somewhat unphysical closed time-like curves. This is circumvented by considering the covering space (CadS) where the time coordinate is 'unwrapped' to give \(-\infty<t<\infty\). 
We work in Euclidean space where the Green's function is a unique, well-defined distribution. The Euclidean metric is obtained from the adS metric (2) by performing a Wick rotation, \(t\to i\tau\), leading to \[ds^{2}=L^{2}\sec^{2}\rho\,\,[d\tau^{2}+d\rho^{2}+\sin^{2}\rho\,(d\theta^{2}+ \sin^{2}\theta\,d\phi^{2})]. \tag{3}\] The vacuum \(G^{\rm E}_{\zeta,0}(x,x^{\prime})\) and thermal \(G^{\rm E}_{\zeta,\beta}(x,x^{\prime})\) Euclidean Green's functions for a massless, conformally coupled scalar field take the form [18] \[G^{\rm E}_{\zeta,0}(x,x^{\prime})=\frac{1}{8\pi^{2}L^{2}}\cos\rho \cos\rho^{\prime}\int_{\omega=-\infty}^{\infty}d\omega\,e^{i\omega\Delta\tau} \sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)g_{\omega\ell}(\rho,\rho^{ \prime}), \tag{4}\] \[G^{\rm E}_{\zeta,\beta}(x,x^{\prime})=\frac{\kappa}{8\pi^{2}L^{2 }}\cos\rho\,\cos\rho^{\prime}\sum_{n=-\infty}^{\infty}e^{in\kappa\Delta\tau} \sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)g_{\omega\ell}(\rho,\rho^{ \prime}), \tag{5}\] where \(\omega\) is the frequency, \(g_{\omega\ell}(\rho,\rho^{\prime})\) is the radial Green's function and \(P_{\ell}(x)\) is a Legendre polynomial. The angular separation of the space-time points, \(\gamma\), is given by \[\cos\gamma=\cos\theta\cos\theta^{\prime}+\sin\theta\sin\theta^{\prime}\cos \Delta\phi. \tag{6}\] For a thermal state at inverse temperature \(\beta\), the frequency \(\omega\) takes the quantized values \(\omega=n\kappa\) where \(\kappa\) is related to the inverse temperature by \[\kappa=\frac{2\pi}{\beta}. 
\tag{7}\] The radial Green's function \(g_{\omega\ell}(\rho,\rho^{\prime})\) satisfies the inhomogeneous equation \[\left\{\frac{d}{d\rho}\left(\sin^{2}\rho\frac{d}{d\rho}\right)-\omega^{2}\sin^{2}\rho-\ell(\ell+1)\right\}g_{\omega\ell}(\rho,\rho^{\prime})=\delta(\rho-\rho^{\prime}), \tag{8}\] and takes the form \[g_{\omega\ell}(\rho,\rho^{\prime})=\frac{p_{\omega\ell}(\rho_{<})q_{\omega\ell}(\rho_{>})}{N_{\omega\ell}}, \tag{9}\] where \(\rho_{<}=\min\{\rho,\rho^{\prime}\}\) and \(\rho_{>}=\max\{\rho,\rho^{\prime}\}\), with \(N_{\omega\ell}\) a normalization constant. Here \(p_{\omega\ell}\) and \(q_{\omega\ell}\) are solutions of the homogeneous version of (8) and can be written in terms of conical (Mehler) functions. The function \(p_{\omega\ell}(\rho)\) is regular at the origin \(\rho=0\) and takes the form \[p_{\omega\ell}(\rho)=\frac{1}{\sqrt{\sin\rho}}P^{-\ell-1/2}_{i\omega-1/2}(\cos\rho), \tag{10}\] where \(P^{\nu}_{\mu}(z)\) are associated Legendre functions. At \(\rho=\pi/2\), the function \(q_{\omega\ell}(\rho)\) satisfies Robin boundary conditions: \[q_{\omega\ell}(\rho)\cos\zeta+\frac{dq_{\omega\ell}(\rho)}{d\rho}\sin\zeta=0, \tag{11}\] where \(\zeta\in[0,\pi)\) is the Robin parameter. The value \(\zeta=0\) corresponds to Dirichlet boundary conditions, while \(\zeta=\pi/2\) gives Neumann boundary conditions. Imposing (11) on the general solution of the homogeneous version of (8) gives \[q_{\omega\ell}=\frac{1}{\sqrt{\sin\rho}}\left[C^{\zeta}_{\omega\ell}P^{-\ell-1/2}_{i\omega-1/2}(\cos\rho)+P^{-\ell-1/2}_{i\omega-1/2}(-\cos\rho)\right], \tag{12}\] where the constant \(C^{\zeta}_{\omega\ell}\) is given by \[C^{\zeta}_{\omega\ell}=\frac{2|\Gamma(\frac{i\omega+\ell+2}{2})|^{2}\tan\zeta-|\Gamma(\frac{i\omega+\ell+1}{2})|^{2}}{2|\Gamma(\frac{i\omega+\ell+2}{2})|^{2}\tan\zeta+|\Gamma(\frac{i\omega+\ell+1}{2})|^{2}}.
\tag{13}\] We have \(C^{0}_{\omega\ell}=-1\) for Dirichlet boundary conditions and \(C^{\pi/2}_{\omega\ell}=1\) for Neumann boundary conditions. The normalization constant \(N_{\omega\ell}\) is then given by [18] \[N_{\omega\ell}=\frac{2}{|\Gamma(\ell+1+i\omega)|^{2}}. \tag{14}\] Following [18], we now write the vacuum and thermal Euclidean Green's function with Robin boundary conditions as follows: \[G^{\rm E}_{\zeta,0}(x,x^{\prime}) = G^{\rm E}_{\rm D,0}(x,x^{\prime})\cos^{2}\zeta+G^{\rm E}_{\rm N,0} (x,x^{\prime})\sin^{2}\zeta+G^{\rm E}_{\rm R,0}(x,x^{\prime})\sin 2\zeta, \tag{15}\] \[G^{\rm E}_{\zeta,\beta}(x,x^{\prime}) = G^{\rm E}_{\rm D,\beta}(x,x^{\prime})\cos^{2}\zeta+G^{\rm E}_{ \rm N,\beta}(x,x^{\prime})\sin^{2}\zeta+G^{\rm E}_{\rm R,\beta}(x,x^{\prime}) \sin 2\zeta, \tag{16}\] where \(G^{\rm E}_{\rm D,0}(x,x^{\prime})\) and \(G^{\rm E}_{\rm D,\beta}(x,x^{\prime})\) are the vacuum and thermal Euclidean Green's functions with Dirichlet boundary conditions, given by \[G^{\rm E}_{\rm D,0}(x,x^{\prime}) = \frac{1}{16\pi^{2}L^{2}}\frac{\cos\rho\cos\rho^{\prime}}{\sqrt{ \sin\rho\sin\rho^{\prime}}}\int_{\omega=-\infty}^{\infty}d\omega\,e^{i\omega \Delta\tau}\sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)|\Gamma(\ell+1+i \omega)|^{2} \tag{17}\] \[\quad\times P_{i\omega-1/2}^{-\ell-1/2}(\cos\rho_{<})\left[P_{i \omega-1/2}^{-\ell-1/2}(-\cos\rho_{>})-P_{i\omega-1/2}^{-\ell-1/2}(\cos\rho_{> })\right],\] \[G^{\rm E}_{\rm D,\beta}(x,x^{\prime}) = \frac{\kappa}{16\pi^{2}L^{2}}\frac{\cos\rho\cos\rho^{\prime}}{ \sqrt{\sin\rho\sin\rho^{\prime}}}\sum_{n=-\infty}^{\infty}e^{in\kappa\Delta \tau}\sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)|\Gamma(\ell+1+in \kappa)|^{2}\] (18) \[\quad\times P_{in\kappa-1/2}^{-\ell-1/2}(\cos\rho_{<})\left[P_{ in\kappa-1/2}^{-\ell-1/2}(-\cos\rho_{>})-P_{in\kappa-1/2}^{-\ell-1/2}(\cos\rho_{> })\right],\] \(G^{\rm E}_{\rm N,0}(x,x^{\prime})\) and \(G^{\rm E}_{\rm N,\beta}(x,x^{\prime})\) are the vacuum and thermal Euclidean Green's 
functions, with Neumann boundary conditions, given by \[G^{\rm E}_{\rm N,0}(x,x^{\prime}) = \frac{1}{16\pi^{2}L^{2}}\frac{\cos\rho\cos\rho^{\prime}}{\sqrt{ \sin\rho\sin\rho^{\prime}}}\int_{\omega=-\infty}^{\infty}d\omega\,e^{i\omega \Delta\tau}\sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)|\Gamma(\ell+1+ i\omega)|^{2} \tag{19}\] \[\quad\times P_{i\omega-1/2}^{-\ell-1/2}(\cos\rho_{<})\left[P_{i \omega-1/2}^{-\ell-1/2}(-\cos\rho_{>})+P_{i\omega-1/2}^{-\ell-1/2}(\cos\rho_ {>})\right],\] \[G^{\rm E}_{\rm N,\beta}(x,x^{\prime}) = \frac{\kappa}{16\pi^{2}L^{2}}\frac{\cos\rho\cos\rho^{\prime}}{ \sqrt{\sin\rho\sin\rho^{\prime}}}\sum_{n=-\infty}^{\infty}e^{in\kappa\Delta \tau}\sum_{\ell=0}^{\infty}(2\ell+1)P_{\ell}(\cos\gamma)|\Gamma(\ell+1+in\kappa )|^{2}\] (20) \[\quad\times P_{in\kappa-1/2}^{-\ell-1/2}(\cos\rho_{<})\left[P_{ in\kappa-1/2}^{-\ell-1/2}(-\cos\rho_{>})+P_{in\kappa-1/2}^{-\ell-1/2}(\cos\rho_{>}) \right],\] \(G^{\rm E}_{\rm R,0}(x,x^{\prime})\) and \(G^{\rm E}_{\rm R,\beta}(x,x^{\prime})\) are the vacuum and thermal regular contributions (not Green's functions), given by \[G^{\rm E}_{\rm R,0}(x,x^{\prime}) = \frac{1}{16\pi^{2}L^{2}}\frac{\cos\rho\,\cos\rho^{\prime}}{\sqrt{ \sin\rho\,\sin\rho^{\prime}}}\int_{\omega=-\infty}^{\infty}d\omega\,e^{i\omega \Delta\tau}\sum_{\ell=0}^{\infty}D_{\omega\ell}^{\zeta}P_{\ell}(\cos\gamma)P_{ i\omega-1/2}^{-\ell-1/2}(\cos\rho)\,P_{i\omega-1/2}^{-\ell-1/2}(\cos\rho^{\prime}), \tag{21}\] \[G^{\rm E}_{\rm R,\beta}(x,x^{\prime}) = \frac{\kappa}{16\pi^{2}L^{2}}\frac{\cos\rho\,\cos\rho^{\prime}}{ \sqrt{\sin\rho\,\sin\rho^{\prime}}}\sum_{n=-\infty}^{\infty}e^{in\kappa\Delta \tau}\sum_{\ell=0}^{\infty}D_{\omega\ell}^{\zeta}P_{\ell}(\cos\gamma)P_{in\kappa -1/2}^{-\ell-1/2}(\cos\rho)\,P_{in\kappa-1/2}^{-\ell-1/2}(\cos\rho^{\prime}), \tag{22}\] where the constants \(D_{\omega\ell}^{\zeta}\) are given by \[D_{\omega\ell}^{\zeta}=(2\ell+1)|\Gamma(1+\ell+i\omega)|^{2}\left[\frac{2|\Gamma 
(\frac{i\omega+\ell+2}{2})|^{2}\cos\zeta-|\Gamma(\frac{i\omega+\ell+1}{2})|^{2}\sin\zeta}{2|\Gamma(\frac{i\omega+\ell+2}{2})|^{2}\sin\zeta+|\Gamma(\frac{i\omega+\ell+1}{2})|^{2}\cos\zeta}\right], \tag{23}\] with \(\omega=n\kappa\) in the thermal sum (22). It is clear from (21, 22) that \(G^{\rm E}_{\rm R,0}(x,x^{\prime})\) and \(G^{\rm E}_{\rm R,\beta}(x,x^{\prime})\) will diverge if there are values of the Robin parameter \(\zeta\) satisfying \[2\tan\zeta=-\frac{|\Gamma(\frac{i\omega+\ell+1}{2})|^{2}}{|\Gamma(\frac{i\omega+\ell+2}{2})|^{2}}. \tag{24}\] We therefore have an upper limit for the Robin parameter, \(\zeta_{\rm crit}\approx 0.68\pi\) [18], beyond which there are unstable modes, so we restrict our consideration of Robin boundary conditions to values \(\zeta<\zeta_{\rm crit}\). ## 3 Expectation values of the RSET Having derived expressions for the vacuum and thermal Green's functions for the massless, conformally coupled, scalar field on four-dimensional adS, we now determine the v.e.v.s and t.e.v.s of the RSET.
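Before doing so, the critical Robin parameter quoted at the end of Section 2 can be reproduced directly from (24); a minimal Python sketch (the assumption that the \(\omega=0\), \(\ell=0\) mode is the first to become unstable is ours, motivated by the Gamma-function ratio in (24) being largest there):

```python
import math

# Instability condition (24) for omega = 0 and l = 0:
#   2 tan(zeta) = -|Gamma(1/2)|^2 / |Gamma(1)|^2 = -pi,
# since Gamma(1/2) = sqrt(pi) and Gamma(1) = 1.
rhs = -math.gamma(0.5) ** 2 / math.gamma(1.0) ** 2  # = -pi

# Solve for zeta in [0, pi): tan is negative in (pi/2, pi), so shift atan's branch.
zeta_crit = math.pi + math.atan(rhs / 2.0)

print(zeta_crit / math.pi)  # approximately 0.68
```

Larger \(\ell\) (or nonzero \(\omega\)) makes the right-hand side of (24) smaller in magnitude, pushing the corresponding root closer to \(\pi\), which is why only the lowest mode matters for the bound.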
For a quantum state \(|\psi\rangle\) the RSET expectation value is given in terms of the Euclidean Green's function \(G^{\rm E}(x,x^{\prime})\) for that state by \[\langle\psi|\hat{T}_{\mu\nu}(x)|\psi\rangle=\lim_{x^{\prime}\to x}\left\{ \mathcal{T}_{\mu\nu}(x,x^{\prime})G^{\rm E}(x,x^{\prime})\right\}, \tag{25}\] where \(\mathcal{T}_{\mu\nu}(x,x^{\prime})\) is the second order differential operator [20] \[\begin{split}\mathcal{T}_{\mu\nu}=\frac{2}{3}g_{\nu}^{\ \nu^{\prime}}\nabla_{\mu}\nabla_{\nu^{\prime}}-\frac{1}{6}g_{\mu\nu}g^{\rho \sigma^{\prime}}\nabla_{\rho}\nabla_{\sigma^{\prime}}-\frac{1}{3}g_{\mu}^{\ \mu^{\prime}}g_{\nu}^{\ \nu^{ \prime}}\nabla_{\mu^{\prime}}\nabla_{\nu^{\prime}}\\ \qquad\qquad\qquad+\frac{1}{3}g_{\mu\nu}\nabla_{\rho}\nabla^{\rho }+\frac{1}{6}\Big{(}R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\Big{)},\end{split} \tag{26}\] where \(g_{\mu\nu^{\prime}}\) represents the bivector of parallel transport between the points \(x\) and \(x^{\prime}\). Using (16, 25) we find the following expression for the vacuum/thermal expectation value of the stress energy tensor \(\langle\hat{T}_{\mu\nu}\rangle^{\zeta}\): \[\langle\hat{T}_{\mu\nu}\rangle^{\zeta}=\lim_{x^{\prime}\to x}\left\{ \mathcal{T}_{\mu\nu}(x,x^{\prime})G^{\rm E}_{\rm D}(x,x^{\prime})\right\}\cos ^{2}\zeta+\lim_{x^{\prime}\to x}\left\{\mathcal{T}_{\mu\nu}(x,x^{\prime})G^{ \rm E}_{\rm N}(x,x^{\prime})\right\}\sin^{2}\zeta\\ +\lim_{x^{\prime}\to x}\left\{\mathcal{T}_{\mu\nu}(x,x^{\prime})G^{ \rm E}_{\rm R}(x,x^{\prime})\right\}\sin 2\zeta. \tag{27}\] The final term in (27) is regular in the coincidence limit. The expectation values are renormalized by subtracting the Hadamard parametrix from the Euclidean Green's function before applying the differential operator \(\mathcal{T}_{\mu\nu}\) and then bringing the space-time points together. 
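The structure of (27) means the \(\zeta\)-dependence enters only through three trigonometric weights multiplying state-independent pieces; a small sketch with placeholder component values (the numbers are illustrative only, not computed from the mode sums):

```python
import math

def rset_robin(T_D, T_N, T_R, zeta):
    # Combine the Dirichlet, Neumann and regular contributions as in (27)/(28).
    return (T_D * math.cos(zeta) ** 2
            + T_N * math.sin(zeta) ** 2
            + T_R * math.sin(2.0 * zeta))

# Placeholder component values (illustrative, not physical):
T_D, T_N, T_R = -1.0, -2.0, 0.3

# zeta = 0 recovers the Dirichlet value and zeta = pi/2 the Neumann value;
# the cross-term drops out of both limits because sin(2*zeta) vanishes there.
print(rset_robin(T_D, T_N, T_R, 0.0))
print(rset_robin(T_D, T_N, T_R, math.pi / 2.0))
```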
Since the subtraction terms are independent of the quantum state, we obtain \[\langle\hat{T}_{\mu\nu}\rangle^{\zeta}_{\rm ren}=\langle\hat{T}_{\mu\nu}\rangle^{\rm D}_{\rm ren}\cos^{2}\zeta+\langle\hat{T}_{\mu\nu}\rangle^{\rm N}_{\rm ren}\sin^{2}\zeta+\lim_{x^{\prime}\to x}\left\{\mathcal{T}_{\mu\nu}(x,x^{\prime})G^{\rm E}_{\rm R}(x,x^{\prime})\right\}\sin 2\zeta. \tag{28}\] Henceforth, the 'ren' subscript will be omitted and it can be assumed that all \(\langle\hat{T}_{\mu\nu}\rangle\) terms are renormalised. The t.e.v.s of the RSET with Dirichlet and Neumann boundary conditions have been determined in [3] (see (3.13), in which there is a typographical error which is corrected below): \[\langle\hat{T}_{\mu\nu}\rangle^{\rm D/N}_{\beta}=\frac{1}{8\pi^{2}L^{4}}\left\{\left[-\frac{1}{120}+\frac{4}{3}\cos^{4}\rho\,f_{3}\left(\frac{\beta}{L}\right)\right]g_{\mu\nu}+\left[\frac{16}{3}\cos^{4}\rho\,f_{3}\left(\frac{\beta}{L}\right)\right]\tau_{\mu}\tau_{\nu}\pm\left[\frac{1}{6}(3\csc^{2}\rho)S_{0}\left(\frac{\beta}{L},\rho\right)+\cot\rho\left(1-\frac{2}{3}\cos^{2}\rho\right)C_{1}\left(\frac{\beta}{L},\rho\right)+2\cos^{2}\rho\,S_{2}\left(\frac{\beta}{L},\rho\right)\right]\tau_{\mu}\tau_{\nu}\pm\left[\frac{1}{6}(3\csc^{2}\rho-4)S_{0}\left(\frac{\beta}{L},\rho\right)+\cot\rho\left(\frac{2}{3}\sin^{2}\rho-1\right)C_{1}\left(\frac{\beta}{L},\rho\right)-\frac{2}{3}\cos^{2}\rho\,S_{2}\left(\frac{\beta}{L},\rho\right)\right]\rho_{\mu}\rho_{\nu}\right\}, \tag{29}\] where \(g_{\mu\nu}\) is the space-time metric (3), and \(\tau_{\mu}\), \(\rho_{\mu}\) are unit vectors in the \(\tau\) and \(\rho\) directions, respectively. The Dirichlet boundary condition corresponds to the \(+\) sign whilst the Neumann boundary condition has the \(-\) sign.
The functions \(f_{m}\), \(S_{m}\) and \(C_{m}\) are given by [3] \[f_{m}(x)=\sum_{n=1}^{\infty}n^{m}(e^{nx}-1)^{-1}, \tag{30}\] \[S_{m}(x,\rho)=\sum_{n=1}^{\infty}n^{m}(-1)^{n}(e^{nx}-1)^{-1}\sin(2n\rho), \tag{31}\] \[C_{m}(x,\rho)=\sum_{n=1}^{\infty}n^{m}(-1)^{n}(e^{nx}-1)^{-1}\cos(2n\rho). \tag{32}\] To obtain the corresponding vacuum expectation value from (29), we take the limit as \(\beta\to\infty\), as will be discussed in Section 4. The expression \(\lim\limits_{x^{\prime}\to x}\{\mathcal{T}_{\mu\nu}(x,x^{\prime})G_{\mathrm{R},0}^{\mathrm{E}}(x,x^{\prime})\}\), in the final term in (28), has been evaluated with Mathematica and the nonzero components of this contribution to the v.e.v.s of the RSET are given by \[\langle\hat{T}_{\tau}^{\tau}\rangle_{\mathrm{R},0}^{\zeta}=\frac{\cos\rho\cot^{3}\rho}{192L^{4}\pi^{2}}\sum_{\ell=0}^{\infty}\,\int_{\omega=-\infty}^{\infty}d\omega\,D_{\omega\ell}^{\zeta}\Bigg\{-2\,\chi_{\omega\ell}^{2}\,[P_{i\omega-1/2}^{-3/2-\ell}(\cos\rho)]^{2}\,\sin^{2}\rho\] \[\qquad\qquad-P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\Bigg[2(\ell^{2}-3\omega^{2}+(1+\ell^{2}+3\omega^{2})\cos 2\rho)P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\] \[\qquad\qquad+2\,\Upsilon_{\omega\ell}\,P_{i\omega-1/2}^{-5/2-\ell}(\cos\rho)\sin^{2}\rho\Bigg]-(3+4\ell)\,\chi_{\omega\ell}\,P_{i\omega-1/2}^{-3/2-\ell}(\cos\rho)P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\sin 2\rho\Bigg\}, \tag{33}\] \[\langle\hat{T}_{\rho}^{\rho}\rangle_{\mathrm{R},0}^{\zeta}=\frac{\cos\rho\cot^{3}\rho}{192L^{4}\pi^{2}}\sum_{\ell=0}^{\infty}\,\int_{\omega=-\infty}^{\infty}d\omega\,D_{\omega\ell}^{\zeta}\Bigg\{-2\,\chi_{\omega\ell}^{2}\,[P_{i\omega-1/2}^{-3/2-\ell}(\cos\rho)]^{2}\sin^{2}\rho\] \[\qquad\qquad+P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\Bigg[2(-1-2\ell+(1+\ell)\cos 2\rho)P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\] \[\qquad\qquad-2\,\Upsilon_{\omega\ell}\,P_{i\omega-1/2}^{-5/2-\ell}(\cos\rho)\sin^{2}\rho\Bigg]-(5+4\ell)\,\chi_{\omega\ell}\,P_{i\omega-1
/2}^{-3/2-\ell}(\cos\rho)P_{i\omega-1/2}^{-1/2-\ell}(\cos\rho)\sin 2\rho\Bigg\}, \tag{35}\] with \(\langle\hat{T}_{\phi}^{\phi}\rangle_{\mathrm{R},0}^{\zeta}=\langle\hat{T}_{\theta}^{\theta}\rangle_{\mathrm{R},0}^{\zeta}\), where \(D_{\omega\ell}^{\zeta}\) is given by (23), and \(\Upsilon_{\omega\ell}\), \(\chi_{\omega\ell}\) are \[\Upsilon_{\omega\ell}=4+6\ell^{3}+\ell^{4}+5\omega^{2}+\omega^{4}+6\ell(2+\omega^{2})+\ell^{2}(13+2\omega^{2}), \tag{36}\] \[\chi_{\omega\ell}=1+2\ell+\ell^{2}+\omega^{2}. \tag{37}\] For the corresponding expressions for the t.e.v.s we replace \(\omega\) with \(n\kappa\) and change the integral to a sum to obtain the following nonzero components \[\langle\hat{T}_{\tau}^{\tau}\rangle_{\mathrm{R},\beta}^{\zeta}=\frac{\kappa\cos\rho\cot^{3}\rho}{192L^{4}\pi^{2}}\sum_{\ell=0}^{\infty}\,\sum_{n=-\infty}^{\infty}\,D_{n\ell}^{\zeta}\Bigg\{-2\,\chi_{n\ell}^{2}\,[P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)]^{2}\,\sin^{2}\rho\] \[\qquad\qquad-P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\Bigg[2(\ell^{2}-3n^{2}\kappa^{2}+(1+\ell^{2}+3n^{2}\kappa^{2})\cos 2\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\] \[\qquad\qquad+2\,\Upsilon_{n\ell}\,P_{in\kappa-1/2}^{-5/2-\ell}(\cos\rho)\sin^{2}\rho\Bigg]-(3+4\ell)\,\chi_{n\ell}\,P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\sin 2\rho\Bigg\}, \tag{38}\] \[\langle\hat{T}_{\rho}^{\rho}\rangle_{\mathrm{R},\beta}^{\zeta}=\frac{\kappa\cos\rho\cot^{3}\rho}{192L^{4}\pi^{2}}\sum_{\ell=0}^{\infty}\,\sum_{n=-\infty}^{\infty}\,D_{n\ell}^{\zeta}\Bigg\{-2\,\chi_{n\ell}^{2}\,[P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)]^{2}\,\sin^{2}\rho\] \[\qquad\qquad+P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\Bigg[2(-1-2\ell+(1+\ell)\cos 2\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\] \[\qquad\qquad-2\,\Upsilon_{n\ell}\,P_{in\kappa-1/2}^{-5/2-\ell}(\cos\rho)\sin^{2}\rho\Bigg]-(5+4\ell)\,\chi_{n\ell}\,P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\sin 2\rho\Bigg\},
\tag{39}\] \[\langle\hat{T}_{\theta}^{\theta}\rangle_{\mathrm{R},\beta}^{\zeta}=\frac{\kappa\cos\rho\cot^{3}\rho}{192L^{4}\pi^{2}}\sum_{\ell=0}^{\infty}\,\sum_{n=-\infty}^{\infty}\,D_{n\ell}^{\zeta}\Bigg\{-2\,\chi_{n\ell}^{2}\,[P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)]^{2}\sin^{2}\rho\] \[\qquad+P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\Bigg[-2(-1-2\ell-2\ell^{2}+(1+\ell)^{2}\cos 2\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\] \[\qquad-2\,\Upsilon_{n\ell}\,P_{in\kappa-1/2}^{-5/2-\ell}(\cos\rho)\sin^{2}\rho\Bigg]-(5+4\ell)\,\chi_{n\ell}\,P_{in\kappa-1/2}^{-3/2-\ell}(\cos\rho)P_{in\kappa-1/2}^{-1/2-\ell}(\cos\rho)\sin 2\rho\Bigg\}, \tag{40}\] and \(\langle\hat{T}_{\phi}^{\phi}\rangle_{\mathrm{R},\beta}^{\zeta}=\langle\hat{T}_{\theta}^{\theta}\rangle_{\mathrm{R},\beta}^{\zeta}\). The \(D_{n\ell}^{\zeta}\), \(\Upsilon_{n\ell}\) and \(\chi_{n\ell}\) terms are obtained from the corresponding terms in (23, 36, 37) by replacing \(\omega\) with \(n\kappa\). The v.e.v.s and t.e.v.s of the RSET are calculated numerically using Mathematica. The sums in (29) converge extremely rapidly and are straightforward to compute. The remaining contributions (33-40), which arise when we impose Robin boundary conditions, involve either a double infinite summation (for the t.e.v.s) or an integral and a summation (for the v.e.v.s). For the v.e.v.s, we performed the integral over \(\omega\) first before summing over \(\ell\). For fixed \(\ell\), the integral over \(\omega\) is rapidly convergent, and we integrated over the interval \(|\omega|\leq 100\). For the t.e.v.s, the sum over \(n\) again converges rapidly, and we summed over \(n\) with magnitude less than or equal to 50. As was found in the computation of the VP [18], the sum over \(\ell\) exhibits nonuniform convergence with respect to the radial coordinate, \(\rho\), converging more quickly nearer the origin and much more slowly as the space-time boundary is approached (see Figure 1).
For the v.e.v.s, we summed over \(0\leq\ell\leq 100\) and for the t.e.v.s over \(0\leq\ell\leq 80\). We used a smaller range of values of \(n\) and \(\ell\) for the t.e.v.s compared to the v.e.v.s due to the increased computation time required for the function evaluations. We estimated the errors in truncating the sums and integrals as follows. For the v.e.v.s, for a selection of values of the Robin parameter \(\zeta\) and radial coordinate \(\rho\), we compared our results obtained by integrating over \(|\omega|\leq 100\) and summing over \(0\leq\ell\leq 100\) with those found from increasing the maximum values of the magnitudes of \(\omega\) and \(\ell\) to 170. For example, for \(\zeta=3\pi/10\) and \(\rho=94\pi/200\), by this method we estimate the relative error in (33) to be of order \(10^{-2}\). The relative error was much smaller further away from the space-time boundary, and is estimated to be of order \(10^{-18}\) at \(\rho=3\pi/10\) and \(\rho=\pi/20\). However, the contributions to the RSET in (33-35) make up only a small proportion of the overall value. For example, the value of (33) at \(\rho=94\pi/200\) and \(\zeta=3\pi/10\) as a fraction of the total v.e.v. of \(\langle\hat{T}_{\tau}^{\tau}\rangle_{0}^{\zeta}\) was \(\sim 5\times 10^{-4}\), meaning that the errors in the numerical calculations of the contributions (33-35) are much less significant in the final results. The same holds for the t.e.v.s.

Figure 1: Log-log plot of the \(\ell\)-summand in \(\langle\hat{T}_{\tau}^{\tau}\rangle_{\mathrm{R},\beta}^{\zeta}\) (38) as a function of \(\ell\) for a selection of values of the radial coordinate \(\rho\). With \(\zeta=3\pi/5\) and \(\kappa=1/2\), we have performed the sum over \(|n|\leq 50\). It can be seen that, as \(\rho\) increases, the \(\ell\)-summand decreases at a much slower rate with increasing \(\ell\), resulting in a sum over \(\ell\) which converges more slowly for larger values of \(\rho\).
We employed the same method to estimate the relative errors in the t.e.v.s, and find that the errors depend strongly on both the temperature and the radial coordinate \(\rho\). For \(\kappa=1/2\) and \(\zeta=\pi/10\), the relative error in the numerical computation of (38) at \(\rho=80\pi/200\), for example, was \(\sim 3.5\times 10^{-5}\). Indeed, as a result of increasing errors encountered close to the boundary, the numerical calculation of (38), for \(\kappa=1/2\) was performed up to \(\rho=85\pi/200\) only. The relative errors near the space-time boundary improved somewhat with increasing \(\kappa\). For \(\rho=90\pi/200\), for instance, the relative errors in (38) were \(\sim 2\times 10^{-10}\) and \(\sim 5\times 10^{-11}\) for \(\kappa=2\) and \(\kappa=2\pi\) respectively. ## 4 Vacuum expectation value of the RSET with Robin boundary conditions In the low temperature limit (\(\beta\to\infty\)), the v.e.v. of the RSET, \(\langle\hat{T}_{\mu\nu}\rangle_{0}^{\rm D/N}\), derived from (29) reduces to [3] \[\langle\hat{T}_{\mu\nu}\rangle_{0}^{\rm D/N}=-\frac{1}{960\pi^{2}L^{4}}g_{\mu\nu} \tag{41}\] in agreement with the calculation in [4] for a scalar field with general mass and coupling and Dirichlet boundary conditions. In particular, the v.e.v. (41) is identical for Dirichlet and Neumann boundary conditions. This does not occur for the VP (where the v.e.v.s for Dirichlet and Neumann boundary conditions are different), and can be understood as follows. For both Dirichlet and Neumann boundary conditions, the vacuum state is maximally symmetric, and therefore the RSET will be a constant multiple of the metric, \(\langle\hat{T}_{\mu\nu}\rangle_{0}^{\rm D/N}=\alpha g_{\mu\nu}\) for some constant \(\alpha\). Taking the trace, \(\alpha=\langle\hat{T}_{\mu}^{\mu}\rangle_{0}^{\rm D/N}/4\). 
For a massless, conformally coupled scalar field, the trace \(\langle\hat{T}_{\mu}^{\mu}\rangle_{0}^{\rm D/N}\) is fixed to be the trace anomaly, which, on four-dimensional adS, is [4] \[\langle\hat{T}_{\mu}^{\mu}\rangle=-\frac{1}{240\pi^{2}L^{4}}. \tag{42}\] Therefore, for a massless, conformally coupled scalar field, the RSET for a maximally symmetric state is entirely determined by the trace anomaly and is independent of any boundary conditions applied. While the v.e.v. of the RSET with either Dirichlet or Neumann boundary conditions respects the maximal symmetry of the underlying space-time, this is not the case when Robin boundary conditions are applied, as can be seen in Figure 2. For all values of \(\zeta\neq 0,\pi/2\), each component of the RSET varies with the radial coordinate. The energy density \(-\langle\hat{T}_{\tau}^{\tau}\rangle_{0}^{\zeta}\) is positive throughout the space-time, reaching the common vacuum Dirichlet/Neumann value at the space-time boundary. This is in contrast to the findings in [6], where the energy density is negative on most of the space-time and only becomes positive as the boundary is reached; the difference is due to the application of Robin boundary conditions to only a subset of the modes in [6]. The other components of the RSET take the same constant values when \(\zeta=0\) or \(\pi/2\) and Dirichlet or Neumann boundary conditions are applied. The quantities plotted in Figure 2 are greatest at the space-time origin and converge to the Dirichlet/Neumann value as the space-time boundary is reached (\(\rho\to\pi/2\)). Their values at the space-time origin increase as \(\zeta\) increases from zero, attain a maximum at some value of \(\zeta\in(0,\pi/2)\) and then decrease as \(\zeta\) approaches \(\pi/2\). As \(\zeta\) increases above \(\pi/2\), these quantities increase rapidly as \(\zeta\) approaches \(\zeta_{\rm crit}\approx 0.68\pi\), at which point there is a classical mode instability and the semiclassical approximation breaks down.
Whilst the v.e.v.s of the \(\langle\hat{T}_{\rho}^{\rho}\rangle\) and \(\langle\hat{T}_{\theta}^{\theta}\rangle\) components of the RSET are negative, they are greater (less negative) for Robin boundary conditions than when Dirichlet/Neumann boundary conditions are applied. However, the variation in the v.e.v.s of the RSET components due to varying the boundary conditions is rather small, at roughly the percent level. While it may appear from Figure 2 that the v.e.v.s of the \(\langle\hat{T}_{\rho}^{\rho}\rangle_{0}^{\zeta}\) and \(\langle\hat{T}_{\theta}^{\theta}\rangle_{0}^{\zeta}\) components are the same, there is in fact a subtle difference. Writing the components of the RSET in the Landau decomposition, analogous to that employed in the thermal state [5], gives \[\langle\hat{T}_{\mu}^{\nu}\rangle_{0}^{\zeta}={\rm Diag}\Big\{-E_{0}^{\zeta},\,P_{0}^{\zeta}+\Pi_{0}^{\zeta},\,P_{0}^{\zeta}-\frac{1}{2}\Pi_{0}^{\zeta},\,P_{0}^{\zeta}-\frac{1}{2}\Pi_{0}^{\zeta}\Big\}, \tag{43}\] where \(E_{0}^{\zeta}\) is the energy density, \(P_{0}^{\zeta}\) the pressure and \(\Pi_{0}^{\zeta}\) the shear stress or pressure deviator [5]. The pressure deviator measures the difference between the RSET of the quantum scalar field and that found if the field were modelled as a classical gas of particles (it vanishes in the latter case). As the \(\langle\hat{T}_{\theta}^{\theta}\rangle_{0}^{\zeta}\) component of the RSET is greater than the \(\langle\hat{T}_{\rho}^{\rho}\rangle_{0}^{\zeta}\) component, in Figure 3 we show \(-\Pi_{0}^{\zeta}\) as a function of the radial coordinate \(\rho\) for the vacuum state. For both Dirichlet and Neumann boundary conditions, the vacuum pressure deviator is zero (not shown in Figure 3). For Robin boundary conditions, \(\Pi_{0}^{\zeta}\) vanishes at both the origin and boundary of the space-time and attains its maximum absolute value between the two.
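The decomposition (43) is easily inverted for the energy density, pressure and pressure deviator; a small sketch (the input components are placeholders, not computed RSET values):

```python
def landau_decompose(T_tau, T_rho, T_theta):
    """Recover E, P and Pi from the diagonal mixed components, inverting (43):
       T^tau_tau = -E,  T^rho_rho = P + Pi,  T^theta_theta = P - Pi/2."""
    E = -T_tau
    P = (T_rho + 2.0 * T_theta) / 3.0
    Pi = 2.0 * (T_rho - T_theta) / 3.0
    return E, P, Pi

# Round-trip on illustrative numbers:
T_tau, T_rho, T_theta = -0.9, -0.2, -0.35
E, P, Pi = landau_decompose(T_tau, T_rho, T_theta)
assert abs((P + Pi) - T_rho) < 1e-12 and abs((P - Pi / 2.0) - T_theta) < 1e-12

# Pi vanishes iff the rho and theta pressures agree, as for the Dirichlet and
# Neumann vacua; the (2/3 of the) difference T_rho - T_theta is what varies.
print(E, P, Pi)
```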
## 5 Thermal expectation value of the RSET with Robin boundary conditions The t.e.v.s of the nonzero components of the RSET with various values of \(\kappa\) (7) are shown in Figures 4-6 (as in the vacuum state, the \(\langle\hat{T}_{\phi}^{\phi}\rangle_{\beta}^{\zeta}\) component has the same values as the \(\langle\hat{T}_{\theta}^{\theta}\rangle_{\beta}^{\zeta}\) component). The nonzero components have very similar behaviour. Unlike the vacuum case, the t.e.v.s with Dirichlet and Neumann boundary conditions, for all nonzero components of the RSET, are no longer constant and vary with the space-time location. The difference between the t.e.v.s with Dirichlet and Neumann boundary conditions is a maximum at the space-time origin and decreases with increasing \(\rho\). The RSET components for these two boundary conditions converge to their common v.e.v. at the space-time boundary (\(\rho\to\pi/2\)). The absolute difference seen between the nonzero RSET components for the Dirichlet and Neumann boundary conditions increases with increasing \(\kappa\) (and hence increasing temperature) and is not clearly discernible at low temperatures in Figures 4-6 due to the scales used. The energy density \(-\langle\hat{T}_{\tau}^{\tau}\rangle_{\beta}^{\zeta}\) is positive throughout the space-time, achieves its maximum value at the space-time origin and increases with increasing temperature. For all Robin parameters studied, the energy density converges to the common vacuum Dirichlet/Neumann value at the space-time boundary. For the other nonzero components of the RSET, the t.e.v.s are predominantly negative at low temperature and increase with increasing temperature, becoming positive in a neighbourhood of the space-time origin for sufficiently large \(\kappa\). They also achieve their maximum values at the space-time origin, converging to the v.e.v. (41) at the space-time boundary.
It can be seen that at low temperature (\(\kappa=1/2\)), the curves for t.e.v.s with Robin boundary conditions lie outside the curves corresponding to Dirichlet/Neumann boundary conditions. With increasing temperature, the curves for t.e.v.s with Robin boundary conditions increasingly lie within those for Dirichlet/Neumann boundary conditions and are mostly contained within them for \(\kappa=2\pi\). As the temperature increases, the variation in the nonzero components of the RSET with varying Robin parameter \(\zeta\) becomes much less apparent, as seen in Figures 4-6. As in the vacuum case, we also plot minus the thermal pressure deviator (\(-\Pi_{\beta}^{\zeta}\)), which is proportional to the difference between the \(\langle\hat{T}_{\rho}^{\rho}\rangle_{\beta}^{\zeta}\) and \(\langle\hat{T}_{\theta}^{\theta}\rangle_{\beta}^{\zeta}\) components of the t.e.v. of the RSET (see Figure 7). The pressure deviator is not only sensitive to the different Robin boundary conditions (as in the vacuum case) but also to the different temperatures. Unlike the vacuum case, the pressure deviator is no longer zero everywhere for Dirichlet and Neumann boundary conditions (see also [5], where Dirichlet boundary conditions are applied). For these boundary conditions, the pressure deviator does vanish at the space-time origin and boundary and attains its maximum magnitude between these, with the maximum magnitude increasing as the temperature increases. The sign differs between the two: \(-\Pi_{\beta}^{\zeta}\) is negative for Dirichlet and positive for Neumann boundary conditions. For Robin boundary conditions, the profile of the pressure deviator is largely similar to that for Dirichlet or Neumann boundary conditions, vanishing at the origin and space-time boundary and having a maximum magnitude at some \(\rho\in(0,\pi/2)\).
At the higher temperatures we see that \(\Pi_{\beta}^{\zeta}\) is most positive with Dirichlet boundary conditions (\(\zeta=0\)), but with increasing Robin parameter \(\zeta\) the pressure deviator becomes increasingly negative. As seen in the RSET components, we find that with increasing temperature, the thermal pressure deviators with different Robin boundary conditions are increasingly 'contained' within the Dirichlet and Neumann curves.

Figure 3: Vacuum pressure deviator \(-\Pi_{0}^{\zeta}\) (43) with Robin boundary conditions. On the left is a 3D surface plot showing the variation of \(-\Pi_{0}^{\zeta}\) with \(\rho\) and \(\zeta\). On the right is \(-\Pi_{0}^{\zeta}\) as a function of \(\rho\) for a selection of values of the Robin parameter \(\zeta\). Robin parameters \(\zeta>\pi/2\) are shown with dotted curves. The pressure deviator vanishes identically when Dirichlet or Neumann boundary conditions are applied and is not plotted in these cases.

Figure 4: T.e.v.s of the energy density \(-\langle\hat{T}^{\tau}_{\tau}\rangle^{\zeta}_{\beta}\), with Robin boundary conditions and a selection of values of \(\kappa\) (7). On the left are 3D surface plots showing the variation of \(-\langle\hat{T}^{\tau}_{\tau}\rangle^{\zeta}_{\beta}\) with \(\rho\) and \(\zeta\). On the right is \(-\langle\hat{T}^{\tau}_{\tau}\rangle^{\zeta}_{\beta}\) as a function of \(\rho\) for a selection of values of the Robin parameter \(\zeta\). Dirichlet and Neumann boundary conditions are shown with dotted lines.

Figure 5: T.e.v.s of the RSET, \(\langle\hat{T}^{\rho}_{\rho}\rangle_{\beta}^{\zeta}\), with Robin boundary conditions and a selection of values of \(\kappa\) (7). On the left are 3D surface plots showing the variation of \(\langle\hat{T}^{\rho}_{\rho}\rangle_{\beta}^{\zeta}\) with \(\rho\) and \(\zeta\). On the right is \(\langle\hat{T}^{\rho}_{\rho}\rangle_{\beta}^{\zeta}\) as a function of \(\rho\) for a selection of values of the Robin parameter \(\zeta\). Dirichlet and Neumann boundary conditions are shown with dotted lines.

Figure 6: T.e.v.s of the RSET, \(\langle\hat{T}^{\theta}_{\theta}\rangle^{\zeta}_{\beta}\), with Robin boundary conditions and a selection of values of \(\kappa\) (7). On the left are 3D surface plots showing the variation of \(\langle\hat{T}^{\theta}_{\theta}\rangle^{\zeta}_{\beta}\) with \(\rho\) and \(\zeta\). On the right is \(\langle\hat{T}^{\theta}_{\theta}\rangle^{\zeta}_{\beta}\) as a function of \(\rho\) for a selection of values of the Robin parameter \(\zeta\). Dirichlet and Neumann boundary conditions are shown with dotted lines.

Figure 7: Thermal pressure deviators \(-\Pi^{\zeta}_{\beta}\) (43) with Robin boundary conditions and a selection of values of \(\kappa\) (7). On the left are 3D surface plots showing the variation of \(-\Pi^{\zeta}_{\beta}\) with \(\rho\) and \(\zeta\). On the right is \(-\Pi^{\zeta}_{\beta}\) as a function of \(\rho\) for a selection of values of the Robin parameter \(\zeta\). Dirichlet and Neumann boundary conditions are shown with dotted lines.

## 6 The RSET at the boundary

The behaviour of the RSET components as the space-time boundary is approached may be understood from the corresponding analysis in [18] for the VP. Since we are considering a massless, conformally coupled scalar field, we can make a conformal transformation to the Einstein static universe (ESU), containing a time-like surface which is the image of the adS boundary under this mapping. Using the general construction in [21], the Green's function for the scalar field on ESU with Robin boundary conditions applied can be written as an asymptotic series in terms of the Green's function on ESU with Neumann boundary conditions applied, \(G_{\rm N}^{\rm ESU}(x,x^{\prime})\) (see [18; 21] for more details).
This procedure gives the following asymptotic series for the vacuum Euclidean Green's function on ESU, \(G_{\zeta,0}^{\rm ESU}\), with Robin boundary conditions applied: \[G_{\zeta,0}^{\rm ESU}(x,x^{\prime})=G_{\rm N,0}^{\rm ESU}(x,x^{\prime})-\frac {1}{L}G_{\zeta,0}^{(1)}(x,x^{\prime})\cot\zeta+\frac{1}{L^{2}}G_{\zeta,0}^{(2) }(x,x^{\prime})\cot^{2}\zeta+\ldots \tag{44}\] where the first two terms in the series are given by \[G_{\zeta,0}^{(1)}(x,x^{\prime})=\int_{\mathcal{I}_{\pi/2}}G_{ \rm N}^{\rm ESU}(x,y)G_{\rm N}^{\rm ESU}(y,x^{\prime})\,dS, \tag{45}\] \[G_{\zeta,0}^{(2)}(x,x^{\prime})=\int_{\mathcal{I}_{\frac{\pi}{ 2}}}G_{\rm N}^{\rm ESU}(x,y)\left[\int_{\mathcal{I}_{\frac{\pi}{2}}}G_{\rm N}^ {\rm ESU}(y,z)G_{\rm N}^{\rm ESU}(z,x^{\prime})\,dS\right]\,dS. \tag{46}\] Here \(\mathcal{I}_{\pi/2}\) is the surface at \(\rho=\pi/2\) in ESU, and the integrals are performed over the space-time points \(y\), \(z\) on this surface in ESU. Higher-order terms in the series can be found iteratively. The Green's function on ESU with Neumann boundary conditions applied has a compact closed-form expression [18] \[G_{\rm N}^{\rm ESU}(x,x^{\prime})=\frac{1}{8\pi^{2}L^{2}}\left\{\frac{1}{\cosh \Delta\tau+\cos\Psi}+\frac{1}{\cosh\Delta\tau+\cos\Psi^{*}}\right\} \tag{47}\] where \(\Delta\tau=\tau-\tau^{\prime}\) is the separation of the points in the \(\tau\)-direction, \[\Psi=\arccos\left[-\cos\rho\cos\rho^{\prime}-\cos\gamma\sin\rho \sin\rho^{\prime}\right],\] \[\Psi^{*}=\pi+\arccos\left[-\cos\rho\cos\rho^{\prime}+\cos\gamma \sin\rho\sin\rho^{\prime}\right] \tag{48}\] and \(\gamma\) is the angular separation of the points (6). 
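Two properties of (47) are easy to verify numerically: symmetry under exchange of the points, and the vanishing of the normal derivative at \(\rho=\pi/2\) (the Neumann condition). A sketch (argument ordering and \(L=1\) are our choices; \(\cos\Psi\) and \(\cos\Psi^{*}\) are written out directly from (48)):

```python
import math

def G_neumann_esu(dtau, rho1, rho2, gamma, L=1.0):
    """Closed-form Neumann Green's function on ESU, eq. (47), with cos(Psi)
    and cos(Psi*) written out directly from (48)."""
    cos_psi = (-math.cos(rho1) * math.cos(rho2)
               - math.cos(gamma) * math.sin(rho1) * math.sin(rho2))
    # cos(Psi*) = cos(pi + arccos(u)) = -u, with u as in (48):
    cos_psi_star = (math.cos(rho1) * math.cos(rho2)
                    - math.cos(gamma) * math.sin(rho1) * math.sin(rho2))
    pref = 1.0 / (8.0 * math.pi ** 2 * L ** 2)
    return pref * (1.0 / (math.cosh(dtau) + cos_psi)
                   + 1.0 / (math.cosh(dtau) + cos_psi_star))

# Symmetry under exchange of the two points (dtau -> -dtau, rho1 <-> rho2):
a = G_neumann_esu(0.3, 0.7, 1.1, 0.4)
b = G_neumann_esu(-0.3, 1.1, 0.7, 0.4)
print(abs(a - b))  # zero to rounding

# Neumann condition: the radial derivative vanishes on the surface rho = pi/2.
h = 1e-5
d = (G_neumann_esu(0.3, math.pi / 2 + h, 1.1, 0.4)
     - G_neumann_esu(0.3, math.pi / 2 - h, 1.1, 0.4)) / (2.0 * h)
print(abs(d))  # O(h^2)
```

The derivative check works because, at \(\rho=\pi/2\), the two terms in (47) have equal denominators and equal-and-opposite radial derivatives, so their contributions cancel.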
Applying the differential operator \(\mathcal{T}_{\mu\nu}(x,x^{\prime})\) to the Green's function (44) and bringing the space-time points together gives \[\langle\hat{T}_{\mu\nu}\rangle_{\zeta,0}^{\rm ESU}=\langle\hat{T }_{\mu\nu}\rangle_{\rm N,0}^{\rm ESU}-\frac{\cot\zeta}{L}\lim_{x^{\prime} \to x}\left\{\mathcal{T}_{\mu\nu}(x,x^{\prime})\left[G_{\zeta,0}^{(1)}(x,x^{ \prime})\right]\right\}\\ +\frac{\cot^{2}\zeta}{L^{2}}\lim_{x^{\prime}\to x}\left\{ \mathcal{T}_{\mu\nu}(x,x^{\prime})\left[G_{\zeta,0}^{(2)}(x,x^{\prime})\right] \right\}+\ldots \tag{49}\] We can relate the RSET on ESU to that on adS using [22] \[\langle\hat{T}_{\mu}^{\omega}\rangle_{\zeta,0}^{\rm adS}=\langle\hat{T}_{\mu} ^{\nu}\rangle_{\zeta,0}^{\rm ESU}\frac{\sqrt{\tilde{g}}}{\sqrt{\tilde{g}}}- \frac{1}{2880\pi^{2}}\left[\frac{1}{6}\,{}^{(1)}H_{\mu}^{\nu}-{}^{(3)}H_{\mu}^ {\nu}\right], \tag{50}\] where \(\tilde{g}\) and \(g\) are the determinants of the metrics on ESU and adS respectively and \({}^{(1)}H_{\mu\nu}\) and \({}^{(3)}H_{\mu\nu}\) are given by [22] \[{}^{(1)}H_{\mu\nu}=2R_{;\mu\nu}-2g_{\mu\nu}\Box R-\frac{1}{2}g_{ \mu\nu}R^{2}+2RR_{\mu\nu}, \tag{51}\] \[{}^{(3)}H_{\mu\nu}=R_{\mu}^{\rho}R_{\rho\nu}-\frac{2}{3}RR_{\mu \nu}-\frac{1}{2}R_{\rho\sigma}R^{\rho\sigma}g_{\mu\nu}+\frac{1}{4}R^{2}g_{\mu \nu}. \tag{52}\] On adS, \({}^{(1)}H_{\mu\nu}\) vanishes identically and \({}^{(3)}H_{\mu\nu}=3g_{\mu\nu}/L^{2}\). 
Using (49, 50) we can write \[\langle\hat{T}^{\nu}_{\mu}\rangle_{\zeta,0}^{\rm adS}=\langle\hat{T }^{\nu}_{\mu}\rangle_{{\rm N},0}^{\rm adS}-\frac{\cot\zeta}{L}\lim_{x^{\prime} \to x}\left\{\mathcal{T}_{\mu\nu}(x,x^{\prime})\left[G^{(1)}_{\zeta,0}(x,x^{ \prime})\right]\right\}\cos^{4}\rho\\ +\frac{\cot^{2}\zeta}{L^{2}}\lim_{x^{\prime}\to x}\left\{ \mathcal{T}_{\mu\nu}(x,x^{\prime})\left[G^{(2)}_{\zeta,0}(x,x^{\prime})\right] \right\}\cos^{4}\rho+\ldots \tag{53}\] From the analysis in [21], the RSET on ESU (49) can also be expressed as an asymptotic series at an arbitrarily small distance, \(\epsilon\), from the boundary at \(\rho=\pi/2\) as \[\langle\hat{T}_{\mu\nu}\rangle_{\zeta,0}^{\rm ESU}\sim g_{\mu}^{\alpha^{ \prime}}g_{\nu}^{\beta^{\prime}}\left(\epsilon^{-4}\,T^{(4)}_{\alpha^{\prime }\beta^{\prime}}+\epsilon^{-3}\,T^{(3)}_{\alpha^{\prime}\beta^{\prime}}+ \epsilon^{-2}\,T^{(2)}_{\alpha^{\prime}\beta^{\prime}}\right)+O(\epsilon^{-1}), \tag{54}\] where \(g_{\mu}^{\alpha^{\prime}}(x,x^{\prime})\) is the bivector of parallel transport between the space-time points \(x\) and \(x^{\prime}\). When substituted in (50), the leading-order term \(\epsilon^{-4}g_{\mu}^{\alpha^{\prime}}g_{\nu}^{\beta^{\prime}}T^{(4)}_{\alpha^ {\prime}\beta^{\prime}}\), together with the contribution from \({}^{(3)}H_{\mu\nu}\), yields \(\langle\hat{T}_{\mu\nu}\rangle_{{\rm N},0}^{\rm adS}\) in (53). The next-to-leading order quantity \(\epsilon^{-3}g_{\mu}^{\alpha^{\prime}}g_{\nu}^{\beta^{\prime}}T^{(3)}_{\alpha ^{\prime}\beta^{\prime}}\), corresponds to the second term in the expansion (49), and the quantity \(\epsilon^{-2}g_{\mu}^{\alpha^{\prime}}g_{\nu}^{\beta^{\prime}}T^{(2)}_{\alpha ^{\prime}\beta^{\prime}}\) to the third term in (49). 
From [21], the next-to-leading order term is given, up to a multiplicative constant, by: \[T^{(3)}_{\mu\nu}\propto\left(3\chi_{\mu\nu}-\chi h_{\mu\nu}\right), \tag{55}\] where \(\chi_{\mu\nu}=n_{\mu;\alpha}h_{\nu}^{\alpha}\) and \(n_{\mu}\) is a unit vector normal to the boundary. The nonzero components of \(\chi_{\mu\nu}\) are \(\chi_{\theta\theta}=L\cos\rho\sin\rho\) and \(\chi_{\phi\phi}=L\cos\rho\sin\rho\sin^{2}\theta\), giving \(\chi=2\cot\rho/L\). As we approach the boundary (\(\rho\to\pi/2\)), we have \(\chi_{\mu\nu}=\chi=0\), and therefore the second term in (54) is zero. We arrive at the same conclusion from a direct computation of the second term in the asymptotic expansion (49). Subsequent terms in the expansion are of lower order in \(\epsilon\). This means that, as we approach the boundary, \(\langle\hat{T}^{\nu}_{\mu}\rangle_{\zeta,0}^{\rm adS}=\langle\hat{T}^{\nu}_{ \mu}\rangle_{{\rm N},0}^{\rm adS}\) as shown numerically in Section 4. Similar arguments apply to the t.e.v. of the RSET. ## 7 Discussion In this paper we have determined the v.e.v.s and t.e.v.s of the components of the RSET for a massless, conformally coupled scalar field propagating on a background four-dimensional global adS space-time. We have used Euclidean methods, which gives a unique Green's function and avoids the need for an '\(i\epsilon\)' prescription, rendering the numerical calculations easier than in the corresponding Lorentzian case (see, for example [6], whose results we have been able to reproduce with our Euclidean methods). With mixed indices, the v.e.v.s of the nonzero components of the RSET are constant when both Dirichlet and Neumann boundary conditions are applied, respecting the underlying maximal symmetry of the adS space-time. Furthermore, the constant is fixed by the trace anomaly (since we are considering a massless, conformally coupled scalar field), and hence is the same for both Dirichlet and Neumann boundary conditions. This common value for the v.e.v. 
with both Dirichlet and Neumann boundary conditions differs from that seen with the VP [3; 18; 19]. The maximal symmetry is broken when Robin boundary conditions are applied, and the v.e.v.s depend on the space-time location. However, for all Robin boundary conditions, the v.e.v.s of the nonzero components of the RSET with mixed indices take the same value on the space-time boundary, namely that for Dirichlet and Neumann boundary conditions. This symmetry breaking is also seen with the t.e.v.s, even for Dirichlet and Neumann boundary conditions. The t.e.v.s with either Dirichlet or Neumann boundary conditions are no longer constant and depend on the spatial location, with the maximum difference between them being found at the space-time origin. For thermal states, the boundary conditions have a significant effect on the expectation values of all nonzero components of the RSET. This effect is most apparent near the space-time origin, but becomes diluted with increasing temperature. With increasing temperature we find that the t.e.v.s with different Robin boundary conditions are increasingly 'contained' within the Dirichlet and Neumann curves, and the difference between all boundary conditions decreases proportionately as the temperature increases. However, for all temperatures and Robin parameters, the t.e.v.s of all nonzero components of the RSET with mixed indices converge at the space-time boundary to the common v.e.v. found with Dirichlet and Neumann boundary conditions. This can be compared with the results for the VP [18], where the v.e.v.s and t.e.v.s for all Robin parameters converged to the Neumann result, except when Dirichlet boundary conditions were applied. In the case of the RSET, since both Dirichlet and Neumann boundary conditions result in the same v.e.v.s, all boundary conditions, including Dirichlet, converge to the same result. 
This supports the conclusion in [18; 19] that Neumann boundary conditions reflect the generic behaviour of the quantum scalar field at the boundary. The value of the VP on the boundary is _a priori_ unconstrained by the renormalization process, whereas the RSET for any maximally symmetric state of a massless, conformally coupled scalar field is completely determined by the trace anomaly. This is not the case for scalar fields with mass and/or more general coupling to the space-time curvature, when, even for a maximally symmetric state, the trace of the RSET depends on the constant value of the VP as well as the mass and coupling [4]. It would therefore be interesting to compute the RSET for massive or nonconformally-coupled scalar fields, extending the work of [19], which we plan to do in a forthcoming paper. ###### Acknowledgements. We thank Axel Polaczek for assistance with the symbolic computation of derivatives of Legendre functions. The work of E.W. is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/T001038/1. This research has also received funding from the European Union's Horizon 2020 research and innovation program under the H2020-MSCA-RISE-2017 Grant No. FunFiCO-777740.
2301.09512
Dirac: a command-line $γ$-matrix calculator
A software for simplification of Dirac matrix polynomials that arise in particle physics problems is implemented.
Sergii Kutnii
2023-01-23T16:04:17Z
http://arxiv.org/abs/2301.09512v3
# Dirac: a command-line \(\gamma\)-matrix calculator ###### Abstract A software for simplification of Dirac matrix polynomials that arise in particle physics problems is implemented. ## I Introduction Many problems in high-energy physics require simplification of polynomials of Dirac matrices. As an example, consider the problem of classifying possible self-interactions of relativistic fermions. The interaction Lagrangian has to be a Lorentz scalar. Let [1] \[\begin{array}{l}\Gamma_{1}=1\\ \Gamma_{2}^{\mu}=\gamma^{\mu}\\ \Gamma_{3}^{\mu_{1}\mu_{2}}=\sigma^{\mu_{1}\mu_{2}}\\ \Gamma_{4}^{\mu}=\gamma^{5}\gamma^{\mu}\\ \Gamma_{5}=\gamma^{5}\end{array} \tag{1}\] Then a generic quartic self-interaction term can be written as \[\mathcal{L}_{4Aij}=t_{Aij\bar{\mu}\bar{\nu}}\bar{\psi}\Gamma_{i}^{\bar{\mu}}\psi\bar{\psi}\Gamma_{j}^{\bar{\nu}}\psi, \tag{2}\] where \(\bar{\mu},\bar{\nu}\) are Lorentz multi-indices, i.e. combinations of zero to two indices, and \(t_{Aij\bar{\mu}\bar{\nu}}\) is some Lorentz-invariant tensor with up to four indices. By virtue of Weyl's theorem on invariants of orthogonal groups [2], such tensors can only be built from the metric and the Levi-Civita symbol. If parity symmetry is required, then only the metric is allowed. Not all terms of the form (2) are independent though. The matrices (1) form a complete basis in the algebra of \(4\times 4\) complex matrices \(gl(4,\mathbb{C})\). It is easy to show that for the Grassmann field \(\psi\) \[\psi\bar{\psi}=-\frac{\bar{\psi}\psi}{4}-\frac{\gamma_{\mu}\bar{\psi}\gamma^{\mu}\psi}{4}-\frac{\sigma_{\mu\nu}\bar{\psi}\sigma^{\mu\nu}\psi}{8}+\frac{\gamma^{5}\gamma_{\mu}\bar{\psi}\gamma^{5}\gamma^{\mu}\psi}{4}-\frac{\gamma^{5}\bar{\psi}\gamma^{5}\psi}{4}. 
\tag{3}\] Therefore, \[\bar{\psi}\Gamma_{i}^{\bar{\lambda}}\psi\bar{\psi}\Gamma_{k}^{\bar{\nu}}\psi=\Lambda_{j}\bar{\psi}\Gamma_{i}^{\bar{\lambda}}\Gamma_{j\bar{\mu}}\Gamma_{k}^{\bar{\nu}}\psi\bar{\psi}\Gamma_{j}^{\bar{\mu}}\psi, \tag{4}\] where summation over repeated indices is implied, and \(\Lambda_{j}\) are coefficients in the expansion (3). But completeness also implies the decomposition \[\Gamma_{i}^{\bar{\lambda}}\Gamma_{j\bar{\mu}}\Gamma_{k}^{\bar{\nu}}=K_{ijkl\bar{\mu}\bar{\kappa}}^{\bar{\lambda}\bar{\nu}}\Gamma_{l}^{\bar{\kappa}}, \tag{5}\] for certain numeric coefficients \(K_{ijkl\bar{\mu}\bar{\kappa}}^{\bar{\lambda}\bar{\nu}}\). Therefore, quartic scalar combinations of fermions are subject to identities \[\bar{\psi}\Gamma_{i}^{\bar{\kappa}}\psi\bar{\psi}\Gamma_{j}^{\bar{\lambda}}\psi=f_{ijkl\,\bar{\mu}\bar{\rho}}^{\bar{\kappa}\bar{\lambda}}\bar{\psi}\Gamma_{k}^{\bar{\mu}}\psi\bar{\psi}\Gamma_{l}^{\bar{\rho}}\psi. \tag{6}\] These are the well-known Fierz identities [1]. Even-parity invariants can be built as contractions of \(\psi\bar{\psi}\) with eleven Lorentz-invariant tensor products \[\begin{split}\theta_{111}&=\mathbf{1}\otimes\mathbf{1}\otimes\mathbf{1}\\ \theta_{122}&=\mathbf{1}\otimes\gamma_{\mu}\otimes\gamma^{\mu}\\ \theta_{133}&=\mathbf{1}\otimes\sigma_{\mu\nu}\otimes\sigma^{\mu\nu}\\ \theta_{144}&=\mathbf{1}\otimes\gamma^{5}\gamma_{\mu}\otimes\gamma^{5}\gamma^{\mu}\\ \theta_{155}&=\mathbf{1}\otimes\gamma^{5}\otimes\gamma^{5}\\ \theta_{523}&=\gamma^{5}\otimes\gamma_{\mu}\otimes\gamma^{5}\gamma^{\mu}\\ \theta_{223}&=\gamma_{\mu}\otimes\gamma_{\nu}
\otimes\sigma^{\mu\nu}\\ \theta_{443}&=\gamma^{5}\gamma_{\mu}\otimes\gamma^{5} \gamma_{\nu}\otimes\sigma^{\mu\nu}\\ \theta_{243}&=\epsilon_{\kappa\lambda\mu\nu}\gamma^{ \kappa}\otimes\gamma^{5}\gamma^{\lambda}\otimes\sigma^{\mu\nu}\\ \theta_{533}&=\epsilon_{\kappa\lambda\mu\nu}\gamma^{5} \otimes\sigma^{\kappa\lambda}\otimes\sigma^{\mu\nu}\\ \theta_{333}&=\sigma^{\lambda}_{\kappa}\otimes \sigma^{\mu}_{\lambda}\otimes\sigma^{\kappa}_{\mu}\end{split} \tag{8}\] These lead to 275 matrix products to simplify, some of them being quite complex, such as the product of five \(\sigma\)-matrices. The need for an automated computation tool becomes clear. However, all such tools are either proprietary with prohibitive cost or their support of tensor and Dirac matrix algebra is limited. Cadabra [4] could be the best existing open-source tool for the task, but it uses different matrix basis and is tailored for computations in arbitrary number of spatio-temporal dimensions. The philosophy behind _dirac_ is different: instead of trying to solve a general problem with a general-purpose computer algebra system, _dirac_'s exclusive purpose is efficient simplification of \(\Gamma\)-matrix polynomials in four dimensions. ## II Representation and algorithms Completeness of the basis (1) implies that \[\Gamma_{i}^{\bar{\alpha}}\Gamma_{j}^{\bar{\beta}}=C_{ij\,\bar{\gamma}}^{k} \Gamma_{k}^{\bar{\alpha}\bar{\beta}}. \tag{9}\] The multiplication structure constants \(C_{ij\,\bar{\gamma}}^{k}\) can be grouped into _pseudo-matrices_. Let \[\mathbf{\Gamma}^{\bar{\mu}}=\begin{bmatrix}1&\gamma^{\mu}&\sigma^{\mu_{1}\mu _{2}}&\gamma^{5}\gamma^{\mu}&\gamma^{5}\end{bmatrix} \tag{10}\] Then (9) is equivalent to \[\Gamma_{i}^{\bar{\mu}}\mathbf{\Gamma}^{\bar{\nu}}=\mathbf{\Gamma}^{\bar{ \lambda}}\left(\mathbf{C}_{i}^{\bar{\mu}}\right)_{\bar{\lambda}}^{\bar{\nu}}. 
\tag{11}\] The nice thing about pseudo-matrices \(\mathbf{C}_{i}^{\bar{\mu}}\) is that they form a representation of \(\Gamma_{i}^{\bar{\mu}}\): it is easy to verify that multiplication of \(\Gamma\) corresponds to multiplication of the respective pseudo-matrices \(\mathbf{C}\) in the same order plus contraction of matching tensor indices. Thus, all pseudo-matrices can be constructed recursively from basic multiplicative identities, a good reference for which can be found in [5]. Let the metric be denoted with \(\eta\), and imaginary unit with \(I\). The pseudo-matrix counterparts to \(\gamma^{\mu}\) are \[\mathbf{C}_{2}^{\mu}=\begin{bmatrix}0&\eta^{\mu\nu}&0&0&0\\ \delta_{\lambda}^{\mu}&0&I\left(\eta^{\mu\nu_{1}}\delta_{\lambda}^{\nu_{2}}- \eta^{\mu\nu_{2}}\delta_{\lambda}^{\nu_{1}}\right)&0&0\\ 0&-\frac{I}{2}\left(\delta_{\lambda_{1}}^{\mu}\delta_{\lambda_{2}}^{\nu}- \delta_{\lambda_{2}}^{\mu}\delta_{\lambda_{1}}^{\nu}\right)&0&\frac{\epsilon ^{\mu\nu}\lambda_{1}\lambda_{2}}{2}&0\\ 0&0&-\epsilon^{\mu\nu_{1}\nu_{2}}\lambda&0&-\delta_{\lambda}^{\mu}\\ 0&0&0&-\eta^{\mu\nu}&0\end{bmatrix} \tag{12}\] and \(\gamma^{5}\) is represented with \[\mathbf{C}_{5}=\begin{bmatrix}0&0&0&0&1\\ 0&0&0&\delta_{\lambda}^{\nu}&0\\ 0&0&-\frac{I\epsilon^{\nu_{1}\nu_{2}}\lambda_{1}\lambda_{2}}{2}&0&0\\ 0&\delta_{\lambda}^{\nu}&0&0&0\\ 1&0&0&0&0\end{bmatrix} \tag{13}\] Given that, one can compute \[\left(\mathbf{C}_{4}^{\mu_{1}\mu_{2}}\right)_{\bar{\lambda}}^{\bar{\nu}}=\left( \mathbf{C}_{5}\right)_{\bar{\lambda}}^{\bar{\kappa}}\left(\mathbf{C}_{2}^{\mu} \right)_{\bar{\kappa}}^{\bar{\nu}}=\begin{bmatrix}0&0&0&-\eta^{\mu\nu}&0\\ 0&0&-\epsilon^{\mu\nu_{1}\nu_{2}}{}_{\lambda}&0&-\delta_{\lambda}^{\mu}\\ 0&-\frac{\epsilon^{\nu_{1}\nu_{2}}{}_{\lambda_{1}}\lambda_{2}}{2}&0&-\frac{f} {2}\left(\delta_{\lambda_{1}}^{\mu}\delta_{\lambda_{2}}^{\nu}-\delta_{\lambda_ {2}}^{\mu}\delta_{\lambda_{1}}^{\nu}\right)&0\\ \delta_{\lambda}^{\mu}&0&I\left(\eta^{\mu\nu_{1}}\delta_{\lambda}^{\nu_{2}}- 
\eta^{\mu\nu_{2}}\delta_{\lambda}^{\nu_{1}}\right)&0&0\\ 0&\eta^{\mu\nu}&0&0&0\end{bmatrix} \tag{14}\] and the most complicated of all \[\left(\mathbf{C}_{3}^{\mu_{1}\mu_{2}}\right)_{\bar{\lambda}}^{ \bar{\nu}}=\frac{I}{2}\left[\left(\mathbf{C}_{2}^{\mu_{1}}\right)_{\bar{\lambda }}^{\bar{\kappa}}\left(\mathbf{C}_{2}^{\mu_{2}}\right)_{\bar{\kappa}}^{\bar{ \nu}}-\left(\mathbf{C}_{2}^{\mu_{2}}\right)_{\bar{\lambda}}^{\bar{\kappa}} \left(\mathbf{C}_{2}^{\mu_{1}}\right)_{\bar{\kappa}}^{\bar{\nu}}\right]=\\ =\begin{bmatrix}0&0&0\\ 0&I\left(\delta_{\lambda}^{\mu_{1}}\eta^{\mu_{2}\nu}-\delta_{\lambda}^{\mu_{ 2}}\eta^{\mu_{1}\nu}\right)&0&0\\ \frac{f}{2}\left[\delta_{\lambda_{1}}^{\mu_{1}}\delta_{\lambda_{2}}^{\nu_{2}} \eta^{\mu_{2}\nu_{1}}-\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{2}}^{\nu _{2}}\eta^{\mu_{1}\nu_{1}}-\\ \frac{\delta_{\lambda_{1}}^{\mu_{1}}\delta_{\lambda_{2}}^{\nu_{1}}}{\delta_{ \lambda_{1}}^{\mu_{1}}}\eta^{\mu_{2}\nu_{2}}+\delta_{\lambda_{2}}^{\mu_{2}} \delta_{\lambda_{2}}^{\nu_{1}}\eta^{\mu_{1}\nu_{2}}-\\ -\delta_{\lambda_{2}}^{\mu_{1}}\delta_{\lambda_{1}}^{\mu_{1}}\eta^{\mu_{2}\nu _{2}}+\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{2}}^{\nu_{1}}\eta^{\mu_ {1}\nu_{2}}+\\ +\delta_{\lambda_{2}}^{\mu_{1}}\delta_{\lambda_{1}}^{\mu_{1}}\eta^{\mu_{2}\nu _{2}}-\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{1}}^{\mu_{1}}\eta^{\mu_ {1}\nu_{2}}\end{bmatrix}=\\ =\begin{bmatrix}0&0&0\\ 0&I\left(\delta_{\lambda}^{\mu_{1}}\eta^{\mu_{2}\nu}-\delta_{\lambda}^{\mu_{ 2}}\eta^{\mu_{1}\nu}\right)&0&0\\ &\frac{f}{2}\left[\delta_{\lambda_{1}}^{\mu_{1}}\delta_{\lambda_{2}}^{\nu_{ 2}}\eta^{\mu_{2}\nu_{1}}-\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{2}}^{ \nu_{2}}\eta^{\mu_{1}\nu_{1}}-\\ -\delta_{\lambda_{2}}^{\mu_{1}}\delta_{\lambda_{2}}^{\nu_{1}}\eta^{\mu_{2}\nu _{2}}+\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{2}}^{\nu_{1}}\eta^{\mu_ {1}\nu_{2}}-\\ -\delta_{\lambda_{2}}^{\mu_{1}}\delta_{\lambda_{1}}^{\mu_{1}}\eta^{\mu_{2}\nu 
_{1}}+\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{2}}^{\nu_{2}}\eta^{\mu_ {1}\nu_{1}}+\\ +\delta_{\lambda_{2}}^{\mu_{1}}\delta_{\lambda_{1}}^{\mu_{1}}\eta^{\mu_{2}\nu _{2}}+\delta_{\lambda_{2}}^{\mu_{2}}\delta_{\lambda_{1}}^{\nu_{1}}\eta^{\mu_ {1}\nu_{2}}\end{bmatrix}=\\ 0&-\epsilon^{\mu_{1}\mu_{2}\nu}{}_{\lambda}&0&I\left(\eta^{\mu_{2}\nu}\delta_{ \lambda}^{\mu_{1}}-\eta^{\mu_{1}\nu}\delta_{\lambda}^{\mu_{2}}\right)&0\\ 0&0&-I\epsilon^{\mu_{1}\mu_{2}\nu_{1}\nu_{2}}&0&0\end{bmatrix} \tag{15}\] Then any product of \(\Gamma\)-matrices can be represented as follows: \[\Gamma_{i_{1}}^{\bar{\mu}_{1}}\ldots\Gamma_{i_{k}}^{\bar{\mu}_{k}}\ldots\Gamma _{i_{n}}^{\bar{\mu}_{n}}\rightarrow\left(\mathbf{C}_{i_{1}}^{\mu_{1}}\right)_{ \bar{\lambda}_{1}}^{\bar{\lambda}_{2}}\ldots\left(\mathbf{C}_{i_{k}}^{\mu_{k}} \right)_{\bar{\lambda}_{k}}^{\bar{\lambda}_{k+1}}\ldots\left(\mathbf{C}_{i_{n}}^ {\mu_{n}}\right)_{\bar{\lambda}_{n}}^{\bar{\lambda}_{n+1}}. \tag{16}\] On the other hand, any matrix in \(gl(4,\mathbb{C})\) can be written as product of the basis row (10) and a coefficient column vector. In particular, \[1\rightarrow\begin{bmatrix}1\\ 0\\ 0\\ 0\\ 0\end{bmatrix}. \tag{17}\] Any matrix can be multiplied by 1. This implies that column vector representation of the left hand side in (16) can be obtained by multiplying its pseudo-matrix counterpart with the right hand side of (17) on the right, which is equivalent to taking the first column of the rightmost pseudo-matrix and multiplying it by the remaining pseudo-matrices on the left in the right to left order. All that remains is to simplify the components of the resulting column vector which are polynomials in the metric, Kronecker, and Levi-Civita symbols with complex coefficients. This can be done in three stages. 
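Before turning to those stages, the algebraic identities that the pseudo-matrices encode can be sanity-checked numerically. The sketch below (standalone Python with numpy, using the standard Dirac representation; it is independent of the C++ implementation, and all names in it are invented for the example) verifies the basis product \(\gamma^{\mu}\gamma^{\nu}=\eta^{\mu\nu}-i\sigma^{\mu\nu}\), which is the content of the \(\gamma\)-column of (12), together with the Levi-Civita expansion (18) used in the first simplification stage:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation, computed by sorting with transpositions."""
    sign, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            sign = -sign
    return sign

# Gamma matrices in the standard Dirac representation.
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def sigma(mu, nu):
    return 0.5j * (g[mu] @ g[nu] - g[nu] @ g[mu])

# Basis product: gamma^mu gamma^nu = eta^{mu nu} 1 - i sigma^{mu nu}.
for mu, nu in itertools.product(range(4), repeat=2):
    assert np.allclose(g[mu] @ g[nu], eta[mu, nu] * np.eye(4) - 1j * sigma(mu, nu))
assert np.allclose(g5 @ g5, np.eye(4))

# Levi-Civita symbol; it enters (18) quadratically, so the overall
# sign convention (eps_{0123} = +1 or -1) drops out of the check.
eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps[p] = perm_sign(p)

lhs = np.einsum('abcd,efgh->abcdefgh', eps, eps)
rhs = np.zeros_like(lhs)
for p in itertools.permutations(range(4)):
    n = 'efgh'
    sub = 'a%s,b%s,c%s,d%s->abcdefgh' % (n[p[0]], n[p[1]], n[p[2]], n[p[3]])
    rhs -= perm_sign(p) * np.einsum(sub, eta, eta, eta, eta)
assert np.allclose(lhs, rhs)
print("basis product and identity (18) verified")
```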
First, Levi-Civita powers can be expanded using \[\epsilon_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\epsilon_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}=-\sum_{P}\text{sgn}(P)\eta_{\mu_{1}P\nu_{1}}\eta_{\mu_{2}P\nu_{2}}\eta_{\mu_{3}P\nu_{3}}\eta_{\mu_{4}P\nu_{4}}, \tag{18}\] where \(P\) labels all permutations of \(\nu_{1},\ldots,\nu_{4}\). Then all contractible indices can be contracted, using the fact that contraction of any tensor with the metric or a Kronecker delta amounts to simple index replacement, and that the Levi-Civita symbol with any index pair contracted is zero. Contraction guarantees that all remaining indices are free. Then all terms having the same tensor composition up to permutations of indices in constituent tensors can be collected, taking into account Levi-Civita index permutation signs. The only remaining major part of a \(\Gamma\)-matrix calculator is the expression parser. A version of the shunting-yard algorithm was implemented. Input syntax is described in the next section. ## III The Software The _dirac_ software is available on GitHub. It is implemented in C++20 and uses CMake (minimum version 3) as its build system. A Ruby 3 language interpreter, the "open3" Ruby gem, and a LaTeX installation with pdflatex are required to run the example scripts. The standard CMake build procedure
mkdir build
cd build
cmake ..
make
(assuming you start in the project root) produces the "dirac" executable in the build folder. There are two main ways to invoke the executable. The first is to provide the expression to simplify with the -e key. In this case the executable simplifies the expression, prints the result to the console and exits. Example: ./dirac -e "\gamma_\mu\gamma_\nu" (the quotes around the expression prevent the terminal from eating the backslashes). This mode is mainly useful for scripting. The "examples" folder contains Ruby scripts that demonstrate how scripting can be used to batch computations and generate LaTeX output from the results. 
There are other command line keys as well. They affect input processing and output formatting. If no expression is provided via the command line, the application starts an interactive shell. The shell accepts three primary types of input. **Quit-expression** is the single word quit, which exits the shell. **Set-expression**: dirac:> #set <var-name> <var-value> sets the variable identified by var-name to var-value. The variables are documented below. **mode**: arithmetic mode. Possible values: float and rational. Default is rational. Equivalent command line option -m. Example: dirac:> #set mode float or ./dirac -m float The mode variable also affects input parsing. Floating point numeric values are acceptable in float mode and are considered errors when the mode is rational. The rational arithmetic implementation is the simplest possible. It does not e.g. handle overflows. This was a conscious decision to avoid adding library dependencies. A more robust implementation may be added in the future if there is user demand for it. **line_terms**: number of terms per output line. Possible values: integers or inf (meaning "infinity"). Default: inf. When this variable is set to a nonzero integer constant, LaTeX line breaks (as if inside a 'split' environment) are inserted after each line_terms terms. Example: dirac:> #set line_terms 2 Equivalent command line option -l: ./dirac -l 2 **dummy**: dummy index template. Possible values: string literals. Greek letters in LaTeX notation are a good choice. Default is \omega. Equivalent command line option: -d. Example: dirac:> #set dummy \sigma or ./dirac -d "\sigma" **apply_symmetry**: controls whether the coefficient terms at \(\sigma^{\mu\nu}\) are merged in the output by taking into account \(\sigma\)'s antisymmetry. Possible values: true or false. Default is true. Command line equivalent: -s. 
Example: dirac:> #set apply_symmetry false or ./dirac -s false ### Math expressions All input lines that are neither quit-expressions nor set-expressions are considered computable math. The dirac application tries to parse and compute them. Math syntax is LaTeX-like with some differences. A valid expression consists of * literals: alphanumeric sequences preceded by \, e.g. \gamma; * integer or floating-point numbers; * arithmetic operators +,-,*,/; * subscript _; * superscript ^; * brackets {...} (round or square brackets are not recognized for the sake of implementation simplicity). Subscript [head]_[tail] and superscript [head]^[tail] are tensorial expressions. Unlike LaTeX, multiple non-bracketed subscripts and superscripts to a single head are possible, but multi-level ones are not: \eta_\mu_\nu is valid input while \gamma^{\eta_{\mu\nu}} is not. Literals' interpretation depends on their position in the input. Literals inside the tail of a tensorial expression are interpreted verbatim as tensor index labels. Otherwise, only a limited number of special literals is recognized: * imaginary unit; * Dirac gamma-matrix; * Dirac sigma-matrix; * \(\gamma^{5}\) matrix; * Minkowski metric; * Kronecker delta; * Levi-Civita symbol. The app does not perform any validation of tensorial expression consistency save for checking that all basic tensors have correct index counts at the computation stage. Nevertheless, mathematically valid input results in mathematically valid output by design of the pseudo-matrix representation. ## IV Conclusions The _dirac_ software is a fully functional command line calculator for \(\gamma\)-matrix polynomials. Scripting can be used to process multiple expressions in a batch and generate LaTeX documents with the output.
2310.10702
Transparent Anomaly Detection via Concept-based Explanations
Advancements in deep learning techniques have given a boost to the performance of anomaly detection. However, real-world and safety-critical applications demand a level of transparency and reasoning beyond accuracy. The task of anomaly detection (AD) focuses on finding whether a given sample follows the learned distribution. Existing methods lack the ability to reason with clear explanations for their outcomes. Hence to overcome this challenge, we propose Transparent Anomaly Detection Concept Explanations (ACE). ACE is able to provide human interpretable explanations in the form of concepts along with anomaly prediction. To the best of our knowledge, this is the first paper that proposes interpretable by-design anomaly detection. In addition to promoting transparency in AD, it allows for effective human-model interaction. Our proposed model shows either higher or comparable results to black-box uninterpretable models. We validate the performance of ACE across three realistic datasets - bird classification on CUB-200-2011, challenging histopathology slide image classification on TIL-WSI-TCGA, and gender classification on CelebA. We further demonstrate that our concept learning paradigm can be seamlessly integrated with other classification-based AD methods.
Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Samira Ebrahimi Kahou, Shirin Abbasinejad Enger
2023-10-16T11:46:26Z
http://arxiv.org/abs/2310.10702v2
# Transparent Anomaly Detection via Concept-based Explanations ###### Abstract Advancements in deep learning techniques have given a boost to the performance of anomaly detection. However, real-world and safety-critical applications demand a level of transparency and reasoning beyond accuracy. The task of anomaly detection (AD) focuses on finding whether a given sample follows the learned distribution. Existing methods lack the ability to reason with clear explanations for their outcomes. Hence to overcome this challenge, we propose Transparent Anomaly Detection Concept **E**xplanations (ACE). ACE is able to provide human interpretable explanations in the form of concepts along with anomaly prediction. To the best of our knowledge, this is the first paper that proposes interpretable by-design anomaly detection. In addition to promoting transparency in AD, it allows for effective human-model interaction. Our proposed model shows either higher or comparable results to black-box uninterpretable models. We validate the performance of ACE across three realistic datasets - bird classification on CUB-200-2011, challenging histopathology slide image classification on TIL-WSI-TCGA, and gender classification on CelebA. We further demonstrate that our concept learning paradigm can be seamlessly integrated with other classification-based AD methods. ## 1 Introduction In recent years, deep learning models have achieved remarkable advancements, often demonstrating performance levels on par with human capabilities in a wide array of tasks like image segmentation [19], image generation [26], and text generation [29, 22]. As a result, these models have been increasingly applied in diverse real-world contexts. However, it's crucial to recognize that despite their achievements, instances of model failure have surfaced in practical scenarios, raising concerns about their reliability for safety-critical applications. 
One contributing factor to these failures is rooted in the assumption that the distribution used for testing deep learning models matches their training distribution. However, this assumption does not hold in many realistic tasks, limiting their application. Hence it is important for the model to be able to differentiate between the different distributions. The ability to identify and adapt to out-of-distribution (also known as anomalous) instances is vital for ensuring that the model's predictions remain reliable and trustworthy in novel and diverse scenarios, which can be aided by anomaly detection. Anomaly detection models aim to differentiate between data points that follow a certain distribution and those that deviate from it. In many critical domains such as healthcare and finance, it is not only crucial to identify anomalies but also to provide meaningful explanations for the detected anomalies. This is particularly important to enhance user trust, facilitate decision-making, and ensure the accountability of deep learning models. The task of anomaly detection has been of great interest in the research community, and strong methods have been developed to detect distributional shifts [3, 31, 45, 34]. Most of these works have focused on improving the algorithms to accurately discriminate between in-distribution (also known as normal) data and out-of-distribution (anomalous) instances [3, 35, 12, 58]. However, despite the progress in deep learning, a critical aspect - explainability - has been overlooked for anomaly detection methods. An "easy" and obvious method could be to use the interpretability methods for each in-domain and out-domain data separately and then visualize the difference [42]. While this has been somewhat explored in the literature, it could be argued that pixel-level activations may not necessarily be the most effective methods for human understanding [1]. 
Since the emergence of research focused on anomaly detection, there have been different subgroups of problem statements introduced [5]. Through our work, we aim to bridge the research gap between human-level explainability and anomaly detection using human-understandable concepts. To the best of our knowledge, this is the first work that introduces an inherently interpretable by-design model for anomaly detection. Concept-based explanations provide an interpretable linkage between the model's decisions and the high-level concepts learned during training. By conditioning the detection of anomalous (out-of-distribution) instances with specific concepts, we not only offer insights into the model's decision-making process but also empower domain experts to validate the model's conclusions. One popular approach for unsupervised anomaly detection is using self-supervised learning to train an auxiliary task (e.g. discriminating between different transformations) to learn better representations for normal training data and consequently improving the model's discriminativeness [3, 12, 31, 44]. In this work, we propose Transparent Anomaly Detection Concept Explanations (ACE) that provide explanations for transformation-based anomaly detection models (see Fig. 1). In addition to providing explanations, our method can allow a domain expert to interact with the model using concept-based explanations. This interaction allows the supervisor to correct the concepts if they disagree with the model's explanations which can improve the downstream representation as well. We conduct extensive experiments on diverse datasets to showcase the effectiveness of our approach. Our results demonstrate that our concept-based explanation framework not only enhances the interpretability of anomaly detection models but also maintains competitive detection performance. 
## 2 Related Work In this section, we will review some of the previous studies related to out-of-distribution detection and concept learning. **Anomaly Detection** Anomaly detection (AD) or in general out-of-distribution (OOD) detection approaches can be grouped according to the following paradigms. **Distributional-based approaches** try to build a probabilistic model on the distribution of normal data. They rely on the idea that the anomalous samples would act differently than the normal data. They expect that the anomalous samples receive a lower likelihood under the probabilistic model than the normal samples. The difference in these models is in the choice of their probabilistic model and their feature representation approach. Gaussian mixture models [28], which only work if the data can be modeled with the probabilistic assumptions of the model, and kernel density estimation (KDE) [21] methods are among the traditional methods. Some recent approaches use deep learning to represent the features [49, 57]. To alleviate the limitation that the probabilistic assumption imposes, recent studies suggested learning a probabilistic model on the features extracted by the deep models [58]. **Classification-based approaches** One-Class SVM (OC-SVM) [35] and support vector data description (SVDD) [45] are among the first works in this category. They used the idea of separating the normal data from the anomalous data based on their feature spaces. In the long history of the studies of this paradigm, different approaches from kernel methods to deep learning approaches such as Deep-SVDD [32] have been used. However, these approaches may suffer from the insufficient and biased representations the feature learning methods can provide. One remedy for this issue is using self-supervised learning methods. 
Various surrogate tasks such as image colorization [56], video frame prediction [25], and localization [50] are among those that provide high-quality feature representations for downstream tasks. In 2018, Golan _et al_. [12] proposed geometric transformation classification (GEOM) to predict different geometric image transformations as their surrogate task for anomaly detection. Following that, Bergman _et al_. [3] introduced GOAD, a unified method on one-class classification and transformation-based classification methods. Sohn _et al_. [41] presented a two-stage framework with a self-supervised model to obtain high-level data representations as the first stage, followed by a one-class classifier, such as OC-SVM or KDE, on top of the representations of the first stage. Whereas CSI [44] changed the conventional contrastive learning setting for anomaly detection by contrasting each example with distributionally-shifted augmentations of itself. MSC [31] recently proposed a new contrastive objective to use transformed representations pretrained on an external dataset for anomaly detection. Figure 1: ACE; Anomaly detection with ACE on the CUB dataset. Corresponding concepts provide detailed insights and explanations into model behavior. **Reconstruction-based approaches** Instead of relying on the lower likelihood of the distributional-based methods, these approaches rely on the idea that normal samples should receive a smaller reconstruction loss than anomalous samples. Different loss and reconstruction basis functions vary in each of these approaches. K-means was used as an early basis reconstruction function [16], while [2] proposed using deep neural networks as the basis functions. In the class of deep neural networks, generative models such as GANs [34] and autoencoders [57] are used to learn the reconstruction basis functions. 
Following the presentation of AnoGAN [34] as the first anomaly detection model based on GANs, several other studies used similar ideas with modifications on their basis functions and losses [54, 30, 55, 8] to increase the performance of anomaly detection models based on GANs. One of the major issues in using generative models, especially GANs, as the reconstruction basis function is their difficulty in recovering the entire data distribution (aka mode-collapse in GANs), leading to lower performance in comparison with classification-based approaches. [37] combined adversarial training with contrastive learning to mitigate the challenges of reconstruction-based approaches. **Interpretable Anomaly Detection** While ample research has been conducted in the field of anomaly detection (AD), only a limited number of studies have focused on developing interpretable AD models. Carletti [4] employed feature importance to improve the explainability of the Isolation Forest. While interpretable and explainable anomaly detectors have rarely been explored, a few studies improved explainability using concepts to address distribution shifts [1, 17]. More specifically on out-of-distribution detection, [47] proposed introducing a new confidence measure based on PARTICUL [48], an existing pattern identification model, into OOD detection. Seras [36] relied on the attribution maps in Spiking Neural Networks (SNNs) [11] to explain the reason behind the prediction of the model. Szymanowicz [43] proposed an explainable OOD detection based on saliency maps in video. On the other hand, [9] proposed a dual-monitoring approach involving global and local elements to construct a scene graph that observes interactions among different objects within a scene. In a rather limited attempt, Cho [7] introduced a new semi-supervised explainable OOD detection model for medical image segmentation. This was achieved by training an auxiliary prototypical network based on outlier exposure [15]. 
**Concept-based Explainability** Concept Bottleneck Models (CBMs) [20] introduced explainability by adding predefined human-understandable features in the neural network. Despite the popularity of CBMs, [24] pointed out that their concept representations may lead to information leakage, thus diminishing their performance. Since their introduction, there have been various works that build upon CBMs to overcome their shortcomings [13, 18, 38, 10]. Havasi [13] showed that CBM performance is highly dependent on the set of concepts and the expressiveness of the concept predictor and modified CBMs using autoregressive models and disentangled representations. [10] improved the performance of concept-based models by introducing high-dimensional concept embeddings. Sheth [38] proposed a multi-task concept learning model for medical imaging applications. Beyond explainability, CBMs also allow human intervention to refine the concepts during inference. These interventions were further studied by [6, 39, 53, 40]. ## 3 Methodology ### Background In unsupervised anomaly detection, models only have access to the normal training data. Once the unsupervised anomaly model is trained, the representations obtained from it will be used to separate normal and anomalous samples. One-class (OC) classification models rely on a normality score defined to identify anomalous samples. While these models achieved state-of-the-art performance in detecting anomalies, they lack explanatory capabilities for their predictions. Therefore, the incorporation of concepts into the detection process introduces a more transparent anomaly detection model. ### ACE In this section, we deconstruct the Transparent Anomaly Detection Concept Explanations (ACE) model into two distinct modules: anomaly detection and concept-based explanation. We further explain each module in the following sections. 
**Formal Definition** In our AD task, assuming that all data lies in \(\mathbb{R}^{L}\), where \(L\) is the data dimensionality, we define normal (in-distribution) data as a subspace \(X\subset\mathbb{R}^{L}\) which includes \(x\in X\). Therefore, given an unsupervised setting, the training set \(D_{train}=\{x_{1},x_{2},...,x_{k}\sim P_{ind}\}\) contains normal samples drawn from the in-distribution \(P_{ind}\). To evaluate the model, we use a test set \(D_{test}=\{\bar{x}_{1},\bar{x}_{2},...,\bar{x}_{n}\sim P_{ind}\ \cup\ P_{ood}\}\) including both normal and anomalous samples drawn from the in- and out-of-distribution (\(P_{ood}\)), respectively. Our model belongs to the category of unsupervised approaches as it does not see any out-of-distribution sample during training. **Concept Explanations** In the concept-based model setup, each dataset is augmented with its corresponding auxiliary human-interpretable concepts. For ACE, we reformulate the training dataset used for anomaly detection to incorporate concepts. As a result, the training dataset is redefined as \(D_{train}=\{(x_{1},c_{1}),(x_{2},c_{2}),...,(x_{k},c_{k})\sim P_{ind}\}\), where \(c\in\{0,1\}^{N}\). Each concept vector is of length \(N\), denoting the number of human-interpretable concepts we train with. Our concept representation is binary in nature, i.e. the model predicts whether a concept is present in the image or not. While traditional concept-based models were developed for supervised classification, we modify the objective for anomaly detection. The first step is to extract the concepts from the image: we define a concept model, an encoder \(\mathcal{G}_{X\to C}\), to map each image \(x\in X\) into the concept \(c\). In the concept encoder, most of the parameters are shared apart from the very shallow concept prediction layer. For brevity, we continue to denote the concept encoder with concept prediction layers as \(\mathcal{G}_{X\to C}\). 
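As a purely illustrative sketch of the concept encoder \(\mathcal{G}_{X\to C}\) (the paper does not release code, so the dimensions and random weights below are our own placeholders): a shared feature extractor is followed by a very shallow concept prediction layer that emits one sigmoid probability per concept.

```python
# Illustrative stand-in for the concept encoder G_{X->C}: a shared backbone
# (here a fixed random linear map instead of a CNN) plus a shallow linear
# concept head with one sigmoid output per concept.
import math
import random

random.seed(0)
FEAT_DIM = 8        # placeholder feature dimensionality
N_CONCEPTS = 4      # placeholder number of concepts N

_W_ENC = [[random.uniform(-1, 1) for _ in range(FEAT_DIM)] for _ in range(FEAT_DIM)]
_W_HEAD = [[random.uniform(-1, 1) for _ in range(FEAT_DIM)] for _ in range(N_CONCEPTS)]

def shared_encoder(x):
    """Shared parameters of G_{X->C}: maps an input vector to features."""
    return [sum(xi * w for xi, w in zip(x, row)) for row in _W_ENC]

def predict_concepts(x):
    """Shallow concept prediction layer on top of the shared features."""
    feats = shared_encoder(x)
    logits = [sum(f * w for f, w in zip(feats, row)) for row in _W_HEAD]
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

x = [random.uniform(0, 1) for _ in range(FEAT_DIM)]
c_hat = predict_concepts(x)   # N probabilities; thresholding gives c in {0,1}^N
```

In a real implementation the shared backbone would be the convolutional encoder used for anomaly detection, with only the final concept layer added on top, mirroring the parameter sharing described above.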
To train the concepts in ACE, we use a binary cross-entropy loss function across each concept: \[\mathcal{L}_{concepts}(c,\hat{c})=\sum_{i=1}^{N}-c_{i}\log(\hat{c}_{i})-(1-c_{i})\log(1-\hat{c}_{i}) \tag{1}\] where \(\hat{c}\) is the predicted concept vector. **Anomaly Detection** In this work, we follow the transformation-based classification methods for anomaly detection [3, 12, 44, 31]. Given an input tuple \((x,c)\in D_{train}\) and a transformation function \(T(.)\) with \(M=\{t_{1},t_{2},...,t_{m}\}\) different transformations, we apply all the transformations to each normal image \(x\in X\). Hence, for each input image, the original input image \(x\) and its corresponding transformed images \(x^{\prime}\in X^{\prime}_{m}\) will be mapped to their corresponding feature representations using the encoder \(\mathcal{G}_{X\to C}\). Each transformation \(t_{m}\in M\) forms a cluster with a centroid \(s_{m}\) defining the sphere. The centroid is the average of the features over the whole training set for every transformation and is computed by \(s_{m}=\frac{1}{N}\sum_{x\in X}\mathcal{G}(T((x,c),t_{m}))\), where \(N\) here denotes the number of samples in the training set. Following Bergman _et al_. [3], and in order to have lower intra-class variation and higher inter-class variation for each cluster (feature space), we define a transformation loss using the triplet loss [14] for the training of the encoder \(\mathcal{G}\) as follows: \[\mathcal{L}_{AD}=\sum_{i}\max\Big(\big\|\mathcal{G}(T(x_{i},c_{i},t_{m}))-s_{m}\big\|^{2}+d-\min_{m^{\prime}\neq m}\big\|\mathcal{G}(T(x_{i},c_{i},t_{m}))-s_{m^{\prime}}\big\|^{2},\,0\Big) \tag{2}\] where \(d\) is a hyperparameter regularizing the distance between clusters. The final loss for training ACE combines the concept (Eq. 1) and anomaly detection (Eq. 
2) losses: \[\mathcal{L}_{ACE}=\alpha\mathcal{L}_{concepts}+\mathcal{L}_{AD} \tag{3}\] where \(\alpha\) is the hyper-parameter controlling the contribution of the concept learning term to the training process. **Normality Score** During inference, all \(M\) different transformations are applied to each sample \(x\in D_{test}\), including samples from \(P_{ind}\cup P_{ood}\). The probability of \(x\) being identified as normal is the product of the probabilities that all transformed samples lie within their respective subspaces. Therefore, we compute the normality score as presented in Eq. 4, where higher values indicate anomalous samples (see Fig. 2 for a schematic overview of ACE). \[NS(x)=-\log P(x\in X)=-\sum_{t_{m}}\log P(T(x,c,t_{m})\in X_{m})=-\sum_{t_{m}}\log P(t_{m}\mid T(x,c,t_{m})) \tag{4}\] Figure 2: ACE; Training and inference of ACE on two examples from the CUB dataset with their corresponding transformed images. Since the example at inference is from a different class than the normal training example, it receives a higher normality score, indicating an anomaly. ## 4 Experiments and Results We conduct extensive experiments across benchmark datasets from different domains (i.e. vision and medical) to validate the performance of our model, ACE. We aim to show that our method either improves performance or has a comparable performance to its baselines with explanatory power. ### Datasets We conduct experiments on bird classification with CUB-200-2011 and cancer histopathology classification with the TIL-WSI-TCGA data. For bird classification, we used the Caltech-UCSD Birds-200-2011 (CUB) dataset [46]. The CUB dataset has 11,788 images with \(200\) classes; however, we trained our anomaly detection model on the first \(10\) classes and tested on the first \(20\) classes only. We also conducted experiments on CelebA (CelebFaces Attributes Dataset) [23]. 
CelebA has 202,599 face images of 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender, and age. In this experiment, we focus on gender classification between males and females. For the medical dataset, we examined the Tumor-Infiltrating Lymphocytes (TIL) dataset [33], which contains histopathology images from various cancer types. TIL encompasses 13 subsets of the TCGA dataset, each representing a distinct cancer type. In order to evaluate ACE on anomaly detection tasks, we employ the _one-vs-all_ scheme. In this scheme, a dataset with \(K\) classes leads to \(K\) different anomaly detection experiments. A given class \(k_{ind}\), \(1\leq k_{ind}\leq K\), is considered the _normal_ class, while the remaining \(K-1\) classes constitute the _anomalous_ set \(k_{ood}\). ### Baselines Our model ACE introduces concept-based explanations in the _one-vs-all_ category of anomaly detection. Hence we compare the performance of our model with black-box anomaly detection models. We considered a popular transformation-based anomaly detection baseline, GOAD [3], for all of our experiments. The aim of our model is to maintain the performance of GOAD along with providing human-interpretable explanations. We further performed an ablation study by using a different backbone, MSC [31], instead of GOAD (see Sec. 4.6). ### Experimental Setup We use the same hyperparameters for GOAD and GOAD+ACE for the anomaly detection task. We used \(M=72\) transformations during training. For training, we employed the SGD optimizer with a learning rate of \(0.01\) and a batch size of \(4\). The training of GOAD and GOAD+ACE was conducted over 15 epochs for both datasets. The backbone is a WideResNet10 model [52]. For GOAD+ACE experiments, the concept weight is \(\alpha=0.01\). 
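The training objective and scoring rule of Sec. 3.2 (Eqs. 1-4) can be sketched numerically. The toy embeddings, margin, and probabilities below are placeholders of our own; the functions follow the paper's formulas rather than any released code.

```python
# Dependency-free sketch of ACE's losses (Eqs. 1-3) and normality score (Eq. 4).
import math

def bce_concepts(c, c_hat, eps=1e-12):
    """Eq. 1: concept loss, summed binary cross-entropy over the N concepts."""
    return sum(-ci * math.log(pi + eps) - (1 - ci) * math.log(1 - pi + eps)
               for ci, pi in zip(c, c_hat))

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_ad_loss(g, m, centroids, d=1.0):
    """Eq. 2 for one transformed sample: pull embedding g toward its own
    centroid s_m, keeping it at least margin d closer than any other centroid."""
    own = sq_dist(g, centroids[m])
    other = min(sq_dist(g, s) for j, s in enumerate(centroids) if j != m)
    return max(own + d - other, 0.0)

def ace_loss(c, c_hat, g, m, centroids, alpha=0.01, d=1.0):
    """Eq. 3: L_ACE = alpha * L_concepts + L_AD."""
    return alpha * bce_concepts(c, c_hat) + triplet_ad_loss(g, m, centroids, d)

def normality_score(probs, eps=1e-12):
    """Eq. 4 in negative-log form: NS(x) = -sum_m log P(t_m | T(x, t_m));
    higher values mean 'more anomalous'."""
    return -sum(math.log(p + eps) for p in probs)

centroids = [[0.0, 0.0], [5.0, 5.0]]
# An embedding on its own centroid, far from the other: no triplet penalty.
loss_tight = triplet_ad_loss([0.0, 0.0], 0, centroids, d=1.0)
# Confidently recognised transformations give a low score; confusion a high one.
ns_normal = normality_score([0.9, 0.95, 0.85])
ns_anomalous = normality_score([0.3, 0.2, 0.25])
```

Note that the product of probabilities in the text becomes a sum of negative logs here, which is why a sample whose transformed views are all classified confidently receives a small normality score.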
### Results With extensive experiments, we demonstrate that our explainable concept-based anomaly detection approach can indeed be effectively applied to transformation-based anomaly detection models. The performance of ACE is outlined in Table 1. The Area Under the Curve (AUC) of the Receiver Operating Characteristics (ROC) curve measures the classifier's performance across different threshold settings. In the context of this research, the ROC-AUC assesses the classifier's ability to differentiate between normal and anomalous samples. The results from Table 1 indicate that adding concept explanations to anomaly detection enhances both the interpretability and the overall performance of the model, particularly in the case of the challenging TIL dataset. To evaluate the faithfulness of concept prediction, we use concept accuracy as a metric. High concept accuracy signifies that the model is able to learn concept representations aligned with human understanding and annotation. ### Ablation Studies To evaluate the robustness of our model against various hyperparameters, we conducted extensive experiments. #### 4.5.1 Sparse Concept Scenario The concept representation for vision and medical datasets is fairly different. For medical datasets such as TIL, the concepts are fairly easily available through medical notes. However, in settings where concept data annotation is not easy to obtain, it might be difficult to gauge the optimal number of concepts to label, although finer concept labels suggest that we can obtain finer knowledge about the image. To understand the effect of concept annotation, we performed experiments by training with only a part of the concepts. In Figure 3 and Figure 4, we averaged the AUC of each label for the CUB and TIL datasets respectively. 
Our experiments showed that using only \(10\%\) of the concept annotations resulted in a lower AUC, whereas an increase in the number of concept labels led to an AUC enhancement (apart from \(40\%\) concept annotation on TIL). #### 4.5.2 Influence of Concept Weightage The hyperparameter \(\alpha\) controls the weightage given to concept learning in the final ACE loss. We conduct experiments to evaluate the sensitivity of ACE to this hyperparameter. Given the small variations in AUC for the CUB dataset across \(\alpha=(0.001,0.01,0.1,1.0,10.0)\), we conclude that our model is fairly robust to changes in \(\alpha\) on the CUB dataset. For the medical TIL dataset, we observe a fairly similar trend. In summary, our model is robust to different concept weights while being interpretable. ### Extending ACE to other AD Methods To show that the explanatory feature of ACE is easily applicable to other anomaly detection models, we integrated a new AD model as the backbone for our anomaly detection framework. Therefore, to evaluate the effectiveness of the concept explainability module introduced in ACE, we used a recent transformation-based anomaly detection model named MSC [31] and added the concept encoder to it. For our experiments on MSC+ACE with a transformation model inspired by MSC [31], we used similar hyperparameters. During training, each sample within a batch undergoes \(M=2\) random transformations. We used a pre-trained ResNet18 for our encoder \(\mathcal{G}_{X}\). We trained MSC+ACE with the SGD optimizer with learning rate \(10^{-3}\) and attribute weight (\(\alpha\)) \(0.1\). MSC+ACE was trained for 20 epochs on both datasets with batch sizes of 8 and 32 for CUB and TIL respectively. Similar hyperparameters were used for the experiments on MSC. The results of MSC and MSC+ACE are presented in Table 2. 
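The ROC-AUC numbers reported in Tables 1 and 2 come from scoring a one-vs-all test set with normality scores. A self-contained sketch of that evaluation protocol (hypothetical helper code of our own, using the rank-based Mann-Whitney formulation of the AUC):

```python
# Hypothetical helpers for the one-vs-all scheme: class k_ind is "normal";
# training uses only its samples, and every test sample gets a binary
# anomaly label (0 = normal, 1 = anomalous).
def one_vs_all_split(samples, labels, k_ind):
    train = [x for x, y in zip(samples, labels) if y == k_ind]
    anomaly_labels = [0 if y == k_ind else 1 for y in labels]
    return train, list(samples), anomaly_labels

def roc_auc(scores, labels):
    """AUC = P(score of an anomalous sample > score of a normal one); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

samples = ["cub_0a", "cub_0b", "cub_1a", "cub_2a"]   # placeholder sample ids
labels = [0, 0, 1, 2]
train, test, anomaly = one_vs_all_split(samples, labels, k_ind=0)
# Normality scores ranking every anomaly above every normal sample give AUC 1.
auc = roc_auc([0.3, 0.2, 4.1, 3.7], anomaly)
```

Because the AUC depends only on the ranking of normality scores, no threshold on \(NS(x)\) needs to be chosen to report the numbers in the tables.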
While the outcomes of our model, incorporating an anomaly detection backbone influenced by MSC [31], reveal significantly enhanced performance compared to GOAD+ACE, this comes with certain considerations. MSC employs a k-Nearest Neighbor algorithm with \(k=2\) to determine whether a sample is categorized as normal or anomalous, and this decision-making involves the storage of all training set embeddings. Consequently, there exists a trade-off between the desired accuracy and the available resources, particularly when dealing with larger datasets. \begin{table} \begin{tabular}{c c c c c} \hline \hline Datasets & Class (\(k_{ind}\)) & GOAD [3] & GOAD+ACE & Concept Accuracy \\ \hline \multirow{11}{*}{CUB} & Black footed Albatross & 71.76\(\pm\)0.01 & 73.38\(\pm\)0.00 & 92.47\(\pm\)0.34 \\ & Laysan Albatross & 63.73\(\pm\)0.02 & 65.02\(\pm\)0.03 & 86.94\(\pm\)0.88 \\ & Sooty Albatross & 60.02\(\pm\)0.01 & 60.35\(\pm\)0.02 & 87.37\(\pm\)1.39 \\ & Groove billed Ani & 67.94\(\pm\)0.01 & 69.31\(\pm\)0.01 & 93.11\(\pm\)2.20 \\ & Crested Auklet & 66.31\(\pm\)0.03 & 66.91\(\pm\)0.03 & 88.38\(\pm\)0.79 \\ & Least Auklet & 55.07\(\pm\)0.03 & 58.84\(\pm\)0.02 & 84.26\(\pm\)2.21 \\ & Parakeet Auklet & 77.79\(\pm\)0.01 & 77.85\(\pm\)0.02 & 93.07\(\pm\)0.54 \\ & Rhinoceros Auklet & 70.91\(\pm\)0.01 & 70.00\(\pm\)0.00 & 90.42\(\pm\)1.23 \\ & Brewer Blackbird & 56.05\(\pm\)0.02 & 56.44\(\pm\)0.02 & 89.57\(\pm\)1.95 \\ & Red winged Blackbird & 77.60\(\pm\)0.01 & 77.96\(\pm\)0.01 & 94.04\(\pm\)1.63 \\ \cline{2-5} & Average & 66.72 & **67.61** & 89.96 \\ \hline \multirow{14}{*}{TIL} & BLCA & 53.40\(\pm\)0.04 & 58.83\(\pm\)0.00 & 90.91\(\pm\)0.98 \\ & BRCA & 54.56\(\pm\)0.06 & 57.43\(\pm\)0.00 & 92.68\(\pm\)1.96 \\ & CESC & 51.52\(\pm\)0.08 & 53.66\(\pm\)0.08 & 91.63\(\pm\)0.59 \\ & COAD & 40.88\(\pm\)0.02 & 37.82\(\pm\)0.03 & 91.90\(\pm\)1.38 \\ & LUAD & 50.01\(\pm\)0.01 & 52.21\(\pm\)0.01 & 92.14\(\pm\)1.27 \\ & LUSC & 53.75\(\pm\)0.03 & 56.71\(\pm\)0.05 & 91.88\(\pm\)0.83 \\ & PAAD & 50.64\(\pm\)0.01 & 52.92\(\pm\)0.02 & 85.39\(\pm\)0.48 \\ & PRAD & 56.84\(\pm\)0.02 & 55.59\(\pm\)0.09 & 88.47\(\pm\)1.21 \\ & READ & 56.05\(\pm\)0.02 & 56.44\(\pm\)0.02 & 89.43\(\pm\)0.32 \\ & SKCM & 41.72\(\pm\)0.02 & 48.92\(\pm\)0.06 & 91.11\(\pm\)0.96 \\ & STAD & 52.28\(\pm\)0.01 & 52.66\(\pm\)0.00 & 90.46\(\pm\)0.46 \\ & UCEC & 48.96\(\pm\)0.05 & 61.30\(\pm\)0.02 & 91.33\(\pm\)0.74 \\ & UVM & 53.76\(\pm\)0.06 & 62.57\(\pm\)0.02 & 83.45\(\pm\)2.40 \\ \cline{2-5} & Average & 51.11 & **54.39** & 83.14 \\ \hline \multirow{3}{*}{CelebA} & Female & **65.75\(\pm\)0.01** & 65.28\(\pm\)0.00 & 81.52\(\pm\)2.39 \\ & Male & 39.20\(\pm\)0.01 & **40.01\(\pm\)0.01** & 74.68\(\pm\)1.85 \\ \cline{2-5} & Average & 52.47 & **52.64** & 78.08 \\ \hline \hline \end{tabular} \end{table} Table 1: ROC-AUC (\(\%\)) comparison of AD models on TIL, CUB, and CelebA datasets with the _one-vs-all_ scheme. In the _one-vs-all_ scheme, the class name defines \(k_{ind}\). The results are averaged over five different runs. We used \(\alpha=0.01\), \(\alpha=1.0\), and \(\alpha=0.01\) for our experiments on TIL, CUB, and CelebA datasets respectively. The concept accuracy is only reported for GOAD+ACE. ## 5 Limitations While ACE improves anomaly detection performance through human-explainable concepts, its adaptability to diverse anomaly detection scenarios relies on the presence of annotated concepts. Recent works [27, 51] overcame the limitation of concept annotations by querying a large vision-language model for concepts. However, as mentioned in [51], the concepts are prone to model biases, which are undesirable. Additionally, these models would fail to generate concepts in realistic datasets such as medical datasets. 
However, having a generalizable concept-generating model is a fairly difficult task, although overcoming this bottleneck could be an interesting future work. \begin{table} \begin{tabular}{l c c c} \hline \hline Datasets & CUB & TIL & CelebA \\ \hline MSC & **93.90\(\pm\)2.90** & 62.42\(\pm\)1.01 & 70.76\(\pm\)4.73 \\ \hline MSC+ACE & 93.85\(\pm\)3.00 & **64.83\(\pm\)1.32** & **70.99\(\pm\)4.37** \\ \hline \hline \end{tabular} \end{table} Table 2: ROC-AUC (\(\%\)) comparison of MSC+ACE with MSC on the _one-vs-all_ scheme on the CUB, TIL, and CelebA datasets. All of the results are from our implementations and are averaged over five different runs for TIL and CUB and three for CelebA. Figure 3: Number of concepts and its impact on CUB using GOAD+ACE; in general, increasing the number of concepts leads to a higher ROC-AUC. Figure 4: Number of concepts and its impact on TIL using GOAD+ACE; increasing the number of concepts improves the performance. Figure 5: The impact of concept weightage on the CUB dataset: higher values of \(\alpha\) generally lead to a reduction in ROC-AUC. Figure 6: The effect of concept weightage on the TIL dataset; GOAD+ACE achieves the highest ROC-AUC with \(\alpha=1.0\). Since our contribution is uncovering the black-box anomaly detection via human-interpretable concepts and is algorithmic in nature, we have not performed a user survey of using these concepts in anomaly detection. We hope that our work can motivate experiments in the medical domain. ## 6 Conclusion and Future Work In this paper, we proposed a methodology that introduces transparency in anomaly detection prediction beyond standard metrics. While current anomaly detection approaches achieve promising performance, real-world applications demand transparency in model predictions beyond accuracy. Our proposal, Transparent Anomaly Detection Concept Explanations (ACE), addresses this challenge, offering human-interpretable insights along with anomaly prediction. 
Our experiments conducted on realistic datasets demonstrate comparable or better results in comparison to black-box anomaly detectors. Additionally, we showcased the adaptability of the explanatory module to other transformation-based AD models. We hope that our work encourages research in interpreting and explaining anomaly detection. In future work, we intend to extend our experiments by exploring the integration of ACE into anomaly detection models beyond transformation-based detectors. Additionally, investigating the influence of intervention on anomaly detection performance is another avenue to explore. ## 7 Acknowledgements The authors would like to acknowledge compute support by the Digital Research Alliance of Canada.
2301.06621
Constraining flavour-universal nonstandard interactions and superweak extension of the standard model
Nonstandard neutrino interactions (NSI) arising from light and heavy mediators probe different sectors of the parameter space of models focusing on phenomena that require the extension of the standard model. High-energy scattering experiments are not relevant on constraining the NSI hiding a light mediator at the fundamental level, while flavour-universal NSI cannot be probed with neutrino oscillation experiments. Currently the only way to measure flavour-universal NSI with a light mediator is to rely on coherent elastic neutrino-nucleon scattering experiments, which we use to derive bounds for light mediator flavour-universal NSI. For light NSI, we obtain $\varepsilon^u \in [-14.85,14.79]$ and $\varepsilon^d \in [-13.19,13.84]$ (90~\% CL.). We also derive constraints on flavour-universal heavy NSI and find a 2$\sigma$ tension. Finally, we discuss the implications of the experiments on the allowed parameter space of a specific example model, called superweak extension of the standard model.
Timo J. Kärkkäinen, Zoltán Trócsányi
2023-01-16T21:59:56Z
http://arxiv.org/abs/2301.06621v2
Constraining the parameter space of the super-weak extension of the standard model by limits on non-standard interactions and vice versa ###### Abstract Nonstandard neutrino interactions (NSI) arising from light and heavy mediators probe different sectors of the parameter space of models focusing on phenomena that require the extension of the standard model. High-energy scattering experiments are not relevant on constraining the NSI hiding a light mediator at the fundamental level, while flavour-universal NSI cannot be probed with neutrino oscillation experiments. Currently the only way to measure flavour-universal NSI with a light mediator is to rely on coherent elastic neutrino-nucleon scattering experiments. We derive bounds for both light and heavy mediator flavour-universal NSI. We also discuss the implications of the experiments on the allowed parameter space of a specific example model, a U(1)-extension of the Standard Model called super-weak force. ## 1 Introduction The discovery of neutrino oscillations [1, 2] kickstarted a plethora of research efforts in neutrino physics. As the Standard Model (SM) is devoid of neutrino masses, neutrinos are an exciting option as a portal to new physics, which must contain a mechanism to generate neutrino masses, and therefore neutrino oscillations. One of the most popular models of mass generation is the seesaw mechanism [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. The type I mechanism introduces heavy right-handed neutrinos that are sterile under the SM. As at least two of the three active neutrinos are massive, the minimum extension includes two sterile neutrinos. Type II mechanism instead extends the scalar sector with an SU(2)\({}_{\rm L}\) triplet scalar \(\Delta=(\Delta^{++},\Delta^{+},\Delta^{0})\) with hypercharge \(Y=2\), which is usually assumed to be leptophilic. Type III seesaw extends the fermion sector with hyperchargeless SU(2)\({}_{\rm L}\) triplet \(\Sigma=(\Sigma^{+},\Sigma^{0},\Sigma^{-})\). 
Other neutrino mass generation mechanisms include inverse seesaw [15; 16; 17], radiative mass models [18; 19] and others. New physics effects are manifested at low energy scales via effective operators, which are generated by integrating out the heavy degrees of freedom from the high-energy theory. In the context of neutrino physics, there are three important operators: \[\mathcal{O}_{5} =\frac{C_{5}}{\Lambda}(\overline{L^{c}}\cdot H)(H\cdot L)\,, \tag{1}\] \[\mathcal{O}_{6a} =\frac{C_{6a}}{\Lambda^{2}}(\overline{L}\gamma^{\mu}P_{\rm L}L)( \overline{f}\gamma_{\mu}P_{X}f)\,,\] (2) \[\mathcal{O}_{6b} =\frac{C_{6b}}{\Lambda^{2}}(\overline{L}\cdot H)i\not{\partial}( H^{\dagger}\cdot L) \tag{3}\] where the dot represents the SU(2)\({}_{\rm L}\) invariant product of doublets and \(\Lambda\) is the scale of new physics. The first operator is Weinberg operator [20], which is the only possible gauge invariant dimension-5 operator that can be constructed from the SM fields. After spontaneous symmetry breaking this gives a Majorana neutrino mass term. The second operator corresponds to _nonstandard interactions_ (NSIs) [21] of four charged leptons or charged lepton - quark NSI, which in general break flavour. The third operator arises from active-sterile neutrino mixing. The latter two operators are of dimension six. The scale \(\Lambda\) is interpreted as the energy scale of new physics, typically considered much higher than the electroweak scale, corresponding to a _heavy NSI mediator_ at the fundamental level. This expectation is based on the assumption that the couplings \(C_{i}\) are \(\mathcal{O}(1)\) coefficients. However, quantum field theory does not _a priori_ force the couplings to be so large. In the SM, a prime example of small couplings is the Yukawa coupling of the electron, \(y_{e}\simeq 3\cdot 10^{-6}\ll 1\). 
In the case when the couplings \(C_{i}\ll 1\), the scale \(\Lambda\) can be as low as the GeV or even MeV scale, and the mass of the corresponding NSI mediator may be light or similar to the momentum transfer in the experiment. While such scenarios do not support models built on naturalness arguments, they are certainly not ruled out, and are also predictive. Such new physics interactions can be probed at high-intensity, low-energy experiments that are planned for the next decades. Neutrino interactions have very low cross sections. Nonetheless, neutrino-electron and neutrino-nucleon cross sections have been measured at scattering experiments where the averaged momentum transfer squared is large, \(\langle q^{2}\rangle=20\,\mathrm{GeV}^{2}\)[22; 23; 24]. These measurements give stringent bounds on new physics effects originating from the effective operators, namely the NSI with new physics scale \(\Lambda>\Lambda_{\rm EW}\). Recently, the first successful detection of coherent elastic neutrino-nucleon scattering (CE\(\nu\)NS) [25; 26] has allowed us to test whether or not NSI effects exist with scale \(\Lambda\) below the electroweak scale. Different extensions of the SM produce different NSI textures. A subclass of these extensions is flavour conserving. Consequently, the NSI matrix is diagonal and real, containing only three elements, which have contributions from up-type quarks, down-type quarks and charged leptons. If in addition the extension is flavour universal, then the NSI matrix is isotropic (proportional to the unit matrix). In the bottom-to-top approach, current experimental bounds can be used to constrain the high-energy theory parameters. In contrast, the top-to-bottom approach can be used to predict the texture and the region of NSI available for a particular model. In this paper, we discuss the NSI formalism and both approaches by considering the constraints with light and heavy NSI mediators. 
We derive bounds for a flavour-universally coupled NSI mediator in _both the light and the heavy case_. We then consider a specific example, the super-weak extension of the standard model (SWSM), which exhibits tiny flavour-universal couplings to fermions. It contains an NSI mediator that is light in scattering experiments and therefore evades detection there, but not so in CE\(\nu\)NS, which is sensitive to NSI originating from the SWSM. We derive bounds on the new gauge coupling and on the ratio of the vacuum expectation values in the SWSM based on the results of COHERENT [25, 26, 27] and our previous analyses on dark matter [28] in the SWSM. ## 2 Experimental constraints on the NSI parameters ### NSI formalism In our study we focus on the \(\mathcal{O}_{6a}\) operator of Eq. (2) that is relevant to neutrino-matter interactions. In the usual parametrization of the NSI Lagrangian the interaction strength is set by the Fermi coupling \(G_{\rm F}\), \[\mathcal{L}_{\rm NSI}=-2\sqrt{2}G_{\rm F}\sum_{f,X=\pm,\ell,\ell^{\prime}} \varepsilon_{\ell,\ell^{\prime}}^{f,X}(\bar{\nu}_{\ell}\gamma^{\mu}P_{\rm L} \nu_{\ell^{\prime}})(\bar{f}\gamma_{\mu}P_{X}f) \tag{4}\] where \(\varepsilon_{\ell,\ell^{\prime}}^{f,X}\) parametrizes the strength of the new interaction with respect to \(G_{\rm F}\), with \(\ell\), \(\ell^{\prime}\) denoting charged lepton flavours and \(f\) being a charged fermion in the standard model. When one matches the NSI Lagrangian (4) with the effective Lagrangian obtained from a high-energy theory, the NSI parameters are proportional to the propagator of the mediator, i.e. to \(\varepsilon_{\ell,\ell^{\prime}}^{f,X}\propto(q^{2}-M^{2})^{-1}\), where \(q^{\mu}\) is the four-momentum (\(q^{2}=q_{\mu}q^{\mu}\)) carried by the mediator and \(M\) is its mass. 
In a neutrino scattering experiment, we may approximate the propagator either as \[\varepsilon_{\ell,\ell^{\prime}}^{f,X}\propto+\frac{1}{q^{2}}\ \text{if}\ q^{2} \gg M^{2}, \tag{5}\] or \[\varepsilon_{\ell,\ell^{\prime}}^{f,X}\propto-\frac{1}{M^{2}}\ \text{if}\ q^{2} \ll M^{2}. \tag{6}\] The first case in Eq. (5) corresponds to a "light NSI mediator", while the second one to a "heavy NSI mediator". For concreteness, let us consider \(M=50\) MeV. Then the mediator is considered _heavy_ from the _viewpoint of neutrino oscillation experiments_, but _light for high-energy neutrino scattering experiments_, such as CHARM [22] and NuTeV [23]. However, if \(q^{2}\) is similar in size to \(M^{2}\), as in the case of CE\(\nu\)NS in our example, we cannot take either of these limits. Nevertheless, we can still apply the NSI formalism using the full propagator with \(q^{2}\) being the characteristic momentum transfer squared in the scattering experiment. The resulting NSI couplings interpolate smoothly between the light and heavy limits. We present an example in Sect. 4.2. ### Global fit of the heavy NSI parameters In Ref. [29] the authors perform a global fit to current experiments for the NSI couplings with heavy mediators and in the absence of CP violation, that is, the NSI parameters are assumed to be real. The authors performed a \(\chi^{2}\)-test, minimizing the \(\chi^{2}\)-function, and presented the \(\Delta\chi^{2}\)-distributions (the difference of the \(\chi^{2}\) value from its best-fit value), that is, the statistical significance of the NSI parameters. We reproduced those plots here in Fig. 1, with the \(2\sigma\) and \(90\,\%\) confidence intervals exhibited. We read off the best-fit points directly from these graphs, and present them together with the confidence intervals in Table 1. 
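The smooth interpolation between the two regimes is easy to see numerically. The following minimal sketch (not part of the original analysis; the overall coupling factors are stripped off) evaluates the full propagator factor \(1/(q^{2}-M^{2})\) for \(M=50\) MeV at an oscillation-like and a CHARM/NuTeV-like momentum transfer:

```python
# Minimal sketch of the propagator factor 1/(q^2 - M^2) entering the NSI
# couplings, evaluated for M = 50 MeV (all quantities in GeV^2).
def propagator(q2, M2):
    return 1.0 / (q2 - M2)

M2 = 0.050 ** 2          # M = 50 MeV
q2_scatt = 20.0          # <q^2> at CHARM/NuTeV: mediator is effectively light
q2_osc = 1e-12           # oscillation experiments, q^2 ~ 0: effectively heavy

# Light-mediator limit, Eq. (5): factor -> +1/q^2
assert abs(propagator(q2_scatt, M2) - 1.0 / q2_scatt) < 1e-4
# Heavy-mediator limit, Eq. (6): factor -> -1/M^2
assert abs(propagator(q2_osc, M2) - (-1.0 / M2)) < 1e-4
```

For \(q^{2}\sim M^{2}\), as in CE\(\nu\)NS, neither assertion would hold and the full expression must be kept.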
We then combined the individual \(\Delta\chi^{2}\)-distributions to test flavour-universal couplings by summing the three \(\Delta\chi^{2}\)-distributions [30]: \[\Delta\chi^{2}_{\rm isotropic}=\Delta\chi^{2}_{ee}+\Delta\chi^{2}_{\mu\mu}+ \Delta\chi^{2}_{\tau\tau}\,. \tag{7}\] We present the combined up- and down-type isotropic NSI coupling \(\Delta\chi^{2}\)-distributions in Fig. 2, with the individual original distributions overlaid. The relative incompatibility of the different flavour distributions results in tension with the experimental data, indicating that both the up- and down-type quark isotropic NSI scenarios are excluded at \(2\sigma\). We compare the individual and combined bounds in Fig. 3. For isotropic NSI we have summarized our results in Table 2. These bounds are relevant for theories which are accessible via high-energy experiments, where the mediator has a mass of at least \(\mathcal{O}(10)\,\)GeV and couples to quark flavours universally. Figure 1: Determinations of \(2\sigma\) and \(90\,\%\) confidence intervals from the minimized \(\Delta\chi^{2}\)-distributions given in [29]. Down-type quark NSI above and up-type quark NSI below. The vertical black line (\(\Delta\chi^{2}=4\)) corresponds to the \(2\sigma\) bound. For leptonic NSI, one can use the constraints given in Fig. 2 of Ref. [31], where the authors performed both one-parameter and flavour-conserving fits. Their \(\chi^{2}\)-analysis takes into account the data from the LEP experiments (ALEPH, DELPHI, L3 and OPAL), the LSND experiment, reactor experiments (MUNU and Rovno) and the CHARM II experiment. \begin{table} \begin{tabular}{|c||c|c|} \hline **Parameter** & **Best fit** & \(3\sigma\)**CI** \\ \hline \(\varepsilon^{u}\) & \(-5.5\times 10^{-4}\) & \([-0.0073,0.0063]\) \\ \hline \(\varepsilon^{d}\) & \(5.3\times 10^{-3}\) & \([-0.0026,0.0114]\) \\ \hline \end{tabular} \end{table} Table 2: Best-fit points and \(3\sigma\) confidence intervals for isotropic NSI. 
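The combination in Eq. (7) can be sketched numerically. The following is a rough illustration (not the original analysis): each published curve is approximated by a parabola built from the best fit and the \(2\sigma\) interval of the up-type entries in Table 1, the three parabolas are summed, re-minimized, and the combined \(2\sigma\) region is read off at \(\Delta\chi^{2}=4\). Because the true curves are asymmetric, the numbers differ somewhat from Table 2, but the qualitative picture (the combination is dominated by the tight \(\mu\mu\) constraint) is the same.

```python
# Parabolic (Gaussian) approximation of each flavour's Delta chi^2 curve,
# built from (best fit, 2 sigma interval); sigma = interval half-width / 2.
def parabola(mu, lo2, hi2):
    sigma = (hi2 - lo2) / 4.0
    return lambda x: ((x - mu) / sigma) ** 2

# Up-type quark entries of Table 1: (best fit, 2 sigma interval)
curves = [parabola(0.297, 0.006, 0.493),    # eps^u_ee
          parabola(-0.001, -0.009, 0.006),  # eps^u_mumu
          parabola(-0.001, -0.011, 0.067)]  # eps^u_tautau

n = 200001
xs = [-0.05 + 0.1 * i / (n - 1) for i in range(n)]
chi2 = [sum(c(x) for c in curves) for x in xs]   # Eq. (7)
m = min(chi2)
best = xs[chi2.index(m)]
allowed = [x for x, c in zip(xs, chi2) if c - m <= 4.0]   # combined 2 sigma
print(f"best fit {best:+.4f}, 2 sigma interval "
      f"[{allowed[0]:+.4f}, {allowed[-1]:+.4f}]")
```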
The constraints from high-energy experiments have been taken into account, hence the bounds apply only for heavy mediator NSI (\(M^{2}\gg 20\) GeV\({}^{2}\)). \begin{table} \begin{tabular}{|c||c|c||c|} \hline **Parameter** & **Best-fit point \(\mu_{i}\)** & \(2\sigma\)**CI \(\sigma_{2,i}\)** & **90 \% CI \(\sigma_{90,i}\)** \\ \hline \hline \(\varepsilon^{d}_{ee}\) & 0.301 & [–0.015, 0.556] & [0.019, 0.504] \\ \hline \(\varepsilon^{d}_{\mu\mu}\) & 0.003 & [–0.004, 0.010] & [–0.003, 0.009] \\ \hline \(\varepsilon^{d}_{\tau\tau}\) & 0.006 & [–0.004, 0.073] & [–0.001, 0.044] \\ \hline \(\varepsilon^{u}_{ee}\) & 0.297 & [0.006, 0.493] & [0.044, 0.451] \\ \hline \(\varepsilon^{u}_{\mu\mu}\) & \(-0.001\) & [–0.009, 0.006] & [–0.008, 0.005] \\ \hline \(\varepsilon^{u}_{\tau\tau}\) & \(-0.001\) & [–0.011, 0.067] & [–0.009, 0.035] \\ \hline \end{tabular} \end{table} Table 1: Best-fit points for the diagonal quark NSI parameters, together with the 90 % and \(2\sigma\) confidence intervals (CI) derived using Fig. 4 of [29]. The bounds apply only for heavy mediator NSI (\(M^{2}\gg 20\) GeV\({}^{2}\)). Figure 2: Combined \(\Delta\chi^{2}\)-distributions and the individual components overlaid. ### Flavour universal NSI from the COHERENT experiment For obtaining constraints on light NSI parameters, oscillation experiments can be utilized. However, those cannot observe the diagonal elements of the NSI matrix themselves. Instead, they measure off-diagonal couplings and differences of the diagonal couplings. In Ref. [32] the authors have chosen the convention that \(\varepsilon_{\mu\mu}\) is subtracted from the effective Mikheyev-Smirnov-Wolfenstein neutrino oscillation Hamiltonian as a phase rotation, so the observable parameters are \(\varepsilon_{ee}^{f}-\varepsilon_{\mu\mu}^{f}\) and \(\varepsilon_{ee}^{f}-\varepsilon_{\tau\tau}^{f}\). Consequently, flavour-conserving NSI (that is, a diagonal NSI matrix) can be detected in neutrino oscillations only if it is not flavour-universal. 
In the flavour-universal case the NSI matrix is isotropic and manifests itself as an unphysical phase rotation, undetectable in such experiments. Another resource to test the light NSI couplings is coherent elastic neutrino-nucleon scattering (CE\(\nu\)NS). In this experiment the differential cross section in the recoil energy \(T\) (\(T\lesssim 10\,\)keV) of the nucleus is given by \[\frac{\mathrm{d}\sigma}{\mathrm{d}T}=\frac{G_{\mathrm{F}}^{2}M}{\pi}\left(1- \frac{|\mathbf{q}|^{2}}{4E_{\nu}^{2}}\right)Q_{W}^{2} \tag{8}\] where \(M\) is the mass of the nucleus and \(|\mathbf{q}|^{2}=2MT\) is the momentum transfer squared. \(E_{\nu}\) is the energy of the neutrino, while \(Q_{W}\) denotes the weak charge for a nucleus of \(Z\) protons and \(N\) neutrons, which in the standard model reads as \[Q_{W}^{\mathrm{SM}}=g_{V}^{n}NF_{n}(\mathbf{q})+g_{V}^{p}ZF_{p}(\mathbf{q})\,, \quad g_{V}^{n}=-\frac{1}{2},\quad g_{V}^{p}=\frac{1}{2}-2\sin^{2}\theta_{W}\,. \tag{9}\] The functions \(F_{n}\) and \(F_{p}\) are the nuclear form factors for the neutron and the proton distribution in the nucleus, parameterized using the Helm parameterization as in Ref. [27]: \[F_{x}(|\mathbf{q}|)=\frac{3j_{1}(|\mathbf{q}|R_{x,0})}{|\mathbf{q}|R_{x,0}} \mathrm{e}^{-|\mathbf{q}|^{2}s^{2}/2},\quad R_{x,0}^{2}=\frac{5}{3}R_{x}^{2}-5s^{2},\quad x=n\text{ or }p\,. \tag{10}\] In this formula \(R_{x,0}\) is obtained using the surface thickness \(s=0.9\,\)fm and the root mean square radii \(R_{x}\) of the proton and neutron distributions inside the nucleus. Figure 3: Comparisons of \(2\sigma\) and \(90\) % confidence intervals for the diagonal elements, including the best-fit values. Left: down-type quark NSI, right: up-type quark NSI. Isotropic NSI included. The best fit of \(\varepsilon_{ee}\) is not visible in this range. 
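Eqs. (8)-(10) are straightforward to evaluate. The following sketch (illustrative only; the nuclear mass, neutrino energy and \(\sin^{2}\theta_{W}\) are typical assumed values, not taken from the text) computes the Helm form factor and the SM cross section for Cesium-133, using the standard Helm relation \(R_{x,0}^{2}=\tfrac{5}{3}R_{x}^{2}-5s^{2}\):

```python
import math

# Hedged numerical sketch of Eqs. (8)-(10) for neutrino-Cs133 scattering.
HBARC = 0.19733       # GeV*fm conversion
GF = 1.1664e-5        # Fermi coupling, GeV^-2
SW2 = 0.2312          # sin^2(theta_W); assumed value
GV_N, GV_P = -0.5, 0.5 - 2 * SW2

def helm(q_gev, R_fm, s_fm=0.9):
    """Helm form factor, Eq. (10), with R0^2 = (5/3) R^2 - 5 s^2."""
    R0 = math.sqrt(5.0 / 3.0 * R_fm**2 - 5.0 * s_fm**2)      # fm
    x = (q_gev / HBARC) * R0                                  # dimensionless
    j1 = math.sin(x) / x**2 - math.cos(x) / x                 # spherical Bessel
    return 3.0 * j1 / x * math.exp(-(q_gev / HBARC) ** 2 * s_fm**2 / 2.0)

def dsigma_dT(T_gev, E_nu=0.030, M=123.8, Z=55, N=78):
    """SM CEvNS cross section, Eq. (8), in natural units (GeV^-3)."""
    q = math.sqrt(2.0 * M * T_gev)                            # |q| = sqrt(2MT)
    QW = GV_N * N * helm(q, 5.01) + GV_P * Z * helm(q, 4.804)
    return GF**2 * M / math.pi * (1.0 - q**2 / (4.0 * E_nu**2)) * QW**2

print(helm(0.035, 5.01))        # neutron form factor at q ~ 35 MeV
print(dsigma_dT(5e-6))          # recoil T = 5 keV
```

Note that the form factor correctly tends to 1 as \(|\mathbf{q}|\to 0\), and the cross section is dominated by the coherent \(N^{2}\) term since \(g_{V}^{p}\) is accidentally small.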
For instance, \(R_{p}(^{133}\mathrm{Cs})=4.804\,\mathrm{fm}\) and \(R_{n}(^{133}\mathrm{Cs})=5.01\,\mathrm{fm}\) for Cesium, and \(R_{p}(^{127}\mathrm{I})=4.749\,\mathrm{fm}\) and \(R_{n}(^{127}\mathrm{I})=4.94\,\mathrm{fm}\) for Iodine are used in the experiments. The function \(j_{1}(x)=\frac{\sin x}{x^{2}}-\frac{\cos x}{x}\) is the spherical Bessel function of the first kind, order 1. CE\(\nu\)NS was predicted by Freedman in 1974, and finally observed for the first time in the COHERENT experiment in 2017. The first run in 2017 used Cesium-133 and Iodine-127 nuclei, while the second run in 2020 used liquid argon-40. The generalization of the weak charge in Eq. (9) to the case of generic NSI is \[\begin{split} Q_{W,e}^{2}&=\big{(}(g_{V}^{p}+2 \varepsilon_{ee}^{u}+\varepsilon_{ee}^{d})ZF_{p}(|\mathbf{q}|)+(g_{V}^{n}+ \varepsilon_{ee}^{u}+2\varepsilon_{ee}^{d})NF_{n}(|\mathbf{q}|)\big{)}^{2}\\ &+\big{|}(2\varepsilon_{e\mu}^{u}+\varepsilon_{e\mu}^{d})ZF_{p}( |\mathbf{q}|)+(\varepsilon_{e\mu}^{u}+2\varepsilon_{e\mu}^{d})NF_{n}(|\mathbf{ q}|)\big{|}^{2}\\ &+\big{|}(2\varepsilon_{e\tau}^{u}+\varepsilon_{e\tau}^{d})ZF_{ p}(|\mathbf{q}|)+(\varepsilon_{e\tau}^{u}+2\varepsilon_{e\tau}^{d})NF_{n}(| \mathbf{q}|)\big{|}^{2}\end{split} \tag{11}\] where \(\varepsilon_{\ell\ell^{\prime}}^{f}=\varepsilon_{\ell\ell^{\prime}}^{f,+}+ \varepsilon_{\ell\ell^{\prime}}^{f,-}\). We remark that the flavour-breaking NSI parameters \(\varepsilon_{\ell\ell^{\prime}}^{f}\) (\(\ell\neq\ell^{\prime}\)) contribute to \(Q_{W,e}^{2}\) only at second order, while the flavour-conserving parameters contribute at both first and second order (linear and quadratic terms). If both flavour-conserving and flavour-breaking NSI parameters have approximately the same magnitude and are significantly less than one, then we may neglect the second order terms. 
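The coherent shift of the weak charge by the flavour-conserving couplings in the first line of Eq. (11) can be sketched in a few lines (illustrative assumed numbers; form factors set to 1, i.e. the \(|\mathbf{q}|\to 0\) limit):

```python
# Hedged sketch of the flavour-conserving (first) line of Eq. (11): the
# diagonal NSI couplings shift the SM weak charge coherently.
Z, N = 55, 78                 # Cs-133
SW2 = 0.2312                  # assumed sin^2(theta_W)
gVn, gVp = -0.5, 0.5 - 2 * SW2

def QW2(eps_u, eps_d, Fn=1.0, Fp=1.0):
    sm = gVn * N * Fn + gVp * Z * Fp
    shift = (2 * eps_u + eps_d) * Z * Fp + (eps_u + 2 * eps_d) * N * Fn
    return (sm + shift) ** 2

qw2_sm = QW2(0.0, 0.0)
qw2_nsi = QW2(0.01, -0.01)    # illustrative small couplings
print(qw2_sm, qw2_nsi)
```

Even percent-level couplings visibly move \(Q_{W,e}^{2}\), which is what makes CE\(\nu\)NS a sensitive probe of the diagonal elements.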
Then, the flavour-conserving NSI parameters dominate the distortion to the weak charge \(Q_{W}^{2}\): \[Q_{W,e}^{2}\simeq\big{(}Q_{W}^{\mathrm{SM}}\big{)}^{2}+2\big{(}g_{V}^{n}NF_{n}+g_{V}^{p}ZF_{p}\big{)}\Big{(}(2\varepsilon_{ee}^{u}+\varepsilon_{ee}^{d})ZF_{p}+(\varepsilon_{ee}^{u}+2\varepsilon_{ee}^{d})NF_{n}\Big{)}\,. \tag{12}\] Presently large values (larger than one) for the light NSI parameters are still allowed experimentally for both the flavour-conserving and the flavour-breaking case [27]. In such a case, one should use the complete formula for the weak charge as given in Eq. (11). We may utilize the COHERENT limit given by [27] to constrain \(\varepsilon_{ee}^{q}\). Analogously, the same argument can be used to demonstrate the dominance of the \(\mu\mu\) elements on \(Q_{W,\mu}^{2}\). We performed the combination of \(\Delta\chi^{2}\)-distributions also for the COHERENT experiment, which is sensitive to \(\varepsilon_{ee}^{q},\varepsilon_{e\mu}^{q}\) and \(\varepsilon_{\mu\mu}^{q}\) but not to \(\varepsilon_{\tau\tau}^{q}\), where \(q=u,d\). In isotropic NSI models \(\varepsilon_{ee}^{q}=\varepsilon_{\mu\mu}^{q}\). We assume the COHERENT measurements of these two couplings to be independent and sum the \(\Delta\chi^{2}\)-distributions related to these parameters, following the instructions of Ref. [30]. We then derive the COHERENT bounds for the isotropic NSI parameters. We reproduce the individual \(\Delta\chi^{2}\)-distributions taken from Ref. [27] and show them together with the combination in Fig. 4. The corresponding confidence intervals are given in Table 3. ## 3 NSI couplings derived in the SWSM In this section we provide an example of a model that naturally yields an isotropic NSI matrix, namely, the super-weak extension of the Standard Model [33]. We recall the details of the SWSM only to the extent needed to derive the NSI couplings. 
For more details on the model, we refer the reader to Refs. [34, 28, 35, 36] where various phenomenological aspects were studied. ### Super-weak extension of the standard model The SWSM is based on the SU(3)\({}_{c}\otimes\)SU(2)\({}_{L}\otimes\)U(1)\({}_{Y}\otimes\)U(1)\({}_{z}\) gauge group. The U(1) gauge couplings are denoted by \(g_{y}\) and \(g_{z}\). The anomaly-free U(1)\({}_{z}\) charges for the fermions are presented in Table 4. The SU(2)\({}_{L}\otimes\)U(1)\({}_{Y}\) symmetry is broken by the vacuum expectation value \(v\) of the usual Brout-Englert-Higgs field, while the U(1)\({}_{z}\) symmetry is spontaneously broken by the vacuum expectation value \(w\) of a complex scalar singlet (under transformations of the SM), making the corresponding neutral gauge bosons \(Z\) and \(Z^{\prime}\) massive. These bosons mix weakly with mixing angle \(\theta_{Z}\). The covariant derivative related to the Abelian sector of the model is \[D_{\mu}\supset D_{\mu}^{\rm U(1)}=\partial_{\mu}-{\rm i}(y,z)\begin{pmatrix}g_{ y}&-\eta g_{z}\\ 0&g_{z}\end{pmatrix}R_{\varepsilon}\begin{pmatrix}B_{\mu}\\ B_{\mu}^{\prime}\end{pmatrix} \tag{13}\] where \(R_{\varepsilon}\) is an unphysical rotation matrix (whose rotation angle can be absorbed in \(\theta_{Z}\)), \(y\) and \(z\) are the U(1) charges, and the parameter \(\eta\) is a more convenient way to parametrize the kinetic mixing between the U(1) gauge fields. It depends only mildly on the renormalization scale \(\mu\), and its value at the electroweak scale will vary according to the free choice of the scale \(\mu_{0}\) where the mixing vanishes, \(\eta(\mu_{0})=0\). For \(\mu_{0}\) chosen in the range \([M_{Z},M_{\rm GUT}]\) one finds \(\eta(M_{Z})\in[0,0.656]\)[34]. 
The largest value \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Field** & \(Q_{L}\) & \(u_{R}\) & \(d_{R}\) & \(L_{L}\) & \(\ell_{R}\) & \(N_{R}\) \\ \hline U(1)\({}_{z}\) charge & \(\frac{1}{6}\) & \(\frac{7}{6}\) & \(-\frac{5}{6}\) & \(-\frac{1}{2}\) & \(-\frac{3}{2}\) & \(\frac{1}{2}\) \\ \hline \end{tabular} \end{table} Table 4: Charges of the extra U(1) symmetry of the fermions in SWSM. Figure 4: Combined \(\Delta\chi^{2}\)-distributions and the individual components overlaid. Only COHERENT data is taken into account. \begin{table} \begin{tabular}{|c||c|c|c|} \hline **Parameter** & \(2\sigma\)**CI** & **90 \% CI** & \(1\sigma\)**CI** \\ \hline \hline \(\varepsilon^{u}\) & \([-17.25,17.16]\) & \([-14.85,14.79]\) & \([-10.01,9.42]\) \\ \hline \(\varepsilon^{d}\) & \([-15.31,16.05]\) & \([-13.19,13.84]\) & \([-8.61,9.23]\) \\ \hline \end{tabular} \end{table} Table 3: Confidence intervals for isotropic NSI couplings based on the COHERENT constraints. corresponds to a special case, where we assume that the kinetic mixing vanishes near the Planck scale. The interaction vertices can be obtained using the implementation of the model [35] in SARAH[37, 38, 39]. For the \(Z^{\prime}\)-neutrino interactions, we find \[-{\rm i}eC^{L}_{Z^{\prime}\nu_{l}\nu_{k}} =-\frac{{\rm i}}{2}\Big{[}\sum_{j=1}^{3}({\bf U}_{i,j})({\bf U}^{ \dagger})_{j,k}\left(\frac{e}{\sin\theta_{W}\cos\theta_{W}}\sin\theta_{Z}+( \eta-1)g_{z}\cos\theta_{Z}\right) \tag{14}\] \[\qquad-g_{z}\cos\theta_{Z}\sum_{j=1}^{3}{\bf U}_{i,j+3}({\bf U}^{ \dagger})_{j+3,k}\Big{]} \tag{15}\] where \(\theta_{W}\) is Weinberg's angle and \({\bf U}\) is the neutrino mixing matrix. The model contains three extra heavy sterile right-handed neutrinos \(N_{R,i}\) (\(i=1,\,2,\,3\)), so this matrix is a \(6\times 6\) unitary matrix. The sterile neutrinos of the SWSM are much more massive than the active ones. 
We may safely assume that active-sterile neutrino mixing is negligible (that is, the off-diagonal \(3\times 3\) blocks vanish), and hence the active neutrino mixing matrix is unitary (the \(3\times 3\) upper left block of \({\bf U}\), i.e. the Pontecorvo-Maki-Nakagawa-Sakata matrix). Using these conditions, we can perform the matrix element sums and obtain the simplified expression: \[-{\rm i}eC^{L}_{Z^{\prime}\nu\nu}=-\frac{{\rm i}}{2}\left(\frac{e}{\sin\theta _{W}\cos\theta_{W}}\sin\theta_{Z}+(\eta-1)g_{z}\cos\theta_{Z}\right). \tag{16}\] The other \(Z^{\prime}\)-fermion couplings (multiplied by \({\rm i}\) for easier reading) are \[eC^{L}_{Z^{\prime}dd} =-\frac{1}{6}\tan\theta_{W}\Big{(}e\left(3\cot^{2}\theta_{W}+1 \right)\sin\theta_{Z}+(\eta-1)g_{z}\cot\theta_{W}\cos\theta_{Z}\Big{)} \tag{17}\] \[eC^{R}_{Z^{\prime}dd} =+\frac{1}{6}\Big{(}2e\tan\theta_{W}\sin\theta_{Z}+(2\eta-5)g_{z} \cos\theta_{Z}\Big{)}\] (18) \[eC^{L}_{Z^{\prime}uu} =-\frac{1}{6}\tan\theta_{W}\Big{(}e(1-3\cot^{2}\theta_{W})\sin \theta_{Z}+(\eta-1)g_{z}\cot\theta_{W}\cos\theta_{Z}\Big{)}\] (19) \[eC^{R}_{Z^{\prime}uu} =-\frac{1}{6}\Big{(}4e\tan\theta_{W}\sin\theta_{Z}+(4\eta-7)g_{z} \cos\theta_{Z}\Big{)}\] (20) \[eC^{L}_{Z^{\prime}ee} =-\frac{1}{2}\tan\theta_{W}\Big{(}e\left(\cot^{2}\theta_{W}-1 \right)\sin\theta_{Z}-(\eta-1)g_{z}\cot\theta_{W}\cos\theta_{Z}\Big{)}\] (21) \[eC^{R}_{Z^{\prime}ee} =+\frac{1}{2}\Big{(}2e\tan\theta_{W}\sin\theta_{Z}+(2\eta-3)g_{z} \cos\theta_{Z}\Big{)} \tag{22}\] Now we may write the Feynman amplitude for virtual \(Z^{\prime}\)-mediated \(\nu_{\ell}f\to\nu_{\ell}f\)-scattering. 
Then we obtain the NSI couplings derived from the SWSM as \[\varepsilon^{f,X}(g_{z},\eta,\tan\beta)=-\frac{v^{2}}{2(q^{2}-M_{Z^{\prime}}^ {2})}(eC^{L}_{Z^{\prime}\nu\nu})(eC^{X}_{Z^{\prime}ff})\,, \tag{23}\] which interpolates smoothly between the limits of heavy or light NSI couplings given by \[\varepsilon^{f,X}\approx\frac{1}{2}(eC^{L}_{Z^{\prime}\nu\nu})(eC^{X}_{Z^{ \prime}ff})\times\begin{cases}\frac{v^{2}}{M_{Z^{\prime}}^{2}},&\text{when }M_{Z^{\prime}}^{2}\gg q^{2},\\ -\frac{v^{2}}{q^{2}},&\text{when }M_{Z^{\prime}}^{2}\ll q^{2}.\end{cases} \tag{24}\] These NSI couplings are flavour universal, hence we have suppressed the corresponding lower indices. Also, flavour is conserved. The mass of the \(Z^{\prime}\) in Eq. (23) is fixed according to Eq. (A.14) of Ref. [34], reproduced in an equivalent form here: \[M_{Z^{\prime}}^{2}(g_{z},\eta,\tan\beta)=\frac{g_{z}^{2}v^{2}\tan^{2}\beta}{1+ \frac{1}{e}(2-\eta)g_{z}\sin\theta_{W}\cos\theta_{W}}\,, \tag{25}\] with \(\tan\beta=w/v\) being the ratio of the two VEVs. In addition, the mixing angle \(\theta_{Z}\) also depends on the same parameters (see Eq. (A.13) of [34]), \[\tan 2\theta_{Z}=\frac{\left(1-\frac{\eta}{2}\right)\frac{g_{z}\cos\theta_{W}}{ g_{L}}}{\frac{1}{4}-\left(\left(1-\frac{\eta}{2}\right)^{2}+\tan^{2}\beta \right)\left(\frac{g_{z}\cos\theta_{W}}{g_{L}}\right)^{2}}\,. \tag{26}\] ### Numerical estimates Solving Eq. (25) for \(g_{z}\), we obtain for positive \(g_{z}\) that \[g_{z} =\frac{1}{4ev^{2}\tan^{2}\beta}\] \[\times\left(\sqrt{M_{Z^{\prime}}^{2}\left(16e^{2}v^{2}\tan^{2} \beta+(\eta-2)^{2}M_{Z^{\prime}}^{2}\sin^{2}\left(2\theta_{W}\right)\right)}- (\eta-2)M_{Z^{\prime}}^{2}\sin\left(2\theta_{W}\right)\right) \tag{27}\] \[\simeq\frac{3.94\cdot 10^{-6}}{\tan\beta}\times\frac{M_{Z^{ \prime}}}{\text{MeV}}\] where we substituted \(\eta=0\) and took into account only the leading order contribution. 
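Eqs. (23) and (25) can be combined into a short numerical sketch (the electroweak inputs \(v\), \(\sin^{2}\theta_{W}\) and \(g_{L}\) are typical assumed values, not quoted from the text): for given \((g_{z},\eta,\tan\beta)\) the \(Z^{\prime}\) mass is fixed, and the kinematic factor of Eq. (23) smoothly approaches the heavy limit \(+v^{2}/(2M_{Z^{\prime}}^{2})\) as \(q^{2}\to 0\).

```python
import math

# Hedged sketch of Eqs. (23)-(25): Z' mass and the propagator factor.
V = 246.0                          # Higgs vev in GeV (assumed)
SW, CW = math.sqrt(0.2312), math.sqrt(1 - 0.2312)
E = 0.6517 * SW                    # e = g_L sin(theta_W); g_L assumed 0.6517

def mzp(gz, eta, tanb):
    """Z' mass from Eq. (25), in GeV."""
    denom = 1.0 + (2.0 - eta) * gz * SW * CW / E
    return math.sqrt(gz**2 * V**2 * tanb**2 / denom)

def prop_factor(q2, m):
    """-v^2 / (2 (q^2 - M^2)), the kinematic factor of Eq. (23)."""
    return -V**2 / (2.0 * (q2 - m**2))

m = mzp(1e-4, 0.0, 1.0)            # about 25 MeV for these inputs
print(m, prop_factor(0.051**2, m), prop_factor(0.0, m))
```

Note the sign flip of the factor between the light regime (\(q^{2}>M_{Z^{\prime}}^{2}\), negative) and the heavy regime (\(q^{2}\ll M_{Z^{\prime}}^{2}\), positive), as in Eq. (24).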
We justify this by noting that in our investigation the dependence of \(\eta\) on other parameters is weak and its inclusion is manifested by multiplying the right hand side of Eq. (27) with a multiplicative factor of \(\mathcal{O}(1)\). Similarly, \[\theta_{Z}\approx(2-\eta)\cos\theta_{W}\frac{g_{z}}{g_{L}}\simeq 1.354(2-\eta)g_{z }=g_{z}\times\mathcal{O}(1)\,. \tag{28}\] Assuming \(\theta_{Z}\ll 1\) (i.e. super-weak coupling), we can derive the following expressions for NSI couplings: \[\varepsilon^{u} \simeq\frac{1}{2}\left(\frac{v}{M_{Z^{\prime}}}\right)^{2}\left( \frac{g_{z}^{2}}{12}\left(-5\eta^{2}+13\eta-8\right)+0.2355g_{z}\theta_{Z}(1.76 6-\eta)+0.0469\theta_{Z}^{2}\right)\,, \tag{29}\] \[\varepsilon^{d} \simeq\frac{1}{2}\left(\frac{v}{M_{Z^{\prime}}}\right)^{2}\left( \frac{g_{z}^{2}}{12}\left(\eta^{2}-5\eta+4\right)-0.0626g_{z}\theta_{Z}(1.881+ \eta)-0.0885\theta_{Z}^{2}\right)\,,\] (30) \[\varepsilon^{e} \simeq\frac{1}{2}\left(\frac{v}{M_{Z^{\prime}}}\right)^{2}\left( \frac{g_{z}^{2}}{4}\left(3\eta^{2}-7\eta+4\right)+0.5335g_{z}\theta_{Z}(1.338- \eta)-0.00536\theta_{Z}^{2}\right)\,. \tag{31}\] Scanning over the possible \(\eta\), we find \[\theta_{Z}\in[1.820,2.708]g_{z}\ \ \text{and}\ \ |\varepsilon^{f}|\in\text{in}_{f} \left(\frac{vg_{z}}{M_{Z^{\prime}}}\right)^{2}\,, \tag{32}\] with flavour dependent intervals \[\text{in}_{u}=[0.248,0.402]\,,\qquad\text{in}_{d}=[0.339,0.651]\,,\qquad\text{in}_ {e}=[0.4275,1.486]\,. 
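As a quick numerical cross-check of the approximations above (a sketch, checking only the up-quark channel): scanning \(\eta\) over \([0,0.656]\) with \(\theta_{Z}=1.354\,(2-\eta)g_{z}\) from Eq. (28), the coefficient multiplying \((vg_{z}/M_{Z^{\prime}})^{2}\) in \(|\varepsilon^{u}|\) of Eq. (29) indeed spans the interval \(\text{in}_{u}\) quoted in Eq. (33).

```python
# Cross-check: the |eps^u| coefficient of Eq. (29), with theta_Z/g_z from
# Eq. (28), should span in_u = [0.248, 0.402] as eta runs over [0, 0.656].
def eps_u_coeff(eta):
    t = 1.354 * (2.0 - eta)                  # theta_Z / g_z, Eq. (28)
    return abs(0.5 * ((-5.0 * eta**2 + 13.0 * eta - 8.0) / 12.0
                      + 0.2355 * t * (1.766 - eta)
                      + 0.0469 * t**2))

coeffs = [eps_u_coeff(0.656 * i / 100) for i in range(101)]
print(min(coeffs), max(coeffs))   # ~0.248 and ~0.402
```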
\tag{33}\] Note that the NSI parameters are not independent of each other, which can be seen by taking the ratio of the up- and down-type quark NSI in the SWSM, \[R=\frac{\varepsilon^{u}}{\varepsilon^{d}}=\frac{eC_{Z^{\prime}uu}^{L}+eC_{Z^{ \prime}uu}^{R}}{eC_{Z^{\prime}dd}^{L}+eC_{Z^{\prime}dd}^{R}}=\frac{e\,(5-3 \cot^{2}\theta_{W})\sin\theta_{Z}+(5\eta-8)g_{z}\cot\theta_{W}\cos\theta_{Z} }{e\,(3\cot^{2}\theta_{W}-1)\sin\theta_{Z}-(\eta-4)g_{z}\cot\theta_{W}\cos \theta_{Z}}\,, \tag{34}\] from which we can express \(\eta\) as \[\eta=\frac{\frac{e}{g_{z}}\tan\theta_{W}\tan\theta_{Z}\,(3(R+1)\cot^{2} \theta_{W}-R-5)+4R+8}{R+5}\,. \tag{35}\] It turns out that the resulting valid benchmark points are confined to a very narrow region (see the next section). Finally we remark that assuming a universal bound \(\varepsilon_{\text{max}}\) for the NSI couplings, we may present a simple analytic bound in the \((M_{Z^{\prime}},g_{z})\) plane, namely \[g_{z}<\sqrt{\varepsilon_{\text{max}}}\left(\frac{M_{Z^{\prime}}}{v}\right) \times\mathcal{O}(1)\,. \tag{36}\] ## 4 Results Our results are two-fold. First we present constraints on the parameters of the SWSM and also on the NSI parameters originating from the SWSM. Next we discuss our predictions for those NSI couplings. ### Free parameters and constraints The NSI couplings depend on the gauge sector parameters \(g_{z}\), \(\eta\) and on \(\tan\beta\), which we choose as free parameters in the model. For the neutrino masses we consider, we may assume that the PMNS matrix is unitary, since nonunitary effects contributing to the NSI are negligible [36]. We scanned the \((\log_{10}\tan\beta,\log_{10}|g_{z}|,\eta)\) right rectangular prism by a uniformly distributed random sampling in \([-2,2]\times[-10,0]\times[0,0.656]\) to determine the region consistent with the current bounds on isotropic NSI couplings, derived in Sec. 2.3. 
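The structure of such a scan can be sketched in a few lines (a hedged illustration, not the actual analysis: the \(Z^{\prime}\) mass is taken at leading order, only the up-quark coupling with the Eq. (29) coefficients is tested, and the sample size is much smaller than the \(N=10^{6}\) used for the figures):

```python
import math, random

# Hedged sketch of the scan: sample the (log10 tan beta, log10 g_z, eta)
# box uniformly, build eps^u with the full propagator at the COHERENT
# scale, and keep points inside the 2 sigma bound |eps^u| < 17.2 (Table 3).
random.seed(1)
V, Q2 = 246.0, 0.051**2                       # vev (GeV), q^2 (GeV^2)

def eps_u(gz, eta, tanb):
    t = 1.354 * (2.0 - eta) * gz              # theta_Z, Eq. (28)
    coupl = 0.5 * ((-5 * eta**2 + 13 * eta - 8) / 12.0 * gz**2
                   + 0.2355 * gz * t * (1.766 - eta) + 0.0469 * t**2)
    m2 = (gz * V * tanb) ** 2                 # leading-order M_Z'^2
    return V**2 / (m2 - Q2) * coupl           # full propagator, Eq. (23)

kept = 0
for _ in range(20000):
    tanb = 10 ** random.uniform(-2, 2)
    gz = 10 ** random.uniform(-10, 0)
    eta = random.uniform(0.0, 0.656)
    if abs(eps_u(gz, eta, tanb)) < 17.2:
        kept += 1
print(f"{kept} of 20000 sampled points survive the COHERENT bound")
```

Most of the box survives because tiny \(g_{z}\) yields tiny couplings; the rejected points concentrate at large \(g_{z}\) and small \(\tan\beta\), in line with the histograms of Fig. 5.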
Larger values of \(\tan\beta\) are possible in principle, but in such cases the new scalar sector decouples almost completely, and hence remains inaccessible. Also, values \(\tan\beta\gtrsim 100\) are disfavored by the overproduction of dark matter if the SWSM is to explain the observed dark matter energy density [40]. We used the \(2\sigma\) limits for the NSI couplings as given in Table 3. We present the allowed values in histograms in Fig. 5 and in Table 5. We see that the model prefers small values of \(M_{Z^{\prime}}\) and \(\tan\beta\). The distribution of \(g_{z}\) (hence also \(\theta_{Z}\)) is fairly flat within the allowed range \(g_{z}\in 5\cdot[10^{-6},10^{-4}]\) (approximately), with the full allowed range being somewhat larger. We note that the average values (and also the medians) of the asymmetric \(\varepsilon^{u}\) and \(\varepsilon^{d}\) distributions are positive and negative, respectively, since the distributions are skewed towards those values. Figure 5: Histograms (containing 50 bins) of the scan (with total number of points \(N=10^{6}\)) corresponding to \(M_{Z^{\prime}}\), \(\log_{10}\theta_{Z}\), \(\log_{10}g_{z}\), \(\tan\beta\), \(\varepsilon^{u}\), \(\varepsilon^{d}\), \(\varepsilon_{L}^{e}\) and \(\varepsilon_{R}^{e}\). Note that the first three histograms have a linear, while the last five have a logarithmic vertical axis. ### Predictions The NSI couplings \(\varepsilon^{u}\) and \(\varepsilon^{d}\) derived from the SWSM are anticorrelated, as can be seen in Fig. 6, obtained using Eq. (23) with \(q^{2}\simeq(51\,\mathrm{MeV})^{2}\) as the characteristic energy transfer squared in the COHERENT experiment. Three distinct \(Z^{\prime}\) mass regions emerge. The region with red colour in the left plot is inconsistent with the SWSM freeze-out dark matter scenario, which requires that the mass of the \(Z^{\prime}\) boson falls into the \((10\)-\(135)\,\mathrm{MeV}\) mass range [34]. 
Restricting our scan to this constrained region, shown on the right plot, reveals additional predictions: if \(q\lesssim M_{Z^{\prime}}\leq m_{\pi}\), then \(\varepsilon^{u}<0<\varepsilon^{d}\), but if \(10\,\mathrm{MeV}\leq M_{Z^{\prime}}\lesssim q\), then \(\varepsilon^{d}<0<\varepsilon^{u}\). In the left panel of Fig. 7 we can see that the parameter \(\eta\) is almost a linear function of the ratio \((\varepsilon^{u}/\varepsilon^{d})\), as one expects based on the discussion after Eq. (34). This information is visualized as a heat map in the right panel of Fig. 7, which shows that the COHERENT limits are compatible with \(\varepsilon^{u}>0\) at the \(2\sigma\) confidence level only for \(\eta\lesssim 0.3\) at the electroweak scale. We present additional benchmark points (BPs) in Fig. 8 over the \((g_{z},X)\) planes (\(X=\eta\), \(\theta_{Z}\) and \(M_{Z^{\prime}}\)) as heat maps depending on the mass of the \(Z^{\prime}\). All these plots are relevant in the context of explaining dark matter within the SWSM. The BPs do not exhibit any particular dependence on the parameter \(\eta\) representing the kinetic mixing. The second plot visualizes precisely the approximate relation in Eq. (28). We show the available parameter space in the \((\tan\beta,g_{z})\) plane separately in Fig. 9, where we present approximate analytic bounds superimposed (green dashes). In addition, we added the NA64 constraint obtained by searching for dark photons, identified here with the \(Z^{\prime}\) (red solid curve) [40]. For \(\tan\beta\) we find the lower bounds corresponding to NA64 to depend slightly on the value of the coupling \(g_{z}\). The gauge coupling is constrained at \(2\sigma\) to the interval between \(4.17\cdot 10^{-6}\) and \(4.90\cdot 10^{-3}\), where the lower and upper bounds correspond to \(M_{Z^{\prime}}>10\) MeV and \(M_{Z^{\prime}}<m_{\pi}\). 
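As a rough consistency check (up to the \(\mathcal{O}(1)\) factor that Eq. (36) leaves unspecified), the analytic bound with \(\varepsilon_{\text{max}}=17.16\) from Table 3 reproduces the right order of magnitude for the upper \(g_{z}\) limit at the endpoints of the dark-matter mass window:

```python
import math

# Hedged order-of-magnitude evaluation of Eq. (36), dropping the O(1)
# factor: g_z < sqrt(eps_max) * M_Z' / v.
eps_max, v = 17.16, 246.0          # 2 sigma |eps^u| bound; vev in GeV

def gz_bound(m_zp):
    return math.sqrt(eps_max) * m_zp / v

for m in (0.010, 0.135):           # dark-matter window endpoints, GeV
    print(f"M_Z' = {1000*m:.0f} MeV -> g_z < {gz_bound(m):.2e} x O(1)")
```

For \(M_{Z^{\prime}}=135\) MeV this gives \(g_{z}\lesssim 2.3\cdot 10^{-3}\), the same order as the \(4.90\cdot 10^{-3}\) quoted above.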
We see that the mass of the \(Z^{\prime}\) does not significantly affect this \(\tan\beta\) bound, but the favoured values of \(\theta_{Z}\) increase with \(M_{Z^{\prime}}\) (see Fig. 8). While large NSI couplings are still allowed, according to the benchmark point distributions in Fig. 5 small couplings are favoured in \(\varepsilon^{u}\), \(\varepsilon^{d}\), \(\varepsilon^{e}_{L}\) and \(\varepsilon^{e}_{R}\). The corresponding BPs are shown in Fig. 10. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Parameter** & **Scan range** & **BP range (\(2\sigma\))** & **BP range (\(1\sigma\))** \\ \hline \(\eta\) & [0,0.656] & [0,0.656] & [0,0.656] \\ \hline \(\tan\beta\) & [0.01,100] & [0.02,100] & [0.03,100] \\ \hline \(\log_{10}g_{z}\) & [\(-10\),1] & [\(-6.38,-2.31\)] & [\(-6.38,-2.41\)] \\ \hline \(M_{Z^{\prime}}/\mathrm{MeV}\) & [10,135] & [10,135] & [10,135] \\ \hline \(\log_{10}\theta_{Z}\) & – & [\(-6.09,-1.94\)] & [\(-6.09,-2.05\)] \\ \hline \(\varepsilon^{u}\) & [\(-17.25,17.16\)] & [\(-17.25,17.16\)] & [\(-10.00,9.42\)] \\ \hline \(\varepsilon^{d}\) & [\(-15.31,16.05\)] & [\(-15.31,16.05\)] & [\(-8.606,9.221\)] \\ \hline \(\varepsilon^{e}_{L}\) & – & [\(-1.504,1.462\)] & [\(-0.856,0.808\)] \\ \hline \(\varepsilon^{e}_{R}\) & – & [\(-19.87,19.84\)] & [\(-10.91,11.55\)] \\ \hline \end{tabular} \end{table} Table 5: Scan and benchmark point ranges corresponding to the \(2\sigma\) and \(1\sigma\) allowed regions of the COHERENT experiment. Figure 6: Left: Available parameter space in the \((\varepsilon^{u},\varepsilon^{d})\) plane corresponding to the scan ranges in Table 5 except that for the mass of the \(Z^{\prime}\), for which \(M_{Z^{\prime}}\in[1,10^{9}]\,\)keV. We used the momentum transfer squared \(q^{2}\) corresponding to that in the COHERENT experiment. The region between the black lines is consistent with the \(2\sigma\) bounds from COHERENT. The data points are coloured according to the mass of the \(Z^{\prime}\). In the lower right sector two clearly different \(Z^{\prime}\) mass regions can be identified: light (turquoise) and heavy (red) areas. Right: benchmark points consistent with the SWSM freeze-out dark matter scenario. Figure 7: Left: The \(\eta\) parameter as a function of \(\varepsilon^{u}/\varepsilon^{d}\). Right: As in the right panel of Fig. 6, but the data points are coloured according to \(\eta\), which corresponds to the azimuthal angle in the \((\varepsilon^{u},\varepsilon^{d})\) plane. The region between the dotted lines corresponds to the \(1\sigma\) bounds from COHERENT. ## 5 Conclusions and future prospects We have considered an exciting possibility for NSI, which escapes both the high-energy experimental constraints and detection by neutrino oscillation experiments. The former experiments are unable to probe interactions with a light mediator, while flavour-universal couplings between the mediator and a neutrino are manifested as an irrelevant phase factor in the neutrino oscillation Hamiltonian. In the presence of sterile neutrinos the factor does not disappear, but is suppressed [41]. The only viable avenue to probe flavour-universal light NSI couplings is then to consider CE\(\nu\)NS. We derived the bounds for flavour-universal NSI in both the light and the heavy mediator case, and found that large NSI couplings (\(\varepsilon\simeq 10\)) are allowed for the light NSI scenario, while \(\varepsilon\lesssim 10^{-2}\) for the heavy case. We then considered a specific model, the super-weak extension of the Standard Model. We obtained the NSI couplings in the SWSM, which allowed us to investigate the parameter space of the SWSM as allowed by the existing constraints of CE\(\nu\)NS on the NSI parameters. We found that in this range the model prefers small values for the mass
Figure 9: Available parameter space in \((\tan\beta,g_{z})\) plane, where colour corresponds to \(M_{Z^{\prime}}\). Lower and upper bounds for \(M_{Z^{\prime}}\) are imposed by the dark matter scenario of the SWSM [34]. We have superimposed the NA64 constraint and analytical bounds from Eq. (27), where the lower bound is achieved with \(\eta=0\) and the upper bound with \(\eta=0.656\). of the new gauge boson and also for the ratio \(w/v\) of the VEVs. The kinetic mixing parameter is weakly constrained, but we found that its possible values are compatible with \(\varepsilon^{u}/\varepsilon^{d}\in[-1.17,-0.92]\). If we added the constraint set by the NA64 experiment on the mass of the dark photon, we could further constrain the viable parameter space to \(\tan\beta\gtrsim 2\) and \(g_{z}\sim 10^{-6}-10^{-3}\). Our study demonstrated that even low-energy experiments have significant potential for constraining new physics. Both higher-intensity and higher-energy experiments are needed for the progressive discovery of light and heavy NSI interactions. While the limits from CE\(\nu\)NS are quite loose at present, their expected improvement will constrain the parameter space of the SWSM severely.
2304.03187
Cavitation Rheology of Model Yield Stress Fluids Based on Carbopol
Measuring surface tension of yield stress fluids has remained a critical challenge due to limitations of the traditional tensiometry techniques. Here, we overcome those limits and successfully measure the surface tension and mechanical properties of a model yield stress fluid based on Carbopol gels via a needle-induced cavitation (NIC) technique. Our results indicate that the surface tension is approximately 70 mN/m, and is independent of the rheology of the yield stress fluid over a wide range of yield stress values. In addition, we demonstrate that a Young modulus smaller than 1 kPa can be successfully measured for Carbopol gels with the NIC method. Finally, we present a time-resolved flow structure around the cavity in a host of yield stress fluids, and assess the impact of fluid rheology on the detailed form of flow around the cavity. Interestingly, prior to the critical point associated with cavitation, the yield stress fluid is weakly deformed, suggesting that the measured surface tension data reflect the near equilibrium values. Beyond the critical point, the yield stress fluid experiences a strong flow that is controlled by both the critical pressure and the non-Newtonian rheology of the yield stress fluid.
Hadi Mohammadigoushki, Kourosh Shoele
2023-04-06T16:11:04Z
http://arxiv.org/abs/2304.03187v1
# Cavitation Rheology of Model Yield Stress Fluids Based on Carbopol ###### Abstract Measuring surface tension of yield stress fluids has remained a critical challenge due to limitations of the traditional tensiometry techniques. Here, we overcome those limits and successfully measure the surface tension and mechanical properties of a model yield stress fluid based on Carbopol gels via a needle-induced cavitation (NIC) technique. Our results indicate that the surface tension is approximately 70\(\pm\)3 mN/m, and is independent of the rheology of yield stress fluid over a wide range of yield stress values \(\sigma_{y}=0.5-120\) Pa. In addition, we demonstrate that a Young modulus smaller than \(E<\)1 kPa can be successfully measured for Carbopol gels with NIC method. Finally, we present a time-resolved flow structure around the cavity in a host of yield stress fluids, and assess the impact of fluid rheology on the detailed form of flow around the cavity. Interestingly, prior to the critical point associated with cavitation, the yield stress fluid is weakly deformed suggesting that the measured surface tension data reflect the near equilibrium values. Beyond the critical point, the yield stress fluid experiences a strong flow that is controlled by both the critical pressure and the non-Newtonian rheology of the yield stress fluid. Introduction Yield stress fluids are common in our daily life and have been the subject of intense research in the past decades [1; 2; 3; 4]. Prime examples include food stuff, paint, home care products, drilling fluids, oil extraction products, concrete as well as other advanced functional materials such as colloidal gels [5], emulsions [6], soft glassy materials [7] and jammed suspensions [8]. In the classical description, yield stress materials behave like a solid (or deform in a finite way) below a critical stress threshold known as yield stress and flow beyond this stress threshold [9]. 
Yield stress fluids commonly interact with solid substrates [10]. Examples include the application of yield stress lotions to skin, painting walls, and 3D printing of polymer melts on substrates. The performance of the yield stress material in these applications is controlled by its wetting and surface tension. In addition, the surface tension of yield stress fluids plays a critical role in environmentally important applications such as oil-sand pond reclamation [11; 12] and nuclear waste management [13; 14]. For a simple liquid at equilibrium, the viscous stresses are negligible and, provided that the gravitational forces are small, an equilibrium surface tension can be quantified [15]. Unlike simple liquids, measuring the surface tension of a yield stress fluid has remained a critical challenge, mainly because at equilibrium the residual stresses in the yield stress fluid are no longer negligible. Carbopol gels provide a model yield stress fluid for surface tension analysis and have been the subject of several studies [16; 17; 18; 19; 20; 21]. Carbopol gel is a transparent system, and its rheological properties can be fine-tuned by variation of the Carbopol concentration or solution pH [22]. This material is generally considered a non-thixotropic yield stress fluid [2; 23], making it a model system for studies of surface tension. The earliest attempt at measuring the surface tension of Carbopol solutions goes back to the work of Hu et al. [16]. These authors used a maximum bubble pressure (MBP) method, and reported a surface tension approximately equal to the measured value for pure water (\(\gamma\approx 72.5\) mN/m) for their solutions [16]. A closer inspection of their rheological data reveals that the Carbopol based fluids used by these authors (Carbopol 934 at concentrations between 0.05-0.1 wt%) did not exhibit a yield stress rheology [16]. Manglik et al.
[17] also used the MBP method and measured the surface tension of a range of Carbopol solutions with concentrations ranging from 0 to 2000 ppm, and showed that the surface tension in this range of concentration is also close to the surface tension of water [17]. However, the latter study did not report the details of Carbopol neutralization and the rheology data, which makes it unclear if those solutions were yield stress fluids [17]. More recently, Geraud et al. [18] used a capillary rise experiment to measure the surface tension of a yield stress fluid based on Carbopol gel. These authors noted a surface tension of 51 mN/m for their system [18]. Boujel and Coussot [19] used a Wilhelmy plate and suggested that this device generates an equilibrium surface tension around 66 mN/m at vanishingly small Capillary numbers [19]. The Capillary number is defined as the ratio of the yield stress to the surface tension. Jorgensen et al. [20] used a bridge tensiometer that involved compression and extension of the yield stress fluids between two walls. These authors showed that the surface tension data obtained from compression experiments are smaller than those measured during extension and this deviation increases as the yield stress of the material increases [20]. More recently, Lopez and co-workers used a ring tensiometer and reported a surface tension value of 73.4 mN/m for their Carbopol based yield stress fluids[21]. A summary of the above literature is reported in Table (1). Apparently, there is no consensus on the value of the surface tension in yield stress fluids. The complications associated with the use of solid substrates may play a significant role in this disparity. In particular, the majority of prior experiments have measured surface tension by using methods that involve a contact between solid substrates and the yield stress fluid (e.g., Wilhelmy plate or bridge tensiometer).
It is very well known that yield stress fluids are susceptible to wall-slip[24; 25; 26; 27; 28], and therefore, the fluid contact-line on such substrates may not be pinned during surface tension measurements. In particular, Geraud et al.[18] briefly noted that in capillaries with smooth surfaces, the yield stress fluid shows significant wall-slip. However, Jorgensen et al. [20] and Boujel and Coussot [19] neither specified the type of surfaces used in their experiments, nor investigated the impact of wall-slip on their surface tension data. Alternatively, there exist other methods for surface tension measurements that do not rely on fluid contact with a solid substrate. Examples include drop weight[29], pendant drop[30; 31] and maximum bubble pressure (MBP)[32; 33; 34] methods. Although drop weight and pendant drop methods have been well established for simple fluids, these methods are not suitable for yield stress fluids, not only because the residual yield stresses affect the drop behavior compared to simple liquids, but also because the basic theories do not incorporate the impact of residual yield stresses in the flow analysis for calculation of the surface tension. The MBP method was first introduced by Simon in 1851[34], and has been widely used to measure the surface tension of a broad range of simple liquids[35; 36; 33; 37]. In the MBP experiments a capillary tube (or a needle) is immersed in the fluid, and subsequently a gas is pumped through the needle into the liquid. As gas is injected into the needle, the gas-liquid interface forms a curved shape and its curvature is controlled by the pressure difference between the gas inside the needle and the surrounding fluid.
In principle, for any fluidic environment, the pressure inside the needle (or cavity) \(P\) is balanced by the outside pressure and can be obtained as: \[P=P_{h}+\gamma(1/r_{1}+1/r_{2})+P_{out}, \tag{1}\] \begin{table} \begin{tabular}{c c c c c} \hline Carbopol Type Concentration (wt\%) & \(\sigma_{y}\) [Pa] & Technique used & \(\gamma\) [mN/m] & Reference \\ \hline \hline 934 & 0-0.2 & – & Maximum Bubble Pressure & 73 & [17] \\ 934 & 0.05-0.1 & – & Maximum Bubble Pressure & 72.5 & [16] \\ ETD 2050 & 0.5 & 4 & Capillary Rise & 52 & [18] \\ ETD 2050 & 0.25-2 & 0.3-38 & Bridge Tensiometer & \(\approx\)15-100 & [20] \\ ETD 2050 & 0.25-0.5 & 0.3-1.75 & Ascending Bubble & 59-66 & [20] \\ U 10 & 0.1-0.5 & 9-80 & Wilhelmy Plate & 66 & [19] \\ 980 & 0.09-0.2 & 3-28.5 & LAUDA Tensiometer & 73.4 & [21] \\ \hline \end{tabular} \end{table} Table 1: Summary of previous studies that have measured the surface tension for Carbopol gels. Here \(\sigma_{y}\) and \(\gamma\) denote the yield stress and the surface tension of the fluid. where \(P_{h}=\rho gz\) is the hydrostatic pressure at the needle tip, \(\gamma\) is the surface tension, \(r_{1}\) and \(r_{2}\) are the principal radii of curvature of the cavity and \(P_{out}\) is the pressure of the surrounding medium resisting against motion and growth of the cavity. Experimental observations in simple liquids have shown that as the gas is injected into the needle, the total pressure inside the needle increases and at some critical point, it goes through a maximum and suddenly drops to small values[32; 33; 38]. This maximum pressure has been used to evaluate the surface tension of the liquids[32; 33; 38; 39]. However, to obtain the surface tension from the maximum pressure data, one has to account for multiple factors (i.e., curvature of the cavity, Buoyancy as well as hydrodynamic stresses in the liquid). 
Previous studies have shown that in the limit of small needle sizes \(R<1\)mm, a spherical cavity forms at the tip of the needle[40]. In particular, at the maximum pressure (\(P_{c}\)), the cavity forms a hemispherical shape with a radius equal to the needle radius, and therefore, Eq. 1 can be expressed as: \(P_{c}-P_{h}=\frac{2\gamma}{R}+P_{out}\). Any gas injection beyond this point causes the cavity to become unstable, and grow rapidly before detaching from the needle. For needles with \(R>1mm\), the cavity becomes non-spherical[41]. It has been shown that the impact of cavity non-sphericity on the surface tension could be accounted for by introducing a correction factor \(f\) into the above equation such that: \[P_{c}-P_{h}=f\frac{2\gamma}{R}+P_{out}. \tag{2}\] The correction factor can be obtained as[40]: \(f=\sum_{i=0}^{5}a_{i}(\frac{R}{a})^{i}\), where the capillary length \(a=\sqrt{\frac{2\gamma}{\Delta\rho g}}\). Note that the \(a_{i}\) values are tabulated in the literature[40; 42]. For viscous liquids \(P_{out}\) is associated with the viscous stresses. Previous studies have investigated the impact of hydrodynamic stresses on surface tension data of a range of viscous fluids[43]. Fainerman and co-workers showed that the viscous stresses can increase the surface tension by up to 3 mN/m over two orders of magnitude variation in the viscosity of a solution with an equilibrium surface tension of 70 mN/m, thereby suggesting a small influence of the viscous stresses on equilibrium surface tension of viscous liquids[43]. ## II Cavitation rheology Yield stress fluids are different from simple liquids in that below the yield stress threshold, they behave like a solid and barely deform. As a result, the gas pressure inside the capillary tube must overcome the residual stresses in the solid before it can plastically deform the surrounding material.
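For reference, Eq. (2) is straightforward to invert once the critical pressure is measured. The sketch below (a minimal Python illustration with assumed SI-unit numbers, not the paper's data; the correction factor \(f\) is left as a user-supplied input because the \(a_{i}\) coefficients are tabulated in Refs. [40; 42]) computes the capillary length and recovers \(\gamma\) from a hypothetical MBP reading:

```python
import math

def capillary_length(gamma, delta_rho, g=9.81):
    """Capillary length a = sqrt(2*gamma / (delta_rho * g)) entering the f(R/a) polynomial."""
    return math.sqrt(2.0 * gamma / (delta_rho * g))

def surface_tension_from_mbp(P_c, P_h, P_out, R, f=1.0):
    """Invert Eq. (2), P_c - P_h = f * 2*gamma/R + P_out, for the surface tension gamma."""
    return (P_c - P_h - P_out) * R / (2.0 * f)

# Illustrative numbers (SI units): water-like fluid, 0.5 mm needle radius.
a = capillary_length(gamma=0.072, delta_rho=1000.0)          # ~3.8 mm for water
gamma = surface_tension_from_mbp(P_c=388.0, P_h=100.0, P_out=0.0, R=0.5e-3)
```

For small needles (\(R\ll a\)) the correction factor \(f\) is close to unity, so the default \(f=1\) is a reasonable first approximation.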
Over the course of the last decade, Crosby and co-workers have developed a needle-induced cavitation (NIC) rheology technique based on the MBP method for a wide range of synthetic hydrogels, rubbers and block co-polymers[44; 45; 46; 47; 48; 49; 50; 51]. It has been shown that for elastic materials, and in the limit of small needle sizes, similar to simple liquids, the maximum pressure occurs at the point where the cavity forms a hemisphere at the tip of the needle and the critical pressure \(P_{c}\) is related to the surface tension and Young modulus \(E\) of the gel as[44; 52]: \[P_{c}=5E/6+2\gamma/R, \tag{3}\] where the elasticity of the surrounding material acts as an additional pressure that the cavity must overcome, with \(P_{out}=5E/6\). Therefore, the maximum pressure measurement in the MBP method not only allows one to estimate the surface tension of the gel, but also its elastic modulus, hence the term cavitation rheology was used. Note that in Eq. (3) the surrounding material behaves as a neo-Hookean solid with no plastic deformation. As a result, the impact of local yielding of the materials on surface tension data is neglected. The latter assumption may hold for materials such as synthetic hydrogels, rubbers and block co-polymers that are stiff and exhibit strong elasticities. The yield stress fluids based on Carbopol gels are expected to be much softer than synthetic hydrogels, acrylic triblock gels or copolymers used in previous studies[44, 51]. For yield stress materials based on Carbopol, the \(P_{out}\) resistance may be related to both the elastic response of the yield stress fluid before reaching plasticity as well as the plastic response of the medium after it passes the yield limit in the vicinity of a growing bubble. Consequently, Eq. (3) should be modified for soft yield stress fluids prepared from Carbopol solutions.
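Eq. (3) can be inverted for the Young modulus from a single critical-pressure reading; a minimal sketch with assumed illustrative values (not data from the cited studies):

```python
def young_modulus_from_cavitation(P_c, gamma, R):
    """Invert Eq. (3), P_c = 5E/6 + 2*gamma/R, for the Young modulus E of an elastic gel."""
    return 6.0 * (P_c - 2.0 * gamma / R) / 5.0

# Illustrative round-trip (SI units): E = 1 kPa, gamma = 70 mN/m, R = 100 micron needle.
P_c = 5.0 * 1000.0 / 6.0 + 2.0 * 0.070 / 1.0e-4   # forward evaluation of Eq. (3)
E = young_modulus_from_cavitation(P_c, gamma=0.070, R=1.0e-4)
```

Note that for soft materials the capillary term \(2\gamma/R\) can dominate \(5E/6\), so small errors in \(\gamma\) or \(R\) propagate strongly into \(E\); this is one reason a single-point inversion is less robust than a fit over several needle sizes.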
To the best of our knowledge, the theoretical analysis of the cavitation phenomenon in soft yield stress fluids has not been considered before. Additionally, there are currently no studies that employ the NIC technique to measure the surface tension of the yield stress fluids based on Carbopol gels. Although previous studies have used the NIC method to measure the Young modulus of a wide range of stiff materials with a modulus in the range of 1 kPa \(<E<60\) kPa[51], the yield stress fluids based on Carbopol gels are expected to be much softer than stiff gels and co-polymers[51]. Hence, it is still unclear if the NIC technique is sensitive enough to measure a Young modulus in the range of \(E<1\) kPa that is relevant for Carbopol gels. The main goal of the first part of this paper is to develop a theoretical framework for cavitation in soft yield stress fluids and its subsequent application in experiments to evaluate the surface tension and the mechanical properties of the yield stress fluids based on Carbopol gels. On a relevant note and from a biological perspective, a very important health issue, traumatic brain injury (TBI), has been associated with the cavitation of a bubble in biological tissues that exhibit yield stress properties[47]. The leading hypothesis is that the cavitation introduces a strong flow, which deforms and damages the surrounding tissue, causing TBI[53]. To test this hypothesis, one must first evaluate the flow field generated by a cavitation process in a yield stress material. A direct and in situ measurement of the flow profile around a cavity in yield stress materials does not exist. Imaging the detailed form of flow structure around a cavity in a model yield stress fluid provides important insights that will significantly advance our understanding of the origin of TBI.
The main goal of the second part of this paper is to provide the first temporally resolved form of flow structure around a cavity in a model yield stress fluid based on Carbopol gels. ## III Materials and Methods Yield stress is observed in a host of materials. The most popular yield stress fluid is a polymeric gel based on an aqueous solution of Carbopol[2]. Different models of Carbopol (940, 980, Ultrez-10, etc.) are commercially available. In this study we use Carbopol 940 and Ultrez-10, which are known as non-thixotropic model yield stress fluids[2]. The concentration of the Carbopol is varied from 0.02 wt% to 0.5 wt%. Yield stress fluids are made by gently mixing Carbopol with de-ionized water and neutralized by adding 1.5 mass fraction of triethanolamine to the Carbopol solution. In addition to Carbopol based fluids, we use a Newtonian fluid (corn syrup from Golden Barrel) for the purpose of comparison with the yield stress fluids (see rheological properties in Table S(1) of the supplementary materials). Yield stress fluids were characterized using a commercial rheometer (TA Discovery HR 10) and a standard concentric cylinders geometry with R\({}_{i}\) =14.01 mm and R\({}_{o}\) = 15.185 mm, where R\({}_{i}\) and R\({}_{o}\) are the radii of the inner and outer cylinders. Because wall-slip is significant and can affect the rheological characterization of the yield stress fluids, we have roughened the concentric cylinders geometry using a sand blasting protocol. As in our previous studies[54; 55; 56], two types of measurements are performed: Small Amplitude Oscillatory Shear (SAOS) was used to obtain the linear viscoelastic responses. In particular, the storage and loss moduli are measured as a function of angular frequency. In addition, the flow curves are measured using a steady applied shear experiment. To measure the surface tension and the Young modulus of the yield stress fluids, we use an in-house custom-made NIC method. Fig.
1 shows a schematic of the custom made NIC technique, which consists of a programmable syringe pump (model NE-1000 from New Era), blunt needle (from McMaster-Carr), tubing, wiring, differential pressure sensor (model PX-26 series from OMEGA), high-speed camera (Phantom miro M310), data acquisition device; DAQ (from National Instruments), and a computer. The LabVIEW software and the DAQ allow the differential pressure sensor, programmable syringe pump, and high-speed camera to work in a synchronized manner. Using this setup, we are able to record, and capture the temporal evolution of cavity growth and the associated pressure changes. The inner radius of the needle (or the capillary tube) used in these experiments is varied from \(R\) = 76 \(\mu\)m - 850 \(\mu\)m, which allows us to access a wide range of critical pressures. In each experiment, the needle is gently placed in the yield stress fluid and the air is injected in the capillary tube at a constant rate of \(Q=0.3\)\(\mu\)L/hr. Fluids are placed in a cubical container with flat side walls to minimize optical distortions. In addition to NIC method, we use a pendant drop technique to measure the surface tension of primarily non-yield stress fluids (see more details about this technique in the supplementary materials). The surface tension data obtained from the pendant drop experiments (for simple liquids) are compared with the results obtained by the NIC method to ensure the accuracy of the NIC method. We have also performed particle image velocimetry (PIV) to temporally resolve the flow field around the cavity in Carbopol gels. For PIV analysis, we generate a sheet of laser light (with a wavelength of 532 nm) that passes through the cavity. The fluids are seeded by glass microspheres (model 110P8 from Potters with a mean diameter \(\approx 8\)\(\mu\)m). The small amount of these seeding particles (40 ppm by mass) does not affect the rheological properties of the fluids. 
## IV Results and Discussion ### Bulk rheology Fig. 2(a) shows the flow curves of representative yield stress fluids based on Ultrez-10 along with the best fit to the Herschel-Bulkley fluid model (dashed curves). The Herschel-Bulkley model is defined as: \(\sigma=\sigma_{y}+K\dot{\gamma}^{n}\), where \(\sigma\), \(\sigma_{y}\), \(K\), \(\dot{\gamma}\) and \(n\) denote the shear stress, yield stress, consistency factor, rate of deformation and the shear-thinning index, respectively. As expected, the increase in the Carbopol concentration gives rise to a stronger yield stress fluid. In addition, Fig. 2(b) shows the measured storage and loss moduli as a function of angular frequency for sample yield stress fluids based on Ultrez-10. As the Carbopol concentration increases, both the storage and loss moduli increase, which is again expected and consistent with the reported values in the literature [22]. Moreover, the storage modulus approaches an asymptotic value (called shear modulus hereafter) as the angular frequency decreases. Similar rheological properties have been reported for the yield stress fluids based on CBP-940 and a summary of the rheological properties of the Carbopol solutions is included in Table (S1) and Fig. S1 of the supplementary materials. In addition, the shear viscosity of these fluids is not sensitive to ramp-up or ramp-down in Figure 1: A schematic of the NIC device along with the necessary parts used in this paper. shear rate (see Fig. S2 in the supplementary materials), thereby confirming that these yield stress fluids are not thixotropic. Fig. 3 shows a summary of the yield stress and the shear modulus of these yield stress fluids as a function of Carbopol concentration. Interestingly, at low Carbopol concentrations, the yield stress and the shear modulus increase rather sharply as the Carbopol concentration increases.
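A Herschel-Bulkley fit of the kind shown in Fig. 2(a) can be sketched with a simple scan over candidate yield stresses followed by a log-log linear fit; the data below are synthetic and the parameter values illustrative, not the paper's measurements:

```python
import numpy as np

def fit_herschel_bulkley(rate, stress, n_candidates=5001):
    """Fit sigma = sigma_y + K * rate**n. For each candidate sigma_y, fit
    log(stress - sigma_y) linearly against log(rate) and keep the candidate
    with the smallest residual."""
    best = None
    for sy in np.linspace(0.0, 0.999 * stress.min(), n_candidates):
        y = np.log(stress - sy)
        n, logK = np.polyfit(np.log(rate), y, 1)
        resid = np.sum((y - (n * np.log(rate) + logK)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, sy, np.exp(logK), n)
    _, sigma_y, K, n = best
    return sigma_y, K, n

# Synthetic flow curve with sigma_y = 10 Pa, K = 5 Pa s^n, n = 0.5.
rate = np.logspace(-3, 2, 50)           # shear rates, 1/s
stress = 10.0 + 5.0 * rate ** 0.5       # Pa
sigma_y, K, n = fit_herschel_bulkley(rate, stress)
```

The one-dimensional scan avoids a full nonlinear optimizer: once \(\sigma_{y}\) is fixed, the remaining problem is linear in log space.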
However, beyond a critical concentration (between 0.1-0.15 wt%), the increase in yield stress and the shear modulus becomes more gradual. This trend can be rationalized as follows. Carbopol is made up of high molecular weight polymers that swell upon mixing and neutralization in aqueous solutions [57]. At low Carbopol concentrations, the swollen polymer particles form a percolated structure; at higher concentrations, the polymer swelling increases, thereby decreasing the distance between polymer particles in the solution. The decrease in particle spacing increases the yield stress and the shear modulus. Beyond a critical point, where particles create a jammed structure, further increase of the Carbopol content does not dramatically affect the jammed nature of the solution and therefore, the yield stress and the elastic modulus increase more gradually. ### Needle induced cavitation As noted above, the maximum pressure inside the capillary tube in soft yield stress fluids may be related to surface tension, the yield stress and the elastic modulus of the material. Therefore, in the first step, we developed a mechanical analysis of the cavitation in yield stress fluids. In the limit of small capillary size (\(R<1mm\) or for a nearly spherical bubble), in addition to the surface tension pressure \(2\gamma/R\), the effective pressure acting on the yield material surrounding the cavity is \(P_{out}=P_{out}(\sigma_{y},E)\). In the near field, adjacent to Figure 2: (a) steady shear stress as a function of applied shear rate for yield stress fluids based on Carbopol Ultrez-10. (b) Storage (filled symbols) and loss moduli (open symbols) as a function of angular frequency. Different symbols correspond to various Carbopol concentrations: 0.06 wt% (\(\circ\)), 0.1 wt% (\(\square\)), 0.3 wt% (\(\diamond\)) and 0.5 wt% (\(\triangle\)). the bubble, the material may yield and far away from the bubble, the yield stress material behaves as an elastic solid.
Considering the yield stress material as a linear elastic perfectly plastic material, the total pressure acting on the cavity \(P_{out}=P_{out}|_{1}+P_{out}|_{2}\). Here \(P_{out}|_{1}\) and \(P_{out}|_{2}\) are pressures associated with plastically deformed response of the surrounding medium near cavity and the confinement induced by the elastic response of the yield stress material, respectively. Based on our analysis, the total pressure contribution from the surrounding yield stress material on the cavity can be written as (see appendix for more details on the derivation): \[P_{out}\,=\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{2E}{3\sigma_{y}}\right) \right\}+\frac{2\pi^{2}}{27}E. \tag{4}\] Note the latter term in total pressure (\(P_{out}|_{2}=\frac{2\pi^{2}}{27}E\)) is analogous to the Eq. 3 derived for elastic networks [52; 44]. Therefore, in the limit of small capillary tube diameters, the maximum pressure inside the growing bubble in a soft yield stress material can be given as: \[P_{c}\,=\frac{2\gamma}{R}+\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{2E}{3 \sigma_{y}}\right)+\left(\frac{\pi}{3}\right)^{2}\frac{E}{\sigma_{y}}\right\}. \tag{5}\] The above Eq. 5 will be used to assess the surface tension as well as the elastic modulus of the yield stress fluids based on Carbopol. Subsequent to the above theoretical analysis, we performed needle-induced cavitation experiments in a Newtonian corn syrup as well as yield stress fluids. Fig. 4 shows the temporal evolution of the pressure in the Newtonian solution based on corn syrup (Fig. 4(a)) and a sample yield stress fluid (0.5wt% Ultrez-10; Fig. 4(b)). In these experiments, the volume of the air left in the syringe at any point in time can be given as: \(V(t)=V_{0}-Qt+V_{b}\), where \(V_{0}\), \(Q\) and \(V_{b}\) are the initial air volume (which is 10mL in this work), flow rate by which the air is pumped and the volume of the cavity. 
If we assume that air is an ideal gas and experiments are performed in isothermal Figure 3: (a) Yield stress and (b) shear (elastic) modulus as a function of Carbopol concentration for the two yield stress fluid systems. The continuous and dashed lines are guides to the eye. conditions, \(PV=P_{0}V_{0}\) and therefore, we will have: \[\Delta P/P_{0}=(P-P_{0})/P_{0}=1/(1-\frac{Q}{V_{0}}t+\frac{V_{b}}{V_{0}})\,-\,1 \tag{6}\] In the limit of \(V_{b}<<V_{0}\), and \(Qt<V_{0}\) (which are satisfied in our experiments) the pressure should increase linearly with time regardless of the type of fluid. Indeed, our results of Fig. 4(a,b) show that for both Newtonian and the yield stress fluids, the pressure increases linearly up to a critical value before it drops quickly. Additionally, the critical pressure increases as the needle diameter decreases, which is consistent with the predictions of Eq. (2) for the Newtonian fluid and Eq. (5) for the yield stress fluid. The corresponding temporal evolution of the cavity size is shown for the corn syrup (top row in Fig. 4) and the yield stress fluid (bottom row in Fig. 4). As the pressure increases Figure 4: Temporal evolution of the pressure in (a) corn syrup and (b) 0.5wt% yield stress fluid based on Ultrez-10. The transient pressure is shown for various needle tube diameters \(d\). The lower snapshots show the simultaneous temporal evolution of the cavity in corn syrup (top row) and in 0.5wt% yield stress fluid based on Ultrez-10 (lower row). Note that (i-v) in snapshots refer to the time instances at which pressure is measured before or after the critical point in part (a) and part (b). The scale bar in the snapshots is 0.8 mm. towards the critical point, the cavity gradually protrudes out of the needle tip and attains a smaller radius of curvature. At the critical point, the cavity forms a hemisphere with a diameter that is approximately equal to the diameter of the needle for both Newtonian and the yield stress fluids.
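The linear pressure rise predicted by Eq. (6) in the limit \(V_{b}\ll V_{0}\) and \(Qt\ll V_{0}\) is easy to verify numerically; a minimal sketch using the stated \(V_{0}=10\) mL initial air volume and \(Q=0.3\)\(\mu\)L/hr injection rate:

```python
def pressure_rise(t, Q, V0, Vb=0.0):
    """Relative pressure rise from Eq. (6): dP/P0 = 1/(1 - Q*t/V0 + Vb/V0) - 1."""
    return 1.0 / (1.0 - Q * t / V0 + Vb / V0) - 1.0

V0 = 10e-3   # L (10 mL of air initially in the syringe)
Q = 0.3e-6   # L/hr (stated injection rate)
x = pressure_rise(t=1.0, Q=Q, V0=V0)   # relative rise after 1 hour
# Since Q*t/V0 = 3e-5 << 1, x is essentially Q*t/V0: the rise is linear in t.
```

Doubling \(t\) doubles the rise to within the quadratic correction \((Qt/V_{0})^{2}\), consistent with the linear ramps seen in Fig. 4(a,b).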
This is unsurprising because the correction factor \(f\) in these experiments is very close to unity (\(f=0.9968\)). Beyond the critical pressure, the explosive growth of the cavity radius gives rise to the pressure drop. The experiments of Fig. 4 are performed in all solutions and the measured critical pressure values along with the best fit to Eq. 5 could be used to assess the surface tension and the Young modulus of the yield stress fluids. Prior to assessment of the yield stress fluid properties, and in order to ensure the accuracy and precision of our NIC device, we first report the results for a broad range of non-yield stress fluids. Fig. 5 shows the critical pressure as a function of needle diameter for corn syrup along with Carbopol based solutions that are in the dilute regime and show non-yield stress rheology (see Table (S1) in the supplementary materials for rheological properties). Subsequently, the results were fitted to Eq. (2). In experiments performed in this study, the maximum needle radius used is 0.85mm and therefore, the maximum correction factor due to non-sphericity of the cavity is about \(f=0.97\). Additionally, unlike previous MBP studies that have mainly used one needle size to assess the surface tension of liquids, we fit the critical pressure data to a wide range of needle sizes and that reduces the error associated with using only one measurement point. The resulting intercepts and slopes are summarized in Table (2). The first notable observation is that \(P_{out}\) contribution from viscous stresses is very small (\(\leq 8\) Pa) both for the corn syrup as well as other non-yield stress fluids based on Carbopol. Note that the maximum \(P_{out}\) measured in our experiments is smaller than the pressure associated with the surface tension of the liquid by orders of magnitude. 
For example, for the largest needle used, \(R=0.85\)mm, the surface tension pressure is \(\approx 0.17\) kPa while \(P_{out}=0.0004-0.008\) kPa indicating a negligible impact of the viscous stresses on surface tension measurements. In addition, although some of these viscous liquids (cf. Carbopol 940 0.025 wt % and 0.05 wt % in Table S(1) of the supplementary information) have significantly different shear viscosity, the resulting \(P_{out}\) is still negligible and similar in magnitude. We conclude from these results that the impact of viscous stresses on surface tension data is negligible. This conclusion is consistent with previous findings on viscous fluids[43]. Secondly, the slope of each graph, which represents the estimated surface tension of these fluids, is listed in Table 2. In particular, for corn syrup, the measured surface tension is consistent with the value reported in the literature[58, 59]. In addition, for dilute Carbopol solutions the surface tension is very close to that measured for pure water, independent of the type and concentration of the Carbopol. The latter data are consistent with measurements of Manglik et al.[17] and Hu et al.[16] who have reported a surface tension of \(\approx\) 72.5-73 mN/m for dilute Carbopol solutions based on Carbopol 934. Finally, included in Table 2 are the measured surface tension data of these fluids by a pendant drop method (see details in Fig. S3 of the supplementary materials). The resulting surface tension values obtained by the pendant drop method are consistent with the NIC results, thereby confirming the accuracy of our NIC method for evaluating the surface tension of simple liquids.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Fluid & [wt\%] & \(P_{out}\) [kPa] & \(\gamma\) [mN/m] & \(\gamma^{*}\) [mN/m] & \(\gamma^{**}\) [mN/m] \\ \hline corn syrup & & -0.008 & 83.2 & 81\({}^{59}\) & 80.5\(\pm\)2 \\ 940 & 0.025 & -0.006 & 70 & \(--\) & 71.0\(\pm\)2 \\ & 0.05 & -0.005 & 72 & \(--\) & 72.2\(\pm\)1 \\ Ultrez\(-\)10 & 0.04 & -0.004 & 71.1 & \(--\) & 70.9\(\pm\)2 \\ & 0.05 & 0.004 & 72.1 & \(--\) & 72.2\(\pm\)3 \\ \hline \end{tabular} \end{table} Table 2: A summary of the surface tension data and fitting results for the Newtonian as well as dilute Carbopol based solutions. \({}^{*}\) denotes the surface tension data reported in the literature and \({}^{**}\) are the surface tension values obtained by a pendant drop method.

Figure 5: Critical pressure as a function of capillary tube size for non-yield stress fluids. The dashed lines indicate the best linear fit to the experimental data.

Subsequent to the above experiments, the needle induced cavitation experiments are performed on all of the yield stress fluids and the critical pressure for a wide range of needle sizes is measured. Fig. 6 shows the critical pressure as a function of capillary radius for the two yield stress systems of this paper. Included in these figures is also the best fit of Eq. 5 to the experimental data. First, the slope of the data in all experiments is similar, hinting at similar surface tension values for all of these systems over a broad range of Carbopol concentrations. Additionally, the intercept of the line fitted to these experimental data is no longer negligible and increases as the Carbopol concentration increases. Finally, the measured critical pressure data are independent of the imposed flow rate (\(Q=0.1-10\)\(\mu\)L/hr; see Fig. S4 in the supplementary materials) and the needle insertion procedure (whether the needle is gently inserted into the sample or retracted following a protocol suggested in [51]). 
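The fitting procedure used throughout these experiments can be sketched in a few lines. Assuming, as the text describes, that the critical pressure is linear in the inverse needle radius with slope \(2\gamma\) and intercept \(P_{out}\) (neglecting the near-unity correction factor \(f\)), a least-squares fit over several needle sizes recovers both quantities; the numbers below are synthetic illustrations, not the measured data:

```python
import numpy as np

# Synthetic critical-pressure data following the Young-Laplace form
# P_c = P_out + 2*gamma/R (the non-sphericity correction f ~ 1 is neglected).
gamma_true = 0.072    # surface tension [N/m] (~water, illustrative)
P_out_true = 5.0      # intercept from viscous/elastic stresses [Pa]
R = np.array([0.1, 0.2, 0.4, 0.6, 0.85]) * 1e-3   # needle radii [m]
P_c = P_out_true + 2 * gamma_true / R

# Linear fit of P_c against 1/R: slope = 2*gamma, intercept = P_out
slope, intercept = np.polyfit(1.0 / R, P_c, 1)
gamma_fit = slope / 2.0
print(f"gamma = {gamma_fit*1e3:.1f} mN/m, P_out = {intercept:.2f} Pa")
```

Fitting over many needle sizes, rather than a single point, is what reduces the sensitivity of both \(\gamma\) and \(P_{out}\) to measurement error.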
The resulting surface tension data for the two yield stress fluids are shown in Fig. 7(a). Interestingly, the surface tension does not change substantially as a function of yield stress. The average surface tension among all these yield stress fluids is approximately 70\(\pm\)3 mN/m. These results are consistent with the assumption made by Lopez et al.[21] that the surface tension of the yield stress fluids does not change as the yield stress increases. Additionally, Boujlel and Coussot [19] reported the equilibrium surface tension of the yield stress fluids to be approximately 10% less than the surface tension of pure water, which is close to our measured values as well. The difference between the data provided by Boujlel and Coussot [19] and our measured values is presumably due to the potential role of wall-slip in the experiments of Boujlel and Coussot [19].

Figure 6: Critical pressure as a function of inverse capillary size for yield stress fluids based on (a) Ultrez-10 and (b) 940. The dashed line is the best fit of Eq. 5 to the experimental data.

In addition to surface tension, we have assessed the Young modulus, \(E\), of the yield stress fluids by fitting the intercepts to Eq. 5. Fig. 7(b) shows the Young modulus as a function of shear modulus for all of the yield stress fluids considered in this study. For both systems, the Young modulus increases linearly with the shear modulus.

### Flow visualization around cavity

Fig. 4 indicates that the cavitation experiments consist of two steps. In the first stage, as the pressure increases linearly with time, a cavity forms at the tip of the needle and gradually decreases its radius of curvature. The second step is associated with the post critical pressure and consists of an instability that is characterized by sudden expansion of the cavity. Fig. 8(a) shows the temporal evolution of the pressure for the Newtonian corn syrup along with two yield stress fluids. 
The time-resolved velocity profiles for these three sample experiments are shown in Fig. 8(c-e) at different points along the pressure curves of Fig. 8(a). At times where the pressure values are below the critical pressure, and even at the critical pressure, where the radius of the cavity is the same as the needle inner radius, the flow field around the cavity is weak (see (i) in Fig. 8(c-e)). Therefore, these fluids, regardless of their rheology, do not experience a strong flow at the critical point. The latter observation gives further credence to the aforementioned assumption that at the critical point, the viscous stresses are weak and their impact on surface tension is negligible. However, post-instability (i.e., \(t-t_{c}>0\)), sudden expansion of the cavity introduces a strong flow to the surrounding fluid shortly after the critical point (see the velocity field in (ii) of Fig. 8(c-e)). Subsequently, at some point during the rapid pressure drop, the flow reaches its maximum strength (see (iii) in the velocity field of Fig. 8(c-e)). Eventually, the flow field subsides to equilibrium at longer times (see (iv) in Fig. 8(c-e)).

Figure 7: (a) Surface tension as a function of yield stress for yield stress fluids of this study. (b) Young modulus as a function of shear modulus for the yield stress fluids. The line in part (b) indicates the line of equivalence.

The above trend is observed over the range of the needle sizes used in this study both for the viscous and the yield stress fluids. The most dominant component of the fluid flow around a cavity is compression. Therefore, to characterize the strength of the compression around the cavity, we use the maximum extension rate around the cavity as a function of time. 
To this end, the volumetric extension rate in spherical coordinates, at each point in time, is defined as: \[\dot{\varepsilon}=\nabla\cdot\mathbf{U}=\frac{\partial u_{r}}{\partial r}+ \frac{1}{r}\frac{\partial u_{\theta}}{\partial\theta}+\frac{u_{r}}{r}+\frac{1} {r\sin(\theta)}\frac{\partial u_{\phi}}{\partial\phi}+\frac{u_{r}+u_{\theta} \cot(\theta)}{r}. \tag{7}\] Here \(\mathbf{U}\) is the fluid velocity and \(r\) is the instantaneous distance from the center of the bubble. Although the flow around the cavity is three dimensional, our PIV method allows us to map the 2D velocity profiles around the cavity. Additionally, because of the symmetry of the flow around the cavity, \(u_{\theta}=0\) and \(u_{\phi}=0\). Hence, \(\dot{\varepsilon}=\frac{\partial u_{r}}{\partial r}+\frac{2u_{r}}{r}\). The extension rate is obtained by first fitting a smooth function to the experimentally measured velocity profiles and then differentiating the smooth velocity function. At each point in time, the fluid around the cavity experiences a maximum extension rate, which typically occurs close to the cavity boundaries. We assess the flow strength in each of the above experiments by comparing the maximum extension rate that each fluid experiences during the cavitation process.

Fig. 8(b) shows the temporal evolution of the maximum extension rate that each fluid experiences around the cavity. Interestingly, a careful comparison between the flow fields of the Newtonian and the yield stress fluids of Fig. 8(c-e) reveals a significant difference between these cases. Despite the critical pressure being the same for the Newtonian fluid and the 0.2 wt% Carbopol solution, the velocity fields and the maximum strain rate in the yield stress fluid are stronger than in the Newtonian counterpart. This difference is presumably connected to the non-Newtonian rheology of the yield stress fluid. 
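The extraction of \(\dot{\varepsilon}=\partial u_{r}/\partial r+2u_{r}/r\) from a fitted velocity profile can be sketched numerically. The exponentially decaying profile below is purely illustrative, not the measured PIV data:

```python
import numpy as np

# Illustrative radial velocity profile u_r(r) around a cavity (a stand-in for
# a smooth function fitted to PIV data): u_r = A * exp(-r/lam).
A, lam = 1.0e-3, 0.5e-3               # amplitude [m/s], decay length [m]
r = np.linspace(0.3e-3, 3e-3, 500)    # radial positions from near the cavity [m]
u_r = A * np.exp(-r / lam)

# Extension rate for a radially symmetric field (u_theta = u_phi = 0):
# eps_dot = du_r/dr + 2*u_r/r, via numerical differentiation of the fit
eps_dot = np.gradient(u_r, r) + 2 * u_r / r
eps_max = np.max(np.abs(eps_dot))     # maximum occurs near the cavity boundary
print(f"max |extension rate| = {eps_max:.2f} 1/s")
```

For this profile the maximum sits at the smallest radius, consistent with the observation that the strongest extension occurs close to the cavity boundary.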
On the other hand, as the critical pressure for the onset of cavitation increases, the flow strength (velocity field and/or the characteristic maximum strain rate) before and at the onset of cavitation (e.g., (i) in Fig. 8(e)) is negligible and similar to those shown for the other systems in Fig. 8(c-d). However, post-instability, the flow field is much stronger in Fig. 8(e) than those shown for the other systems in Fig. 8(c,d). Taken together, these results suggest that in addition to the critical pressure, the non-Newtonian rheology of the yield stress fluids controls the detailed form of the flow structure around a cavity.

### Dimensionless analysis

Finally, we come back to analyzing the stresses involved in cavitation experiments. In principle, several forces can be involved in cavitation experiments: inertial, viscous, gravitational, elastic, yield and surface tension stresses. To assess the importance of these stresses, we use a range of dimensionless numbers. Note that the surface tension values are evaluated at the critical point, where the pressure shows a maximum. Hence, we will assess the dimensionless numbers at the critical point.

Figure 8: (a) Temporal evolution of pressure in NIC experiments measured for various fluids and needle sizes. Here \(t_{c}\) refers to the time that pressure reaches the critical value. (b) The temporal evolution of the maximum strain rate around the cavity for various fluids. Here \(t_{max}\) refers to the time that the strain rate reaches its maximum value. The maximum strain rates that the fluids experience over the course of cavitation are 18 [1/s] for corn syrup, 75 [1/s] for 0.2 wt% and 115 [1/s] for 0.5 wt% Carbopol gels. The temporal evolution of the velocity profiles is shown at different instances during cavity formation and growth. The velocity profiles correspond to corn syrup (c), 0.2 wt% Ultrez-10 (d) and 0.5 wt% Ultrez-10 (e). 
Prior to and at the critical point, the deformation of the fluid, although small, is controlled by the gas injection rate. As noted before, the injection rate used in these experiments is very small and fixed for all needle sizes at \(Q=0.3\)\(\mu\)L/hr. We can estimate a characteristic deformation rate (at the walls of the capillary tube) associated with this flow rate as \(\dot{\gamma}\approx 4Q/\pi R^{3}\approx 10^{-4}-10^{-2}\) [1/s]. The importance of inertia can be assessed using a Reynolds number defined as \(Re=\rho\dot{\gamma}d^{2}/\eta(\dot{\gamma})\). Here, \(\rho\), \(\dot{\gamma}\), \(d\) and \(\eta\) represent the density of the fluid, the characteristic deformation rate, the diameter of the capillary tube and the shear dependent viscosity of the surrounding fluid. Although inertia depends on the choice of the needle and fluid, the maximum Reynolds number for all experiments is very small (\(Re\approx 10^{-6}\)), rendering the effects of inertia negligible.

The impact of gravitational forces can be estimated by a Bond number defined as \(Bo=\Delta\rho ga^{2}/\gamma\). At the critical point, the cavity forms a hemisphere with a radius equal to the needle size. Therefore, at the critical point, the Bond number is controlled by the needle size and varies as \(Bo\approx 3\times 10^{-3}-0.1\) for all needle sizes used in this paper. Therefore, the effect of gravity on surface tension data is negligible. The latter finding is consistent with the nearly spherical bubble shapes observed in all of our experiments.

To assess the impact of viscous stresses, we first start with the cavitation experiments on Newtonian and non-yield stress fluids, which suggest that the viscous stresses are negligible compared to the surface tension effects. In experiments with the yield stress fluids, the viscous stresses, if strong enough, may plastically deform the material near the cavity. 
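These order-of-magnitude estimates can be reproduced directly from the definitions above; the fluid properties used below are illustrative assumptions, not the measured values:

```python
import numpy as np

# Order-of-magnitude estimates of the Reynolds and Bond numbers at the
# critical point, using the definitions in the text.
Q       = 0.3e-9 / 3600     # injection rate: 0.3 uL/hr converted to m^3/s
R       = 0.85e-3           # largest needle radius [m]
rho     = 1000.0            # fluid density [kg/m^3]
g       = 9.81              # gravity [m/s^2]
gamma_s = 0.072             # surface tension [N/m] (assumed water-like)
eta     = 1.0               # apparent viscosity at this rate [Pa s] (assumed)

# Characteristic wall deformation rate in the capillary: gdot ~ 4Q/(pi R^3)
gdot = 4 * Q / (np.pi * R**3)

# Reynolds number Re = rho*gdot*d^2/eta, with d = 2R the capillary diameter
Re = rho * gdot * (2 * R)**2 / eta

# Bond number Bo = drho*g*a^2/gamma, with a = R (hemispherical cap) at P_c
Bo = rho * g * R**2 / gamma_s

print(f"gdot = {gdot:.2e} 1/s, Re = {Re:.2e}, Bo = {Bo:.2e}")
```

Even for the largest needle and a modest viscosity, \(Re\) is many orders of magnitude below unity, while \(Bo\) reaches only about 0.1, matching the ranges quoted in the text.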
To assess the relative importance of viscous and yield stresses at the critical point, we employ a Bingham number defined as \(Bi=\sigma_{y}/\tau_{v}\), where the viscous stress is \(\tau_{v}=\eta(\dot{\gamma})\dot{\gamma}\). At the critical point, the Bingham number varies as \(Bi\approx 0.9-0.97\), indicating that viscous stresses can barely deform the yield stress fluid around the cavity at the critical point. This result suggests that in experiments with the yield stress fluids, the medium should experience a very weak deformation in the vicinity of the cavity and the elastic response of the surrounding material should be dominant. In fact, in fitting the experimental data of Fig. 6(a,b) to Eq. (5), we noticed that the pressure contribution from the plastically deformed zone near the surface of the cavity (indicated by \(P_{out}|_{1}\)) is much smaller than the elastic resistance of the surrounding fluid (\(P_{out}|_{1}\ll P_{out}|_{2}\)). The latter result is consistent with the above hypothesis that the viscous stresses are not strong enough to significantly deform the yield stress fluids at the critical pressure and the elastic response of the yield stress material is dominant (i.e., \(\tau_{v}/E\ll 1\) or \(\sigma_{y}/E\ll 1\)).

## V Conclusion

In summary, we have performed needle induced cavitation experiments to assess the surface tension, Young modulus and the detailed form of the flow structure around the cavity in a broad range of yield stress fluids. The findings of this study can be summarized as follows: First, the measured surface tension values for Carbopol based yield stress fluids are close to the surface tension of pure water over a wide range of yield stress values (\(\sigma_{y}=0.5-120\) Pa). Secondly, we demonstrated that the NIC technique can be successfully used to measure Young moduli as small as 10 Pa for yield stress gels. 
Thirdly, our flow visualization experiments revealed that for \(P\leq P_{c}\), the fluid is barely perturbed by viscous stresses. However, post-instability, the strength of the flow increases up to a local maximum in strain rate (or deformation rate) before it subsides gradually towards equilibrium at longer times. Finally, our results show that the flow strength post-instability is controlled by the critical pressure as well as the non-Newtonian rheology of the yield stress fluids. Although we performed NIC experiments in a range of yield stress fluids with \(\sigma_{y}=0.5-120\) Pa, this method (and the assessment of surface tension) is by no means limited to this range of yield stress values and may be adopted for stiffer gels. Moreover, the impact of this work goes beyond the evaluation of the surface tension of yield stress materials. In fact, a broad range of biological tissues and cells are soft, and assessing their Young modulus requires highly sensitive and sophisticated methods such as AFM or nano-indentation that often generate complexity and ambiguity in the obtained results [60; 61]. The results of this work lend credence to the NIC method as a fast and particularly straightforward technique that can be used for measuring the mechanical properties of biological samples [62]. This will be the focus of our future work.

## VI Supporting Information

Further information about the rheological properties of the fluids and the cavitation rheology experiments is provided in the supplementary materials.

## VII Acknowledgement

The authors are grateful to Anas Al-Humiari, Scott Hannahs and Richard Crisler for their help in implementing the NIC device. HM is grateful to Philippe Coussot, Randy Ewoldt and David Venerus for several helpful discussions.

## VIII Appendix: Plastic Response of Expanding Bubble in a Yield Stress Environment

The calculation of the plastic response of an expanding bubble is based on the theoretical formulation by Bishop _et al._[63]. 
We assume the surrounding material extends between the internal radius \(R_{0}\) (here the effective radius of the bubble as it starts to penetrate the surrounding elastic medium) and a large external radius \(R_{\infty}\), as shown in Fig. 9a. The inner surface is under a uniform pressure of \(P_{out}\), while for a very large medium, we assume that there is no pressure on the external surface. Because of the spherical symmetry of the problem, a stress-based solution is used to formulate the problem when the material remains elastic. The nonzero stress components are the radial stress \(\sigma_{r}\) and the hoop stresses \(\sigma_{\theta}=\sigma_{\phi}\). In the absence of body forces, the equilibrium equation in the radial direction is \[\frac{d\sigma_{r}}{dr}+\frac{2}{r}\left(\sigma_{r}-\sigma_{\theta}\right)\,=\,0. \tag{8}\] In addition, the only displacement component is in the radial direction, \(u=u(r)\), and the corresponding strain components in spherical polar coordinates are \[\epsilon_{rr}=\frac{du}{dr},\hskip 28.452756pt\epsilon_{\theta\theta}\,=\, \epsilon_{\phi\phi}\,=\,\frac{u}{r}. \tag{9}\] For a linearly elastic medium with Young modulus \(E\) and Poisson ratio \(\nu\), the stress-strain relations can be written as \[\epsilon_{rr}=\frac{1}{E}(\sigma_{r}-2\nu\sigma_{\theta}),\hskip 28.452756pt \epsilon_{\theta\theta}=\frac{1}{E}\left[\sigma_{\theta}-\nu(\sigma_{r}+ \sigma_{\theta})\right], \tag{10}\] and combine to form the Beltrami-Michell compatibility relationship \[\frac{d}{dr}\left(\sigma_{r}+2\sigma_{\theta}\right)\,=\,0. \tag{11}\] Eqs. 
8 and 11, along with the boundary conditions, can be solved to find the Lame solution in spherical polar coordinates as[64] \[\sigma_{r}\,=\,-P_{out}\left(\frac{R_{\infty}^{3}}{r^{3}}-1\right)/\left( \frac{R_{\infty}^{3}}{R_{0}^{3}}-1\right) \tag{12}\] \[\sigma_{\theta}\,=\,\sigma_{\phi}\,=\,P_{out}\left(\frac{R_{\infty}^{3}}{2r^{ 3}}+1\right)/\left(\frac{R_{\infty}^{3}}{R_{0}^{3}}-1\right) \tag{13}\] where \(r\) is the radial distance. Here, \(\sigma_{r}\) is a compressive stress, \(\sigma_{\theta}\) is a tensile stress, and \(|\sigma_{r}|>|\sigma_{\theta}|\). Equivalently, we can represent the spherical stress field as the summation of a hydrostatic isotropic pressure of \(-\sigma_{\theta}\) (equivalently a hydrostatic tension of \(\sigma_{\theta}\)) and a uni-axial compressive stress in the radial direction, zero in the other directions, of \((\sigma_{\theta}-\sigma_{r},0,0)\). This stress condition simplifies the von Mises yield condition of the elastic material to[65] \[\sigma_{\theta}-\sigma_{r}\,=\,\sigma_{y} \tag{14}\] From Equations 12, 13 and 14, we find that the pressure corresponding to the onset of the yield condition in linear elastic perfectly plastic materials is \[P_{out,\sigma_{y}}\,=\,\frac{2\sigma_{y}}{3}\left(1-\frac{R_{0}^{3}}{R_{\infty}^ {3}}\right) \tag{15}\] For an infinite domain, we have \(\frac{R_{0}}{R_{\infty}}\to 0\) and the inner layer of the medium near the surface yields at a pressure equal to two-thirds of the yield stress. Given the typical \(\sigma_{y}\) values of Carbopol gels, one can find that in almost all cases the material passes through its purely elastic response regime very quickly at the beginning of bubble creation. Afterward, we have an elastic-perfectly plastic response. In this case, Eqs. 8 and 11 are used to find the stress components in the elastic region, while in the plastic region, the modified system of equations, Eqs. 8 and 14, is employed. 
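As a sanity check (not part of the original derivation), the Lame solution can be verified numerically against the equilibrium equation, and the yield-onset pressure of Eq. 15 evaluated; the parameter values below are illustrative:

```python
import numpy as np

# Verify that Eqs. 12-13 satisfy the radial equilibrium equation (Eq. 8),
# and evaluate the yield-onset pressure of Eq. 15.
P_out, R0, Rinf = 100.0, 1.0e-3, 50.0e-3   # pressure [Pa], inner/outer radii [m]
sigma_y = 10.0                              # yield stress [Pa] (assumed)
r = np.linspace(1.5e-3, 40e-3, 100)

denom  = Rinf**3 / R0**3 - 1.0
sig_r  = -P_out * (Rinf**3 / r**3 - 1.0) / denom          # Eq. 12
sig_th =  P_out * (Rinf**3 / (2 * r**3) + 1.0) / denom    # Eq. 13

# Analytic derivative of Eq. 12; the equilibrium residual of Eq. 8,
# d(sig_r)/dr + (2/r)*(sig_r - sig_th), should vanish to machine precision
dsig_r_dr = 3.0 * P_out * Rinf**3 / (denom * r**4)
residual  = dsig_r_dr + (2.0 / r) * (sig_r - sig_th)
print("max equilibrium residual:", np.max(np.abs(residual)))

# Eq. 15: yield-onset pressure; approaches 2*sigma_y/3 as R0/Rinf -> 0
P_yield = (2 * sigma_y / 3) * (1 - R0**3 / Rinf**3)
print(f"P_out at yield onset = {P_yield:.4f} Pa (2*sigma_y/3 = {2*sigma_y/3:.4f})")
```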
To find the final solution, the continuation of the radial stress is used, and with the assumption that the yield surface is placed at a radius \(R_{c}\) (Fig. 9b), we can find the stress fields inside and outside the plastic region. In the elastic region (\(R_{c}\leq r\)), \[\sigma_{r}\,=\,-\frac{2\sigma_{y}}{3}\left(\frac{R_{c}^{3}}{r^{3}}\right),\hskip 28.452756pt\sigma_{\theta}\,=\,\frac{\sigma_{y}}{3}\left(\frac{R_{c}^{3}}{r^{3}}\right), \tag{16}\] while in the plastic region (\(R\leq r\leq R_{c}\), with \(R\) the current cavity radius), the yield condition of Eq. 14 together with the equilibrium equation of Eq. 8 gives \[\sigma_{r}\,=\,-P_{out}+2\sigma_{y}\ln\left(\frac{r}{R}\right),\hskip 28.452756pt\sigma_{\theta}\,=\,\sigma_{r}+\sigma_{y}. \tag{17}\] Continuity of the radial stress at \(r=R_{c}\) then yields the cavity pressure \[P_{out}\,=\,\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{R_{c}^{3}}{R^{3}}\right)\right\}. \tag{18}\] We now consider the limiting case of a spherical bubble that is expanded from zero radius. 
In this case, we can show that \(R_{c}/R\) should remain constant based on self-similarity, and it takes the form[65] \[\frac{R_{c}}{R}\,=\left(\frac{E}{3(1-\nu)\sigma_{y}}\right)^{\frac{1}{3}} \tag{19}\] and therefore we can rewrite equation 18 as \[P_{out}\,=\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{E}{3(1-\nu)\sigma_{y}} \right)\right\} \tag{20}\] Assuming the Poisson ratio of the incompressible medium is \(\nu=0.5\), we find that \(P_{out}\) is a constant, independent of the bubble radius, equal to \[P_{out}\,=\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{2E}{3\sigma_{y}}\right)\right\} \tag{21}\] Given that we assume here that the bubble expands from a zero initial radius, the above solution should be regarded as an upper limit for the expansion of a bubble from a finite-size needle in a perfectly plastic regime. In the above model, we assumed that the stress level cannot exceed a fixed yield limit. However, for many polymeric media, there is still elastic stress even after the yield condition is met[66], especially if the material is confined and cannot undergo large permanent deformation. Equivalently, it is possible to define the terminal steady-state stress in the post-yield condition as[65] \[\sigma\,=\,\sigma_{y}+h_{\sigma}(\epsilon) \tag{22}\] where \(h_{\sigma}\) is a function expressing the change of total stress as a function of the logarithmic total strain \(\epsilon\). Here, the material undergoes strain-hardening or work-hardening and the von Mises yield condition of Eq. 14 is modified to[65] \[\sigma_{\theta}-\sigma_{r}\,=\,\sigma_{y}\,+\,h_{\sigma}\left\{2\ln\left(\frac{r}{r_{0 }}\right)-\frac{1-2\nu}{E}\left(\sigma_{r}+\sigma_{\theta}\right)\right\} \tag{23}\] where \(r_{0}\) is the initial radial position of the element. If the medium is incompressible (\(\nu=\frac{1}{2}\)), the equilibrium condition of Eq. 
8 can be simplified as \[\frac{d\sigma_{r}}{dr}\,=\,\frac{2}{r}\left(\sigma_{\theta}-\sigma_{r}\right) \,=\,\frac{2\sigma_{y}}{r}+\frac{2h_{\sigma}[2\ln(r/r_{0})]}{r} \tag{24}\] If we assume that the bubble is expanded from zero radius in an infinite medium, we can use the self-similarity property of the spherical shape and scale spatial lengths based on the current size of the bubble, \(R\). By adopting this condition and solving Eq. 24, a revised relation for \(P_{out}\) can be found in the form of \[P_{out}\,=\frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{2E}{3\sigma_{y}} \right)\right\}+2\int_{1}^{\left(\frac{2E}{3\sigma_{y}}\right)^{\frac{1}{3}}} \,h_{\sigma}\left\{\frac{2}{3}\ln\left(\frac{t^{3}}{t^{3}-1}\right)\right\} \frac{dt}{t} \tag{25}\] where for the linear function \(h_{\sigma}(\epsilon)=E\epsilon\) (when the post-yield stress-strain curve is represented with a slope of \(E\))[63], the relation can be simplified to \[P_{out} = \frac{2\sigma_{y}}{3}\left\{1+\ln\left(\frac{2E}{3\sigma_{y}} \right)\right\}+\frac{2\pi^{2}}{27}E \tag{26}\]
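The closed-form results above lend themselves to a quick numerical sketch. The block below evaluates Eq. 21 for assumed (illustrative) values of \(\sigma_{y}\) and \(E\), inverts it to recover \(E\) from a given intercept, and checks the hardening constant in Eq. 26: with \(h_{\sigma}(\epsilon)=E\epsilon\), the substitutions \(u=t^{3}\) and \(v=1/u\) reduce the integral in Eq. 25, in the limit of large \(E/\sigma_{y}\), to \((4E/9)\int_{0}^{1}-\ln(1-v)\,dv/v=(4E/9)(\pi^{2}/6)=2\pi^{2}E/27\).

```python
import math

# Elastic-perfectly-plastic cavity pressure (Eq. 21) for illustrative values
sigma_y = 10.0     # yield stress [Pa] (assumed)
E       = 200.0    # Young modulus [Pa] (assumed)
P_out = (2 * sigma_y / 3) * (1 + math.log(2 * E / (3 * sigma_y)))

# Inverting Eq. 21 recovers E from a measured pressure intercept
E_rec = (3 * sigma_y / 2) * math.exp(3 * P_out / (2 * sigma_y) - 1)

# Hardening constant of Eq. 26: Int_0^1 -ln(1-v)/v dv = pi^2/6, evaluated
# via the series sum_{k>=1} 1/k^2 (truncated at N terms)
dilog_1 = sum(1.0 / k**2 for k in range(1, 200000))
correction = (4 * E / 9) * dilog_1          # -> 2*pi^2*E/27

print(f"P_out = {P_out:.2f} Pa, recovered E = {E_rec:.1f} Pa")
print(f"hardening term = {correction:.3f} vs 2*pi^2*E/27 = {2*math.pi**2*E/27:.3f}")
```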
2304.04830
Spherical Harmonics for the 1D Radiative Transfer Equation II: Thermal Emission
Approximate methods to estimate solutions to the radiative transfer equation are essential for the understanding of atmospheres of exoplanets and brown dwarfs. The simplest and most popular choice is the "two-stream method" which is often used to produce simple yet effective models for radiative transfer in scattering and absorbing media. Toon et al. (1989) (Toon89) outlined a two-stream method for computing reflected light and thermal spectra and was later implemented in the open-source radiative transfer model PICASO. In Part~I of this series, we developed an analytical spherical harmonics method for solving the radiative transfer equation for reflected solar radiation (Rooney et al. 2023), which was implemented in PICASO to increase the accuracy of the code by offering a higher-order approximation. This work is an extension of this spherical harmonics derivation to study thermal emission spectroscopy. We highlight the model differences in the approach for thermal emission and benchmark the 4-term method (SH4) against Toon89 and a high-stream discrete-ordinates method, CDISORT. By comparing the spectra produced by each model we demonstrate that the SH4 method provides a significant increase in accuracy, compared to Toon89, which can be attributed to the increased order of approximation and to the choice of phase function. We also explore the trade-off between computational time and model accuracy. We find that our 4-term method is twice as slow as our 2-term method, but is up to five times more accurate, when compared with CDISORT. Therefore, SH4 provides excellent improvement in model accuracy with minimal sacrifice in numerical expense.
Caoimhe M. Rooney, Natasha E. Batalha, Mark S. Marley
2023-04-10T19:37:53Z
http://arxiv.org/abs/2304.04830v1
# Spherical Harmonics for the 1D Radiative Transfer Equation II: Thermal Emission

###### Abstract

Approximate methods to estimate solutions to the radiative transfer equation are essential for the understanding of atmospheres of exoplanets and brown dwarfs. The simplest and most popular choice is the "two-stream method" which is often used to produce simple yet effective models for radiative transfer in scattering and absorbing media. Toon et al. (1989) (Toon89) outlined a two-stream method for computing reflected light and thermal spectra and was later implemented in the open-source radiative transfer model PICASO. In Part I of this series, we developed an analytical spherical harmonics method for solving the radiative transfer equation for reflected solar radiation (Rooney et al., 2023), which was implemented in PICASO to increase the accuracy of the code by offering a higher-order approximation. This work is an extension of this spherical harmonics derivation to study thermal emission spectroscopy. We highlight the model differences in the approach for thermal emission and benchmark the 4-term method (SH4) against Toon89 and a high-stream discrete-ordinates method, CDISORT. By comparing the spectra produced by each model we demonstrate that the SH4 method provides a significant increase in accuracy, compared to Toon89, which can be attributed to the increased order of approximation and to the choice of phase function. We also explore the trade-off between computational time and model accuracy. We find that our 4-term method is twice as slow as our 2-term method, but is up to five times more accurate, when compared with CDISORT. Therefore, SH4 provides excellent improvement in model accuracy with minimal sacrifice in numerical expense.

Radiative transfer (1335) -- Radiative transfer equation (1336)

Caoimhe M. Rooney, Natasha E. Batalha, Mark S. 
Marley

## 1 Introduction

Studying the atmospheres of planets and substellar objects relies on computationally efficient methods to solve the radiative transfer equation in scattering and absorbing media. However, exact solutions typically do not exist. Scientists rely on approximate, parameterized methods to estimate solutions, but inclusion of intricate microphysical detail can render these models computationally intractable for useful applications (Stephens & Preisendorfer, 1984; Thomas & Stamnes, 2002; Chandrasekhar, 1960; Liou, 2002). The goal of radiative transfer parameterization in numerical models for exoplanet and brown dwarf atmospheres is to provide computationally efficient yet accurate methods to calculate radiative fluxes and heating rates (Stephens, 1984). The approach to solving the radiative transfer equation is frequently chosen through a balance between accuracy and computational efficiency. In many practical cases, there are significant uncertainties associated with defining characteristics of the atmosphere, such as the composition, scattering phase function and opacities. Such uncertainties often dominate over small model errors in the solution, and therefore, obtaining a computationally efficient solution often takes precedence. The most popular approximate methods for solving the radiative transfer equation are the (1) discrete-ordinates method (Chandrasekhar, 1960; Stamnes et al., 1988, 2000), (2) Monte-Carlo method (Modest, 2013; Iwabuchi, 2006) and (3) spherical harmonics method (Modest, 1989, 2013; Olfe, 1967; van Wijngaarden & Happer, 2022). The general approach of the discrete-ordinates method (DOM) is to discretize the solid angle by a finite number \(L\) of directions or "streams", along which the radiative intensities are tracked. 
DISORT (Stamnes et al., 1988, 2000) is an example of a discrete ordinate algorithm for radiative transfer that is capable of simulating thermal emission, absorption, and scattering for arbitrary phase functions across the electromagnetic spectrum. The convergence of DOM can deteriorate for optically thick media (Modest, 2013; Fiveland & Jessee, 1996; Lewis & Miller, 1984); however, there exist a number of acceleration schemes to improve the convergence rate of DOM (Fiveland & Jessee, 1996; Lewis & Miller, 1984). Monte-Carlo methods track emitted photons throughout the medium; although widely accepted to be accurate, they are computationally taxing, which makes them unsuitable for some applications (Iwabuchi, 2006; Mayer, 2009). The spherical harmonics (SH) approximation, denoted \(P_{L-1}\), operates by expanding the intensity and phase function into a series of \(L\) spherical harmonics, or Legendre polynomials. This decouples spatial and directional dependencies. This method involves fewer equations than DOM and is potentially more accurate with comparable computational expense, but higher order expansions are mathematically complex and increasingly difficult to implement as \(L\) increases (Ge et al., 2015; van Wijngaarden & Happer, 2022). Such models have been studied for reflected solar radiation (e.g., most recently in our Part I, Rooney et al., 2023). For example, the two-stream discrete-ordinates (\(L=2\)) and two-term spherical harmonics (\(P_{1}\)) techniques are often used to provide simple yet effective models for atmospheric radiative transfer and are widely considered some of the simplest and most prolific approximations (Meador & Weaver, 1980; King, 1986; Chandrasekhar, 1960; Mihalas & Mihalas, 2013; van Wijngaarden & Happer, 2022; Li & Ramaswamy, 1996; Zhang & Li, 2013). Two-stream methods are most useful for obtaining angle-averaged quantities such as heating rates and albedos (Schuster, 1905; Meador & Weaver, 1980; Heng & Marley, 2018).
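The truncation idea behind the \(P_{L-1}\) approximation can be sketched numerically (a generic illustration, not the paper's code): expand a smooth angular function in Legendre polynomials and compare a two-term truncation against a four-term one.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Legendre coefficients c_l = (2l+1)/2 * int_{-1}^{1} f(mu) P_l(mu) dmu,
# evaluated by Gauss-Legendre quadrature.
def legendre_coeffs(f, lmax, n_quad=64):
    x, w = leg.leggauss(n_quad)
    return np.array([(2 * l + 1) / 2.0
                     * np.sum(w * f(x) * leg.legval(x, np.eye(lmax + 1)[l]))
                     for l in range(lmax + 1)])

def truncated(f, lmax, mu):
    # Reconstruct f from its first lmax+1 Legendre terms.
    return leg.legval(mu, legendre_coeffs(f, lmax))

f = lambda mu: np.exp(mu)                 # smooth test function of angle
mu = np.linspace(-1.0, 1.0, 201)
err2 = np.max(np.abs(truncated(f, 1, mu) - f(mu)))   # "two-term" truncation
err4 = np.max(np.abs(truncated(f, 3, mu) - f(mu)))   # "four-term" truncation
```

For smooth functions the four-term truncation is dramatically more accurate than the two-term one, mirroring the accuracy gains reported below when moving from SH2 to SH4.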
These studies of reflected solar radiation have shown that even though the two-stream methods are computationally preferable, they are often unsuitable for certain physical conditions. For example, non-physical solutions to the two-stream method are obtained for the case of a collimated incident beam (Meador & Weaver, 1980). However, there exist model adjustments to correct for these limitations in the two-stream method. In particular, applying the delta (\(\delta\))-adjustment to the two-stream technique improves the accuracy of radiative flux calculations by taking into account strong forward scattering due to large particles (Joseph et al., 1976; Wiscombe, 1977; King, 1986; Liou et al., 1988). Though these corrections often help to improve accuracy, it has also been shown that improved accuracy can be achieved by considering four-stream approximations (Liou, 1974; Liou et al., 1988; Cuzzi et al., 1982; Rooney et al., 2023). Four-stream methods are also still able to leverage the \(\delta\)-approximation to allow for strong forward scattering. However, an increase in the order of approximation comes with the penalty of an increase in computational expense. Four-stream approximations are significantly more efficacious in cases of non-isotropic scattering and can even be used in general circulation models (Liou et al., 1988; Heng & Marley, 2018). Their performance has been explored for both homogeneous and inhomogeneous atmospheres in reflected solar radiation (Liou, 1973; Liou et al., 1988; Fu, 1991; Shibata & Uchiyama, 1992). As well as solar radiation, these approximations can also be applied to study scattering in the presence of thermal emission (Mihalas, 1978; Toon et al., 1989; Fu et al., 1997), which is the focus of this work. 
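The \(\delta\)-adjustment mentioned above can be sketched as follows; the scalings below are the standard \(\delta\)-Eddington forms of Joseph et al. (1976) with truncation fraction \(f=g_0^2\), shown for illustration and not necessarily the exact adjustment used in any particular code.

```python
def delta_adjust(tau, w0, g0):
    """Delta-scale layer optical properties (delta-Eddington style),
    moving a forward-scattering fraction f = g0**2 into the direct beam."""
    f = g0 ** 2
    tau_s = (1.0 - w0 * f) * tau            # scaled optical depth
    w0_s = (1.0 - f) * w0 / (1.0 - w0 * f)  # scaled single-scattering albedo
    g0_s = (g0 - f) / (1.0 - f)             # scaled asymmetry parameter
    return tau_s, w0_s, g0_s

tau_s, w0_s, g0_s = delta_adjust(tau=1.0, w0=0.9, g0=0.8)
```

The scaled layer is optically thinner and less forward-scattering, which is what allows low-order stream methods to cope with strongly peaked phase functions.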
Infrared scattering is essential to understand emission spectra of exoplanets and brown dwarfs (Taylor et al., 2021), due to the ubiquity of clouds in atmospheres (Marley et al., 2013; Gao & Powell, 2021) and the defining role they play in sculpting thermal emission. Specifically, the thermal emission for some classes of extrasolar planets and brown dwarfs (e.g., the L dwarfs) arises, in some wavelengths, from within the scattering, absorbing cloud layers. Therefore, in those cases it is particularly important to treat the radiative transfer within the cloud as carefully as possible. Additionally, the radiative-equilibrium temperature structure of an atmosphere (e.g., Mukherjee et al., 2023) depends upon the difference between upwards and downwards incident and emergent fluxes. Overall, an accurate treatment within the scattering cloud decks is required to have confidence in computed thermal profiles. Two-stream methods have been implemented for infrared thermal radiation, such as the well-known work of Toon et al. (1989), who derived a general two-stream solution for the upward and downward fluxes within a single homogeneous layer. By considering continuity of flux across a number of stacked, homogeneous layers, the single-layer solution is extended to a multi-layer atmosphere. The final solution is obtained through the two-stream source function technique, with the source function written in terms of the two-stream intensity. As a side note, related to the two-stream technique are popular analytical calculations of temperature-pressure profiles to further understand the thermal atmospheric structure of atmospheres (Hubeny et al., 2003; Hansen et al., 2008; Guillot, 2010; Heng & Kopparla, 2012; Heng & Showman, 2015; Robinson & Catling, 2012; Parmentier & Guillot, 2014). The Toon et al. (1989) methodology has in particular been utilized extensively for the study of planetary and substellar atmospheres (e.g. 
McKay et al., 1989; Marley et al., 1999; Burrows et al., 1997; Fortney et al., 2005; Marley et al., 2021) and this implementation is available in the open-source Python code PICASO (Batalha et al., 2022). This approach, however, is currently limited to two-stream approximations. Despite its usefulness in radiative transfer calculations due to its simplicity and ease of implementation, Toon et al. (1989) reported that relative errors in the emissivity calculated by such approaches can be as much as 10% in optically thin cases. In addition to the potential errors reported by Toon et al. (1989), it has also been shown that increasing the approximations to four streams improves the accuracy of the infrared models, similar to the reflected solar case (Fu et al., 1997; Liou et al., 1988; Lin et al., 2013). Fu et al. (1997) applied the \(\delta\)-two and four-stream discrete-ordinates method (Chandrasekhar, 1960; Stamnes et al., 1988, 2000) to solve the infrared radiative transfer equation in a vertically inhomogeneous atmosphere. By comparing the approximations to the high-order \(\delta\)-129-stream model, the authors found that the \(\delta\)-two-stream scheme can produce acceptable results under most atmospheric conditions, but suffers from large errors for small optical depth. The \(\delta\)-four-stream method yields high accuracy in radiative fluxes and heating rates under all atmospheric conditions considered; however, the authors acknowledge a significant increase in computational cost. Zhang et al. (2016) also investigated \(\delta\)-two and four-stream discrete-ordinates for infrared radiative transfer, demonstrating an analytical approach. By comparing the methods to a \(\delta\)-64-stream DOM method, the authors similarly conclude that the four-stream method outperforms the two-stream method, particularly for small optical depths, reporting relative errors as high as 15% for two-stream versus 2% for four-stream.
These studies motivated the development of an analytical spherical harmonics method for solving the radiative transfer equation. The first component, for solar radiation, was published recently in Rooney et al. (2023). In a similar vein to the Toon et al. (1989) methodology, Rooney et al. (2023) derived and solved a system of equations for the upwards and downwards fluxes at every layer of the atmosphere, with the critical difference of a \(\delta\)-adjusted four-term spherical harmonics (\(P_{3}\)) approximation in place of Toon's two-stream approach. By applying the source-function technique to calculate the azimuthally averaged intensity emerging from the top of a vertically inhomogeneous atmosphere, we can compute the spectrum for clear and cloudy planets or brown dwarfs. Though the spherical harmonics approaches for reflected light and thermal emission are largely identical, the primary differences lie in the source terms, boundary conditions and the related applications. Therefore, the present work is an extension to the derivation in Rooney et al. (2023), namely, applying the spherical harmonics model to thermal emission spectroscopy. We aim to make this manuscript easily cross-referenced with the numerical method implemented within the PICASO source code. Throughout the manuscript, we refer the reader to Rooney et al. (2023) for more intricate detail on the derivation of the spherical harmonics method for 1D radiative transfer, when necessary. Here, we include only the key mathematical expressions that define the thermal emission model. We have also included persistent hyperlinks that redirect the reader to the relevant lines of code (stored on GitHub) corresponding to the relevant mathematical expression. We outline this work as follows: in Sections 2 and 3, we briefly explain the derivation of the spherical harmonics (SH) method for thermal radiation.
As aforementioned, the SH methods for reflected light and thermal emission are largely identical, with the exception of the source term and boundary conditions. In this section, we focus on the differences incurred by considering the thermal source term and relevant boundary conditions. We consider both two and four-stream approximations for plane-parallel atmospheres of many layers, where we apply the source-function technique to handle the multi-layer aspect of the model. In Section 4, we compare the two and four-term spherical harmonics models and the Toon et al. (1989) approach implemented in PICASO to a 16- and 32-stream discrete-ordinates method, CDISORT, to illustrate the accuracy gains from increasing the number of streams from two to four. We also explore the impact of this order increase on computational time, and discuss the timing-accuracy trade-off that might be considered when choosing a model.

## 2 Solving the radiative transfer equation using spherical harmonics

We wish to use the spherical harmonics technique to solve the azimuthally-averaged, one-dimensional radiative transfer equation: \[\mu\frac{\partial I}{\partial\tau}(\tau,\mu)=I(\tau,\mu)-\frac{w_{0}}{2}\int_{-1}^{1}I(\tau,\mu^{\prime})P(\mu,\mu^{\prime})\mathrm{d}\mu^{\prime}-2\pi(1-w_{0})B(T), \tag{1}\] where the location within the atmosphere is specified by \(\tau\in[0,\tau_{N}]\) (where \(\tau_{N}\) is the cumulative optical depth), \(I\) is the azimuthally averaged intensity, \(w_{0}\) is the single scattering albedo, \(B(T)\) is the Planck function at temperature \(T\), and \(P(\mu,\mu^{\prime})\) is the azimuthally averaged scattering phase function. We note the similarities between the radiative transfer equation for thermal emission (1) and that for reflected light, outlined in Rooney et al. (2023).
The difference lies in the final term on the right-hand side, the source term \(S(T)\), defined as \[S(T)=\begin{cases}2\pi(1-w_{0})B(T),&\text{thermal emission},\\ \frac{w_{0}}{4\pi}F_{\odot}e^{-\frac{\tau}{\mu_{0}}}P(\mu,-\mu_{0}),&\text{reflected light}.\end{cases} \tag{2}\] We emphasise that all other terms in the azimuthally-averaged, one-dimensional radiative transfer equation (1) are identical for reflected light and thermal emission. This allows us to largely follow the spherical harmonics model derivation outlined in Rooney et al. (2023) for reflected light, with a few modifications to allow for the different source term. We will highlight these differences throughout this work. However, we refer the reader to Rooney et al. (2023) for a more in-depth discussion of the general model derivation. By expanding the phase function and intensity in terms of Legendre polynomials \(P_{l}\), up to given order \(L\): \[P(\mu,\mu^{\prime})=\sum_{l=0}^{L}\chi_{l}P_{l}(\mu)P_{l}(\mu^{\prime}), \tag{3}\] \[I(\tau,\mu)=\sum_{l=0}^{L}(2l+1)I_{l}(\tau)P_{l}(\mu), \tag{4}\] where the coefficients \(\chi_{l}\) of the phase function expansion can be determined from the orthogonality property of Legendre polynomials (Liou, 2002): \[\chi_{l}=\frac{2l+1}{2}\int_{-1}^{1}P(\cos\Theta)P_{l}(\cos\Theta)\mathrm{d}\cos\Theta, \tag{5}\] we can substitute (3) and (4) into (1) and use both the orthogonality property and recursion relation of Legendre polynomials to obtain \[\sum_{l=0}^{L}\left[(l+1)\frac{\mathrm{d}I_{l+1}}{\mathrm{d}\tau}+l\frac{\mathrm{d}I_{l-1}}{\mathrm{d}\tau}\right]P_{l}(\mu)=\sum_{l=0}^{L}[a_{l}I_{l}(\tau)-b_{l}\delta_{0l}]P_{l}(\mu). \tag{6}\] Here, \(\delta_{0l}\) is the Kronecker delta (\(\delta_{0l}=1\) for \(l=0\), and \(0\) otherwise) and \[a_{l}=(2l+1)-w_{0}\chi_{l}, \tag{7}\] \[b_{l}=2\pi(1-w_{0})B(T), \tag{8}\] for \(l=0,\cdots,L\).
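As a concrete check of (5), the expansion moments of the Henyey-Greenstein phase function (used later in Section 4) can be computed numerically; analytically they satisfy \(\chi_l=(2l+1)g_0^l\).

```python
import numpy as np

# Numerical check of the phase-function moments (5) for Henyey-Greenstein,
# whose moments are known in closed form: chi_l = (2l+1) * g0**l.
def hg(cos_theta, g0):
    return (1 - g0**2) / (1 + g0**2 - 2 * g0 * cos_theta) ** 1.5

def chi(l, g0, n_quad=200):
    x, w = np.polynomial.legendre.leggauss(n_quad)       # nodes in cos(Theta)
    Pl = np.polynomial.legendre.legval(x, np.eye(l + 1)[l])  # P_l(x)
    return (2 * l + 1) / 2.0 * np.sum(w * hg(x, g0) * Pl)

g0 = 0.7
chis = np.array([chi(l, g0) for l in range(4)])
expected = np.array([(2 * l + 1) * g0**l for l in range(4)])
```

This is the kind of quadrature a four-term method truncates after \(l=3\), retaining \(\chi_0,\dots,\chi_3\).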
Here, \(a_{l}\) is identical to that derived for reflected light (Rooney et al., 2023), whereas the expressions for \(b_{l}\) differ. This is because the source term (2) is relevant only for the \(b_{l}\) terms. Thus, any analysis involving only the \(a_{l}\) terms and not the \(b_{l}\) terms will be identical for reflected light and thermal emission. We assume that the Planck function within a single layer, \(B(T)\), can be represented as a Taylor series expansion (as done in Toon et al., 1989), namely \[B(T(\tau))=B_{0}+B_{1}\tau, \tag{9}\] where \(B_{0}\) is the Planck function evaluated at \(\tau=0\) (or the top of the layer) and \(B_{1}\) is related to the Planck function at temperature \(T_{\mathrm{bot}}\) at the bottom of the layer \(\tau_{N}\): \[B_{1}=\frac{B(T_{\mathrm{bot}})-B_{0}}{\tau_{N}}. \tag{10}\]

### \(P_{1}\) Multiple Layers

For clarity and to demonstrate the spherical harmonics methodology, Rooney et al. (2023) began with an atmosphere consisting of a single horizontally homogeneous layer, before extending the analysis to the more practical case of multiple layers. Here, we proceed immediately to the multiple-layer solution. Let us first study the two-stream spherical harmonics problem, denoted \(P_{1}\), where \(L=1\) represents the highest Legendre polynomial in the expansion. Consider an atmosphere consisting of \(N\) horizontally homogeneous layers, where layer \(n\) is characterized by single scattering albedo \(w_{0,n}\), asymmetry parameter \(g_{0,n}\) and optical thickness \(\partial\tau_{n}=\tau_{n}-\tau_{n-1}\) for \(n=1,\ldots,N\). To solve the radiative transfer equation (1) in the \(n^{\text{th}}\) layer we rescale the optical depth as \[\hat{\tau}=\tau-\tau_{n-1},\qquad\hat{\tau}\in[0,\partial\tau_{n}]. \tag{11}\] Dropping the hats, we continue with the solutions within layer \(n\) for \(\tau\in[0,\partial\tau_{n}]\).
We can formulate (6) as a matrix system within layer \(n\): \[\frac{\mathrm{d}}{\mathrm{d}\tau}\begin{pmatrix}I_{0,n}\\ I_{1,n}\end{pmatrix}=\begin{pmatrix}0&a_{1,n}\\ a_{0,n}&0\end{pmatrix}\begin{pmatrix}I_{0,n}\\ I_{1,n}\end{pmatrix}-\begin{pmatrix}b_{1}\\ b_{0}\end{pmatrix}. \tag{12}\] Closely following the methodology outlined in Rooney et al. (2023), we arrive at the layer-wise solution: \[\begin{pmatrix}I_{0,n}\\ I_{1,n}\end{pmatrix}=\begin{pmatrix}e^{-\lambda_{n}\tau}&e^{\lambda_{n}\tau}\\ -q_{n}e^{-\lambda_{n}\tau}&q_{n}e^{\lambda_{n}\tau}\end{pmatrix}\begin{pmatrix}X_{0,n}\\ X_{1,n}\end{pmatrix}+\frac{2\pi(1-w_{0,n})}{a_{0,n}}\begin{pmatrix}B_{0,n}+\tau B_{1,n}\\ \frac{B_{1,n}}{a_{1,n}}\end{pmatrix}, \tag{13}\] for \(\tau\in[0,\partial\tau_{n}]\), where \[a_{l,n}=(2l+1)-w_{0,n}\chi_{l,n} \tag{14}\] is the multi-layer extension of (7), and \[\lambda_{n}=\sqrt{a_{0,n}a_{1,n}},\qquad q_{n}=\lambda_{n}/a_{1,n}. \tag{15}\] Following Rooney et al. (2023), we can rewrite system (13) in terms of fluxes, \[\begin{pmatrix}F_{n}^{-}\\ F_{n}^{+}\end{pmatrix}=\begin{pmatrix}Q_{n}^{+}e^{-\lambda_{n}\tau}&Q_{n}^{-}e^{\lambda_{n}\tau}\\ Q_{n}^{-}e^{-\lambda_{n}\tau}&Q_{n}^{+}e^{\lambda_{n}\tau}\end{pmatrix}\begin{pmatrix}X_{0,n}\\ X_{1,n}\end{pmatrix}+\begin{pmatrix}Z_{n}^{-}\\ Z_{n}^{+}\end{pmatrix}, \tag{16}\] where \(Q_{n}^{\pm}=\pi(1\pm 2q_{n})\) and \(Z_{n}^{\pm}\) is given by \[Z_{n}^{\pm}(\tau)=\frac{\pi(1-w_{0,n})}{a_{0,n}}\left(B_{1,n}\tau+B_{0,n}\pm\frac{2}{a_{1,n}}B_{1,n}\right), \tag{17}\] with boundary conditions \[F_{1}^{-}(0)=0, \tag{18}\] \[F_{n}^{-}(\partial\tau_{n})=F_{n+1}^{-}(0), \tag{19}\] \[F_{n}^{+}(\partial\tau_{n})=F_{n+1}^{+}(0), \tag{20}\] and \[F_{N}^{+}(\tau_{N})=\begin{cases}\pi\left(B(\tau_{N})+\frac{2}{3}\frac{\partial B}{\partial\tau}(\tau_{N})\right),&\text{non-terrestrial},\\ \pi B(\tau_{N})+A_{S}F^{-}(\tau_{N}),&\text{terrestrial, hard surface},\end{cases} \tag{21}\] where \(A_{S}\) is the surface
reflectivity. The final boundary condition (21) is derived from Mihalas (1978) in Appendix A. These boundary conditions enforce that there is no incident diffuse flux at the top of the atmosphere, and the upward flux at the surface is either that from a (potentially) reflective surface or an estimate of the upwards flux emerging from an atmosphere that continues below the lowermost model grid point (e.g., a giant planet or brown dwarf atmosphere). The spherical harmonics flux problem is formulated in PICASO by representing the system in terms of banded matrices, and solved using the solve_banded functionality of SciPy (Virtanen et al., 2020).

### \(P_{3}\) Multiple Layers

Similarly, the \(P_{3}\) problem for multiple layers has the solution: \[\begin{pmatrix}I_{0,n}\\ I_{1,n}\\ I_{2,n}\\ I_{3,n}\end{pmatrix}=\begin{pmatrix}e^{-\lambda_{1,n}\tau}&e^{\lambda_{1,n}\tau}&e^{-\lambda_{2,n}\tau}&e^{\lambda_{2,n}\tau}\\ R_{1,n}e^{-\lambda_{1,n}\tau}&-R_{1,n}e^{\lambda_{1,n}\tau}&R_{2,n}e^{-\lambda_{2,n}\tau}&-R_{2,n}e^{\lambda_{2,n}\tau}\\ Q_{1,n}e^{-\lambda_{1,n}\tau}&Q_{1,n}e^{\lambda_{1,n}\tau}&Q_{2,n}e^{-\lambda_{2,n}\tau}&Q_{2,n}e^{\lambda_{2,n}\tau}\\ S_{1,n}e^{-\lambda_{1,n}\tau}&-S_{1,n}e^{\lambda_{1,n}\tau}&S_{2,n}e^{-\lambda_{2,n}\tau}&-S_{2,n}e^{\lambda_{2,n}\tau}\end{pmatrix}\begin{pmatrix}X_{0,n}\\ X_{1,n}\\ X_{2,n}\\ X_{3,n}\end{pmatrix}+\frac{2\pi(1-w_{0,n})}{a_{0,n}}\begin{pmatrix}B_{0,n}+\tau B_{1,n}\\ \frac{B_{1,n}}{a_{1,n}}\\ 0\\ 0\end{pmatrix}, \tag{22}\] for \(\tau\in[0,\partial\tau_{n}]\), where \[\lambda_{1,2,n}=\sqrt{\frac{1}{2}(\beta_{n}\pm\sqrt{\beta_{n}^{2}-4\gamma_{n}})},\qquad\beta_{n}=a_{0,n}a_{1,n}+\frac{1}{9}a_{2,n}a_{3,n}+\frac{4}{9}a_{0,n}a_{3,n},\qquad\gamma_{n}=\frac{1}{9}a_{0,n}a_{1,n}a_{2,n}a_{3,n}, \tag{23}\] and \[R_{1,2,n}=-\frac{a_{0,n}}{\lambda_{1,2,n}},\qquad Q_{1,2,n}=\frac{1}{2}\left(\frac{a_{0,n}a_{1,n}}{\lambda_{1,2,n}^{2}}-1\right),\qquad S_{1,2,n}=-\frac{3}
{2a_{3,n}}\left(\frac{a_{0,n}a_{1,n}}{\lambda_{1,2,n}}-\lambda_{1,2,n}\right). \tag{24}\] This problem can be written in terms of fluxes as \[\begin{pmatrix}F_{n}^{-}\\ f_{n}^{-}\\ F_{n}^{+}\\ f_{n}^{+}\end{pmatrix}=\begin{pmatrix}p_{1,n}^{-}e^{-\lambda_{1,n}\tau}&p_{1,n}^{+}e^{\lambda_{1,n}\tau}&p_{2,n}^{-}e^{-\lambda_{2,n}\tau}&p_{2,n}^{+}e^{\lambda_{2,n}\tau}\\ q_{1,n}^{-}e^{-\lambda_{1,n}\tau}&q_{1,n}^{+}e^{\lambda_{1,n}\tau}&q_{2,n}^{-}e^{-\lambda_{2,n}\tau}&q_{2,n}^{+}e^{\lambda_{2,n}\tau}\\ p_{1,n}^{+}e^{-\lambda_{1,n}\tau}&p_{1,n}^{-}e^{\lambda_{1,n}\tau}&p_{2,n}^{+}e^{-\lambda_{2,n}\tau}&p_{2,n}^{-}e^{\lambda_{2,n}\tau}\\ q_{1,n}^{+}e^{-\lambda_{1,n}\tau}&q_{1,n}^{-}e^{\lambda_{1,n}\tau}&q_{2,n}^{+}e^{-\lambda_{2,n}\tau}&q_{2,n}^{-}e^{\lambda_{2,n}\tau}\end{pmatrix}\begin{pmatrix}X_{0,n}\\ X_{1,n}\\ X_{2,n}\\ X_{3,n}\end{pmatrix}+\begin{pmatrix}Z_{1,n}^{-}\\ Z_{2,n}^{-}\\ Z_{1,n}^{+}\\ Z_{2,n}^{+}\end{pmatrix}, \tag{25}\] where \(p_{1,2,n}^{\pm}=\pi(1\pm 2R_{1,2,n}+\frac{5}{4}Q_{1,2,n})\) and \(q_{1,2,n}^{\pm}=\pi(-\frac{1}{4}+\frac{5}{4}Q_{1,2,n}\pm 2S_{1,2,n})\), and \[Z_{1,n}^{\pm}(\tau)=\frac{\pi(1-w_{0,n})}{a_{0,n}}\left(B_{1,n}\tau+B_{0,n}\pm\frac{2}{a_{1,n}}B_{1,n}\right), \tag{26}\] \[Z_{2,n}^{\pm}(\tau)=-\frac{\pi(1-w_{0,n})}{4a_{0,n}}(B_{1,n}\tau+B_{0,n}).
\tag{27}\] The boundary conditions for the \(P_{3}\) flux problem are \[F_{1}^{-}(0)=0,\qquad f_{1}^{-}(0)=0, \tag{28}\] \[F_{n}^{-}(\partial\tau_{n})=F_{n+1}^{-}(0),\qquad f_{n}^{-}(\partial\tau_{n})=f_{n+1}^{-}(0), \tag{29}\] \[F_{n}^{+}(\partial\tau_{n})=F_{n+1}^{+}(0),\qquad f_{n}^{+}(\partial\tau_{n})=f_{n+1}^{+}(0), \tag{30}\] for \(n=1,2,\cdots,N-1\), and \[F_{N}^{+}(\tau_{N})=\begin{cases}\pi\left(B(\tau_{N})+\frac{2}{3}\frac{\partial B}{\partial\tau}(\tau_{N})\right),&\text{non-terrestrial},\\ \pi B(\tau_{N})+A_{S}F^{-}(\tau_{N}),&\text{terrestrial, hard surface},\end{cases} \tag{31}\] \[f_{N}^{+}(\tau_{N})=\begin{cases}-\frac{\pi B(\tau_{N})}{4},&\text{non-terrestrial},\\ -\frac{\pi B(\tau_{N})}{4}+A_{S}f^{-}(\tau_{N}),&\text{terrestrial, hard surface},\end{cases} \tag{32}\] where \(A_{S}\) is the surface reflectivity. As with the \(P_{1}\) case, the bottom boundary conditions (31)-(32) are derived in Appendix A from Mihalas (1978). Likewise, the spherical harmonics flux problem is formulated in PICASO by representing the system in terms of banded matrices, and solved using the solve_banded functionality of SciPy (Virtanen et al., 2020).

## 3 Source function technique

Following the methodology of Toon et al. (1989), we apply the source function technique to calculate the emergent intensity from the top of the atmosphere. The radiative transfer equation (1) can be solved to yield the azimuthally integrated intensity at angle \(\mu\) at the top of the \(n^{\text{th}}\) layer (\(\tau=0\)) as \[I_{n}(0,\mu)=I_{n}(\partial\tau_{n},\mu)e^{-\frac{\partial\tau_{n}}{\mu}}+\frac{1}{\mu}\int_{0}^{\partial\tau_{n}}S_{vt}e^{-\frac{\tau}{\mu}}\mathrm{d}\tau, \tag{33}\] for \[S_{vt}=\frac{w_{0,n}}{2}\int_{-1}^{1}I_{t}(\tau,\mu^{\prime})P(\mu,\mu^{\prime})\mathrm{d}\mu^{\prime}+S_{n}(\tau), \tag{34}\] where \[S_{n}(\tau)=2\pi(1-w_{0,n})(B_{0,n}+B_{1,n}\tau). \tag{35}\] Toon et al.
(1989) showed that infrared intensities can be estimated with sufficient accuracy by using the two-stream approximation to define the source function in the equation of radiative transfer; therefore, we use \(I_{t}\), the solution to the \(P_{1}/P_{3}\) problem outlined in Section 2, in place of the true intensity in the source term (34). We can therefore rewrite (34) as \[S_{vt}=w_{0,n}\sum_{l=0}^{L}\chi_{l}I_{l}(\tau)P_{l}(\mu)+S_{n}(\tau). \tag{36}\] Let us consider the integral term in (33). Using (36), this can be written as \[\int_{0}^{\partial\tau_{n}}S_{vt}e^{-\frac{\tau}{\mu}}\mathrm{d}\tau=w_{0,n}\sum_{l=0}^{L}\chi_{l}P_{l}(\mu)\int_{0}^{\partial\tau_{n}}I_{l}(\tau)e^{-\frac{\tau}{\mu}}\mathrm{d}\tau+\int_{0}^{\partial\tau_{n}}S_{n}(\tau)e^{-\frac{\tau}{\mu}}\mathrm{d}\tau. \tag{37}\] We can calculate the second term on the right-hand side of (37) to be \[\int_{0}^{\partial\tau_{n}}S_{n}(\tau)e^{-\frac{\tau}{\mu}}\mathrm{d}\tau=2\pi\mu(1-w_{0,n})\left[B_{0,n}\left(1-e^{-\frac{\partial\tau}{\mu}}\right)+B_{1,n}\left(\mu-\left(\partial\tau+\mu\right)e^{-\frac{\partial\tau}{\mu}}\right)\right]. \tag{38}\] Next, let us write \(A_{n,\text{int}}=\int_{0}^{\partial\tau_{n}}I_{l}(\tau)e^{-\frac{\tau}{\mu}}\mathrm{d}\tau\). This is calculated identically for reflected light, and is outlined in detail in Rooney et al. (2023), where the solution is given by \[A_{n,\text{int}}=A_{n}X_{n}+N_{n}, \tag{39}\] where matrix \(A_{n}\) is defined in Rooney et al. (2023) and \(X_{n}\) are the coefficients we solve for in the \(P_{1}\) and \(P_{3}\) flux problems (16)-(21) and (25)-(32) respectively. Vector \(N_{n}\) differs from that for reflected light.
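As an aside, the closed form (38) can be spot-checked against direct numerical integration (a standalone check with arbitrary illustrative parameter values, not taken from the paper):

```python
import numpy as np

# Spot-check of the closed-form integral (38):
# int_0^dtau S_n(tau) exp(-tau/mu) dtau, with S_n = 2*pi*(1-w0)*(B0 + B1*tau).
w0, B0, B1, dtau, mu = 0.8, 2.0, 0.5, 1.3, 0.6

e = np.exp(-dtau / mu)
closed_form = 2 * np.pi * mu * (1 - w0) * (
    B0 * (1 - e) + B1 * (mu - (dtau + mu) * e))

# Composite trapezoidal rule on a fine grid for comparison.
tau = np.linspace(0.0, dtau, 20001)
f = 2 * np.pi * (1 - w0) * (B0 + B1 * tau) * np.exp(-tau / mu)
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))
```

The two values agree to high precision, confirming the antiderivative used in (38).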
For infrared sources, \(N_{n}\) is defined as \[N_{0,n} =\frac{2\pi\mu\left(1-w_{0,n}\right)}{a_{0,n}}\left[B_{0,n}\left( 1-e^{-\frac{\partial\tau}{\mu}}\right)+B_{1,n}\left(\mu-\left(\partial\tau+ \mu\right)e^{-\frac{\partial\tau}{\mu}}\right)\right], \tag{40}\] \[N_{1,n} =\frac{2\pi\mu(1-w_{0,n})}{a_{0,n}}\frac{B_{1,n}}{a_{1,n}}\left( 1-e^{-\frac{\partial\tau}{\mu}}\right),\] (41) \[N_{2,n} =N_{3,n}=0. \tag{42}\] Substituting \(A_{n,\text{int}}\) (39) and the integrated source term (38) back into (37), we can use (33) to calculate the azimuthally integrated intensity emerging from the top of the \(n^{\text{th}}\) layer. By beginning at the bottom of the atmosphere (\(n=N\)) and working our way up layer-by-layer, we can derive the azimuthally integrated intensity at the top of the atmosphere. This intensity is used to calculate the infrared flux to predict the observed atmospheric spectra. ## 4 Analysis To quantitatively analyze the performance of the spherical harmonics method for infrared radiative transfer, we compare our results with Toon89 and CDISORT, a version of the discrete ordinate solver, DISORT, written in C rather than FORTRAN (Stamnes et al., 1988, 2000; Mayer & Kylling, 2005; Buras et al., 2011). CDISORT is a versatile, well-tested and widely used radiative transfer software, with advanced numerical capabilities. CDISORT has the capacity to model \(L\)-stream discrete ordinates approximations, where \(L\) is arbitrary and considerably greater than 4 (we will study 16 and 32 stream calculations in this work). One important note to consider before delving into comparisons of Toon89 and CDISORT is the different scattering phase functions. The Toon89 methodology utilizes the hemispheric mean phase function for infrared scattering (Toon et al., 1989). 
The hemispheric mean approach is derived by assuming that the phase function takes the value of \(1+g_{0}\) in the forward scattering hemisphere, and \(1-g_{0}\) in the backward scattering hemisphere, where \(g_{0}\) denotes the asymmetry parameter. Toon et al. (1989) chose this technique because, for infrared wavelengths, it assumes the correct relationship between flux and intensity and produces the proper emissivity in the limiting case of dominant absorption (\(w_{0}=0\)) for a semi-infinite atmosphere. On the other hand, CDISORT, SH2 and SH4 all utilize the Henyey-Greenstein phase function (Henyey & Greenstein, 1941): \[P_{\rm HG}(\cos\Theta)=\frac{1-g_{0}^{2}}{(1+g_{0}^{2}-2g_{0}\cos\Theta)^{3/2}}, \tag{43}\] where the scattering angle \(\Theta\) is defined as \[\cos\Theta=\mu\mu^{\prime}-\sqrt{1-\mu^{2}}\sqrt{1-\mu^{\prime 2}}\cos(\phi-\phi^{\prime}), \tag{44}\] for incoming and outgoing radiation angular directions \((\mu,\phi)\) and \((\mu^{\prime},\phi^{\prime})\) respectively. We emphasize this difference in computational methods to foreshadow differences that arise between the methodologies. In what follows, we first compare two benchmark spectra in Section 4.1. Then, we isolate the dependence of each method's accuracy on scattering parameters (single scattering, asymmetry) in Section 4.2. Lastly, we baseline the timing of these methodologies in Section 4.3 to investigate the trade-off between computational time and accuracy.

### Comparison of benchmark spectra

We consider two different benchmark atmospheres on which to conduct our analysis: (i) a brown dwarf with effective temperature \(T_{\rm eff}=1200\) K, gravity \(g=200\) m/s\({}^{2}\), solar metallicity, solar C/O and forsterite, iron, and corundum clouds, and (ii) a planet similar to Jupiter with \(g=25\) m/s\({}^{2}\), a semi-major axis of 5 AU, orbiting a Sun-like star, with H\({}_{2}\)O and NH\({}_{3}\) clouds. The cloudy and non-cloudy infrared spectra, as predicted by the Toon et al.
(1989) implementation in PICASO (Batalha et al., 2022), are denoted Toon89. As shown in Figure 1, the cases chosen both have spectra that are largely affected by the presence of clouds. In both cases, the clouds act to prevent photon contribution from the deepest, hottest layers. As a result, the pressure range probed by the cloudy models is limited to the upper, cooler layers, creating spectral features that appear muted relative to the cloud-free counterpart. We choose these cases in order to test the accuracy of these methodologies in the scattering-dominated limit for typical cloudy objects. We note that the different methodologies agree in the case of no scattering, indicating that any spectral differences in the cloudy cases are a consequence of the approximations implemented to deal with scattering. In Figure 2 we plot the infrared spectra for cloudy atmosphere (i), predicted by 16-stream CDISORT, Toon89, two-term spherical harmonics (SH2) and four-term spherical harmonics (SH4). Note that we indicate whether the models utilise the Henyey-Greenstein (HG) or hemispheric mean (hem-mean) phase function in the figure legend. Figure 2 also depicts the single scattering, asymmetry and optical depth profiles with pressure, averaged in the wavelength range 1-1.4\(\mu\)m, as indicated by the grey dashed lines on the spectra plot. We choose this wavelength window to average the scattering parameters as it is a region with significant differences in the spectra produced by Toon89 and the other models. The cloud profile is shaped by two cloud layers: one smaller cloud layer below 1 bar, overlaid by a larger optical depth cloud layer above 1 bar. In the high optical depth region (\(\tau>0.5\)), the associated optical properties range between 0.6-0.75 for the asymmetry and 0.8-0.94 for the single scattering. These values are typical for these condensate species and present a less forward scattering example, as compared with the "Jupiter-like" example.
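The two phase-function choices can be compared directly; the snippet below (an illustration, independent of any of the codes benchmarked here) checks that both are normalized, and that the first Legendre moment of Henyey-Greenstein recovers \(g_0\).

```python
import numpy as np

# Compare the two phase-function closures: Henyey-Greenstein (43) versus the
# piecewise hemispheric mean of Toon et al. (1989).
x, w = np.polynomial.legendre.leggauss(200)   # quadrature nodes in cos(Theta)

def hg(cos_t, g0):
    return (1 - g0**2) / (1 + g0**2 - 2 * g0 * cos_t) ** 1.5

def hemispheric_mean(cos_t, g0):
    # Value 1+g0 in the forward hemisphere, 1-g0 in the backward hemisphere.
    return np.where(cos_t >= 0.0, 1.0 + g0, 1.0 - g0)

g0 = 0.65
norm_hg = 0.5 * np.sum(w * hg(x, g0))               # normalization of HG
mean_hg = 0.5 * np.sum(w * x * hg(x, g0))           # asymmetry parameter of HG
norm_hm = 0.5 * np.sum(w * hemispheric_mean(x, g0)) # normalization of hem-mean
```

Both closures conserve energy, but they weight forward and backward scattering very differently, which is the lever behind the Toon89-versus-SH differences discussed next.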
There is an immediately noticeable difference between Toon89, CDISORT and SH2/4. Given the close agreement of SH2 with 16-stream CDISORT compared to Toon89, which is also a two-stream technique, we can attribute this difference to the choice of phase function. Rooney et al. (2023) conducted an investigation into the accuracy gain when applying four-term spherical harmonics to predict the scattering of reflected light in atmospheres, and compared the geometric albedo produced by SH2, SH4 and Toon89 with the doubling method, calculated by Liou (1973).
The authors concluded that SH2 and Toon89 performed comparably, and that the choice between spherical harmonics and discrete ordinates had little impact on the solution accuracy when compared to the doubling method. The only difference between the SH2 method applied in this work and that applied in Rooney et al. (2023) is the thermal source and boundary conditions, which are identically applied to Toon89 in PICASO. However, the PICASO implementation of Toon89 for reflected light leverages a post-processed Henyey-Greenstein phase function for direct scattering, as opposed to the hemispheric mean approach used in the infrared (Batalha et al., 2022). For a clearer understanding of how SH4 compares to SH2, we also plot the percentage difference between 16-stream CDISORT and Toon89, SH2 and SH4 in Figure 3. We notice that the largest deviation of SH2 from 16-stream CDISORT is around 8.5%, whereas SH4 is always within 2.5%. We conduct the same analysis for the Jupiter-like profile in Figure 4, where the infrared spectra predicted by 16-stream CDISORT, Toon89, SH2 and SH4 are plotted on the right, alongside the single-scattering, asymmetry and optical depth profiles with pressure, averaged in the wavelength range 8.2-9\(\mu\)m. We notice immediately that the four models are in close agreement throughout the entire wavelength range, with the greatest differences evident between 8-9\(\mu\)m and for wavelengths greater than 13\(\mu\)m. We attribute this to the different cloud condensate optical properties of the Jupiter-like profile. The cloud profile in this case is also shaped by two cloud layers: one larger cloud deck around 1 bar and another smaller optical depth cloud layer around 0.02 bar. In the high optical depth region, the associated optical properties range between 0.8-0.94 for the asymmetry and 0.53-0.8 for the single-scattering albedo. 
Since this atmosphere exhibits greater forward scattering but with lower values for the single-scattering albedo, this suggests that the accuracy of the SH4, SH2 and Toon89 methods is highly dependent on the strength of scattering and the asymmetry of the cloud. We again plot the percentage difference between 16-stream CDISORT and Toon89, SH2 and SH4 in Figure 5, and observe a maximum deviation of around 11% for Toon89 (ignoring wavelengths less than 6.6\(\mu\)m, where the spectra values themselves are very small). We also notice that SH2 and SH4 exhibit effectively identical agreement with 16-stream CDISORT, with percentage differences staying below 3.5%. ### Dependence of accuracy on scattering parameters Since it is clear that there is a large accuracy dependence on single scattering and asymmetry, we compare the spectra produced by Toon89 and SH4 with 32-stream CDISORT for a range of single-scattering albedos and asymmetry parameters in a test atmosphere. We define the test atmosphere with the same pressure-temperature and optical depth profile as the \(T_{\rm eff}=1270\)K case studied in Section 4.1; however, we force the cloud single-scattering albedo \(w_{0}\) and asymmetry parameter \(g_{0}\) to take constant values for clarity. We sweep over a grid of parameter values and calculate the resulting spectra for each of our three models. Figure 3: Percentage differences between the spectra produced by CDISORT and Toon89, SH2 and SH4 for the \(T_{\rm eff}=1270K\) profile, where HG and hem-mean indicate that the model phase function is Henyey-Greenstein or hemispheric mean respectively. We consider \(w_{0}\) in the range 0.1-1.0, and \(g_{0}\) in the range 0.0-0.9. Note that we consider a finer grid for high (\(w_{0}>0.9\)) single-scattering albedo. Taking an average of the spectra values over the wavelength range 1-10\(\mu\)m for each pair of \(w_{0}\) and \(g_{0}\) values, we calculate the percentage difference between 32-stream CDISORT and each of Toon89 and SH4. 
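The sweep and band-averaged comparison described above can be sketched as follows. Here `run_model` and `run_ref` are hypothetical stand-ins for calls to the SH4/Toon89 and CDISORT solvers; they are not part of the paper's code, and the band limits default to the 1-10\(\mu\)m range used in the text.

```python
import numpy as np

def band_average(wl, spectrum, lo=1.0, hi=10.0):
    """Average a spectrum over the wavelength band [lo, hi] (microns)."""
    mask = (wl >= lo) & (wl <= hi)
    return spectrum[mask].mean()

def sweep_percent_diff(run_model, run_ref, wl, w0_grid, g0_grid):
    """Percentage difference of band-averaged flux, model vs. reference,
    over a grid of single-scattering albedos w0 and asymmetries g0."""
    out = np.empty((len(w0_grid), len(g0_grid)))
    for i, w0 in enumerate(w0_grid):
        for j, g0 in enumerate(g0_grid):
            m = band_average(wl, run_model(w0, g0))
            r = band_average(wl, run_ref(w0, g0))
            out[i, j] = 100.0 * (m - r) / r
    return out
```

The resulting matrix is exactly what a heatmap over \((w_{0},g_{0})\) would display.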
We plot the results as heatmaps in Figure 6, where Figure 6(a) depicts the percentage difference in infrared flux between Toon89 and 32-stream CDISORT, and Figure 6(b) displays that of SH4 and CDISORT. To further elucidate for which parameters SH4 and Toon89 better agree with CDISORT, we subtract the absolute percentage difference of SH4 with CDISORT from that of Toon89, and plot the result in Figure 6(c). In the red-colored regions, SH4 outperforms Toon89 when compared to CDISORT. The white regions represent the cases where SH4 and Toon89 have comparable agreement with CDISORT. The blue-colored regions represent when Toon89 is in closest agreement with CDISORT. Figure 4: Comparison between the infrared spectra predicted by 16-stream CDISORT, PICASO, 2-term spherical harmonics (SH2) and 4-term spherical harmonics (SH4) for a Jupiter-like profile. The scattering properties plotted in subfigure (a) correspond to the average values within the 8.2-9\(\mu\)m wavelength region, as marked by the grey dashed lines on the spectra plot (b). Figure 5: Differences in spectra for cloudy Jupiter. We see from Figure 6 that Toon89 exhibits a maximum percentage difference of around 60% with 32-stream CDISORT for high asymmetry (\(g_{0}>0.8\)) and moderate single-scattering albedo (\(0.5<w_{0}<0.8\)). The lowest errors occur for extreme values of single scattering, namely \(w_{0}=0.1\) and \(w_{0}>0.99\). We note the excellent agreement for \(w_{0}=0\), which validates the justification of using the hemispheric mean phase function by Toon et al. (1989) to ensure correct emissivity in the \(w_{0}=0\) limit. Superior agreement is achieved by SH4 when compared with 32-stream CDISORT, with the maximum error of -6% occurring for single-scattering albedo exceeding 0.95. 
By comparing the two heatmaps in this region, we see that Toon89, using the hemispheric mean approximation, agrees more closely with 32-stream CDISORT than SH4, even though both SH4 and CDISORT use the Henyey-Greenstein phase function. This implies that for high single-scattering (\(w_{0}=1\)), the hemispheric-mean approximation marginally outperforms low-order spherical harmonic expansions for the Henyey-Greenstein phase function. ### Timing-accuracy trade-off Despite the improvements of moving to SH4, we still must consider the timing-accuracy trade-off. To elucidate this, we analyze the computational expense of SH2 and SH4 as the number of layers is increased from 40 to 140, alongside the maximum percentage difference of their thermal spectra with that of a 16-stream, 140-layer CDISORT model. We run this analysis on the \(T_{\text{eff}}=1270\)K atmosphere studied above, over a wavelength range of 0.7-2\(\mu\)m and plot our results in Figure 7. As the focus of the present study is to assess the trade-off between computational expense and model accuracy, we compare only SH2 and SH4 to CDISORT to illustrate the increase in cost and improvement in agreement with higher-fidelity models when moving from two to four terms. The modelling approach of SH2 and SH4 is identical bar the number of terms, whereas the Toon89 methodology differs both in model choice (discrete-ordinates versus spherical harmonics) and phase function (hemispheric mean versus Henyey-Greenstein). In an attempt to attribute any differences in computational expense and agreement with CDISORT to only the number of layers chosen for the model, we compare SH2 with SH4. From Figure 7(a) we see an increase in computational expense, \(t\), for both SH2 and SH4 when the number of layers, \(N\), is increased from 40 to 140 scaling approximately as \(t=\mathcal{O}(N)\). Overall, SH4 is approximately twice as expensive as SH2. 
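The roughly linear scaling can be checked by fitting the exponent \(\alpha\) in \(t\propto N^{\alpha}\) to measured runtimes with a log-log least-squares fit; the layer counts and timings used below are purely illustrative placeholders, not the paper's measurements.

```python
import numpy as np

def scaling_exponent(layers, times):
    """Least-squares estimate of alpha in t ~ c * N**alpha,
    fitted in log-log space to timing measurements."""
    alpha, _ = np.polyfit(np.log(layers), np.log(times), 1)
    return alpha
```

A fitted \(\alpha\approx 1\) is consistent with the \(t=\mathcal{O}(N)\) behaviour described here.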
However, in Figure 7(b) SH4 shows a significant increase in model agreement with the CDISORT test case as we increase the number of layers from 40 to 140. For 140 layers, SH4 is within 2% of CDISORT versus 9.7% for SH2, illustrating that although twice as slow as SH2, SH4 is nearly five times more accurate when benchmarked against 16-stream CDISORT. Figure 6: Heatmaps depicting the percentage difference in average flux produced by (a) Toon89 and (b) SH4 with 32-stream CDISORT. Figure (c) is produced by subtracting the absolute percentage differences of SH4 from that of Toon89 to elucidate for which parameters one method agrees with CDISORT better than the other. Dark red represents cases where SH4 outperforms Toon89, as compared to CDISORT. White (toward zero) represents cases where Toon89 and SH4 perform comparably. In cases where model accuracy is important, and in the single scattering/asymmetry regions outlined in Section 4.2, SH4 is the obvious choice due to its superior agreement with higher-fidelity models over its lower-order counterpart SH2. However, in the instance when rapid solutions are required, the additional computational expense of SH4 might be undesirable and the efficient SH2 will be the model of choice. ## 5 Conclusion Following from the analysis conducted by Rooney et al. (2023), we extended the spherical harmonics approach to solving the radiative transfer equation, implemented in the modelling software PICASO (Batalha et al., 2022), to thermal emission. In particular, we considered a four-term expansion of spherical harmonics, an increase from the original two-stream implementation in PICASO, which we denoted Toon89 to reflect its heritage from Toon et al. (1989). The general spherical harmonics methodology for reflected light and thermal emission is largely the same, except for the source function, boundary conditions, and use cases. 
The main objective of this work was to build on the rigorous derivation of the model for reflected light studied by Rooney et al. (2023), and to explain the differences in the model for thermal emission. Without re-deriving every equation in the model, we highlighted the differences incurred by considering a thermal source, and outlined the relevant matrix systems being solved by the model. To explore the accuracy performance of the four-stream spherical harmonics model, we compared our results to CDISORT (Stamnes et al., 2000). When considered alongside two-term spherical harmonics and the two-stream Toon89 method, this analysis elucidated the increased efficacy of higher-order approximations in radiative transfer calculations for thermal emission, and also demonstrated the impact of the choice of phase function on the resulting spectra. We studied the thermal spectra obtained via the two-stream Toon89 implementation, two- and four-term spherical harmonics, and CDISORT in Section 4.1 for two different sample atmospheres. This investigation highlighted that the choice of phase function has a large impact on the resultant spectra. The use of the hemispheric mean in Toon89 created spectra that were largely different (up to 60%) from those computed with SH2, which utilizes a Henyey-Greenstein phase function. Additionally, we find that the accuracy of the order of approximation (two- versus four-term) is highly dependent on the single-scattering albedo and asymmetry of the cloud profile. This motivated a deeper exploration of how the accuracy of the radiative transfer method depends on both values. Figure 7: Analyzing how the (a) computational time and (b) maximum percentage difference of SH2 and SH4 with CDISORT change as the number of layers is increased from 40 to 140. We use a 16-stream, 140-layer CDISORT model to benchmark the spherical harmonics against. 
We see an evident increase in computational expense with the number of layers, where SH4 is twice as slow as SH2 for the 140-layer case; however, the maximum percentage difference with the benchmark decreases significantly with layers for SH4, which is within 2% of CDISORT versus 9.7% for SH2. Therefore, we created a grid of models with a fixed atmosphere profile and varied asymmetry parameters and single-scattering albedos to study the performance of the Toon89 and SH4 models when compared to 32-stream CDISORT. We found that Toon89 performs particularly well for the limiting cases of single-scattering albedo, namely \(w_{0}=0\) and \(w_{0}=1\), but suffered from substantial errors of around 60% for high asymmetry and moderate single scattering. SH4 experiences a maximum error of around -6% for high single scattering. Finally, we analyzed the timing-accuracy trade-off for the spherical harmonics methods when increasing the number of model layers. By calculating the maximum percentage difference between the thermal spectra produced by SH2 and SH4 with 16-stream, 140-layer CDISORT, we discussed the sacrifice in computational speed for model agreement. This study elucidated that, although increasing the model approximation order from two to four terms results in an increase in computational expense, the increase in accuracy when benchmarked against CDISORT is significant. The SH4 model took twice as long as SH2 to calculate the thermal spectra, but produced a result that was nearly five times more accurate when compared to 16-stream CDISORT, with a maximum percentage error of 2%. This analysis demonstrates that a sacrifice of computational expense may be acceptable when the accuracy of the observational data demands a significant increase in model accuracy, but may not be necessary if numerical efficiency is the priority. 
In conclusion, we have demonstrated that increasing the order of approximation from two to four streams can produce significant improvement in model accuracy when compared with high-order CDISORT. The spherical harmonics analysis outlined in this paper is implemented in the PICASO framework, alongside the Toon89 methodology, and is available for download and use (Batalha et al., 2022). The Jupyter notebook, which reproduces our results, can be found on GitHub as well. C.R.'s research was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Universities Space Research Association under contract with NASA. N.B. & C.R. both acknowledge support from the NASA Astrophysics Division. Additionally, N.B. acknowledges support from NASA's Interdisciplinary Consortia for Astrobiology Research (NNH19ZDA001N-ICAR) under award number 19-ICAR19_2-0041. We thank Jeff Cuzzi and Sanford Davis for enlightening discussions about some of the finer points of radiative transfer in higher order approximations. Lastly we thank Arve Kylling for helpful discussions regarding CDISORT's radiative transfer methodology. 
numba (Lam et al., 2015), pandas (McKinney, 2010), bokeh (Bokeh Development Team, 2014), SciPy (Virtanen et al., 2020), NumPy (Walt et al., 2011), IPython (Perez and Granger, 2007), Jupyter (Kluyver et al., 2016), VIRGA (Batalha et al., 2020; Rooney et al., 2022), PICASO (Batalha et al., 2022), MATLAB (MATLAB, 2010). A version of PICASO corresponding to these hyperlinks and the software used in this work is archived on Zenodo as v3.1 with DOI: 10.5281/zenodo.7765171 ## Appendix A Derivation of Boundary Condition To derive the boundary condition for the upward flux \(F^{+}(\tau_{N})\) for an atmosphere that continues below the lower-most level in our model, we consider the intensity of emission at the surface given by Mihalas (1978), namely \[I(\tau_{N},\mu)=B(\tau_{N})+\mu\frac{\mathrm{d}B}{\mathrm{d}\tau}(\tau_{N}).\] (A1) Recalling the expressions for \(F^{\pm}(\tau)=2\pi\int_{0}^{\pm 1}I(\tau,\mu)\mu\,\mathrm{d}\mu\) and \(f^{\pm}(\tau)=\pi\int_{0}^{\pm 1}I(\tau,\mu)(5\mu^{3}-3\mu)\mathrm{d}\mu\), we obtain the boundary conditions \[F^{+}(\tau_{N}) =\pi\left(B(\tau_{N})+\frac{2}{3}\frac{\partial B}{\partial\tau} (\tau_{N})\right),\] (A2) \[f^{+}(\tau_{N}) =-\frac{\pi B(\tau_{N})}{4}.\] (A3) Similarly, for the case of a hard surface at the lower-most layer in our model, the intensity of emission at the surface is taken as that of a blackbody: \[I(\tau_{N},\mu)=B(\tau_{N}).\] (A4) Proceeding as above we arrive at \[F^{+}(\tau_{N}) =\pi B(\tau_{N}),\] (A5) \[f^{+}(\tau_{N}) =-\frac{\pi B(\tau_{N})}{4}.\] (A6) Including the effect of surface reflectivity \(A_{S}\), we obtain the boundary conditions \[F^{+}(\tau_{N}) =\pi B(\tau_{N})+A_{S}F^{-}(\tau_{N}),\] (A7) \[f^{+}(\tau_{N}) =-\frac{\pi B(\tau_{N})}{4}+A_{S}f^{-}(\tau_{N}).\] (A8) Note that to minimize the effect of the choice of the lower boundary condition on the computed emergent flux, it is always best practice to have the lowermost model layer lie at high optical depth at all wavelengths where practicable.
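The boundary moments above can be checked numerically. The sketch below applies a simple midpoint rule to the linear-in-\(\mu\) intensity of Eq. A1 and recovers \(F^{+}=\pi(B+\tfrac{2}{3}\,\mathrm{d}B/\mathrm{d}\tau)\) and \(f^{+}=-\pi B/4\) for arbitrary values of \(B\) and \(\mathrm{d}B/\mathrm{d}\tau\).

```python
import numpy as np

def upward_moments(B, dBdtau, n=200_000):
    """Midpoint-rule evaluation of the boundary moments of the
    linear-in-mu intensity I = B + mu * dB/dtau (Eq. A1):
    F+ = 2*pi*Int_0^1 I*mu dmu and f+ = pi*Int_0^1 I*(5*mu^3 - 3*mu) dmu."""
    mu = (np.arange(n) + 0.5) / n          # midpoints of a uniform grid on [0, 1]
    I = B + mu * dBdtau
    F_plus = 2.0 * np.pi * np.mean(I * mu)  # mean over [0, 1] approximates the integral
    f_plus = np.pi * np.mean(I * (5.0 * mu**3 - 3.0 * mu))
    return F_plus, f_plus
```

Note that the \(\mathrm{d}B/\mathrm{d}\tau\) term drops out of \(f^{+}\) because \(\int_{0}^{1}\mu(5\mu^{3}-3\mu)\,\mathrm{d}\mu=0\), which is why Eqs. A3 and A6 coincide.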
2310.06160
Entropy Based Multi-robot Active SLAM
In this article, we present an efficient multi-robot active SLAM framework that involves a frontier-sharing method for maximum exploration of an unknown environment. It encourages the robots to spread into the environment while weighting the goal frontiers with the pose graph SLAM uncertainty and path entropy. Our approach works on a limited number of frontier points and weights the goal frontiers with a utility function that encapsulates both the SLAM and map uncertainties, thus providing an efficient and not computationally expensive solution. Our approach has been tested on publicly available simulation environments and on real robots. An accumulative 31% more coverage than similar state-of-the-art approaches has been obtained, proving the capability of our approach for efficient environment exploration.
Muhammad Farhan Ahmed, Matteo Maragliano, Vincent Frémont, Carmine Tommaso Recchiuto
2023-10-09T21:18:14Z
http://arxiv.org/abs/2310.06160v1
# Entropy-Based Multirobot Active SLAM ###### Abstract In this article, we present an efficient multi-robot active SLAM framework that involves a frontier-sharing method for maximum exploration of an unknown environment. It encourages the robots to spread into the environment while weighting the goal frontiers with the pose graph SLAM uncertainty and path entropy. Our approach works on a limited number of frontier points and weights the goal frontiers with a utility function that encapsulates both the SLAM and map uncertainties, thus providing an efficient and not computationally expensive solution. Our approach has been tested on publicly available simulation environments and on real robots. An accumulative 31% more coverage than similar state-of-the-art approaches has been obtained, proving the capability of our approach for efficient environment exploration. keywords: SLAM, Active SLAM, Frontier Detection, Mapping, Entropy + Footnote †: journal: Preprint ## 1 Introduction Simultaneous localization and mapping (SLAM) is an approach where an agent autonomously localizes itself and simultaneously maps the environment while navigating through it. The objective is to find the optimal state vector that minimizes the measurement error between the estimated pose and environmental landmarks. Most SLAM algorithms are passive, i.e., the robot is controlled manually and the navigation or path planning algorithm does not actively take part in robot motion or trajectory. Active SLAM (A-SLAM), however, tries to solve the optimal exploration problem of the unknown environment by proposing a navigation strategy that generates future goal/target positions and actions which decrease map and pose uncertainties, thus enabling a fully autonomous navigation and mapping SLAM system without the need of an external controller or human effort. 
In Active Collaborative SLAM (AC-SLAM) multiple robots interchange information to improve their localization estimation and map accuracy to achieve some high-level tasks such as exploration. The exchanged information can be localization information [1], entropy [2], visual features [3], and frontier points [4]. In this article, we present a multi-agent AC-SLAM system for efficient environment exploration using frontiers detected over an Occupancy Grid (OG) map. In particular, in this work, we aim at:

1. Extending the A-SLAM approach of [5], which uses a computationally inexpensive D-optimality criterion for utility computation, to a multi-agent AC-SLAM framework.
2. Proposing a utility function that uses frontier path entropy for computing the rewards of goal candidate frontiers.
3. Introducing a method that coordinates the frontier sharing among robots to encourage maximum distance between robots for exploration.
4. Implementing the proposed method in ROS using both simulation and real robot experiments, achieving an accumulative 31% more coverage than state-of-the-art methods.

The proposed system aims to efficiently maximize the environment exploration while maintaining good SLAM estimates, and provides a not computationally expensive solution by reducing the number of goal frontiers. The article is organized as follows: Section 2 summarises the related literature from selected articles. Section 3 presents a thorough explanation of our proposed system, with specific emphasis on the methodologies employed for frontier filtering and management, implementation of the utility function, and coordination among robots. We show the usefulness and application of the system in simulations and real robot experiments in Sections 4.1 and 4.2. Finally, in Section 5 we summarize and conclude this work. 
Throughout this article, we will use the words _robots_ or _agents_ interchangeably, and the same applies to _frontiers_ and _points_, as they imply the same meaning in the context. ## 2 Related Work As previously mentioned, A-SLAM is designed for situations in which a robot must navigate in an environment that is only partially observable or unknown. In this context, the robot must choose a sequence of future actions while dealing with noisy sensor measurements that impact its understanding of both its state and the map of the environment. This scenario is typically formalized as a specific case of the Partially Observable Markov Decision Process (POMDP), as presented in [24], [17], and [19]. The POMDP formulation of A-SLAM, while widely adopted, is computationally intensive. To streamline computation, A-SLAM is usually divided into three key steps: 1) identifying potential goal positions (frontiers, i.e., boundaries between visited and unexplored areas), 2) calculating their associated costs using a utility function, where utility is computed using Information Theory (IT) [10] or the Theory of Optimal Experimental Design (TOED) [11], hence selecting the next action to be performed, and 3) executing the action, eventually moving the robot to the chosen goal position. Concerning the first step, a common approach consists of letting the robot identify potential exploration targets using the frontier-based approach pioneered by Yamauchi [6]. Figure 1 illustrates frontier detection using lidar measurements within a simulated AWS Modified Hospital environment1. Footnote 1: [https://github.com/mlherd/](https://github.com/mlherd/) To manage exploration, IT strategies are based on the use of information as a measure of utility for taking exploration control actions. As the uncertainties related to the map and robot pose increase over time, the goal is to reduce the uncertainties in the belief space [21] about the unknown environments. 
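The three-step decomposition above can be sketched as a single decision loop; every callable here (`detect_frontiers`, `utility`, `move_to`) is a placeholder for a concrete implementation, not an API from the cited works.

```python
def active_slam_step(map_estimate, pose_belief, detect_frontiers, utility, move_to):
    """One iteration of the three-step A-SLAM loop: detect frontiers,
    score them with a utility function, and execute the best action."""
    # 1) identify potential goal positions on the visited/unexplored boundary
    frontiers = detect_frontiers(map_estimate)
    if not frontiers:
        return None                                   # nothing left to explore
    # 2) score candidates with an IT- or TOED-based utility and pick the best
    best = max(frontiers, key=lambda f: utility(f, map_estimate, pose_belief))
    # 3) execute the action by driving the robot to the chosen frontier
    move_to(best)
    return best
```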
The existing approaches propose solutions that make use of particle filters [2][7], Mutual Information [8][9], Bayesian Optimization [9], and entropy [2]. In IT, entropy measures the amount of uncertainty associated with a random variable or random quantity. Higher entropy leads to less information gain and vice versa. The authors in [2] formulate the Shannon entropy of the map \(M\) as in Equation 1, where the map is represented as an occupancy grid and each cell \(m_{i,j}\) is associated with a Bernoulli distribution \(P(m_{i,j})\). The objective is to reduce both the robot pose and map entropy. \[\mathcal{H}[p(M)]=-\sum_{i,j}\left(p(m_{i,j})\log(p(m_{i,j}))+(1-p(m_{i,j}))\log(1-p(m_{i,j}))\right) \tag{1}\] Figure 1: (a) AWS Modified Hospital environment. (b) Frontier detection on the occupancy grid map, red = robot, green = detected frontiers (centroids), white = free space, gray = unknown map area, black = obstacles Alternatively, TOED defines many "optimality criteria", which give a mapping of the covariance matrix to a scalar value. Hence, the priority of a set of actions is based on the amount of covariance in the joint posterior. Less covariance contributes to a higher weight of the action set. The "optimality criteria" deal with the minimization of the average variance, the minimization of the volume of the covariance ellipsoid (D-optimality), and the minimization of the maximum eigenvalue [13]. Many of the concepts previously discussed can be found even in the case of Active Collaborative SLAM (AC-SLAM), with the additional constraint of managing inter-robot robust communications. Typical application scenarios include collaborative localization [18][22], exploration and exploitation tasks [26], and trajectory planning [15][16]. In detail, multi-agent frontier-based strategies assess information gain and detect frontiers through interactions, as seen in [28] and in wireless sensing networks [30]. 
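As a minimal sketch, Equation 1 can be evaluated directly on an array of cell occupancy probabilities (a natural logarithm is assumed here; a different base only rescales the result):

```python
import numpy as np

def map_entropy(P, eps=1e-12):
    """Shannon entropy of an occupancy grid (Equation 1). P holds the
    Bernoulli occupancy probabilities p(m_ij); probabilities are clipped
    away from 0 and 1 so that 0*log(0) is treated as 0."""
    P = np.clip(np.asarray(P, dtype=float), eps, 1.0 - eps)
    return float(-np.sum(P * np.log(P) + (1.0 - P) * np.log(1.0 - P)))
```

An entirely unknown cell (p = 0.5) contributes ln 2 of entropy, while a confidently mapped cell contributes almost nothing, so exploration that resolves unknown cells drives the sum down.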
The method described by [26] formulates the problem in an environment represented by primitive geometric shapes. The cost function is somewhat similar to [27], which takes into consideration the discovery of the target area of a robot by another member of the swarm and switches from a frontier- to a distance-based navigation function to guide the robot toward the goal frontier. Other research works have implemented frontier-based coverage approaches that divide the perception task into a broad exploration layer and a detailed mapping layer, making use of heterogeneous robots to carry out the two tasks [3], while solving a Fixed Start Open Traveling Salesman Problem (FSOTSP) for frontier-based viewpoints has been proven a valid solution for building a volumetric model of the environment with multiple agents [4]. With a similar approach, the authors in [27] and [1] describe frontier-based exploration as an optimization problem where the information gain and localization efficiency (measured as the trace of the covariance matrix) are maximized while the navigation cost towards the frontier is penalized. More recently, another method for quantifying uncertainty, based on the graph connectivity indices underlying the pose graph SLAM structure, has emerged as a valid approach in the AC-SLAM scenario. Graph connectivity indices are computationally less expensive measures of SLAM uncertainty compared to the TOED and IT approaches discussed previously. In [22], the authors propose a method for identifying weak connections in pose graphs to strategically enhance information exchange when robots are in proximity. In other words, the proposed system identifies the weak connections in the target robot's pose graph, and when the covariance increases to a certain threshold, other agents help to rectify these weak connections and generate trajectories using Rapidly Exploring Random Trees (RRT) [20] to decrease uncertainty and improve localization. 
This method uses continuous refinement along with the D-optimality criterion to collaboratively plan trajectories. A bidding strategy is defined, which selects the winning host robot based on the least computational cost, feasible trajectory, and resource-friendly criteria. ## 3 Methodology ### Overview of the proposed approach Overall, the AC-SLAM approaches discussed in Section 2 quantify the uncertainty using the entire map entropy and by using the full covariance matrix, which renders them computationally expensive. Further, they do not favour the spread of agents for maximum coverage requirements. For these reasons, here we propose an AC-SLAM method that encourages sparsity between agents while utilizing a utility function that takes into account the uncertainty propagation not only of the pose graph (D-optimality) but also of the map (frontier path). Indeed, using our approach we manage to spread the agents while maintaining a good SLAM estimate. Also, when compared to the state-of-the-art methods described in Section 2, our method provides a computationally efficient solution by working on fewer frontiers, maximizing the exploration while using a utility function incorporating modern D-optimality along with path entropy. Figure 2 shows the architecture and communication pipeline of our proposed approach. We have built our method upon [5], which uses a Lidar-based SLAM back-end and proposes a utility function based on the D-optimality criterion as the maximum number of spanning trees of the graph Laplacian of the pose graph. Each robot performs its SLAM using Open Karto 2 and detects local frontiers regarding its map. A map merging node 3 (map-merging-node) merges local maps into a global map, so that all the computed frontiers are referenced to the global map. Frontiers from each agent are concatenated into a list and further processed by the Filtering and Classification (_merge-points-server_) module (Section 3.2). 
This module removes redundant frontiers and further filters the frontiers/points by keeping only those points that are at (or near to) the border of the merged map (global map). This new list of points is given back to each agent, which computes its utility and reward matrix for each point, as we will see in the Utility Computation (_assigner_ node) module (Section 3.3). This reward matrix is further processed by the Update Rewards & Goal Selection (_choose-goals-server_) module (Section 3.4), which updates the rewards taking into account the sparsity and number of already selected goal points for each agent. Finally, the selected goal for each agent is sent to the Path planning & control module (ROS Navigation stack), which uses Dijkstra's algorithm [12] as the global planner and the Dynamic Window Approach (DWA) [33] as the local planner. Since the approach has been implemented in ROS and involves a centralized frontier-sharing system, as shown in Figure 2, each agent uses two main nodes responsible for utility computation and frontier detection: the _assigner_ node computes the proposed utility function and assigns the goal points to the agent, while the _detector_ nodes use OpenCV- and RRT-based frontier detection from [5]. The following nodes are a part of the _central server_ of the system: 1) the _map-manager_ node, responsible for the communication among the entire server and each agent in the system, 2) the _merge-points-server_ node, responsible for merging lists of points acquired by different agents, and 3) the _choose-goals-server_ node, which chooses a specific target point in the list. By adopting a specific policy (Section 3.4), the server also tries to distribute the robots in the environment and reduce the exploration time. Concerning the following Sections and Algorithms, we can summarize the workflow as: 1. Each agent detects the points \(p\) and passes them to the central server (subject to its availability) on a dedicated topic. 2. 
The manager node takes the list of points \(p_{list}\) passed by the agents and sends it to _merge-points-server_. 3. The _merge-points-server_ takes as input the lists received by all robots, merging the points into a unique list using Algorithm 1, also checking the actual frontiers on the merged map \(M\) through Algorithm 2 and limiting the dimension of the list using the procedure explained in Algorithm 3. Eventually, it gives back the list to each agent. 4. Each agent computes the reward matrix \(R_{m}\) on the received list, with the approach described in Section 3.4, sending it to the _choose-goals-server_. 5. The _choose-goals-server_ node updates the reward through Algorithms 4 and 5 to take into account all the points already assigned. The selected target point is fed back to the robot. 6. The global and local planners of the ROS package _move_base_ are responsible for driving each agent to the selected frontier. Once the agent reaches the target, the workflow restarts from step 1. Figure 2: Architecture of the proposed method. Local nodes of each robot (blue), ROS server (red), map-merging node (black) In the following Sections, we describe comprehensively steps 3, 4, and 5, which represent the core of the proposed approach. ### Filtering and classification Each agent is responsible for building its map and merging local frontier points as shown in Algorithm 1. Some parts of the map from each agent can be overlapped, and the frontiers could lie in an already mapped area when considering the merged map. Since usually the goal of an exploration task is to cover the entire area by minimizing the exploration time, the frontiers lying in the middle of the merged map are not significant because moving an agent to them would not increase the overall discovered area. To avoid the need to consider these points as _goal-like_ points, we decided to filter the points considering only the actual frontiers of the merged map. 
To this purpose, Algorithm 1 takes a list of points \(p_{list}\) and the merged map \(M\), and checks whether each point has enough unknown cells (PRC_UNK) around it, within a radius RAD (line 3), i.e., whether the point is near the border. If so, the point is added to the global list \(uni_{pts}\). This check (line 3 of Algorithm 1) is detailed in Algorithm 2. In particular, it may be observed how the percentage PRC_UNK of unknown cells contained in the radius RAD is used to discriminate whether to keep a point in the list or not.

```
Algorithm 1 Merge Points
Require: p_list = {x_1, y_1, ..., x_N, y_N}, merged map M
 1: uni_pts ← ∅
 2: for p ∈ p_list do
 3:     if N_bdr(p, M, RAD, PRC_UNK) then
 4:         if p ∉ uni_pts then
 5:             uni_pts ← uni_pts ∪ {p}      ▷ add p to unique list of points
 6:         end if
 7:     end if
 8: end for
```

To provide an example of this approach, Figure 3(b) shows a list of two points \(P\) and \(Q\) in a partially discovered map. Algorithm 1, in conjunction with Algorithm 2, identifies a circle around each point, as shown in Figure 3(a), and computes the percentage of unknown cells over the total number of cells contained inside the circle.
Figure 3: General (a) and Occupancy Grid Map (b) representations of the discretized circle.

```
Algorithm 2 Check if a Point is Near the Map Border
 1: function N_bdr(p, M, rad, PRC_UNK)
 2:     c_x ← (p_x − M_origx) / M_res
 3:     c_y ← (p_y − M_origy) / M_res
 4:     Rad_c ← rad / M_res
 5:     Circ_c ← [ ];  unk_cnt, tot_c ← 0
 6:     for i, j ∈ [−Rad_c, Rad_c] do
 7:         cel_i ← c_x + i;  cel_j ← c_y + j
 8:         if cel_i, cel_j ≥ 0 and cel_i, cel_j < M_w, M_h then
 9:             idx ← cel_i + cel_j × M_width
10:             Circ_c ← Circ_c ∪ {(cel_i, cel_j)}
11:             if M_data[idx] = −1 then
12:                 unk_cnt ← unk_cnt + 1
13:             end if
14:             tot_c ← tot_c + 1
15:         end if
16:     end for
17:     return (unk_cnt / tot_c) · 100 ≥ PRC_UNK
18: end function
```

```
Algorithm 3 Check List Dimension
 1: while |uni_pts| < MIN_PTS or |uni_pts| ≥ NUM_PTS do
 2:     if |uni_pts| < MIN_PTS then
 3:         PRC_UNK ← 0.9 · PRC_UNK      ▷ relax the unknown-cells threshold by 10%
 4:     else
 5:         rad ← rad + 0.25              ▷ enlarge the search radius by 0.25 m
 6:     end if
 7:     tmp_pts ← uni_pts;  uni_pts ← [ ]
 8:     for p ∈ tmp_pts do
 9:         if N_bdr(p, M, rad, PRC_UNK) then
10:             if p ∉ uni_pts then
11:                 uni_pts ← uni_pts ∪ {p}
12:             end if
13:         end if
14:     end for
15: end while
```

Once the percentage is computed, the point is kept or discarded based on the threshold PER_UNK set during the program execution.
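The border test of Algorithms 1 and 2 can be sketched in plain Python. The grid encoding (−1 = unknown) follows the ROS occupancy-grid convention used by the paper; the function names, and the shortcut of working directly in cell coordinates (skipping the origin/resolution conversion of Algorithm 2), are our own simplifications:

```python
UNKNOWN = -1  # ROS occupancy-grid convention: -1 unknown, 0 free, 100 occupied

def near_border(px, py, grid, radius, prc_unk):
    """Algorithm 2 (N_bdr) sketch: True if at least prc_unk percent of the
    cells within `radius` of (px, py) are unknown."""
    h, w = len(grid), len(grid[0])
    unk = tot = 0
    for j in range(py - radius, py + radius + 1):
        for i in range(px - radius, px + radius + 1):
            in_circle = (i - px) ** 2 + (j - py) ** 2 <= radius ** 2
            if 0 <= i < w and 0 <= j < h and in_circle:
                tot += 1
                if grid[j][i] == UNKNOWN:
                    unk += 1
    return tot > 0 and 100.0 * unk / tot >= prc_unk

def merge_points(p_list, grid, radius, prc_unk):
    """Algorithm 1 sketch: keep only unique points lying near the border."""
    uni_pts = []
    for p in p_list:
        if near_border(p[0], p[1], grid, radius, prc_unk) and p not in uni_pts:
            uni_pts.append(p)
    return uni_pts
```

For instance, on a map whose right half is still unknown, a point sitting on the free/unknown boundary passes the test, while a point deep inside the mapped area does not.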
In the specific case, by appropriately setting PER_UNK, point \(P\) will be added to the global list, i.e., considered as a border point, whereas point \(Q\) will be discarded. Even when discarding points that are not on the border, having many agents may lead to extensive lists of points to be processed; to avoid this problem, we bound the number of points to process using Algorithm 3. After the list is created, a further check validates the boundaries of the list dimension, as described in Algorithm 3. If the list has fewer points than the minimum required (MIN_PTS), it is recomputed by decreasing the threshold PER_UNK used in Algorithm 2 by 10%. On the other hand, if the list has more points than the maximum threshold NUM_PTS, the list is reprocessed by increasing the radius RAD of Algorithm 2 by 0.25 m.

### Utility computation

Once the global list of frontier points is created, it is sent to the _assigner_ node of each agent for the computation of the utility function of each frontier candidate, applying Bresenham's line algorithm Bresenham (1976) to get the occupancy values of all the pixels within the straight path from the agent to the frontier location. For computing the entropy, we assign a probability value \(P_{unknown}=0.1\) to the occupancy values of unknown pixels, quantifying low entropy and high information gain (as we are more interested in unknown areas of the environment). Occupancy values of obstacles and free space are mapped to a probability \(P_{known}=0.45\), quantifying high entropy and low information gain. Using these probability values and Equation 1, the path entropy \(E\) of each frontier candidate is computed. The path entropy is then normalized by the number of pixels/cells within the frontier path, \(L\).
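The entropy part of the utility can be sketched as follows. We assume the per-cell binary Shannon entropy (in bits) as the reading of Equation 1 (defined earlier in the paper and not reproduced here), and take \(\rho\) as a precomputed input, since the spanning-tree term \(U_{1}\) comes from the graph-Laplacian machinery of Equation 3:

```python
import math

P_UNKNOWN, P_KNOWN = 0.1, 0.45   # occupancy probabilities from the paper
DECAY = 0.6                      # decay factor lambda

def cell_entropy(p):
    """Binary Shannon entropy of one cell, in bits (our reading of Equation 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def utility_u2(path_cells, dist, rho):
    """Equation 2 sketch: U2 = (1 - E/L) * rho + gamma, gamma = exp(-lambda*dist).
    `path_cells` are the occupancy values (-1 = unknown) returned by
    Bresenham's line between the agent and the frontier."""
    L = len(path_cells)
    E = sum(cell_entropy(P_UNKNOWN if c == -1 else P_KNOWN) for c in path_cells)
    gamma = math.exp(-DECAY * dist)
    return (1 - E / L) * rho + gamma
```

With these numbers, unknown cells contribute a low entropy (about 0.47 bits) and known cells a high one (about 0.99 bits), so paths through unexplored space yield a larger \(U_{2}\), while a larger distance shrinks the \(\gamma\) bonus.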
We apply an exponential decay operator \(\gamma=e^{-\lambda\,dist}\) with decay factor \(\lambda=0.6\) (kept constant, as we assume a static environment) and Euclidean distance \(dist\), to penalize frontiers at large distances. The proposed utility \(U_{2}\), shown in Equation 2, is then computed by weighting the normalized entropy with \(\rho=10^{\beta}\), where \(\beta\) is a factor that depends on the number of spanning trees of the weighted graph Laplacian \(L_{w}\), computed in Equation 3 and adopted from Bender et al. (2015). More explicitly, \(\beta\) is the number of digits before the decimal place of \(U_{1}\), and acts as a balancing factor between the entropy and the number of spanning trees. Eventually, we obtain the proposed utility function \(U\) in Equation 4, which not only provides a good SLAM estimate based on the modern D-Optimality criterion, but also increases the coverage of the unknown map by reducing the frontier path entropy.

\[U_{2}=(1-E/L)*\rho+\gamma \tag{2}\]

\[U_{1}=\text{Spann}(L_{w}) \tag{3}\]

\[\text{Reward}=U=\max(U_{1}+U_{2}) \tag{4}\]

The Reward \(r_{0:N}\) is assigned through the matrix \(R_{m}\) of each agent, as shown in Equation 5, where \(N\) is the number of detected points.

\[R_{m}=\left[\begin{array}{ccc}\text{Reward}&\text{X}&\text{Y}\\ \hline r_{0}&x_{0}&y_{0}\\ r_{1}&x_{1}&y_{1}\\ \vdots&\vdots&\vdots\\ r_{N}&x_{N}&y_{N}\\ \end{array}\right] \tag{5}\]

### Update Rewards and Goal selection

Once each agent has computed its reward matrix, the matrix is sent to the _choose-goals-server_ (hereafter referred to as _server_), which suitably updates the rewards and selects the goal point of each agent. Since the server can manage one reward matrix at a time, requests from the various agents are handled asynchronously with a priority approach: in a system of \(M>1\) agents, if two generic agents \(i\) and \(j\) request the server at the same time, the goal of \(i\) is processed before \(j\)'s whenever \(i<j\).
Moreover, the server also stores the already assigned locations, to avoid reusing them. Indeed, to explore the largest possible area of the environment, the goals of the agents need to be spread out. To foster this sparsity, since there are many agents, we update the reward function using Algorithm 5 and Equations 6-8, where \(d\) is the distance between the chosen goal and all other goal points.

\[K=\frac{\texttt{max reward in matrix}}{\texttt{number of already chosen points}} \tag{6}\]

\[k=\frac{K}{d^{2}} \tag{7}\]

\[R_{new}=R_{old}-k \tag{8}\]

It may be observed (Equation 8) how \(k\) represents a subtractive factor for the elements of the reward matrix, updated whenever a target goal for one agent is selected. Since \(k\) is inversely proportional to the squared distance between the last chosen goal and the considered frontier point (Equation 7), the closer the point is to an already chosen goal, the higher \(k\) will be, decreasing the probability that the point will be chosen as the next goal, and thus achieving the task of spreading the goals in the environment. The complete procedure for the selection of points (goals) is described in Algorithm 4, including the related function that updates the rewards when a goal is selected, described in Algorithm 5.
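Numerically, Equations 6-8 can be sketched as below; the function signature and the marking of coincident points with \(-\infty\) (mirroring Algorithm 5) are our own framing:

```python
def update_rewards(rewards, points, chosen_goals):
    """Sparsity update of Equations 6-8: every remaining candidate loses
    k = K / d^2 per chosen goal, with K = (max reward) / (# chosen goals)."""
    if not chosen_goals:
        return rewards
    K = max(rewards) / len(chosen_goals)          # Equation 6
    updated = []
    for r, (x, y) in zip(rewards, points):
        for gx, gy in chosen_goals:
            d2 = (x - gx) ** 2 + (y - gy) ** 2
            if d2 == 0:
                r = float("-inf")                  # point coincides with a goal
            else:
                r -= K / d2                        # Equations 7-8
        updated.append(r)
    return updated
```

A candidate one cell away from the last goal loses the full \(K\), while one ten cells away loses only \(K/100\), so nearby duplicates drop out of contention first.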
```
Algorithm 4 Select Points
 1: function sel_pn(PRewards, N_goals)
 2:     goals ← [ ]
 3:     Mat ← Equation 5 using PRewards and N_goals
 4:     if Mat ≠ {} then
 5:         p ← Point2D()
 6:         if cho_goals ≠ {} then
 7:             Mat ← upd_rewards(cho_goals, Mat)
 8:         end if
 9:         MaxRew_ID ← index of the maximum reward in Mat
10:         p.x, p.y ← x, y values in Mat at MaxRew_ID
11:         for pt ∈ cho_goals do
12:             if pt.x, pt.y = p.x, p.y then
13:                 Mat[MaxRew_ID, 0] ← −∞
14:                 MaxRew_ID ← index of the maximum reward in Mat
15:                 p.x, p.y ← x, y values in Mat at MaxRew_ID
16:             end if
17:         end for
18:         cho_goals ← cho_goals ∪ {p}
19:         goals ← goals ∪ {p}
20:     end if
21:     return goals
22: end function
```

Algorithm 4 takes as input a list of points along with their rewards, \(PRewards=\{r_{1},x_{1},y_{1},\ldots,r_{N},x_{N},y_{N}\}\) (computed with Equation 5, line 3), and updates the \(R_{m}\) matrix using Algorithm 5 (line 7), which first checks whether a frontier coincides with an already selected goal, discarding the point in that case, and then updates the rewards as described in Equations 6-8. The goal corresponding to the highest updated reward for the current agent is eventually passed back to the agent. The choice of the parameter \(K\) (Equation 6) is mainly motivated by two reasons:

* the term "max reward in matrix" allows scaling \(K\) with respect to the reward matrix of each single agent, normalizing the subtractive factor;
* the term "number of already chosen points" allows distributing the reward update while taking into account the number of already selected points. Indeed, when the number of targets already explored becomes significant, each point only receives a smaller portion of the total reward, resulting in a more limited effect of the subtractive parameter.

As described before, agents are managed asynchronously with a priority approach.
The priority assigned to robots can lead to one or more low-priority robots being stuck, because higher-priority agents are always served first. To avoid this issue, the server also keeps track of the number of requests of each agent that have not been served. Once this number exceeds a predefined threshold, the corresponding agent is assigned the highest priority. This avoids having robots stuck, distributes the goals more uniformly, and is more effective than the synchronous approach presented in our previous work [35].

```
Algorithm 5 Update Rewards
 1: function upd_rewards(cho_goals, Mat)
 2:     MaxRew ← maximum reward in Mat
 3:     K ← Equation 6
 4:     for c ∈ cho_goals and g ∈ Mat do
 5:         if c_{x,y} = g_{x,y} then
 6:             Mat[g] ← −∞
 7:         end if
 8:         if Mat[g] ≠ −∞ then            ▷ compute distance from chosen goals
 9:             d2 ← sqrt((c_x − g_x)^2 + (c_y − g_y)^2)
10:             if d2 ≠ 0 then
11:                 Mat[g] ← Equation 8
12:             else
13:                 Mat[g] ← −∞
14:             end if
15:         end if
16:     end for
17:     return Mat
18: end function
```

## 4 Experimental Evaluation

### Simulation Environment

To evaluate the proposed approach, simulations were performed in the Gazebo simulation environment on a PC with an Intel Core i7 (32 GB RAM) and an NVIDIA RTX 1000 GPU, running Ubuntu 20.04 and ROS Noetic. In practice, a team of RosBot 2 robots equipped with lidar sensors was deployed in modified versions of the Willow Garage (W.G.) environment and of the AWS Hospital environment, with areas of 2071 and 1243 \(m^{2}\), respectively.
Footnote 1: [https://github.com/aws-robotics/aws-robomaker-hospital-world](https://github.com/aws-robotics/aws-robomaker-hospital-world)

Since our proposed approach aims at environment exploration while working on a minimum number of frontier points with efficient AC-SLAM, we use the following performance metrics:

* _percentage of map coverage_, to quantify the evolution of the covered map with respect to the ground-truth map;
* _number of frontier points_, to measure the average reduction of points (corresponding to a decreased computational cost) achieved with the method described in Section 3.2.

Figure 4 shows the resultant occupancy-grid map along with the pose graphs of 3 agents in the AWS Hospital environment, considering 30 minutes of exploration time. We can observe that the proposed method provides an accurate map with 80% coverage while maintaining good SLAM accuracy. Figure 5 shows the resulting individual pose graphs from Open Karto SLAM, also indicating the loop closures performed by the robots, which help in reducing uncertainty. Figure 6(a) shows the results of 4 simulations (S1 to S4) using 2 and 3 robots, with and without the proposed utility function, again for 30 minutes of exploration time, in the Willow Garage environment. In this scenario, we compared our proposed approach (denoted as 'our') with the one using the utility function proposed by [5], as shown in Equation 3 (hereinafter referred to as 'MAGS'). We can observe that with our approach (S2, S4) we manage to get 11% and 8% more area covered compared to MAGS (S1, S3), respectively. This corresponds to additional covered areas of \(228m^{2}\) and \(166m^{2}\). As for the computational complexity, i.e., the reduction of the number of processed frontier points, Figure 6(b) provides an insight into how the number of points used is reduced.
We can deduce that, using the methods in Section 3.2 and setting PER_UNK = 60% and RAD = \(1m\), we manage to drastically reduce the average number of points processed: from 31 to 6 for S1 (81%), from 37 to 7 for S2 (82%), from 58 to 9 for S3 (85%), and from 48 to 2 for S4 (96%), while also exploring the environment more efficiently. This reduction in points is directly related to the computational complexity, as fewer points require computing the utility function less frequently, hence accelerating the overall performance of the proposed system. The same comparison has been carried out in the smaller AWS environment, with a simulation time of 25 minutes. Figure 7 shows the area coverage and points reduction with 'our' (S2, S4) and 'MAGS' (S1, S3) approaches. We can observe that with the proposed approach (S2, S4) we managed to cover a higher percentage (3% - 9%) of the area, corresponding to an increased coverage between \(37m^{2}\) and \(112m^{2}\).

Figure 4: AWS Modified Hospital with 3 robots.

Figure 5: Resulting pose graphs of individual robots.

Since the AWS Hospital environment is much smaller than the Willow Garage one, the map-coverage results are quite high for both MAGS and our approach, but the small increase in coverage confirms the effectiveness of our approach. Also in this case, the usage of the _Filtering and Classification_ method described in Section 3.2 leads to a meaningful reduction in the number of points processed: from 36 to 1 for S1 (98%), from 29 to 4 for S2 (87%), from 43 to 1 for S3 (98%), and from 76 to 0 (no new points detected) for S4. The average values are smaller than those in Figure 6(b) (1.5 vs 6) since the environment is also smaller than before: the system saturates at an average coverage of 80%, after which no further exploration and coverage is required. The above-mentioned simulation results indicate the efficiency of our approach compared to the state-of-the-art methods.
To further evaluate the methodology, we performed tests with a team of ground robots in a real environment, whose results are presented in the following section.

Figure 6: Willow Garage (W.G.) % area coverage and points reduction comparison.

Figure 7: AWS Hospital % area coverage and points reduction comparison.

### Real Environment

Experiments in a real environment were performed using two ROSBot 2R robots with RPLidar A2 (Figure 8(a)), running ROS on Ubuntu 20.04.6 (LTS). The robots are equipped with an Intel Xeon W-2235 CPU at 3.80 GHz x 12, with 64 GB RAM and an Nvidia Quadro RTX 4000 GPU. The environment consists of a room and two corridors measuring \(81m^{2}\) in total, as shown in Figure 8(b). Figure 8(c) shows the resultant occupancy-grid map along with the SLAM pose graphs, using the proposed approach with a team of two robots (red circles).

Footnote 5: [https://husarion.com/manuals/rosbot/](https://husarion.com/manuals/rosbot/).

Figure 9(a) shows the percentage of the map discovered over time. As for the simulation results, we compared the proposed utility function with the 'MAGS' one in eight experiments, considering an exploration time of 2 minutes. It can be observed that the proposed approach achieves a coverage of 98.85%, higher than the coverage achieved with MAGS (94.25%). This implies a higher portion of the map covered with the proposed approach (4.6%) and hence proves the effectiveness of the proposed method. Figure 9(b) compares the number of points processed and detected in the four experiments. We can observe a significant reduction in the number of used points: from 6 to 5 for Exp 1 (17%), from 7 to 3 for Exp 2 (58%), from 3 to 2 for Exp 3 (34%), and from 6 to 2 for Exp 4 (67%), resulting again in a reduction of the computational load. Since the area of the experimental environment is much smaller, the exploration time was limited to only 2 minutes. Still, the increase in map coverage (4.6%) was similar to the one achieved in simulation (3% - 11%).
The maximum number of points detected and processed is smaller than in the simulation results, with an average reduction in the frontier points used equal to 44%.

## 5 Conclusion

In this article, we presented a multi-robot collaborative active SLAM framework for environment exploration tasks. The proposed framework provides a utility function that incorporates the SLAM uncertainty and the path entropy for the efficient selection of goal frontier candidates. We also propose an efficient frontier filtering method that encourages sparsity while working on a reduced number of frontier candidates, hence providing a less computationally expensive solution. The implementation exploits a ROS-based client-server paradigm in a modular software architecture. Through various simulations on publicly available environments and experiments in a real environment, we have proven the usefulness and applicability of our method compared to selected state-of-the-art approaches, managing to achieve a cumulative 31% more coverage. In the future, we plan to extend our approach to visual AC-SLAM using heterogeneous robots, to exploit the visual features and viewpoint changes of the environment.

## Acknowledgement

This work was carried out in the framework of the NExT Senior Talent Chair DeepCoSLAM, funded by the French Government through the program Investments for the Future managed by the National Agency for Research (ANR-16-IDEX-0007), and with the support of Région Pays de la Loire and Nantes Métropole. This research was also supported by the DIONISO project (progetto SCN_00320 - INVITALIA), funded by the Italian Government.
2303.06058
A General Recipe for the Analysis of Randomized Multi-Armed Bandit Algorithms
In this paper we propose a general methodology to derive regret bounds for randomized multi-armed bandit algorithms. It consists in checking a set of sufficient conditions on the sampling probability of each arm and on the family of distributions to prove a logarithmic regret. As a direct application we revisit two famous bandit algorithms, Minimum Empirical Divergence (MED) and Thompson Sampling (TS), under various models for the distributions including single parameter exponential families, Gaussian distributions, bounded distributions, or distributions satisfying some conditions on their moments. In particular, we prove that MED is asymptotically optimal for all these models, but also provide a simple regret analysis of some TS algorithms for which the optimality is already known. We then further illustrate the interest of our approach, by analyzing a new Non-Parametric TS algorithm (h-NPTS), adapted to some families of unbounded reward distributions with a bounded h-moment. This model can for instance capture some non-parametric families of distributions whose variance is upper bounded by a known constant.
Dorian Baudry, Kazuya Suzuki, Junya Honda
2023-03-10T16:43:48Z
http://arxiv.org/abs/2303.06058v2
# A General Recipe for the Analysis of Randomized Multi-Armed Bandit Algorithms

###### Abstract

In this paper we propose a general methodology to derive regret bounds for randomized multi-armed bandit algorithms. It consists in checking a set of sufficient conditions on the _sampling probability_ of each arm and on the family of distributions to prove a logarithmic regret. As a direct application we revisit two famous bandit algorithms, Minimum Empirical Divergence (MED) and Thompson Sampling (TS), under various models for the distributions including single parameter exponential families, Gaussian distributions, bounded distributions, or distributions satisfying some conditions on their moments. In particular, we prove that MED is asymptotically optimal for all these models, but also provide a simple regret analysis of some TS algorithms for which the optimality is already known. We then further illustrate the interest of our approach, by analyzing a new Non-Parametric TS algorithm (\(h\)-NPTS), adapted to some families of unbounded reward distributions with a bounded \(h\)_-moment_. This model can for instance capture some non-parametric families of distributions whose variance is upper bounded by a known constant.

**Keywords:** Multi-Armed Bandits, Thompson Sampling, MED

## 1 Introduction

A Multi-Armed Bandit (MAB) is a problem in which a learner sequentially picks an action among \(K\) alternatives, called arms, and collects a random reward. The rewards collected from an arm \(k\in[K]\) are all drawn independently from a distribution \(F_{k}\), of mean \(\mu_{k}\). At each time step \(t\) the learner chooses an arm \(A_{t}\), adapting her strategy in order to maximize the expected sum of rewards.
For a time horizon \(T\) this is equivalent to minimizing the _regret_, formally defined as

\[\mathcal{R}_{T}=\mathbb{E}\left[\sum_{t=1}^{T}(\mu^{\star}-\mu_{A_{t}})\right]=\sum_{k=1}^{K}\Delta_{k}\mathbb{E}\left[N_{k}(T)\right]\;, \tag{1}\]

where \(N_{k}(T)=\sum_{t=1}^{T}\mathbbm{1}(A_{t}=k)\) is the total number of selections of arm \(k\) up to time \(T\), and \(\Delta_{k}\) is the _sub-optimality gap_ of arm \(k\): \(\Delta_{k}=\mu^{\star}-\mu_{k}\) for \(\mu^{\star}=\max_{j\in\{1,\ldots,K\}}\mu_{j}\). Assuming that \(F_{1},\ldots,F_{K}\) come from the same family of distributions \(\mathcal{F}\), Lai and Robbins (1985) and Burnetas and Katehakis (1996) proved (resp. for single-parametric and general families) that a uniformly efficient bandit algorithm1 satisfies the following lower bound for any sub-optimal arm \(k\):

\[\liminf_{T\to\infty}\frac{\mathbb{E}[N_{k}(T)]}{\log(T)}\geq\frac{1}{\mathcal{K}_{\inf}^{\mathcal{F}}(F_{k},\mu^{\star})}\;,\quad\mathcal{K}_{\inf}^{\mathcal{F}}(F_{k},\mu^{\star})=\inf_{G\in\mathcal{F}}\left\{\mathrm{KL}(F_{k},G):\mathbb{E}_{G}(X)>\mu^{\star}\right\}\;, \tag{2}\]

where \(\mathrm{KL}(.,.)\) denotes the Kullback-Leibler divergence between two distributions. We call an algorithm _asymptotically optimal_ if it admits a regret upper bound _matching_ this lower bound. Furthermore, if \(\mathbb{E}[N_{k}(T)]=\mathcal{O}(\log(T))\) we say that the algorithm achieves a _logarithmic regret_.

Footnote 1: \(\forall(F_{1},\ldots,F_{K})\in\mathcal{F}^{K}\); \(\forall\alpha>0\), \(\mathbb{E}[N_{k}(T)]=o(T^{\alpha})\) for all \(k\) satisfying \(\Delta_{k}>0\).

**Families of distributions.** The lower bound presented in (2) depends on the characteristics of the family of distributions \(\mathcal{F}\), on which assumptions have to be made before designing bandit algorithms. For instance, _Single Parameter Exponential Families_ (SPEF) are a usual parametric model (see Definition 13).
They include several usual distributions such as Bernoulli, Poisson, or Gaussian distributions with known variance. In some cases a multi-parameter exponential family is also considered, such as _Gaussian distributions with unknown variances_. In other cases, non-parametric assumptions may be more suitable. For instance, one can consider the model where the rewards are supported on a _known bounded range_ \([b,B]\), or the model of \(\sigma\)_-sub-Gaussian_ distributions (see e.g. Definition 5.2 in Lattimore and Szepesvari, 2020), which essentially assumes that their tails are no heavier than those of the Gaussian distribution with variance \(\sigma^{2}\). When the distributions are _heavy-tailed_, some _moment condition_ may be assumed (see e.g. Bubeck et al., 2013): \(\mathbb{E}_{\nu}[|X|^{1+\varepsilon}]\leq B\) for known \(\varepsilon>0,B>0\). More recently, Agrawal et al. (2020) introduced a more general assumption: \(\mathbb{E}_{\nu}[h(|X|)]\leq B\) for a known convex function \(h\) satisfying \(x=o(h(|x|))\). For convenience, we call such an assumption an (uncentered) _\(h\)-moment condition_. We further consider its _centered_ version, \(\mathbb{E}_{\nu}[h(|X-\mathbb{E}_{\nu}[X]|)]\leq B\), which we simply call a _centered \(h\)-moment condition_. Some interesting examples would be (a variant of) the \(\sigma\)-sub-Gaussian model and a family of distributions with bounded variance.

**Some bandit algorithms.** There is a vast literature on MABs (see Lattimore and Szepesvari, 2020 for a survey), so the following selection is non-exhaustive and only introduces the main families of asymptotically optimal policies. The most celebrated is certainly _optimism in face of uncertainty_ (Agrawal, 1995; Auer et al., 2002).
The simple UCB1 strategy (Auer et al., 2002) achieves logarithmic regret for bounded-support distributions, while the more sophisticated KL-UCB principle provides optimal algorithms for SPEF (Cappé et al., 2013), bounded distributions (Cappé et al., 2013; Agrawal et al., 2021), and uncentered \(h\)-moment conditions when \(h(x)=x^{1+\varepsilon}\) for \(\varepsilon>0\) (Agrawal et al., 2021). _Thompson Sampling_ (TS) (Thompson, 1933) is a second widely studied class of policies. At each time step, these Bayesian algorithms use an appropriate conjugate prior/posterior to sample the true distribution or its parameter, and choose the best arm in this sampled environment. Several TS algorithms are optimal: with Jeffreys priors for SPEF (Korda et al., 2013), with well-tuned inverse-gamma priors for Gaussian distributions (Honda and Takemura, 2014), and with Dirichlet priors/posteriors for bounded distributions (Riou and Honda, 2020). A third family notably includes several optimal algorithms: _Minimum Empirical Divergence_. MED (Honda and Takemura, 2011) and IMED (Honda and Takemura, 2015) are respectively the randomized and deterministic versions of this principle. The former is optimal for multinomial distributions (Honda and Takemura, 2011), while the latter is optimal for semi-bounded distributions (Honda and Takemura, 2015) and SPEF (Pesquerel et al., 2021). More recently, Bian and Jun (2022) proved a logarithmic regret for a MED algorithm designed for sub-Gaussian distributions, under the name _Maillard Sampling_. Finally, some recent works focused on providing alternative non-parametric algorithms to these three approaches (see e.g. Kveton et al., 2019a,b). It has been proved for instance that some algorithms based on _sub-sampling_ (Baransi et al., 2014; Chan, 2020; Baudry et al., 2020, 2021) are optimal for SPEF or Gaussian distributions without using the knowledge of the family.
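As a concrete illustration of the lower bound (2): for Bernoulli arms (a SPEF), \(\mathcal{K}_{\inf}^{\mathcal{F}}(F_{k},\mu^{\star})\) reduces to the binary KL divergence \(\mathrm{kl}(\mu_{k},\mu^{\star})\), so the asymptotic regret floor can be evaluated in closed form. A small sketch (ours, not from the paper):

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence kl(p, q) = p log(p/q) + (1-p) log((1-p)/(1-q))."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(p, q) + term(1.0 - p, 1.0 - q)

def bernoulli_lower_bound(means, horizon):
    """Asymptotic floor of Equation (2) for the Bernoulli model:
    sum over sub-optimal arms of Delta_k / kl(mu_k, mu*) * log(T)."""
    mu_star = max(means)
    return sum((mu_star - mu) / kl_bernoulli(mu, mu_star) * math.log(horizon)
               for mu in means if mu < mu_star)
```

Note that a smaller gap yields a larger floor: distinguishing \(\mu=0.45\) from \(\mu^{\star}=0.5\) costs more samples than distinguishing \(\mu=0.4\).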
**Outline and contributions.** After comparing the results obtained in the literature, a striking observation is that diverse strategies (KL-UCB, TS, IMED) are all proved to be optimal for almost the same families of distributions (SPEF and bounded distributions in particular). This raises two intriguing questions: _What are the fundamental properties shared by these families of distributions? Given a family of distributions, are all these algorithms different variants of the same exploration strategy?_ In this paper, we try to answer these questions for _randomized_ bandit algorithms. To this end, we first formulate our framework in Section 2, and confirm that it easily captures MED and a computationally efficient variant of TS. We then provide in Section 3 a unified regret analysis for general algorithms depending on upper and lower bounds on the _sampling probability_ of each arm, and exhibit five simple sufficient conditions ensuring a logarithmic (or even optimal) regret for a given algorithm on the family \(\mathcal{F}\). As a direct application, we derive in Section 4 regret guarantees of MED and TS algorithms for various families of distributions. Our result not only covers some families \(\mathcal{F}\) where the optimality of MED has not been proved, but also leads to a simple analysis of TS where the optimality has been known, under a slight change of the algorithm. We also prove that the families with \(h\)-moment conditions satisfy the above sufficient conditions for the optimality, so MED is the first algorithm with the optimal regret bound for the centered case. This setting has a special technical difficulty since the space of the distributions is not compact nor convex, unlike the uncentered case. To demonstrate the strength of our simple analysis, we then propose a variant of TS (\(h\)-NPTS) for the nonparametric models with uncentered or centered \(h\)-moment conditions in Section 5.
We show that its sampling probability can be bounded in a form such that the analysis can be captured within the same framework as MED, which completes the picture that MED and TS can be interpreted as two variants of the same exploration strategy. Furthermore, \(h\)-NPTS is an efficient alternative to MED and the existing optimal algorithm (Agrawal et al., 2020) in the uncentered case, as it does not require any optimization procedure.

## 2 Preliminaries: randomized bandit algorithms

In this section we introduce some notation to describe randomized bandit algorithms. We then detail the MED and TS algorithms considered in this paper.

**Notation.** At each step \(t\), a randomized bandit algorithm \(\pi\) chooses an arm \(A_{t}\) based on past observations \(\mathcal{H}_{t-1}=(A_{1},X_{1},\ldots,A_{t-1},X_{t-1})\), where we denote by \(X_{t}\) the reward collected at time \(t\). We define the _sampling probability_ of each arm \(k\) at time \(t\) as

\[p_{k}^{\pi}(t)\coloneqq\mathbb{P}_{\pi}(A_{t}=k|\mathcal{H}_{t-1})\;.\]

This probability depends on the empirical distribution of each arm \(k\), which we denote by \(F_{k}(t)\). We also sometimes use this notation for the empirical cumulative distribution function (cdf), with a slight abuse of notation. Throughout the paper, we denote by \(\mu_{k}(t)\) the empirical mean of an arm \(k\) at time \(t\), and by \(\mu^{\star}(t)=\max_{k}\mu_{k}(t)\) the best empirical mean. We also use for convenience the notations \(F_{k,n}\) and \(\mu_{k,n}\), denoting respectively the empirical distribution and mean corresponding to the \(n\) first observations collected from an arm \(k\). Finally, we denote by \(\mathcal{P}\) the set of probability distributions on \(\mathbb{R}\). Given these elements, our analysis is based on the following assumption.
**Assumption 2.1** (Upper and lower bound on \(p_{k}^{\pi}(t)\)): _There exist some function \(D_{\pi}:\mathcal{P}\times\mathbb{R}\mapsto\mathbb{R}\) and some **positive** sequences \((c_{n})_{n\in\mathbb{N}},(C_{n})_{n\in\mathbb{N}}\) and a **non-negative** sequence \((a_{n})_{n\in\mathbb{N}}\) such that the bandit algorithm \(\pi\) satisfies for any \(k\) in \(\{1,\ldots,K\}\) that_ \[p_{k}^{\pi}(t)\exp\left(N_{k}(t)\frac{D_{\pi}(F_{k}(t),\mu^{\star}(t))}{1+a_{N_{k}(t)}}\right)\in[c_{N_{k}(t)}^{-1},C_{N_{k}(t)}]. \tag{3}\] This assumption is the very core of our general recipe to analyze randomized policies, but it will be the only assumption directly made on a given policy. Intuitively, a good choice of \(D_{\pi}\) would be as close as possible to \(\mathcal{K}_{\inf}^{\mathcal{F}}\) to obtain an optimal algorithm. The analysis will rely on the properties of \(D_{\pi}\) on the family \(\mathcal{F}\), and we now introduce our first assumption. **Assumption 2.2** (A1): \(D_{\pi}\) _is continuous in its second argument, the mapping \(x\mapsto D_{\pi}(F,x)\) is non-decreasing for any distribution \(F\in\mathcal{P}\), and \(D_{\pi}(F_{k}(t),\mu)=0\) for any \(\mu\geq\mu_{k}(t)\)._ In Section 3 we provide an upper bound on the expected number of pulls of each sub-optimal arm under a policy \(\pi\), assuming only that Assumptions 2.1 and 2.2 hold. Before presenting this result, we motivate Assumption 2.1 by introducing the two policies that are the focus of this paper: Minimum Empirical Divergence (MED) and a slight variant of Thompson Sampling (TS).

**Minimum Empirical Divergence.** The most intuitive way to design an algorithm \(\pi\) satisfying Assumption 2.1 for a given function \(D_{\pi}\) is to directly fix the sampling probability as \[p_{k}^{\pi}(t)\propto\exp\left(-N_{k}(t)\frac{D_{\pi}(F_{k}(t),\mu^{\star}(t))}{1+a_{N_{k}(t)}}\right)\, \tag{4}\] for a given sequence \((a_{n})_{n\in\mathbb{N}}\).
This is exactly the definition of the MED algorithm proposed by Honda and Takemura (2011), which sets \(D_{\pi}\coloneqq\mathcal{K}_{\inf}^{\mathcal{F}}\). More recently, Bian and Jun (2022) proposed Maillard Sampling (MS) for sub-gaussian distributions, following the same principle but with \(D_{\pi}(F_{k}(t),\mu^{\star}(t))=\frac{(\mu^{\star}(t)-\mu_{k}(t))^{2}}{2}\). In the present paper we consider general functions \(D_{\pi}\), but will try to design them as close as possible to \(\mathcal{K}_{\inf}^{\mathcal{F}}\). We keep the name MED for simplicity, but will carefully specify \(D_{\pi}\) in each context. Furthermore, the tuning of \((a_{n})\) will be motivated by our analysis, and will be non-zero only for a few specific cases of unbounded distributions. We can directly verify that Assumption 2.1 holds with \(c_{n}^{-1}=\frac{1}{K}\) and \(C_{n}=1\) for any \(n\in\mathbb{N}\), as under (A1) we have that \(D_{\pi}(F_{k}(t),\mu^{\star})\geq 0\) with equality if \(\mu_{k}(t)=\mu^{\star}(t)\).

**Thompson Sampling.** Usual TS algorithms are _index policies_: a sample \(\widetilde{\mu}_{k}(t)\) is drawn for each arm \(k\), and \(A_{t}=\text{argmax}_{k}\ \widetilde{\mu}_{k}(t)\) is pulled. Unfortunately, showing that Assumption 2.1 is satisfied by such an algorithm is very intricate, as the performance of every arm has to be considered to bound each sampling probability. Furthermore, as discussed in Remark 3 of Riou and Honda (2020) or in Baudry et al. (2021b), such sampling for the best arm increases the computational cost in non-parametric cases and may be avoided by a slight change in the algorithm. We follow this direction since, as we will discuss in Section 5, TS is particularly promising as a computationally efficient version of MED. Assuming that the learner is provided with a TS sampler that can return a sampled mean \(\widetilde{\mu}_{k}(t)\) for each arm given \((F_{k}(t),\mu_{k}(t))_{k\in\{1,\ldots,K\}}\), the proposed variant performs the following steps: 1.
Sample means \(\widetilde{\mu}_{k}(t)\) according to the sampler for each arm satisfying \(\mu_{k}(t)<\mu^{\star}(t)\). 2. Pull an arm at random from \(\mathcal{A}_{t}=\{k:\mu_{k}(t)=\mu^{\star}(t)\text{ or }\widetilde{\mu}_{k}(t)\geq\mu^{\star}(t)\}\). We detail the algorithm in Appendix C.1 (Algorithm 2), along with the usual index policy (Algorithm 3) written with the same notation for an easy comparison. Under this algorithm, we directly obtain that when an arm is empirically sub-optimal its sampling probability satisfies \[p_{k}^{\pi}(t)\in\left[\frac{\mathbb{P}(\widetilde{\mu}_{k}(t)\geq\mu^{\star}(t))}{K},\mathbb{P}(\widetilde{\mu}_{k}(t)\geq\mu^{\star}(t))\right]\,\] and \(p_{k}^{\pi}(t)\in[K^{-1},1]\) when the arm is empirically optimal. Hence, Assumption 2.1 holds iff (3) holds for \(\mathbb{P}(\widetilde{\mu}_{k}(t)\geq\mu^{\star}(t))\) instead of \(p_{k}^{\pi}(t)\). We call such a quantity a _Boundary Crossing Probability_ (BCP). Bounds on the BCP are available in the literature for most existing TS policies, and so the analysis presented in this paper will allow us to revisit their proofs and simplify them under this slight algorithmic change. For example, the following algorithms are covered in our analysis: * TS with a conjugate prior and posterior for SPEF (Korda et al., 2013; Jin et al., 2022), under the assumption that the means belong to a finite range \([\mu_{0}^{-},\mu_{0}^{+}]\). * Gaussian TS with inverse-gamma priors (Honda and Takemura, 2014) for shape parameter satisfying \(\alpha<0\). * Non-Parametric TS (Riou and Honda, 2020) for bounded distributions with a known range. For completeness, we detail their implementation (following Alg. 2) in Appendix C.2 (Algs. 4-6).
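For concreteness, the MED rule (4) and the candidate-set step of the TS variant above can be sketched in a few lines. This is a minimal sketch with our own naming (none of these function names come from the paper), assuming the values \(D_{\pi}(F_{k}(t),\mu^{\star}(t))\) have already been computed for each arm:

```python
import numpy as np

def med_probabilities(N, D, a=None):
    """MED sampling rule (4): p_k proportional to exp(-N_k * D_pi(F_k, mu*) / (1 + a_{N_k}))."""
    N, D = np.asarray(N, dtype=float), np.asarray(D, dtype=float)
    a_n = np.zeros_like(N) if a is None else np.array([a(n) for n in N])
    logits = -N * D / (1.0 + a_n)
    p = np.exp(logits - logits.max())  # shift for numerical stability
    return p / p.sum()

def ts_variant_pull(mu_hat, samplers, rng):
    """Variant TS step: sample a mean only for empirically sub-optimal arms,
    then pull uniformly at random from A_t = {empirically optimal} U {sampled mean >= mu*}."""
    mu_star = max(mu_hat)
    # the 'or' short-circuits, so the sampler is never called for empirically optimal arms
    candidates = [k for k, m in enumerate(mu_hat)
                  if m == mu_star or samplers[k]() >= mu_star]
    return rng.choice(candidates)
```

Note how the variant only queries the samplers of challenger arms, which is exactly the computational saving discussed above.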
The fact that only an upper and lower bound on the BCP is needed to extend the analysis of a MED algorithm to a TS algorithm is the main ingredient of our simplified proofs for TS, and will be particularly helpful to analyze the novel \(h\)-NPTS algorithm introduced in Section 5.

## 3 General recipe for the regret analysis of randomized algorithms

In the previous section we introduced some examples of algorithms covered by the scope of our analysis. We now present our first result, which is a generic upper bound on the expected number of pulls of each sub-optimal arm \(k\) for any randomized policy satisfying Assumptions 2.1 and 2.2. **Theorem 1**: _Let \(\pi\) be a policy satisfying Assumption 2.1 for a function \(D_{\pi}\) satisfying (A1), and with \(\log(C_{n})=o(n^{\alpha})\) for some \(\alpha<1\). We assume w.l.o.g. that \(\mu^{\star}=\mu_{1}\). Then, for any \(c>0\) satisfying \(c>\lim\limits_{n\to\infty}a_{n}\) and for any \(\varepsilon>0\), it holds for any sub-optimal arm \(k\) that_ \[\mathbb{E}[N_{k}(T)]\leq(1+c)\frac{\log(T)}{D_{\pi}(F_{k},\mu_{1})}+B(T,c,\varepsilon)+o(\log(T))\, \tag{5}\] _with_ \[B(T,c,\varepsilon)= \sum_{n=u}^{+\infty}\mathbb{P}\left(D_{\pi}(F_{k,n},\mu_{1}-\varepsilon)\frac{1+c}{1+a_{n}}\leq D_{\pi}(F_{k},\mu_{1})\right)\] \[+ \sum_{n=1}^{T}c_{n}\mathbb{E}\left[e^{n\frac{D_{\pi}(F_{k,n},\mu_{1}-\varepsilon)}{1+a_{n}}}-1\right]+\sum_{n=1}^{T}(c_{n}-1)\mathbb{P}\left(\mu_{1,n}\leq\mu_{1}-\varepsilon\right)\.\] We detail the proof, which is inspired by the analysis of Maillard Sampling (Bian and Jun, 2022), in Appendix B.1. We prove the result under the more general Assumption B.1 (presented in the Appendix) on the sampling probabilities, but we chose to present Assumption 2.1 in the main paper for clarity of presentation. The quantity \(B(T,c,\varepsilon)\) encapsulates the terms in the upper bound that require a careful analysis of the properties of \(D_{\pi}\) on the family of distributions \(\mathcal{F}\).
Furthermore, the additional \(o(\log(T))\) term in (5) is explicitly upper bounded in the proof: its scaling in \(T\) only depends on \((C_{n})\), and is for instance in \(\mathcal{O}\left(\log(\log(T))\right)\) if \((C_{n})\) is polynomial, and constant if it is bounded. Inspired by this result, we now propose a set of sufficient conditions that allow us to upper bound \(B(T,c,\varepsilon)\) by a constant, hence proving a logarithmic regret with Theorem 1. **Assumption 3.1** (Sufficient conditions for logarithmic regret): * _Concentration of \(D_{\pi}\) for sub-optimal arms (A2): for any \(\mu>\mu_{k}\), and \(\delta>0\) small enough,_ \[\sum_{n=1}^{+\infty}\mathbb{P}\left(D_{\pi}(F_{k,n},\mu)\leq D_{\pi}(F_{k},\mu)-\delta\right)=\mathcal{O}(1)\.\] * _Identifiability of the best arm (A3): For any \(\varepsilon>0\), there exists \(\delta_{\varepsilon}>0\) (depending only on \(\mu_{1}\)) such that \(c_{n}=o(e^{n\frac{\delta_{\varepsilon}}{2}})\) and, for any empirical distribution2 \(F_{1,n}\in\mathcal{F}\) satisfying \(\mu_{1,n}\leq\mu_{1}-\varepsilon\),_ \[D_{\pi}(F_{1,n},\mu_{1})-D_{\pi}(F_{1,n},\mu_{1}-\varepsilon)\geq\delta_{\varepsilon}\.\] Footnote 2: In parametric cases we replace \(F_{1,n}\) by the parametric distribution corresponding to its observations. * _Concentration of \(D_{\pi}(F_{1,n},\mu_{1})\) for the optimal arm (A4): \(D_{\pi}\) admits an exponential concentration inequality of the form_ \[\mathbb{P}(D_{\pi}(F_{1,n},\mu_{1})>x)\leq D_{n}e^{-nx},\] _for any \(x\geq x_{0}\), where \(x_{0}>0\) can be arbitrary, and where \((D_{n})\) is polynomial in \(n\)._ * _Concentration of the mean estimator (A5): For \(\varepsilon>0\) small enough,_ \[\sum_{n=1}^{T}c_{n}\mathbb{P}(\mu_{1,n}\leq\mu_{1}-\varepsilon)=\mathcal{O}(1)\.\] We now motivate these assumptions and provide some high-level insights. **(A2)** allows us to upper bound the first term of \(B(T,c,\varepsilon)\) by a constant.
It is slightly weaker than assuming that \(D_{\pi}\) is continuous in the first argument w.r.t. some metric and that \(F_{k,n}\) converges "fast enough" to some neighborhood of \(F_{k}\). Intuitively, (A2) ensures the good convergence of the sampling probability of sub-optimal arms in a regime in which the best arm is well estimated. **(A3)-(A4)** might be the trickiest assumptions to prove for a given family \(\mathcal{F}\), and allow us to upper bound the second term of \(B(T,c,\varepsilon)\). Their combination ensures that arm \(1\) is regularly sampled even if it is performing poorly, so that such a scenario does not harm the regret. **(A5)** directly upper bounds the last term of \(B(T,c,\varepsilon)\). It depends on the scaling of \((c_{n})_{n\in\mathbb{N}}\) and on available upper bounds on \(\mathbb{P}(\mu_{1,n}\leq\mu_{1}-\varepsilon)\). For instance, the Chernoff method leads to an exponentially decreasing bound if \(\nu_{1}\) admits a finite moment-generating function, and Theorem 2 of Fournier and Guillin (2015) gives bounds decreasing as a power of \(n\) if \(\nu_{1}\) admits some finite moment. In Appendix B.3 we also discuss the possible replacement of the empirical mean by a robust estimator for heavy-tailed distributions. We can now establish the main result of this paper, with a detailed proof in Appendix B.2. **Corollary 2**: _Let \(\mathcal{F}\) be a family of distributions and \((F_{1},\ldots,F_{K})\in\mathcal{F}^{K}\) be an MAB. Let \(\pi\) be a randomized policy satisfying Assumption 2.1 for a function \(D_{\pi}\) verifying (A1)-(A5) on \(\mathcal{F}\) and some sequences \((a_{n},c_{n},C_{n})_{n\in\mathbb{N}}\), with \(a_{n}>0\) and \(a_{n}=\mathcal{O}(n^{-1})\). Then, for any \(c>0\) it holds that_ \[\forall k:\Delta_{k}>0,\quad\mathbb{E}[N_{k}(T)]\leq(1+c)\frac{\log(T)}{D_{\pi}(F_{k},\mu^{\star})}+o(\log(T))\.\] _Furthermore, setting \(a_{n}=0\) is sufficient if \(D_{\pi}\) is upper bounded, i.e.
if for any \(\mu\in\mathbb{R}\) there exists a constant \(D_{\mu}^{+}\in\mathbb{R}\) such that for any \(F\in\mathcal{P}\) it holds that \(D_{\pi}(F,\mu)\leq D_{\mu}^{+}\)._ This result is particularly interesting, as it provides a general methodology to derive theoretical guarantees for a given randomized algorithm \(\pi\) on a family of distributions \(\mathcal{F}\) by: 1. Proving that \(\pi\) satisfies Assumption 2.1 for some function \(D_{\pi}\) and a suitable sequence \((a_{n})\). 2. Showing that (A1)-(A5) are satisfied by \(D_{\pi}\) on \(\mathcal{F}\), and concluding with Corollary 2. In the next section we apply this recipe to prove the optimality of several MED and TS algorithms.

## 4 Application to various MED and TS algorithms

We now show the interest of the theoretical background presented in the previous sections by providing some precise examples of families \(\mathcal{F}\) for which all assumptions (A1)-(A5) are satisfied. In each case, we try to make \(D_{\pi}\) as close as possible to the \(\mathcal{K}_{\inf}\) function corresponding to the family of distributions \(\mathcal{F}\). Due to space limitations, we detail the computation of these functions in Appendix A. **Lemma 3** (Families of distributions satisfying (A1)-(A5) for some function \(D_{\pi}\)): _Let \(F\) be a distribution of mean \(\mu_{F}\) and standard deviation \(\sigma_{F}\), and \(\mu\in\mathbb{R}\) be a threshold.
Then, assumptions (A1)-(A5) are satisfied for some function \(D_{\pi}\) if \(\mathcal{F}\) is one of the following families:_ * _Single Parameter Exponential Families_ _with_ \(D_{\pi}(F,\mu)=\mathcal{K}_{\inf}^{\mathcal{F}}(F,\mu)\)_._ * _Bounded distributions_ _with a known upper bound_ \(B\)_, with_ \(D_{\pi}(F,\mu)=\mathcal{K}_{\inf}^{\mathcal{F}}(F,\mu)\)_._ * _Gaussian distributions_3 _with unknown means and variances, with_ \(D_{\pi}(F,\mu)=\mathcal{K}_{\inf}^{\mathcal{F}}(F,\mu)\mathbb{1}\left(F\text{ is continuous}\right)\)_._ * \(h\)**-moment conditions:** _Let_ \(h\) _be a known positive convex function satisfying_ \(\frac{h(|x|)}{|x|^{1+\eta}}\to+\infty\) _for some_ \(\eta>0\)_, and_ \(\mathcal{F}\) _be defined as_ \[\mathcal{F}=\left\{F\in\mathcal{P}:\ \mathbb{E}\left[h(|X|)\right]\leq B\,\ \mathbb{E}[h(|X|)^{2+\varepsilon_{F}}]<+\infty\text{ for some }\ \varepsilon_{F}>0\right\}\,\] (6) _then (A1)-(A5) hold with_ \(D_{\pi}(F,\mu)=\mathcal{K}_{\mathrm{inf}}^{\mathcal{F}}(F,\mu)\mathbb{1}\left(F\in\mathcal{F}\right)\)_. This is also the case if the condition is_ **centered**_, i.e._ \(h(|X|)\) _is replaced by_ \(h(|X-\mu_{F}|)\) _in (_6_), under the additional assumption that the means are lower bounded by a known constant_ \(\mu^{-}\)_._ Lemma 3 answers one of the questions raised in Section 1: there is indeed a set of fundamental properties shared by all the main families of distributions that have been studied in the bandit literature, which may explain why the same algorithmic principles work for all of them. Furthermore, we proved that these assumptions are valid for \(\mathcal{K}_{\mathrm{inf}}^{\mathcal{F}}\) or a function that matches \(\mathcal{K}_{\mathrm{inf}}^{\mathcal{F}}\) once the empirical distribution of an arm is "close" to its true distribution.
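As a small illustration of the last family, membership in the uncentered condition of (6) can be checked empirically on observed rewards. This is a sketch with our own naming; it only tests the first-moment condition \(\mathbb{E}[h(|X|)]\leq B\), since the \(2+\varepsilon_{F}\) condition concerns the true distribution and cannot be verified from a finite sample:

```python
import numpy as np

def h_moment_ok(samples, h, B):
    """Empirical surrogate for the condition E[h(|X|)] <= B in (6)."""
    x = np.abs(np.asarray(samples, dtype=float))
    return bool(np.mean(h(x)) <= B)
```

For instance, with \(h(x)=x^{2}\) this tests an empirical second-moment bound, in the spirit of the bounded-variance example discussed in the text.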
We postpone the proof to Appendix D, where we provide all the technical details and motivate the modifications made to \(\mathcal{K}_{\mathrm{inf}}^{\mathcal{F}}\) in some cases. We use several results from the literature, but also prove novel results of independent interest. For instance, Lemma 14 is (as far as we know) a novel concentration inequality for the \(\mathcal{K}_{\mathrm{inf}}\) of Gaussian distributions with unknown variances. We now show the application of these results for the MED and TS algorithms introduced in Section 2. **Corollary 4** (Theoretical guarantees for various MED and TS algorithms): _Let \(\mathcal{F}\) be one of the families of distributions presented in Lemma 3, where it is associated with a function \(D_{\pi}\). Let us define \(a_{n}=\frac{4}{n}\) if \(\mathcal{F}\) is a SPEF or the set of Gaussian distributions, and \(a_{n}=0\) otherwise. Then, for any \(c>0\) the MED algorithm tuned with \(D_{\pi}\) and \((a_{n})_{n\in\mathbb{N}}\) satisfies_ \[\mathcal{R}_{T}\leq\sum_{k:\Delta_{k}>0}(1+c)\frac{\Delta_{k}}{D_{\pi}(F_{k},\mu^{\star})}\log(T)+o(\log(T)). \tag{7}\] _As \(D_{\pi}(F_{k},\mu^{\star})=\mathcal{K}_{\mathrm{inf}}^{\mathcal{F}}(F_{k},\mu^{\star})\), the algorithm is even_ **asymptotically optimal**_. Furthermore, the regret bound (7) also holds for the following TS algorithms implemented according to Algorithm 2:_ * _TS with conjugate prior/posterior for SPEF, if the means belong to a known range_ \([\mu_{0}^{-},\mu_{0}^{+}]\)_._ * _Gaussian TS with inverse-gamma priors with shape parameter_ \(\alpha<0\)_._ * _Non-Parametric TS for bounded distributions with a known upper bound_ \(B\)_._ We first established that MED is asymptotically optimal for the families of distributions presented in Lemma 3. This may not be very surprising for parametric and bounded settings, but is particularly insightful for the setting with \(h\)-moment conditions.
Indeed, this setting was so far tackled only in the uncentered case and for \(h(|x|)=|x|^{1+\varepsilon}\) for some \(\varepsilon>0\), for the regret minimization problem (Bubeck et al., 2013; Agrawal et al., 2021). We allow more general definitions of \(h\) inspired by Agrawal et al. (2020) and cover the centered case, under the additional assumptions that \(\mathbb{E}_{F}[h(|X|)^{2+\varepsilon}]<+\infty\) for some \(\varepsilon>0\) and that the means admit a known lower bound. Some illustrative examples of families covered by our analysis are distributions with bounded variance (\(\mathbb{E}_{F}[(X-\mu_{F})^{2}]\leq B\)), and an alternative characterization of a sub-gaussian model with the Orlicz condition (\(\mathbb{E}_{F}[\exp(s^{-1}(X-\mu_{F})^{2})]\leq B\) for some \(s,B\)). We detail this second setting in Appendix D.6. To the best of our knowledge, MED is for instance the first algorithm with regret guarantees for the bounded-variance setting, and optimal in the sense of (2) for a sub-gaussian model. The second contribution of Corollary 4 is the novel analysis of some existing TS algorithms. In each case, the additional work compared to MED consists in upper and lower bounding the BCP in a form that matches (3) with the function \(\mathcal{K}_{\inf}^{\mathcal{F}}\). We present these bounds in Appendix C.3-C.5. Furthermore, to support the claim that this analysis is indeed simpler than the existing proofs for TS, we consider in Appendix C.6 a representative example with NPTS, and detail which technical results proved in Riou and Honda (2020) could be avoided with our proof.

**TS as a variant of MED.** Regarding their very similar guarantees and analyses, we can reasonably interpret MED and TS as two variants of the same exploration strategy. In fact, it seems that TS can be seen as a way to approximate MED through sampling of the parameters. In the light of this result, a natural question to ask is whether TS or MED should be preferred in practice.
In our opinion, this depends on two factors: (1) the ability to provide TS with tight bounds on its BCP under the model \(\mathcal{F}\), and (2) how costly it is to compute \(D_{\pi}\) compared to performing the sampling step of TS. For example, for parametric families the function \(D_{\pi}\) is very easy to compute, and so MED may be an interesting option, but for non-parametric families computing \(D_{\pi}\) at each step may be burdensome and TS may be more appealing. This is the main motivation for the novel Non-Parametric TS algorithm that we propose in the next section, for non-parametric families of distributions satisfying an \(h\)-moment condition.

## 5 A simple NPTS algorithm for families with \(h\)-moment conditions

In this section we illustrate the benefits of our approach for the design and analysis of novel bandit algorithms. We consider the family of distributions satisfying a _centered_ \(h\)-moment condition, which was already introduced in previous sections. The definition of this family and the assumptions made on the function \(h\) are detailed in Lemma 3. We choose to study this family of distributions more specifically because it allows us to consider non-parametric assumptions outside the usual bounded and sub-gaussian families. For instance, for any constants \(m>1,B>0\), we can provide algorithms adapted to a family of distributions with bounded moments: \[\mathcal{F}_{m}^{B}=\left\{F\in\mathcal{P}:\ \mathbb{E}_{F}[|X-\mathbb{E}_{F}[X]|^{m}]\leq B\,\ \mathbb{E}_{F}[|X|^{2m+\varepsilon_{F}}]<+\infty\text{ for some }\ \varepsilon_{F}>0\right\}\.\] This model may be relevant in some problems where limited prior knowledge on the distributions of rewards is available. Bubeck et al. (2013) and Agrawal et al. (2021) provided algorithms for the uncentered version of this constraint, but they do not assume that \(\mathbb{E}_{F}[X^{2m+\varepsilon_{F}}]<+\infty\), so their model can consider heavier-tailed distributions than our definition.
We think that providing algorithms for the centered case is a significant advance, even if additional assumptions are needed for the analysis. In Lemma 3 we proved that (A1)-(A5) hold for \(D_{\pi}(F,\mu)=\mathcal{K}_{\inf}^{\mathcal{F}}(F,\mu)\mathds{1}(F\in\mathcal{F})\), and hence that the corresponding MED algorithm (with \(a_{n}=0\)) is asymptotically optimal. However, computing \(D_{\pi}\) requires solving an optimization problem at each time step, which can be computationally expensive. Building on the findings of previous sections, we naturally consider a novel Thompson Sampling algorithm for families \(\mathcal{F}\) satisfying a centered \(h\)-moment condition, as a computationally efficient version of MED. While we present all of our results and algorithms in the centered case for simplicity, their adaptation to the uncentered case is straightforward.

**Non-Parametric TS.** Our idea is to build on NPTS (Riou and Honda, 2020), as the \(\mathcal{K}_{\inf}^{\mathcal{F}}\) for bounded distributions and for families based on \(h\)-moment conditions are similarly expressed as optimization problems. Considering some data \(X_{1},\ldots,X_{n}\) and an upper bound \(B\), a sampling step of NPTS returns a re-weighted mean of the form \[\sum_{i=1}^{n}w_{i}X_{i}+w_{n+1}B\,\text{ with }(w_{1},\ldots,w_{n+1})\sim\mathcal{D}_{n+1}\,\] where \(\mathcal{D}_{n+1}\) is the Dirichlet distribution \(\mathcal{D}_{n+1}\coloneqq\text{Dir}(1,\ldots,1)\). To adapt this principle, we need to tackle two questions. First, we need to replace the upper bound \(B\) by another "exploration bonus", as the role of \(B\) is to ensure that arm \(1\) has a reasonable chance to be sampled even if its first draws are bad. Then, we need to introduce the \(h\)-moment condition in the algorithm.

**\(h\)-NPTS.** We propose the \(h\)-NPTS algorithm, combining the structure of Algorithm 2 and a sampling step inspired by NPTS.
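The NPTS sampling step recalled above is short enough to state in code. This is a sketch with our own naming, assuming bounded support with a known upper bound \(B\):

```python
import numpy as np

def npts_sample(X, B, rng):
    """One NPTS draw (Riou and Honda, 2020): Dirichlet-reweighted mean of the
    n observations plus the exploration bonus B."""
    w = rng.dirichlet(np.ones(len(X) + 1))  # (w_1, ..., w_{n+1}) ~ Dir(1, ..., 1)
    return float(np.dot(w[:-1], X) + w[-1] * B)
```

Since the draw is a convex combination of the observations and \(B\), it always lies between \(\min_i X_i\) and \(B\) when the data are below \(B\), which is how the bonus gives a poorly performing arm a chance to reach the empirical best mean.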
At each time step, we first build a set \(\mathcal{A}\) of candidate arms containing the currently optimal arms and the arms whose empirical distributions do not yet belong to \(\mathcal{F}\). Then, "challenger" arms (currently sub-optimal arms) are only compared to the best empirical mean, which we denote by \(\widehat{\mu}^{\star}\). As in NPTS, we draw some weights \((w_{1},\ldots,w_{n+1})\sim\mathcal{D}_{n+1}\). Then, we check if there exists an exploration bonus \(x\in\mathbb{R}\) such that the "re-weighted" empirical distribution over \((X_{1},X_{2},\ldots,X_{n},x)\) belongs to the family \(\mathcal{F}\) with expectation at least \(\widehat{\mu}^{\star}\), which is expressed as \[\exists x\geq\widehat{\mu}^{\star}:\ \sum_{i=1}^{n}w_{i}X_{i}+w_{n+1}x\geq\widehat{\mu}^{\star}\,\text{ and }\sum_{i=1}^{n}w_{i}h(|X_{i}-\widehat{\mu}^{\star}|)+w_{n+1}(h(|x-\widehat{\mu}^{\star}|)-\gamma)\leq B\, \tag{8}\] where \(\gamma>0\) is a parameter of the algorithm, introduced for technical reasons, that slightly favors the exploration bonus. If (8) holds for an arm \(k\), then it is added to \(\mathcal{A}\). Interestingly, this condition can be checked with a closed formula, so no optimization is needed. To be more specific (see Appendix E.5 for details), (8) is equivalent to \[h\left(\left(\frac{1}{w_{n+1}}\sum_{i=1}^{n}w_{i}(\widehat{\mu}^{\star}-X_{i})\right)^{+}\right)\leq B+\gamma+\frac{1}{w_{n+1}}\sum_{i=1}^{n}w_{i}(B-h(|X_{i}-\widehat{\mu}^{\star}|))\, \tag{9}\] where \((x)^{+}=\max\{x,0\}\). This check is in fact relatively easy to implement, which is another advantage of the new algorithm structure that we introduced. On the other hand, working with (8) will be more convenient in the theoretical analysis. We provide the detailed \(h\)-NPTS in Algorithm 1, which is arguably much simpler to implement than other optimal algorithms for this setting.

### Analysis

Thanks to Lemma 3, we only need to study the sampling probabilities of \(h\)-NPTS.
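Before the analysis, a single Dirichlet draw together with the closed-form check (9) for a challenger arm can be sketched as follows (function and argument names are ours, not from Algorithm 1):

```python
import numpy as np

def hnpts_accepts(X, mu_star, h, B, gamma, rng):
    """One h-NPTS step for a challenger arm: draw Dirichlet weights over the n
    observations plus the exploration bonus, then test condition (9)."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    w = rng.dirichlet(np.ones(n + 1))  # (w_1, ..., w_{n+1})
    # left-hand side of (9): h applied to the positive part of the reweighted gap
    lhs = h(max(0.0, np.dot(w[:n], mu_star - X) / w[n]))
    # right-hand side of (9)
    rhs = B + gamma + np.dot(w[:n], B - h(np.abs(X - mu_star))) / w[n]
    return bool(lhs <= rhs)  # True: the arm joins the candidate set A
```

As the text notes, no optimization is needed: the exploration bonus \(x\) of (8) never has to be computed explicitly.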
To do that, for any parameters \(\gamma>0,\eta>0\) we define the following function \[\Lambda_{\eta,\gamma}^{\star}(F,\mu)=\max_{(\alpha,\beta)\in\mathcal{R}_{2}^{\eta,\gamma}}\mathbb{E}_{F}\left[\log\left(1-\alpha(X-\mu)-\beta(B-h(|X-\mu|))\right)\right]\,\] where \(\mathcal{R}_{2}^{\eta,\gamma}=\{(\alpha,\beta)\in(\mathbb{R}^{+})^{2}\ :\ \forall x\in\mathbb{R}\,\ 1-\alpha(x+\eta-\mu)-\beta(B+\gamma-h(|x-\mu|))\geq 0\}\). When \(\eta=0\) we use the notation \(\mathcal{R}_{2}^{\gamma}\) and \(\Lambda_{\gamma}^{\star}\). We prove in Appendix E.6 that the sampling probabilities of \(h\)-NPTS satisfy Assumption B.1, with \(D_{\pi}:(F,\mu)\mapsto\Lambda_{\gamma}^{\star}(F,\mu)\mathbb{1}\left(F\in\mathcal{F}\right)\), where \(\Lambda_{\gamma}^{\star}\) matches \(\mathcal{K}_{\inf}^{\mathcal{F}}\) for \(\gamma=0\). The proof is based on the following upper and lower bounds on the BCP. **Lemma 5** (Bounds on the BCP): _Let \(X_{1},\ldots,X_{n}\) be i.i.d. observations of empirical cdf \(F_{n}\), and consider any threshold \(\mu^{\star}>\mathbb{E}_{F_{n}}[X]\). Using the notation \(Z_{i}=h(|X_{i}-\mu^{\star}|)\) for \(i\in[n]\) and \(Z_{n+1}=h(|X_{n+1}-\mu^{\star}|)-\gamma\), the Boundary Crossing Probability is defined by_ \[\text{[BCP]}\coloneqq\mathbb{P}\left(\exists X_{n+1}\geq\mu^{\star}:\;\sum_{i=1}^{n+1}w_{i}X_{i}\geq\mu^{\star},\sum_{i=1}^{n+1}w_{i}Z_{i}\leq B\right)\;.\] _First, there exists a mapping \(C\) such that for any \(\eta>0\) and any \(n\in\mathbb{N}\)_ \[\text{[BCP]}\leq C(F_{n},\eta)\times e^{-n\Lambda^{\star}_{\eta,\gamma}(F_{n},\mu^{\star})}\;, \tag{10}\] _where \(C\) is continuous w.r.t. the Wasserstein metric in \(F_{n}\), is continuous in \(\eta\), and scales in \(\eta^{-2}\).
Furthermore, for any \(\delta\geq 0\), there exist two constants \(n_{\delta,\gamma}\in\mathbb{N}\) and \(c_{\delta}>0\) such that_ \[\text{[BCP]}\geq\left\{\begin{array}{ll}c_{\delta}\times e^{-n\left(\Lambda^{\star}_{\gamma}(F_{n},\mu^{\star})+\delta\right)}&\text{ if }n\geq n_{\delta,\gamma}\;,\\ e^{-n\log(nC_{B,\mu})}&\text{ if }n\leq n_{\delta,\gamma},&\text{ with }C_{B,\mu}=\max\left\{\frac{3h^{-1}(B)-\mu}{h^{-1}(B)-\mu},\frac{B}{B-h\left(\frac{h^{-1}(B)-\mu}{2}\right)}\right\}\;.\end{array}\right.\] The detailed proofs and constants for the upper and lower bounds are presented respectively in Appendix E.1 and E.3. These two results are novel, and of independent interest. The mapping \(C\) in (10) does not have a simple expression, but this is not detrimental to our analysis. Indeed, we only use it in a part of the proof where the empirical distribution \(F_{k,n}\) of an arm \(k\) is close to its true distribution, so we essentially consider the problem-dependent constant \(C(F_{k},\eta)\). Proving these results required (especially for the lower bounds) proof techniques that are novel w.r.t. existing works on Dirichlet sampling in bandits (Riou and Honda, 2020; Baudry et al., 2021; Tiapkin et al., 2022). As an intermediate step we proved a novel lower bound for the bounded case in Appendix E.2 (Lemma 18), which we can thus compare with the results provided in these works. First, Lemma 15 from Riou and Honda (2020) lower bounds the BCP for any \(\delta>0,n\geq 1\) by \(\Omega(n^{-C(\delta)}\exp(-n(x+\delta)))\) (for some function \(C\)), with bonus \(B\) (see details in Appendix C.5). Tiapkin et al. (2022) obtained a tighter result, with a lower bound of the form \(n^{-3/2}e^{-nx}\) (without \(\delta\) in the exponent), but at the cost of both increasing the upper bound \(B\) to \(2B\) and the expectation of its weight to \(\Omega(\log(n))\).
Hence, our result is tighter than the first bound, and does not require changing the expected weight of the exploration bonus as for the second bound.

**Proof ideas.** We upper bound the BCP by partitioning the possible values for \(X_{n+1}\) as \((x_{j})_{j\in\mathbb{N}}=(\mu+j\eta)_{j\in\mathbb{N}}\) and using a union bound. Then, we upper bound each term using the Chernoff method, and denoting by \((\alpha_{n}^{\star},\beta_{n}^{\star})\) the optimizers of \(\Lambda_{\eta,\gamma}^{\star}(F_{n},\mu)\) we obtain that \[[\text{BCP}]\leq\sum_{j=1}^{+\infty}\frac{e^{-n\Lambda_{\eta,\gamma}^{\star}(F_{n},\mu^{\star})}}{1-\alpha_{n}^{\star}(x_{j}+\eta-\mu)-\beta_{n}^{\star}(B+\gamma-h(|x_{j}-\mu|))}\.\] At this step, we can remark that once \(x_{j}\) is large enough the denominator is increasing and of order \(h(|x_{j}-\mu|)\). Hence, only a finite number of terms are actually significant and the sum converges. For the lower bound, we select a proper value for \(X_{n+1}\), use exponential tilting to get the term in \(e^{-n\Lambda_{\alpha,\beta}(F_{n},\mu^{\star})}\) from a change of distribution, and then analyze the probability that (8) holds under the new (more favorable) distribution of the weights. The parameter \(\gamma>0\) ensures that even in the least favorable cases this probability is larger than a constant when \(n\) is large enough, which concludes the proof. The result may still hold for \(\gamma=0\), but proving it would require additional technicalities. Finally, for \(n\leq n_{\delta,\gamma}\) our lower bound is simply \(\mathbb{P}\left(w_{n+1}\geq 1-\frac{1}{C_{B,\mu}n}\right)\). We now state the main result of this section, which is a regret upper bound for \(h\)-NPTS.
**Theorem 6** (Logarithmic regret for \(h\)-NPTS): _If \(\mathcal{F}\) is defined by a centered \(h\)-moment condition that satisfies the assumptions of Lemma 3, then for any \(c>0\), \(\gamma>0\) and for any sub-optimal arm \(k\), \(h\)-NPTS satisfies_ \[\mathbb{E}[N_{k}(T)]\leq\frac{1+c}{\Lambda_{\gamma}^{\star}(F_{k},\mu^{\star})}\log(T)+o(\log(T))\.\] We detail the proof in Appendix E.6, where we show that Lemma 5 gives sufficient bounds on the sampling probability of each arm in \(h\)-NPTS, and then use Corollary 2. As \(\gamma>0\), the algorithm may not be asymptotically optimal, since \(\mathcal{R}_{2}^{\gamma}\subset\mathcal{R}_{2}^{0}\). However, this is the case only for problems for which arm \(k\) is "far" from being optimal, so that the constant before the logarithm is small: if for some distribution \(F\) and \(\mu\in\mathbb{R}\) the optimizers in \(\mathcal{R}_{2}^{0}\) belong to \(\mathcal{R}_{2}^{\gamma}\), then \(\Lambda_{\gamma}^{\star}(F,\mu)=\Lambda_{0}^{\star}(F,\mu)\). Theorem 6 completes the picture that \(h\)-NPTS is an easy-to-implement, computationally efficient, and theoretically sound alternative to MED for distributions satisfying a centered \(h\)-moment condition under the assumptions of Lemma 3. In our opinion, our methodology to derive and analyze this algorithm shows another benefit of the general insights provided in this paper: we first tried to find the best function \(D_{\pi}\) for which MED would work for this family, and then provided a TS algorithm in order to approach this MED strategy with sampling. **Remark 7** (Further generalization of \(h\)-NPTS): _As stated above, the algorithm and its analysis are first easily translated to the uncentered case.
Furthermore, we think that the same proof techniques as for Lemma 5 may be used with only minor changes if several linearly independent conditions with functions \(h_{1},\ldots,h_{m}\) and constants \(B_{1},\ldots,B_{m}\) were considered, for \(m\in\mathbb{N}\)._

## 6 Conclusion and perspectives

In this paper we provided a general recipe to derive regret bounds for randomized bandit algorithms, which allowed us to revisit the Minimum Empirical Divergence (MED) and Thompson Sampling (TS) algorithms for several families of distributions. Guided by our theoretical results, we suggest that TS may be interpreted as a way to approximate MED through sampling, which is appealing when the sampling probabilities of MED are hard to compute. Driven by these new insights, we could study in more detail some families of distributions satisfying \(h\)-moment conditions, for example distributions with a known variance upper bound. We proved the optimality of a MED algorithm in this setting, and proposed the more computationally efficient \(h\)-NPTS. While its analysis is intricate, \(h\)-NPTS is very simple to implement and has (close to optimal) logarithmic regret, making it an appealing solution under such a model. An interesting research direction may be to find an equivalent unified proof for deterministic algorithms, which would for instance allow us to analyze KL-UCB and IMED under the same framework. In particular, it would be interesting to identify whether the same properties (A1)-(A5) are needed, and how Assumption 2.1 translates to concentration inequalities in the deterministic case.

## Acknowledgments

We acknowledge the Inria-Japan associate team RELIANT for the funding of Dorian Baudry's visit to Kyoto University.
2310.16373
Pauli resonance states in light nuclei: how they appear and how they can be eliminated
A systematic analysis of the parameters and properties of Pauli resonance states is performed for the light nuclei $^{6}$Li, $^{7}$Li, $^{8}$Be, $^{9}$Be and $^{10}$B, which are treated as two-cluster systems. Pauli resonance states are redundant solutions of the resonating group method that appear when one tries to use a more advanced description of the internal structure of the interacting clusters. Our calculations are performed in the standard and advanced versions of the resonating group method. The standard version employs wave functions of the many-particle shell model to describe the internal motion of nucleons within each cluster. The advanced version is based on the three-cluster resonating group method. As in the standard version, the internal wave functions of the three clusters are approximated by wave functions of the many-particle shell model. However, in the advanced version one pair of clusters forms a bound state, and the third cluster is considered to interact with this bound state. It is found that the Pauli resonance states in the nuclei under consideration have energies between 11 and 46 MeV, and their widths vary from 8 keV to 6.7 MeV. Analysis of the wave functions of the Pauli resonance states and of the matrix elements of the norm kernel allowed us to formulate an effective method for eliminating Pauli resonance states. It is demonstrated that this method effectively eliminates all of the detected Pauli resonance states.
N. Kalzhigitov, V. S. Vasilevsky
2023-10-25T05:22:05Z
http://arxiv.org/abs/2310.16373v3
# Pauli resonance states in light nuclei: how they appear and how they can be eliminated

###### Abstract

A systematic analysis of the parameters and properties of Pauli resonance states is performed for the light nuclei \({}^{6}\)Li, \({}^{7}\)Li, \({}^{8}\)Be, \({}^{9}\)Be and \({}^{10}\)B, which are treated as two-cluster systems. Pauli resonance states are redundant solutions of the resonating group method that appear when one tries to use a more advanced description of the internal structure of the interacting clusters. Our calculations are performed in the standard and advanced versions of the resonating group method. The standard version employs wave functions of the many-particle shell model to describe the internal motion of nucleons within each cluster. The advanced version is based on the three-cluster resonating group method. As in the standard version, the internal wave functions of the three clusters are approximated by wave functions of the many-particle shell model. However, in the advanced version one pair of clusters forms a bound state, and the third cluster is considered to interact with this bound state. It is found that the Pauli resonance states in the nuclei under consideration have energies between 11 and 46 MeV, and their widths vary from 8 keV to 6.7 MeV. Analysis of the wave functions of the Pauli resonance states and of the matrix elements of the norm kernel allowed us to formulate an effective method for eliminating Pauli resonance states. It is demonstrated that this method effectively eliminates all of the detected Pauli resonance states.

## I Introduction

We are going to study the properties of so-called Pauli resonance states, which have been observed numerous times in Refs. [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12] and many other works. These resonance states appear within the resonating group method (RGM) when one tries to use a more realistic description of the interacting nuclei (clusters). They have been considered as redundant solutions of the equations of the resonating group method.
As the Pauli resonance states do not appear in all realizations (versions) of the resonating group method, we start with a short classification of the main versions of the RGM that are relevant to the subject of the present paper. The main difference between these versions lies in the form of the wave function used to approximate the cluster structure of a compound nucleus. The standard version of the RGM suggests the following form of the wave function of a two-cluster system of \(A\) nucleons for the partition \(A=A_{1}+A_{2}\) \[\Psi\left(A\right)=\widehat{\mathcal{A}}\left\{\Phi_{1}\left(A_{1},b\right)\Phi_{2}\left(A_{2},b\right)\psi\left(\mathbf{x}\right)\right\}, \tag{1}\] where \(\mathbf{x}\) is the distance between the centers of mass of the clusters, \(\psi\left(\mathbf{x}\right)\) is the wave function of the relative motion of the clusters, and \(\Phi_{1}\left(A_{1},b\right)\) and \(\Phi_{2}\left(A_{2},b\right)\) are wave functions of the many-particle shell model describing the motion of nucleons within the first and second cluster, respectively. They are antisymmetric and translationally invariant. The oscillator length \(b\) determines the effective size of the clusters. An important component of Eq. (1) is the antisymmetrization operator \(\widehat{\mathcal{A}}\), which makes the wave function of the compound system antisymmetric. For the sake of brevity, we omit all quantum numbers; they will be indicated explicitly in Sec. II. In the second version, which we call the improved version, the wave function is chosen in the form \[\Psi\left(A\right)=\widehat{\mathcal{A}}\left\{\Phi_{1}\left(A_{1},b_{1}\right)\Phi_{2}\left(A_{2},b_{2}\right)\psi\left(\mathbf{x}\right)\right\}, \tag{2}\] where different oscillator lengths \(b_{1}\) and \(b_{2}\) are used to improve the description of the internal structure of each cluster. This version is suitable for clusters with a large difference in mass, i.e., when, for example, \(A_{1}\gg A_{2}\).
The third version, called the advanced version of the RGM, relies on an advanced description of the internal structure of one \[\Psi\left(A\right)=\widehat{\mathcal{A}}\left\{\Phi_{1}\left(A_{1},b_{1}\right)\Psi_{2}\left(A_{2},b\right)\psi\left(\mathbf{x}\right)\right\}, \tag{3}\] or two clusters \[\Psi\left(A\right)=\widehat{\mathcal{A}}\left\{\Psi_{1}\left(A_{1},b\right)\Psi_{2}\left(A_{2},b\right)\psi\left(\mathbf{x}\right)\right\}. \tag{4}\] Contrary to the wave function \(\Phi_{\alpha}\left(A_{\alpha},b\right)\) (\(\alpha\)=1,2), the wave function \(\Psi_{\alpha}\left(A_{\alpha},b\right)\) is a solution of a two-cluster Schrodinger equation with the clusterization \(A_{\alpha}=A_{\alpha 1}+A_{\alpha 2}\) and is presented in a form similar to (1). This version of the RGM provides a more correct description of the compound system and is appropriate when one or both of the clusters \(A_{1}\) and \(A_{2}\) have an evident two-cluster structure or, in other words, have weakly bound state(s) and thus can easily be split into two fragments. Many light nuclei, such as \(d\), \({}^{6}\)Li, \({}^{7}\)Li, \({}^{7}\)Be, have this property, as their separation energies are less than 3 MeV. The Pauli resonance states have not been seen in the standard version of the RGM; only shape resonance states were detected within this version in a single-channel approximation. As is well known, shape resonances are created by centrifugal and/or Coulomb barriers, and thus they lie relatively close to the threshold of the corresponding channel. The Pauli resonance states have been detected in the improved and advanced versions. The most spectacular demonstration of the Pauli resonance states was presented in Refs. [9; 10], where elastic scattering of alpha particles on \({}^{16}\)O was calculated within the standard and improved versions.
A set of narrow and wide resonance states emerged when different oscillator lengths (frequencies) were used for the wave functions describing the internal structure of the \({}^{16}\)O and \({}^{4}\)He nuclei. These resonances spread over a wide energy range, from small to relatively high energies above the \({}^{16}\)O+\({}^{4}\)He threshold. In light nuclei, within the advanced version of the RGM [1; 2; 3; 4; 6; 7; 8; 11], the Pauli resonance states have been observed in a relatively high energy region, \(E>\)15 MeV. It was also noticed in Ref. [2] that the Pauli resonance states manifest themselves in states with small values of the total orbital momentum \(L\). Although different authors have used different names for this type of resonance, such as "positive energy bound states" [1], "redundant" [5] or "spurious states" [7], it is widely recognized that the correctly treated Pauli principle is the origin of these states. Let us recall the main types of resonance states observed in many-particle and, in particular, nuclear systems. The first type is shape resonance states, which are created by centrifugal and/or Coulomb barriers. The second type is represented by the Feshbach resonance states; these resonance states appear due to a weak coupling between open and closed channels. There are two necessary conditions for creating Feshbach resonances: a compound system should have at least two channels with different threshold energies, and there should be at least one bound state in the channel with the larger threshold energy, provided that this channel is considered separately from the channel with the lowest threshold energy. The phenomenon called the Pauli resonance state cannot be explained by either of these two main mechanisms of resonance formation and thus cannot be attributed to the first or second type of resonance. It cannot be a Feshbach resonance, as such resonance states are observed in single-channel cases.
It is also impossible to relate the Pauli resonances to a centrifugal or Coulomb barrier, as they appear in states with zero or very small angular momenta, or would require an implausibly huge barrier. The Pauli resonance states have been considered as redundant solutions of the RGM equations, and thus one needs an algorithm to eliminate these states, since they distort real physical quantities, such as phase shifts, cross sections of different processes and so on. To our knowledge, there is only one algorithm for eliminating the Pauli resonance states, formulated in Ref. [12] and applied to the \(\alpha\)+\({}^{16}\)O system. We refer to this method as the REV method, which stands for Removing of EigenValues of the norm kernel. It was suggested in Ref. [12] to omit the so-called almost forbidden Pauli states, and a criterion was formulated for distinguishing such states from allowed states. This algorithm eliminated all Pauli resonance states from the elastic scattering of alpha particles on \({}^{16}\)O. In the present paper we examine continuous-spectrum states of a set of light nuclei: \({}^{6}\)Li, \({}^{7}\)Li, \({}^{7}\)Be, \({}^{8}\)Be, \({}^{9}\)Be and \({}^{10}\)B. All these nuclei are considered as three-cluster configurations and treated within the three-cluster model formulated in Ref. [13]. The three-cluster configuration is then reduced to three (if all three clusters are different) or two (if two of the three clusters are identical) binary channels. With such a reduction, one pair of clusters forms a bound state, which within our method is described in a two-cluster approximation. To study Pauli resonance states we will first of all analyze the overlap matrix and its eigenvalues for a set of light nuclei. Based on this analysis, we suggest an alternative method for eliminating the Pauli resonance states. We call this new method the ROF method, which means Removing of Oscillator Functions.
We will demonstrate that both methods give close results and completely eliminate all Pauli resonance states. The structure of the present paper is the following. In Section II we give a brief introduction to the methods applied to study the properties of the Pauli resonances in light nuclei. In Section III the choice of input parameters and the details of the calculations are discussed. The manifestations of the Pauli resonance states in various two-cluster systems are demonstrated in Section III.1, where the analysis of the parameters of the Pauli resonance states and their wave functions is carried out. Then, in Section III.5, we analyze the behavior of the matrix elements of the norm kernel and the eigenvalues of this matrix. In Section IV we briefly explain the main ideas for eliminating the Pauli resonance states suggested in Ref. [12]; here we also demonstrate its efficiency. In Section V we formulate an alternative method for eliminating the Pauli resonance states and demonstrate how it works in the two-cluster systems under consideration. Concluding remarks are presented in Section VI.

## II Method

In this paper we will use two types of two-cluster functions and thus two realizations of the RGM. The first type of functions realizes the standard form of the resonating group method, and the second type realizes the advanced form of the RGM.
Wave functions of the first type for the partition \(A=A_{1}+A_{2}\) are \[\Psi_{E,J}\left(A\right)=\widehat{\mathcal{A}}\left\{\left\{\left[\Phi_{1}\left(A_{1},L_{1},S_{1},b\right)\Phi_{2}\left(A_{2},L_{2},S_{2},b\right)\right]_{S}\psi_{E,l,L,J}\left(x\right)Y_{l}\left(\widehat{\mathbf{x}}\right)\right\}_{L}\right\}_{J} \tag{5}\] and wave functions of the second type for the partition \(A=A_{1}+A_{2}=A_{1}+\left(A_{21}+A_{22}\right)\) are \[\Psi_{E,J}\left(A\right)=\widehat{\mathcal{A}}\left\{\left\{\left[\Phi_{1}\left(A_{1},L_{1},S_{1},b\right)\Psi_{2}\left(A_{2},L_{2},S_{2},b\right)\right]_{S}\psi_{E,l,J}\left(x\right)Y_{l}\left(\widehat{\mathbf{x}}\right)\right\}_{L}\right\}_{J}, \tag{6}\] where \(\Psi_{2}\left(A_{2},S_{2},L_{2},b\right)\) is the wave function of a bound state of the two-cluster subsystem with the partition \(\left(A_{21}+A_{22}\right)\) \[\Psi_{2}\left(A_{2},L_{2},S_{2},b\right)=\widehat{\mathcal{A}}\left\{\left[\Phi_{1}\left(A_{21},S_{21},b\right)\Phi_{2}\left(A_{22},S_{22},b\right)\right]_{S_{2}}g_{\mathcal{E},\lambda,J}\left(y\right)Y_{\lambda}\left(\widehat{\mathbf{y}}\right)\right\}_{J}. \tag{7}\] Recall that in this paper we use the capital letter \(\Phi\) to denote wave functions that are not solutions of the corresponding Schrodinger equation; they are wave functions of the many-particle shell model. These functions can be constructed as Slater determinants from single-particle oscillator orbitals. The capital and small letters \(\Psi\) and \(\psi\) represent solutions of the many-particle Schrodinger equation or of the corresponding integro-differential Wheeler equation [14; 15]. One can see that we use the \(LS\) coupling scheme, in which the total spin \(S\) is the vector sum of the spins of the clusters, and the total orbital momentum \(L\) is the vector sum of the orbital momenta \(L_{1}\) and \(L_{2}\) of the two clusters and the orbital momentum \(l\) of their relative motion. In the present paper we consider a special case of the advanced version of the RGM.
The use of this special case is justified by employing a three-cluster model for investigating a compound nucleus and a set of reactions proceeding through that nucleus. In this special case only one of the two functions \(\Psi_{1}\) and \(\Psi_{2}\) describing the internal motion of nucleons is a solution of a two-cluster Schrodinger equation, while the other function is a many-body shell-model wave function. A four-cluster model would allow one to consider the general case, with both wave functions \(\Psi_{1}\) and \(\Psi_{2}\) being solutions of two-cluster Schrodinger equations. To realize the advanced model, we employ the three-cluster model formulated in Refs. [13; 16]. Within this model a three-cluster configuration is transformed into a set of binary channels, i.e., into several pairs of interacting nuclei, one of which is considered as a two-cluster system. In Refs. [13; 16] the model was applied to study the nuclei \({}^{7}\)Be and \({}^{7}\)Li with the three-cluster configurations \(\alpha+d+p\) and \(\alpha+d+n\), respectively. The structure of the \({}^{10}\)B nucleus was investigated in Ref. [17] by employing the three-cluster configuration \(\alpha+\alpha+d\). Recently, a model involving the two three-cluster configurations \(\alpha+p+n\) and \(t+d+p\) was used in Ref. [18] to study resonance states of \({}^{6}\)Li in a wide energy range. The model involves a Gaussian basis to determine the bound-state wave functions of the two-cluster subsystems and an oscillator basis to describe the scattering of the third cluster on the bound state of the two-cluster subsystem. The abbreviation AMGOB is used to distinguish this model.
In the AMGOB, the two-cluster (7) and three-cluster (6) wave functions are represented as \[\Psi_{2}\left(A_{2},S_{2},L_{2},b\right)=\sum_{\nu=1}^{N_{G}}D_{\nu}^{E,L,J}\mathcal{\tilde{A}}\left\{\left[\Phi_{1}\left(A_{21},S_{21},b\right)\Phi_{2}\left(A_{22},S_{22},b\right)\right]_{S}G_{L}\left(x,b_{\nu}\right)Y_{L}\left(\mathbf{\hat{x}}\right)\right\}_{J}, \tag{8}\] \[\Psi_{E,J}\left(A\right)=\sum_{n=0}^{N_{O}}C_{nL}^{E,J}\mathcal{\tilde{A}}\left\{\left[\Phi_{1}\left(A_{1},S_{1},b\right)\Psi_{2}\left(A_{2},S_{2},L_{2},b\right)\right]_{S,L_{2}}\Phi_{n,L}\left(y,b\right)Y_{L}\left(\mathbf{\hat{y}}\right)\right\}_{J}, \tag{9}\] where \[G_{L}\left(x,b_{\nu}\right)=\frac{1}{b_{\nu}^{3/2}}\sqrt{\frac{2}{\Gamma\left(L+3/2\right)}}\rho^{L}\exp\left\{-\frac{1}{2}\rho^{2}\right\},\qquad\left(\rho=\frac{x}{b_{\nu}}\right), \tag{10}\] is a Gaussian function and \[\Phi_{n,L}\left(y,b\right) = \left(-1\right)^{n}\mathcal{N}_{nL}\ b^{-3/2}\rho^{L}e^{-\frac{1}{2}\rho^{2}}L_{n}^{L+1/2}\left(\rho^{2}\right), \tag{11}\] \[\rho = \frac{y}{b},\quad\mathcal{N}_{nL}=\sqrt{\frac{2\Gamma\left(n+1\right)}{\Gamma\left(n+L+3/2\right)}},\] is an oscillator function. In Eqs. (10) and (11), \(b_{\nu}\) and \(b\) denote oscillator lengths. The motivation for using these functions can be found in Ref. [13]. The expansion coefficients \(D_{\nu}^{E,L,J}\) and \(C_{nL}^{E,J}\) are solutions of sets of linear equations originating from the corresponding Schrodinger equations.
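As a quick numerical sanity check of the basis functions (10) and (11) — a sketch of ours, not code from the paper; the function names and parameter values are illustrative — one can evaluate \(G_{L}(x,b_{\nu})\) and \(\Phi_{n,L}(y,b)\) and verify that they are normalized with respect to the radial weight \(y^{2}\):

```python
import numpy as np
from scipy.special import genlaguerre, gammaln
from scipy.integrate import quad

def gaussian_radial(L, x, b_nu):
    """Gaussian basis function G_L(x, b_nu) of Eq. (10)."""
    rho = x / b_nu
    norm = np.sqrt(2.0 * np.exp(-gammaln(L + 1.5))) / b_nu ** 1.5
    return norm * rho ** L * np.exp(-0.5 * rho ** 2)

def oscillator_radial(n, L, y, b):
    """Radial oscillator function Phi_{n,L}(y, b) of Eq. (11)."""
    rho = y / b
    # N_{nL} = sqrt(2 Gamma(n+1) / Gamma(n+L+3/2)), computed in log space
    norm = np.sqrt(2.0) * np.exp(0.5 * (gammaln(n + 1.0) - gammaln(n + L + 1.5)))
    return ((-1) ** n * norm / b ** 1.5 * rho ** L
            * np.exp(-0.5 * rho ** 2) * genlaguerre(n, L + 0.5)(rho ** 2))

# Normalization: int_0^inf Phi_{n,L}(y)^2 y^2 dy should equal 1
overlap, _ = quad(lambda y: oscillator_radial(2, 1, y, 1.0) ** 2 * y ** 2,
                  0.0, np.inf)
```

Orthogonality between different \(n\) at fixed \(L\) can be checked the same way: the integral of \(\Phi_{2,1}\Phi_{0,1}\,y^{2}\) vanishes to numerical precision.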
This is a system of equations for the expansion coefficients \(D_{\nu}^{E,L,J}\) \[\sum_{\widetilde{\nu}=0}\left[\left\langle\nu,L\left|\widehat{H}^{(2)}\right| \widetilde{\nu},L\right\rangle-E\left\langle\nu,L|\widetilde{\nu},L\right\rangle \right]D_{\widetilde{\nu}}^{E,L,J}=0 \tag{12}\] and here is a system of equations for the expansion coefficients \(C_{nL}^{E,J}\) \[\sum_{\widetilde{n}=0}\left[\left\langle n,L\left|\widehat{H}\right| \widetilde{n},L\right\rangle-E\left\langle n,L|\widetilde{n},L\right\rangle \right]C_{\widetilde{n}L}^{E,J}=0. \tag{13}\] System of equations (12) involves matrix elements of the two-cluster Hamiltonian \[\left\langle\nu,L\left|\widehat{H}^{(2)}\right|\widetilde{\nu},L\right\rangle \tag{14}\] and unit operator (norm kernel) \(\left\langle\nu,L|\widetilde{\nu},L\right\rangle\) between cluster Gaussian functions \[\left|\nu,L\right\rangle=\widehat{\mathcal{A}}\left\{\left[\Phi_{1}\left(A_{ 1},S_{1},b\right)\Psi_{2}\left(A_{2},S_{2},L_{2},b\right)\right]_{S,L_{2}}G_{ L}\left(x,b_{\nu}\right)Y_{L}\left(\widehat{\mathbf{y}}\right)\right\}_{J}, \tag{15}\] while system of equations (13) involves matrix elements of the three-cluster Hamiltonian \(\left\langle n,L\left|\widehat{H}\right|\widetilde{n},L\right\rangle\) and unit operator \(\left\langle n,L|\widetilde{n},L\right\rangle\) between cluster oscillator functions \[\left|n,L\right\rangle=\widehat{\mathcal{A}}\left\{\left[\Phi_{1}\left(A_{1}, S_{1},b\right)\Psi_{2}\left(A_{2},S_{2},L_{2},b\right)\right]_{S,L_{2}}\Phi_{n,L} \left(y,b\right)Y_{L}\left(\widehat{\mathbf{y}}\right)\right\}_{J}. 
\tag{16}\] We will also use another basis of cluster oscillator functions \[\left|n,L\right\rangle_{0}=\widehat{\mathcal{A}}\left\{\left[\Phi_{1}\left(A_{1},S_{1},b\right)\Phi_{2}\left(A_{2},S_{2},L_{2},b\right)\right]_{S,L_{2}}\Phi_{n,L}\left(y,b\right)Y_{L}\left(\widehat{\mathbf{y}}\right)\right\}_{J} \tag{17}\] to expand wave functions of two-cluster systems in the standard version of the RGM (5). Obviously, the wave functions \(\left|n,L\right\rangle_{0}\) are a particular case of the wave functions \(\left|n,L\right\rangle\) in which the second cluster has the most compact shape. The appearance of the matrix \(\left\|\left\langle n,L|\widetilde{n},L\right\rangle\right\|\) in Eq. (13) indicates that the cluster oscillator basis (16) is not orthonormal, despite the fact that all functions to the right of the antisymmetrization operator in Eq. (16) are normalized to unity on the corresponding part of coordinate space. This matrix plays an important role in cluster models, as it reflects the effects of the Pauli principle. If one neglects the total antisymmetrization by putting \(\widehat{\mathcal{A}}=1\), one obtains the unit matrix \(\left\|\left\langle n,L|\widetilde{n},L\right\rangle\right\|\). When the effects of the Pauli principle are small, the diagonal matrix elements are close to unity and the off-diagonal matrix elements tend to zero. Such behavior of the matrix elements \(\left\langle n,L|\widetilde{n},L\right\rangle\) is observed for large values of \(n\) and \(\widetilde{n}\). This region of the quantum numbers \(n\) and \(\widetilde{n}\) corresponds to large distances between clusters and is therefore called the asymptotic region. In the standard version of the RGM, the matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\) is diagonal for two \(s\)-clusters. Within the advanced version of the RGM, as will be demonstrated later, the matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\) is not diagonal; however, the largest matrix elements are situated on its main diagonal.
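Both (12) and (13) are generalized symmetric eigenvalue problems, \(H\,C=E\,N\,C\), in which the norm kernel \(N\) plays the role of a metric. As a minimal sketch — the \(3\times 3\) matrices below are made up for illustration and are not actual RGM matrix elements — such a system can be solved with a symmetric-definite eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-ins for the Hamiltonian <n,L|H|n',L> and the norm kernel <n,L|n',L>
H = np.array([[1.00, 0.30, 0.00],
              [0.30, 2.00, 0.20],
              [0.00, 0.20, 3.00]])
N = np.array([[0.90, 0.10, 0.00],
              [0.10, 1.00, 0.05],
              [0.00, 0.05, 1.00]])

# Solve H C = E N C; eigh returns eigenvectors normalized as C^T N C = I,
# which matches the discrete-spectrum normalization condition (18)
energies, C = eigh(H, N)
```

This requires the norm matrix to be positive definite; exactly vanishing eigenvalues (Pauli forbidden states) must be projected out beforehand.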
It is worth noticing that the wave functions \(\left\{C_{nL}^{E,J}\right\}\) obtained by solving the system of equations (13) are normalized by the conditions \[\sum_{n,\widetilde{n}=0}C_{nL}^{E_{\alpha},J}\left\langle n,L|\widetilde{n},L\right\rangle C_{\widetilde{n}L}^{E_{\beta},J}=\delta_{\alpha\beta} \tag{18}\] for states of the discrete spectrum and \[\sum_{n,\widetilde{n}=0}C_{nL}^{E,J}\left\langle n,L|\widetilde{n},L\right\rangle C_{\widetilde{n}L}^{\widetilde{E},J}=\delta\left(E-\widetilde{E}\right) \tag{19}\] for continuous-spectrum states. An important consequence of Eqs. (18) and (19) is that the value \(\left|C_{nL}^{E,J}\right|^{2}\) does not determine the contribution of the oscillator function \(|n,L\rangle\) to the norm of a bound state or continuous-spectrum state. To solve equation (13) for a finite number of basis functions (\(n\)=0, 1, 2,..., \(N_{O}-1\)), one needs to analyze whether the \(N_{O}\times N_{O}\) matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\) contains redundant states, which are called the Pauli forbidden states. For this purpose the diagonalization procedure is usually employed, which yields the eigenvalues \(\Lambda_{\alpha}\) (\(\alpha\)=1, 2,..., \(N_{O}\)) and the corresponding eigenfunctions \(\|U_{n}^{\alpha}\|\) of the matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\). Eigenstates with \(\Lambda_{\alpha}=0\) are called Pauli forbidden states and have to be removed from the space. Eigenstates with small values of \(\Lambda_{\alpha}\) are called partially or almost forbidden states. There is usually a large number of eigenstates with \(\Lambda_{\alpha}=1\); these states are not affected by the antisymmetrization. Besides, the matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\) can have eigenvalues with \(\Lambda_{\alpha}>1\), which are called super-allowed states. Note that the construction of the Pauli allowed states is a key problem for many-cluster systems.
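The diagonalization procedure just described can be sketched numerically: diagonalize the norm matrix, discard eigenvectors whose eigenvalues \(\Lambda_{\alpha}\) are (numerically) zero — the Pauli forbidden states — and solve the Hamiltonian in the remaining allowed subspace. The toy matrices and the cutoff `eps` below are our illustrative choices, not data from the paper:

```python
import numpy as np

def solve_in_allowed_subspace(H, N, eps=1e-8):
    """Remove Pauli forbidden states (norm-kernel eigenvalues ~ 0) and
    solve H C = E N C in the remaining Pauli-allowed subspace."""
    Lam, U = np.linalg.eigh(N)          # N = U diag(Lam) U^T
    keep = Lam > eps                    # drop forbidden states
    U, Lam = U[:, keep], Lam[keep]
    H_alpha = U.T @ H @ U               # Hamiltonian in the eigenbasis of N
    # Rescale by Lam^{-1/2} so the metric becomes the identity matrix
    S = 1.0 / np.sqrt(Lam)
    return np.linalg.eigvalsh(S[:, None] * H_alpha * S[None, :])

# Example: a diagonal toy problem with one forbidden state (Lam = 0)
E = solve_in_allowed_subspace(np.diag([5.0, 1.0, 2.0]),
                              np.diag([0.0, 0.5, 1.0]))
```

In this example the forbidden direction (the first basis state) is removed, and the two allowed energies are \(1/0.5=2\) and \(2/1=2\).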
Many algorithms have been formulated (see, for example, [19; 20; 21]) to construct and select the Pauli allowed states. Actually, we have at our disposal two different discrete representations of the Schrodinger equation. The first is the oscillator basis representation, which will be referred to as the \(n\)-representation. The second representation is formed by the eigenstates of the norm kernel matrix and will be referred to as the \(\alpha\)-representation. It is necessary to recall that the two representations are related by the orthogonal matrix \(\|U_{n}^{\alpha}\|\). In the \(\alpha\)-representation the set of equations (13) is transformed to the form \[\sum_{\widetilde{\alpha}=1}^{N_{O}}\left[\left\langle\alpha,L\left|\widehat{H}\right|\widetilde{\alpha},L\right\rangle-E\Lambda_{\alpha}\delta_{\alpha,\widetilde{\alpha}}\right]C_{\widetilde{\alpha}L}^{E,J}=0, \tag{20}\] where \[\left\langle\alpha,L\left|\widehat{H}\right|\widetilde{\alpha},L\right\rangle=\sum_{n,\widetilde{n}=0}^{N_{O}}U_{n}^{\alpha}\left\langle n,L\left|\widehat{H}\right|\widetilde{n},L\right\rangle U_{\widetilde{n}}^{\widetilde{\alpha}}. \tag{21}\] If the cluster system under consideration contains no Pauli forbidden states, then one may use either set of equations, (13) or (20); both sets give the same spectrum but different wave functions. One has to use the set of equations (20) when there are one or more Pauli forbidden states. To study the effects of the Pauli principle we will analyze the overlap matrix \(\|\langle n,L|\widetilde{n},L\rangle\|\), as well as its eigenvalues and eigenfunctions.

## III Results and discussions

As was pointed out above, we consider a set of light nuclei. In Table 1 we list these nuclei and present details of the model and calculations. Here 3C stands for the three-cluster configuration which is taken into consideration, and BC indicates the binary channels which are studied. The Minnesota potential [22] is used as the nucleon-nucleon potential.
The oscillator length \(b\) is chosen to minimize the energy of the three-cluster threshold. The exchange parameter \(u\) of the MP is usually selected to reproduce the energy of the ground state of a compound system, measured from the lowest two- or three-body threshold. We employ four Gaussian functions to obtain the energies and wave functions of the two-cluster subsystems, and 100 oscillator functions to describe the scattering of the third cluster on the two-cluster subsystem. It has been checked repeatedly that this number of oscillator functions is sufficient to obtain the bound-state energies of a compound nucleus and the scattering parameters with acceptable precision. To consider the properties of the Pauli resonances in more detail, we restrict ourselves to a single-channel approximation. Moreover, we do not consider the mixture of states with different values of the total orbital momentum \(L\) and total spin \(S\); thus in our present model \(L\) and \(S\) are quantum numbers additional to the angular momentum \(J\) and parity \(\pi\) of a compound system. In this paper we will not consider many-channel cases; this is a subject for our next investigation.

### Manifestation of the Pauli resonance states

In this section we show how the Pauli resonance states manifest themselves in the nuclei under consideration. For this purpose we consider phase shifts. The most typical picture is shown in Fig. 1, where phase shifts are displayed for four different \(J^{\pi}\) states of elastic \(\alpha+t\) scattering. These phase shifts exhibit resonance states of two different types. The first type is shape resonance states, which are created in the 7/2\({}^{-}\) and 5/2\({}^{-}\) states. These resonance states are formed by the combination of a huge centrifugal barrier in the \(L\)=3 states and the Coulomb barrier. The shape resonance states lie close to the \(\alpha+t\) threshold. The second type is the Pauli resonances, which exhibit themselves in the 3/2\({}^{-}\) and 1/2\({}^{-}\) states.
The energies of the Pauli resonances are \(E\)=25.8 MeV for the 3/2\({}^{-}\) state and \(E\)=29.0 MeV for the 1/2\({}^{-}\) state. The total orbital momentum \(L\)=1 is responsible for these resonance states. It means that in these states the centrifugal barrier is approximately six times smaller than in the 7/2\({}^{-}\) and 5/2\({}^{-}\) states; thus the centrifugal barrier cannot be responsible for the Pauli resonance states. The phase shifts displayed in Fig. 1 are similar to the phase shifts shown in Fig. 1 of Ref. [1]. The improved version of the RGM was used in Ref. [1], and two different oscillator lengths were chosen for the alpha particle and the triton. Let us consider the Pauli resonance states in the 1\({}^{+}\) (\(L\)=0, \(S\)=1) and 2\({}^{-}\) (\(L\)=1, \(S\)=1) states of \({}^{6}\)Li, which exhibit themselves in the channels \({}^{4}\)He+\(d\) and \({}^{3}\)He+\(t\). It is necessary to recall that the wave functions of the deuteron and the triton are obtained in two-cluster (two-body) approximations as \(p+n\) and \(d+n\), respectively.

\begin{table} \begin{tabular}{c c c c c c} Nucleus & 3C & BC & \(b\) & \(u\) & Source \\ \hline \({}^{6}\)Li & \(\alpha+p+n\) & \(\alpha+d\) & 1.285 & 0.863 & [18] \\ & \(t+d+p\) & \(t\)+\({}^{3}\)He & & & \\ \({}^{7}\)Li & \(\alpha+d+n\) & \(\alpha\)+\({}^{3}\)H, \({}^{6}\)Li+\(n\) & 1.311 & 0.956 & [16] \\ \({}^{7}\)Be & \(\alpha+d+p\) & \(\alpha\)+\({}^{3}\)He, \({}^{6}\)Li+\(p\) & 1.311 & 0.956 & [13] \\ \({}^{8}\)Be & \(\alpha+d+d\) & \({}^{6}\)Li+\(d\) & 1.311 & 0.956 & \\ \({}^{9}\)Be & \(\alpha+t+d\) & \(t\)+\({}^{6}\)Li & 1.285 & 0.950 & [23] \\ \({}^{10}\)B & \(\alpha+\alpha+d\) & \(d\)+\({}^{8}\)Be, \(\alpha\)+\({}^{6}\)Li & 1.298 & 0.900 & [17] \\ \end{tabular} \end{table} Table 1: List of nuclei to be considered, their three-cluster configurations (3C) and binary channels (BC), and input parameters of calculations: oscillator length \(b\), exchange parameter \(u\) of the Minnesota potential
Such an advanced description of the deuteron and triton stipulates the appearance of the Pauli resonance states shown in Fig. 2. Only one Pauli resonance state is found in each channel. The energies and widths of these resonances depend on the total angular momentum \(J\). Another example of the Pauli resonance manifestation is shown in Fig. 3 for \({}^{6}\)Li+\(\alpha\) scattering. This case demonstrates that a two-cluster system may have two Pauli resonance states; they are located in the energy range 10\(\leq E\leq\)45 MeV. A sharp rise of the 1\({}^{+}\) phase shifts around 13.4 MeV indicates that there is a very narrow resonance state with the width \(\Gamma\)=56 keV. The other Pauli resonances are significantly wider.

Figure 2: Phase shifts of the elastic \({}^{4}\)He+\(d\) and \({}^{3}\)He+\(t\) scattering calculated for the \(1^{+}\) and \(2^{-}\) states in the advanced version of the RGM

In Fig. 4 we show the phase shifts of elastic \({}^{6}\)Li+\(d\) scattering with the total orbital momentum \(L\)=1 and total spin \(S\)=2. Due to the spin-orbit potential we obtain phase shifts for three states of the total angular momentum, \(J^{\pi}=\)3\({}^{-}\), 2\({}^{-}\) and 1\({}^{-}\); thus the difference in the behavior of the phase shifts originates entirely from the spin-orbit potential. Fig. 4 demonstrates that the energy and width of the Pauli resonance states depend on the spin-orbit components of the nucleon-nucleon interaction. Indeed, we obtained \(E\)=22.636 MeV, \(\Gamma\)=0.952 MeV for \(J\)=1\({}^{-}\); \(E\)=20.981 MeV, \(\Gamma\)=0.402 MeV for \(J\)=2\({}^{-}\); and \(E\)=18.523 MeV, \(\Gamma\)=0.008 MeV for \(J\)=3\({}^{-}\). We see that the spin-orbit potential substantially changes the energy and width of the Pauli resonance states.

Figure 3: Phase shifts of the elastic \({}^{6}\)Li+\(\alpha\) scattering as a function of energy \(E\)

Let us consider how the central part of the MP affects the energy and width of the Pauli resonance states. This can be done by varying the exchange parameter \(u\) of that potential.
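The resonance energies and widths quoted in this section can, for an isolated single-channel resonance, be estimated directly from the computed phase shift through the standard Breit-Wigner relation \(\Gamma=2\,(d\delta/dE)^{-1}\) at the point of steepest rise of \(\delta(E)\). The sketch below is a generic textbook estimate of ours, not necessarily the extraction procedure used by the authors:

```python
import numpy as np

def resonance_from_phase_shift(E, delta):
    """Estimate (E_r, Gamma) from a phase shift delta(E) in radians:
    E_r is the point of steepest rise, Gamma = 2 / (d delta / dE)."""
    ddelta = np.gradient(delta, E)
    i = int(np.argmax(ddelta))
    return E[i], 2.0 / ddelta[i]

# Synthetic Breit-Wigner phase shift with E_r = 25.8 MeV, Gamma = 1.0 MeV
E = np.linspace(20.0, 30.0, 4001)
delta = np.arctan2(0.5, 25.8 - E)   # rises through pi/2 at E = E_r
E_r, Gamma = resonance_from_phase_shift(E, delta)
```

Applied to the synthetic curve, the estimate recovers the input resonance energy and width; for a realistic phase shift one would first subtract the smooth background.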
This parameter affects the interaction of nucleons in odd states, and it also affects the cluster-cluster interaction. The smaller \(u\) is, the weaker the interaction between clusters; when the parameter \(u\) approaches unity, the interaction between clusters is increased. The influence of the variation of the parameter \(u\) is studied for the \(3/2^{-}\) state of \({}^{7}\)Li, considered as the two-cluster system \(\alpha+t\). The results of varying \(u\) are demonstrated for the ground state energy \(E_{GS}\) and for the energy and width of the Pauli resonance.

Figure 4: Phase shifts of the elastic \({}^{6}\)Li+\(d\) scattering as a function of energy, obtained for \(L\)=1, \(S\)=2, and three values of the total angular momentum \(J\)

One can see in Fig. 5 that the parameter \(u\) changes the energy of the ground state. Moreover, when \(u<\)0.86, the nucleus \({}^{7}\)Li has no bound state. By varying the parameter \(u\) from 0.86 to 1, we change the ground state energy from -0.038 to -1.78 MeV. However, varying \(u\) from 0.8 to 1 significantly reduces the energy of the Pauli resonance, from 30.67 to 24.44 MeV, while such a variation of \(u\) only slightly changes the width of the Pauli resonance state, from 13 to 33 keV.

### Special case for \(d+\alpha\) system

Taking into account the peculiarities of our model, we decided to carry out an additional investigation of the \(d+\alpha\) system. For this specific case our model allows us to realize not only the standard and advanced, but also the improved version of the RGM. If we take only one Gaussian function in the expansion of the deuteron wave function and select the parameter \(b_{0}\) (see Eq. (8)) to minimize the bound state energy of the deuteron, we thereby realize the improved version of the RGM. One Gaussian function with the optimal value \(b_{0}=\)1.512 fm creates a bound state of the deuteron with energy \(E=\)-0.132 MeV, while four Gaussian functions with optimal values of \(b_{0}\) and \(q\) generate the deuteron bound state with energy \(E=\)-2.020 MeV.
To locate the Pauli resonance state in this approximation in the energy range below 50 MeV, we have to change the exchange parameter \(u\) and take \(u\)=1.0. In Fig. 6 we show phase shifts of the \(d+\alpha\) scattering in the \(L\)=0, \(S\)=1, \(J^{\pi}=1^{+}\) state obtained in three approximations. The standard version of the RGM does not generate a Pauli resonance state in this case. The Pauli resonance states appear in the improved (I) and advanced (A) versions. Parameters of the Pauli resonance states substantially depend on the wave function describing the internal structure of the deuteron. A more realistic wave function significantly increases the width of the Pauli resonance (from \(\Gamma=\)0.001 MeV to \(\Gamma=\)1.718 MeV) and dramatically changes the energy (from \(E=\)47.55 MeV to \(E=\)22.49 MeV) of the resonance state with \(J^{\pi}=1^{+}\).

### Main properties of the Pauli resonance states

In Tables 2 and 3 we collect information on parameters of the Pauli resonance states detected in the nuclei under consideration. In total, 28 Pauli resonance states are detected. The energy of the resonance states is reckoned from the threshold of the channel indicated in the column "Channel" of Tables 2 and 3 and varies from 11 to 46 MeV. There are 10 narrow resonance states with \(\Gamma<\)1 MeV; six of them are very narrow resonance states with width \(\Gamma<\)0.1 MeV. The remaining 18 resonance states are wider, with widths exceeding 1 MeV. One can see that in most cases a two-cluster system with fixed quantum numbers \(L\), \(S\) and \(J^{\pi}\) has only one Pauli resonance state. However, there are some cases when two Pauli resonance states are observed. The larger the energy of the resonance state, the larger its total width.

Figure 5: Dependence of the ground state energy \(E_{GS}\) of \({}^{7}\)Li and of the energy and width of the Pauli resonance state in the \(3/2^{-}\) state of the \(\alpha+t\) channel on the exchange parameter \(u\) of the MP
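The energies \(E\) and widths \(\Gamma\) quoted throughout this section are read off from the behavior of the phase shifts. The paper does not spell out the extraction procedure; a common prescription, sketched below for an assumed single-level Breit-Wigner shape, takes \(E_{r}\) at the steepest rise of \(\delta(E)\) and \(\Gamma=2/(d\delta/dE)\) there. The numbers mimic the narrow \(\Gamma\)=56 keV resonance mentioned earlier.

```python
import numpy as np

def breit_wigner_phase(E, E_r, Gamma, delta_bg=0.0):
    """Single-level Breit-Wigner phase shift (radians): rises through
    pi/2 above the background at E = E_r."""
    return delta_bg + np.arctan2(Gamma / 2.0, E_r - E)

def extract_resonance(E, delta):
    """Estimate (E_r, Gamma) from a tabulated phase shift:
    E_r at the steepest rise, Gamma = 2 / (d delta / dE) there."""
    ddelta = np.gradient(delta, E)
    i = np.argmax(ddelta)
    return E[i], 2.0 / ddelta[i]

E = np.linspace(10.0, 16.0, 4001)            # MeV
delta = breit_wigner_phase(E, 13.4, 0.056)   # a narrow resonance, Gamma = 56 keV
E_r, Gamma = extract_resonance(E, delta)
print(E_r, Gamma)                            # close to 13.4 MeV and 0.056 MeV
```

For broad resonances on a slowly varying background the same recipe still works, but the estimate of \(\Gamma\) becomes more sensitive to the background slope.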
The energy of the second resonance state in \({}^{7}\)Li and \({}^{8}\)Be is approximately 15 MeV larger than the energy of the first resonance state. In \({}^{10}\)B the energy difference is more than 25 MeV.

Figure 6: Phase shifts of the elastic \(\alpha+d\) scattering in the state \(L=0\), \(S=1\), \(J^{\pi}=\)1\({}^{+}\), obtained in three different approximations of the RGM

In Table 3 we present parameters of the Pauli resonance states obtained in different states of \({}^{6}\)Li+\(d\) and \({}^{7}\)Li+\(d\) scattering. By analyzing the results presented in Tables 2 and 3, we came to the conclusion that the Pauli resonance states in light nuclei have energies above 11 MeV; their widths are mainly large (\(\Gamma>\)0.9 MeV), although a few very narrow resonance states were found. The most populated area of resonance states lies in the interval 16\(<E<\)21 MeV, as demonstrated in Fig. 7, left panel. Two dense areas of resonance widths are located in the intervals 0.008\(<\Gamma<\)0.22 MeV and 0.9\(<\Gamma<\)1.2 MeV (Fig. 7, right panel). In many cases only one Pauli resonance state appeared in a binary channel. We also determined several cases with two resonance states. In Fig.
8 we display the spectrum of the Pauli resonances of positive-parity states with the total orbital momentum \(L=0\). These resonance states emerge in the nuclei \({}^{7}\)Li, \({}^{8}\)Be, \({}^{9}\)Be and \({}^{10}\)B with the clusterization \({}^{6}\)Li+\(A_{2}\), where \(A_{2}\) stands for a neutron, deuteron, triton or alpha particle. This figure shows that the energies of the first Pauli resonance states are quite close for all nuclei. It also shows that there are two Pauli resonance states in the channel \({}^{6}\)Li+\(d\) with the total spin \(S\)=1 and in the channel \({}^{6}\)Li+\(t\) with the total spins \(S\)=1/2 and \(S\)=3/2. One can see that the larger the second cluster, the larger the energy of the highest Pauli resonance state. Indeed, it grows from 19 MeV in the \({}^{6}\)Li+\(n\) channel to 41 MeV in the channel \({}^{6}\)Li+\(\alpha\).

\begin{table}
\begin{tabular}{c c c c c c c}
Nucleus & Channel & \(L\) & \(S\) & \(J^{\pi}\) & \(E\), MeV & \(\Gamma\), MeV \\
\hline
\({}^{6}\)Li & \(\alpha+d\) & 0 & 1 & 1\({}^{+}\) & 24.218 & 1.165 \\
 & & 1 & 1 & 2\({}^{-}\) & 32.370 & 6.755 \\
 & \({}^{3}\)He+\(t\) & 0 & 1 & 1\({}^{+}\) & 31.844 & 0.209 \\
 & & 1 & 1 & 2\({}^{-}\) & 22.403 & 0.618 \\
\hline
\({}^{7}\)Li & \(\alpha+t\) & 1 & 1/2 & 1/2\({}^{-}\) & 29.002 & 2.144 \\
 & & 1 & 1/2 & 3/2\({}^{-}\) & 25.810 & 0.027 \\
 & & 0 & 1/2 & 1/2\({}^{+}\) & 20.148 & 2.589 \\
 & & 0 & 1/2 & 1/2\({}^{+}\) & 34.444 & 4.702 \\
 & \({}^{6}\)Li+\(n\) & 0 & 1/2 & 1/2\({}^{+}\) & 12.863 & 3.332 \\
 & & 0 & 3/2 & 3/2\({}^{+}\) & 18.895 & 0.196 \\
\hline
\({}^{10}\)B & \({}^{6}\)Li+\(\alpha\) & 1 & 1 & 0\({}^{-}\) & 11.090 & 3.198 \\
 & & 1 & 1 & 0\({}^{-}\) & 35.834 & 4.600 \\
 & & 1 & 1 & 1\({}^{-}\) & 11.098 & 3.424 \\
 & & 1 & 1 & 1\({}^{-}\) & 36.167 & 5.105 \\
 & & 0 & 1 & 1\({}^{+}\) & 13.427 & 0.056 \\
 & & 0 & 1 & 1\({}^{+}\) & 41.144 & 2.751 \\
\end{tabular}
\end{table}

Table 2: Parameters of the Pauli resonance states in \({}^{6}\)Li, \({}^{7}\)Li and \({}^{10}\)B
The Pauli resonance states of negative parity created in the channel \({}^{6}\)Li+\(A_{2}\) (\(A_{2}=\)\(d\), \(t\), \(\alpha\)) with the total orbital momentum \(L\)=1 are shown in Fig. 9. We did not find any resonance state in the channel \({}^{6}\)Li+\(n\); in this channel, the Pauli resonance states appear neither in states with total spin \(S\)=1/2 nor in states with \(S\)=3/2. Five Pauli resonance states are found in \({}^{8}\)Be and \({}^{9}\)Be, and four resonances are detected in \({}^{10}\)B. Fig. 9 shows that the energy of the lowest Pauli resonance state decreases with increasing mass of the "projectile" \(A_{2}\). This is an interesting tendency, as the Coulomb repulsion between \({}^{6}\)Li and \(A_{2}\) increases with the mass of the second cluster \(A_{2}\). It is necessary to underline that the spin-orbit interaction plays an important role in all cases when both the total orbital momentum \(L\) and the total spin \(S\) are nonzero.

\begin{table}
\begin{tabular}{c c c c c c c}
Nucleus & Channel & \(L\) & \(S\) & \(J^{\pi}\) & \(E\), MeV & \(\Gamma\), MeV \\
\hline
\({}^{8}\)Be & \({}^{6}\)Li+\(d\) & 0 & 0 & 0\({}^{+}\) & 17.233 & 3.553 \\
 & & 0 & 1 & 1\({}^{+}\) & 14.989 & 1.011 \\
 & & 0 & 1 & 1\({}^{+}\) & 25.724 & 4.628 \\
 & & 0 & 2 & 2\({}^{+}\) & 20.656 & 0.008 \\
 & & 1 & 0 & 1\({}^{-}\) & 18.253 & 0.058 \\
 & & 1 & 1 & 2\({}^{-}\) & 45.555 & 6.097 \\
 & & 1 & 1 & 2\({}^{-}\) & 18.523 & 0.008 \\
 & & 1 & 2 & 3\({}^{-}\) & 18.531 & 0.013 \\
 & & 1 & 2 & 2\({}^{-}\) & 20.981 & 0.402 \\
\hline
\({}^{9}\)Be & \({}^{7}\)Li+\(d\) & 1 & 1/2 & 1/2\({}^{-}\) & 13.733 & 1.003 \\
 & & 0 & 1/2 & 1/2\({}^{+}\) & 15.717 & 5.796 \\
 & & 0 & 1/2 & 1/2\({}^{+}\) & 27.958 & 1.836 \\
\end{tabular}
\end{table}

Table 3: Energies and widths of the Pauli resonance states in \({}^{8}\)Be and \({}^{9}\)Be observed in the channels \({}^{6}\)Li+\(d\) and \({}^{7}\)Li+\(d\), respectively
#### Birth of the Pauli resonance state

To detect the Pauli and shape resonance states we analyzed the behavior of phase shifts as a function of energy. Rapid growth of a phase shift was considered as a signal of a resonance state. There is another way of detecting resonance states of both types, applicable to any method which involves a square-integrable basis of functions; unfortunately, it works only for relatively narrow resonance states. The narrow resonance states can be detected by calculating the eigenspectrum of a Hamiltonian with different numbers of basis functions. By displaying eigenenergies as a function of the number of basis functions (we denote it \(N_{O}\)) involved in the calculations, a resonance state will display itself as a plateau and/or as an avoided crossing. The energy of the plateau is the energy of a resonance state. Such a way of detecting resonance states is an essential element of the stabilization method (Ref. [24]) and the complex scaling method (see definitions of the method and its recent progress in applications to many-cluster systems in Refs. [25; 26; 27]).

Figure 7: Density of resonance state energies (left panel) and widths (right panel) of all determined Pauli resonance states

In Fig. 10 we show the dependence of the eigenenergies of the \(3/2^{-}\) state in \({}^{7}\)Li=\(\alpha+t\) as a function of the number of oscillator functions \(N_{O}\) used in the calculations. We gradually change the number of oscillator functions from 1 to 100. One can see that it is necessary to use at least three oscillator functions to create a plateau or, in other words, to obtain an eigenvalue with an energy very close to the energy of the resonance state. Such a plateau unambiguously indicates the presence of a narrow resonance state. This result is naturally consistent with the results of the phase shift calculations.
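The stabilization idea can be shown in a minimal numerical sketch. The model below is an assumption for illustration only (one localized level weakly coupled to a discretized continuum, not the RGM Hamiltonian of the paper): the eigenvalue nearest the localized energy forms a plateau as the truncated basis grows.

```python
import numpy as np

def truncated_spectrum(N, E_res=5.05, v=0.02, spacing=0.1):
    """Eigenvalues of a toy Hamiltonian truncated to N continuum states:
    one localized level at E_res weakly coupled (strength v) to a
    uniformly discretized continuum with the given level spacing."""
    H = np.zeros((N + 1, N + 1))
    H[0, 0] = E_res
    for k in range(1, N + 1):
        H[k, k] = spacing * k
        H[0, k] = H[k, 0] = v
    return np.linalg.eigvalsh(H)

# The eigenvalue nearest E_res barely moves as the basis grows: a plateau.
plateau = []
for N in (20, 40, 60, 80, 100):
    ev = truncated_spectrum(N)
    plateau.append(ev[np.argmin(np.abs(ev - 5.05))])
print(plateau)
```

The continuum-dominated eigenvalues, in contrast, drift steadily as the basis grows, which is what makes the plateau stand out in plots such as Fig. 10.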
Besides, the wave functions of the resonance state obtained with 5, 10 and 100 oscillator functions are very close to each other in the range of small values of \(n\), as demonstrated in Fig. 11. This proves that the narrow \(3/2^{-}\) resonance state is formed by oscillator functions with very small values of \(n\).

Figure 9: Spectrum of the Pauli resonance states of negative parity in \({}^{8}\)Be, \({}^{9}\)Be and \({}^{10}\)B created in the state with the total orbital momentum \(L\)=1.

### Peculiarities of the Pauli resonance states

Let us consider peculiarities of the wave functions of the Pauli resonance states. Analysis of the wave functions will allow us to understand the nature of the Pauli resonances. Wave functions of resonance and nonresonant states are considered in the oscillator and coordinate representations. In Fig. 12 we show three wave functions of the \(3/2^{-}\) states in \({}^{7}\)Li for the clusterization \(\alpha+t\). One of these functions is the wave function of the ground state (GS), the second is the Pauli resonance state (PR) with energy \(E\)=25.810 MeV, and the third is the wave function of the nonresonant, elastic \(\alpha+t\) scattering state (SC) (\(E\)=10.1 MeV). The main difference between the Pauli resonance and nonresonant wave functions is the contribution of the oscillator function \(|0\rangle\). This function gives the largest contribution to the wave function of the Pauli resonance state, and it has the smallest contribution to the wave functions of the ground and continuous spectrum states.

Figure 10: Spectrum of the \(3/2^{-}\) states in \({}^{7}\)Li as a function of the number of oscillator functions \(N_{O}\) involved in calculations

In Fig. 13 we demonstrate the wave functions of these states in coordinate space.
Figure 11: Convergence of the wave function of the narrow \(3/2^{-}\) Pauli resonance state in \({}^{7}\)Li in the channel \(\alpha+t\)

As one should expect, nonresonant wave functions have a node at small distances (\(r<\)2.5 fm), while the resonance wave function has its first node at a relatively large distance (\(r\approx\)5.5 fm). Besides, the resonance function has a very large amplitude, at least two times larger than the amplitudes of the bound and scattering states. Fig. 14 shows a general picture of the contribution of oscillator functions with the quantum numbers \(n=0\) and \(n=1\) to the wave functions of continuous spectrum states over a large energy range. Fig. 14 also confirms that the oscillator wave function with \(n=0\) contributes mainly to the Pauli resonance state and gives a small contribution to other states of the \(\alpha+t\) continuous spectrum.

Figure 12: Wave functions in oscillator representation of the ground state (GS), Pauli resonance state (PR) and scattering state (SS) in the \(3/2^{-}\) state of \({}^{7}\)Li

Let us now consider a case with two Pauli resonance states. They are detected in the \(1/2^{+}\) state of \({}^{7}\)Li in the channel \(\alpha+t\). Wave functions of the two Pauli resonance states are shown in Fig. 15 in oscillator space and in Fig. 16 in coordinate space. Two oscillator functions with \(n=0\) and \(n=1\) give the main contribution to the Pauli resonance functions. As a result, these wave functions in coordinate space describe a very compact two-cluster system, the main part of which is concentrated at small distances between clusters, namely \(0\leq r\leq 5\) fm. The amplitudes of the wave functions in this small region are substantially larger than their amplitudes at large distances.

Figure 13: Wave functions of \({}^{7}\)Li=\(\alpha+t\) in coordinate space of the \(3/2^{-}\) bound (BS), resonance (PR) and scattering (SC) states.
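The coordinate-space curves discussed above are obtained from the oscillator expansion coefficients \(C_{n}\) via \(u(r)=\sum_{n}C_{n}\varphi_{nl}(r)\). A minimal sketch of this reconstruction, assuming the standard convention for the radial oscillator functions with oscillator length \(b\) (the actual coefficients and oscillator length of the paper are not reproduced here):

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln
from scipy.integrate import trapezoid

def radial_osc(n, l, r, b=1.0):
    """Radial oscillator function u_{nl}(r), normalized so that
    the integral of u^2 over r equals 1 (standard convention)."""
    x = (r / b) ** 2
    log_norm = 0.5 * (np.log(2.0) + gammaln(n + 1) - np.log(b) - gammaln(n + l + 1.5))
    return np.exp(log_norm) * (r / b) ** (l + 1) * np.exp(-x / 2.0) * eval_genlaguerre(n, l + 0.5, x)

def wave_function(C, l, r, b=1.0):
    """Coordinate-space wave function from oscillator coefficients C_n."""
    return sum(c * radial_osc(n, l, r, b) for n, c in enumerate(C))

r = np.linspace(0.0, 20.0, 4001)
u0, u1 = radial_osc(0, 1, r), radial_osc(1, 1, r)
print(trapezoid(u0 * u0, r), trapezoid(u0 * u1, r))   # ≈ 1 and ≈ 0 (orthonormality)
```

Because the basis is orthonormal, a wave function dominated by \(C_{0}\) and \(C_{1}\) is automatically compact in \(r\), which is exactly the behavior of the Pauli resonance functions in Fig. 16.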
### Overlap

As it is widely recognized that mysterious (spurious) resonance states appear due to the Pauli principle, it is expedient to analyze its effects on the norm kernels. The matrix of the norm kernel in the general case (for the improved and advanced versions of the RGM) is nondiagonal; thus we start the analysis with a 3D picture of the matrix. In Fig. 17 we display the overlap matrix \(\|\langle n|m\rangle\|\) for the channel \(\alpha\)+\(t\) in the state \(L\)=0, \(S\)=1/2 and \(J^{\pi}\)=1/2\({}^{+}\). One can see that this matrix is quasi-diagonal. The largest matrix elements are located on the main diagonal, and the larger \(m=n\) is, the closer to unity they are. Off-diagonal matrix elements \(\langle n|m\rangle\) are very small. A few diagonal matrix elements with small values of \(n\) are also small due to the Pauli principle. One may conclude that the Pauli principle has a short-range nature, as it affects a relatively small number of cluster basis functions \(|n\rangle\) (16) and corresponding matrix elements \(\langle n|m\rangle\). Note that Fig. 17 demonstrates a typical behavior of the matrix elements of the norm kernel in the advanced version of the RGM for all nuclei and all states considered in this paper.

Figure 14: Contribution of the oscillator wave functions with \(n=0\) and \(n=1\) to wave functions of the continuous spectrum \(3/2^{-}\) states of the \(\alpha+t\) channel

Figure 15: Wave functions in oscillator space of the two Pauli resonance states in the 1/2\({}^{+}\) state of \({}^{7}\)Li

Fig. 17 prompts us to study only the diagonal matrix elements of the norm kernel, which completely reflect the effects of the Pauli principle. Consequently, in this section we discuss the diagonal matrix elements and the eigenvalues of the norm kernel. In Fig. 18 we compare the diagonal matrix elements of the norm kernel, determined in the standard (S) and advanced (A) versions of the RGM, for \({}^{7}\)Li as the two-cluster configuration \(\alpha+t\).
It is worthwhile recalling that in the standard version the matrix of the norm kernel is diagonal. This figure demonstrates general features of the quantities \(\langle n|n\rangle\) and \(\Lambda_{\alpha}\) for all two-cluster systems under consideration. As was pointed out in the previous paragraph, the major part of the diagonal matrix elements are equal to unity and only a small fraction of them differ from unity, showing effects of the Pauli principle. It is necessary to recall that oscillator wave functions with small values of the quantum number \(n\) describe two clusters at the smallest relative distances; thus effects of the Pauli principle for these functions are prominent. One can see that there are two Pauli forbidden states in the \(1/2^{+}\) state and one in the \(3/2^{-}\) state within the standard version. In the advanced version these basis states, namely \(|0\rangle\) and \(|1\rangle\) for \(1/2^{+}\) and \(|0\rangle\) for \(3/2^{-}\), can be considered as almost-forbidden Pauli states, as the corresponding diagonal matrix elements are very small (\(\langle n|n\rangle<0.1\)). Fig. 18 demonstrates an important feature of the matrix elements: the number of forbidden states in the standard version coincides with the number of almost-forbidden states in the advanced version.

Figure 16: Wave functions in coordinate space of the two Pauli resonances in the \(1/2^{+}\) state in the \(\alpha+t\) channel

The diagonal matrix elements \(\langle n|n\rangle\) and eigenvalues \(\Lambda_{\alpha}\) of the norm kernel for the \(0^{+}\) and \(1^{-}\) states of the two-cluster system \({}^{6}\)Li+\(n\) are shown in Fig. 20. The almost forbidden states are found for the state \(L^{\pi}=0^{+}\) with \(S\)=1/2 and \(S\)=3/2. Comparing Figs. 19 and 20 we see that the larger the interacting clusters are, the larger is the region of diagonal matrix elements \(\langle n|n\rangle\) affected by the Pauli principle. In Fig.
19 we display the diagonal matrix elements \(\langle n|n\rangle\) and eigenvalues \(\Lambda_{\alpha}\) of the norm kernel for the \(0^{+}\) states of the cluster system \({}^{6}\)Li+\(d\). The diagonal matrix elements also show that there are a few almost forbidden states for which \(\langle n|n\rangle\) is close to zero. One may observe a set of super-allowed Pauli states (\(\langle n|n\rangle>1\)) for the total spins \(S\)=1 and \(S\)=2. There are similarities between the eigenvalues and the diagonal matrix elements of the norm kernel. The eigenvalues \(\Lambda_{\alpha}\) reveal a few almost forbidden states: two states for \(S\)=1 and one state each for the total spins \(S\)=0 and \(S\)=2. Similar to the diagonal matrix elements, the eigenvalues for \(S\)=0 and \(S\)=2 possess super-allowed states. The diagonal matrix elements \(\langle n|n\rangle\) of the norm kernel and its eigenvalues \(\Lambda_{\alpha}\) for the channel \({}^{6}\)Li+\(\alpha\) are displayed in Fig. 21. Two almost forbidden states are demonstrated by both the diagonal matrix elements and the eigenvalues. They are observed in two states: \(L\)=0, \(S\)=1, \(J^{\pi}\)=1\({}^{+}\) and \(L\)=1, \(S\)=1, \(J^{\pi}\)=1\({}^{-}\). Finishing this subsection, we conclude that the number of almost forbidden states coincides with the number of almost forbidden eigenstates. Almost forbidden states \(|n\rangle\) obey the restriction \(\langle n|n\rangle<0.3\), while almost forbidden eigenstates have \(\Lambda_{\alpha}<0.2\). Comparing the results demonstrated in Figs. 17, 18, 19, 20 with the results of Tables 2 and 3, we came to the conclusion that the number of almost forbidden states equals the number of the Pauli resonance states.

## Method REV

Let us consider the main ideas of the REV method formulated in Ref. [12]. The author of Ref. [12] noted that a set of new eigenstates of the norm kernel appeared when different oscillator lengths were used for the alpha particle and \({}^{16}\)O.
These eigenstates have very small eigenvalues compared to the eigenstates obtained with a common oscillator length. For example, the smallest eigenvalue obtained for the total orbital momentum \(L^{\pi}\)=0\({}^{+}\) with the common oscillator length is equal to 0.229, while there are four eigenstates with eigenvalues less than 0.03. A similar picture was also observed for the state \(L^{\pi}\)=1\({}^{-}\): the lowest eigenvalue obtained with the common oscillator length equals 0.344, while for the different oscillator lengths \(b_{\alpha}\)=1.395 fm and \(b_{O}\)=1.776 fm, four eigenstates emerged with eigenvalues less than 0.04. It was suggested in Ref. [12] to eliminate such eigenstates and to use a smaller set of norm kernel eigenstates. Thus, in the case of different oscillator lengths, all eigenstates with eigenvalues smaller than those obtained with the common oscillator length were treated as the Pauli forbidden states. Actually, the border between the Pauli allowed and Pauli forbidden states in the system \(\alpha\)+\({}^{16}\)O was selected to be 0.1. Having applied such restrictions, all Pauli resonance states disappeared. We decided to use this method to eliminate the Pauli resonance states which appear in light nuclei within the advanced resonating group method. The analysis of the eigenvalues of the norm kernel carried out in Section III.5 indicates that we have to redetermine the border between the Pauli allowed and Pauli forbidden states. The efficiency of the REV method will be demonstrated in Section V.1.

## Method ROF

We suggest another method to deal with the Pauli resonance states in light nuclei. This method relies on properties of the matrix elements of the norm kernel. While analyzing these properties, our attention was attracted by the behavior of the diagonal matrix elements \(\langle n|n\rangle\).
In many cases the matrix element \(\langle 0|0\rangle\) and sometimes the matrix element \(\langle 1|1\rangle\) are very small with respect to the other diagonal matrix elements. The analysis also revealed that the matrix elements of the corresponding rows (\(\langle 0|n\rangle\), \(\langle 1|n\rangle\)) and columns (\(\langle n|0\rangle\), \(\langle n|1\rangle\)) are also very small. Besides, it was shown above (Section III.4) that oscillator functions with \(n=0\) and sometimes with \(n=1\) dominate in the wave functions of the Pauli resonance states. Thus we suggest omitting those parts of the matrix \(\|\langle n|\widetilde{n}\rangle\|\) whose diagonal matrix elements are very small. We also suggest a criterion for the smallness of the diagonal matrix elements. Let us introduce a minimal value of the diagonal matrix elements, \(O_{\min}\), which will mark the border between the Pauli forbidden (or almost forbidden) and Pauli allowed states. Within our method, all diagonal matrix elements which are smaller than \(O_{\text{min}}\) are omitted together with their corresponding rows and columns. Analysis of the diagonal matrix elements of the norm kernel leads us to the conclusion that in many two-cluster cases considered above, \(O_{\text{min}}\) can be set to 0.2. This can be seen in Figs. 19, 20, 21. Such a value can be used both in the case of one and of two Pauli resonance states. It is important to notice that from the mathematical point of view, almost forbidden basis states or eigenstates are allowed states and should not create any problems. The same is true from the computational point of view, as the smallest eigenvalues are much larger than the smallest numerical value (numerical zero) in modern computers. Indeed, almost forbidden states do not create any problems for bound states and their parameters, such as the root-mean-square mass and proton radii and so on. The presence of almost forbidden states affects (distorts) only continuous spectrum states.
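The two elimination schemes, as we read them, can be sketched in a few lines of linear algebra. The matrices below are toy illustrations, not the RGM kernels of the paper: REV discards norm-kernel eigenvectors with \(\Lambda_{\alpha}<\Lambda_{\min}\) and diagonalizes the Hamiltonian in the renormalized remainder, while ROF simply deletes the rows and columns of basis states with \(\langle n|n\rangle<O_{\min}\) and solves the generalized eigenproblem directly, with no orthogonalization step.

```python
import numpy as np
from scipy.linalg import eigh

def solve_rev(H, norm, lam_min=0.2):
    """REV: drop norm-kernel eigenvectors with Lambda < lam_min,
    renormalize the rest by 1/sqrt(Lambda), and diagonalize the
    Hamiltonian in this reduced orthonormal basis."""
    lam, U = np.linalg.eigh(norm)
    B = U[:, lam >= lam_min] / np.sqrt(lam[lam >= lam_min])
    return np.linalg.eigvalsh(B.T @ H @ B)

def solve_rof(H, norm, o_min=0.2):
    """ROF: delete every basis state |n> with <n|n> < o_min together
    with its row and column, then solve H c = E N c directly."""
    keep = np.diag(norm) >= o_min
    idx = np.ix_(keep, keep)
    return eigh(H[idx], norm[idx], eigvals_only=True)

# Toy norm kernel with one almost-forbidden state |0> (<0|0> = 0.05).
N = 10
norm = np.eye(N)
norm[0, 0] = 0.05
rng = np.random.default_rng(0)
A = rng.normal(size=(N, N))
H = (A + A.T) / 2
print(len(solve_rev(H, norm)), len(solve_rof(H, norm)))   # 9 9
```

For this diagonal toy norm the two methods remove exactly the same one-dimensional subspace and give identical spectra; they differ when the eliminated eigenvectors are spread over many oscillator functions.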
In this respect, both methods suggest a re-determination of the essentially allowed Pauli states. Both methods determine the border between almost forbidden and allowed states. This border is marked by \(\Lambda_{\text{min}}\) and \(O_{\text{min}}\) in the REV and ROF methods, respectively. In the general case, one can use \(\Lambda_{\text{min}}\) and \(O_{\text{min}}\) as variational parameters to control the number of eliminated basis states \(|n\rangle\) or eigenstates \(|\alpha\rangle\) and their effects on scattering parameters. Naturally, the main aim of such a procedure is to eliminate the Pauli resonance state(s) while causing minimal effects on bound states and shape resonance states.

### Demonstration of the REV and ROF methods

Having analyzed the diagonal matrix elements and eigenvalues of the overlap matrix, we deduced \(O_{\text{min}}\) and \(\Lambda_{\text{min}}\) for all nuclei and for those states \(J^{\pi}\) which have the Pauli resonance states. These quantities are displayed in Table 1. In this table we also indicate the number \(N_{f.s.}\) of eliminated basis functions or eigenfunctions. In Fig. 22 we demonstrate the efficiency of the REV and ROF methods for the \(\alpha+t\) scattering in the \(1/2^{+}\) state. Here OA stands for the ordinary algorithm of obtaining phase shifts within the advanced version of the RGM. The phase shift in this approach exhibits two Pauli resonance states, the parameters of which are shown in Table 2. As we can see, both methods remove the Pauli resonance states. They also yield phase shifts which are close to those of the standard version in the low-energy region 0\(\leq E<\)6 MeV. There is a very small difference between the phase shifts obtained with the REV and ROF methods. We used the minimal values \(\Lambda_{\rm min}=O_{\rm min}=0.2\). This restriction eliminated two functions in both methods. A similar picture is observed for the \(\alpha+d\) scattering in the \(2^{-}\) state, see Fig. 23. Only one Pauli resonance state is generated in this case.
Both the REV and ROF methods remove that Pauli resonance state and produce phase shifts with very small differences. In this case we also used the minimal values \(\Lambda_{\rm min}=O_{\rm min}=0.2\); this restriction eliminated only one function in both methods. Phase shifts of the elastic \({}^{6}\)Li+\(d\) scattering obtained within three different approaches are shown in Fig. 24. As one can see, in this case we observe both a low-energy shape resonance and a high-energy Pauli resonance state. The REV and ROF methods, eliminating one eigenfunction and one oscillator function, respectively, remove the Pauli resonance state. They also slightly change the parameters of the shape resonance. In the original approach (OA) the parameters of the shape resonance are \(E\)=0.153 MeV and \(\Gamma\)=0.013 MeV, while in the REV method they are \(E\)=0.374 MeV and \(\Gamma\)=0.485 MeV, and in the ROF method we obtain \(E\)=0.352 MeV and \(\Gamma\)=0.371 MeV. Note that the REV and ROF methods give almost identical phase shifts of the \({}^{6}\)Li+\(d\) scattering. This means that the eliminated eigenfunction of the norm kernel and the eliminated oscillator function are close to each other. We found several cases when the REV and ROF methods give noticeably different phase shifts. One such example is shown in Fig. 25, where phase shifts of the \({}^{6}\)Li+\(\alpha\) scattering in the state \(L=1\), \(S=1\) and \(J^{\pi}=1^{-}\) are drawn. Note that almost the same results are observed for the states \(J^{\pi}=0^{-}\) and \(J^{\pi}=2^{-}\) generated by the coupling of the total orbital momentum \(L=1\) with the total spin \(S=1\). Two Pauli resonance states were removed by eliminating two eigenfunctions of the norm kernel obeying the restriction \(\Lambda_{\alpha}\leq\)0.2, and two oscillator functions with the restriction \(\langle n|n\rangle\leq\)0.3. A noticeable deviation between the phase shifts obtained in the REV and ROF methods is seen in the energy region \(E>\)3 MeV.
Such a deviation can be explained by the structure of the eigenfunctions and their relation to the oscillator functions. If an eigenfunction is mainly represented by one oscillator function, one may expect close results from both methods. If an eigenfunction is spread over a large number of oscillator functions, the results obtained with the two methods will differ. To prove this statement, we show in Fig. 27 the eigenfunctions \(\|U_{n}^{\alpha}\|\) of the norm kernel as a function of \(n\) for two different cases with two Pauli resonance states. We selected the cases of elastic \({}^{6}\)Li+\(\alpha\) scattering with quantum numbers \(L=S\)=1, \(J^{\pi}\)=1\({}^{-}\) and \(L=0\), \(S\)=1, \(J^{\pi}\)=1\({}^{+}\). Phase shifts for them are shown in Figs. 25 and 26. Fig. 27 demonstrates that for the \(J^{\pi}\)=1\({}^{-}\) state, a large number of oscillator functions participate in the formation of the eigenfunctions \(U_{n}^{1}\) and \(U_{n}^{2}\), while for the \(J^{\pi}\)=1\({}^{+}\) state, the lowest oscillator functions with \(n=0\) and \(n=1\) totally dominate in the corresponding eigenfunctions \(U_{n}^{1}\) and \(U_{n}^{2}\). Similar dominance of the oscillator function with the quantum number \(n=0\) in the eigenfunction \(U_{n}^{1}\) is observed in all cases when the phase shifts obtained with the REV and ROF methods coincide. In Table 5 we demonstrate the effects of the eliminated eigenfunctions and oscillator functions on the parameters of bound and resonance states. These results are obtained for the 1\({}^{+}\) states in \({}^{10}\)B. By increasing \(\Lambda_{\rm min}\) (\(O_{\rm min}\)) from zero to a certain value, indicated in the second column of Table 5, we manage to eliminate one, two and three eigenfunctions (oscillator functions). In the fourth column of Table 5, we demonstrate how the eliminated eigenfunctions and oscillator functions affect the energy of the 1\({}^{+}\) bound state of \({}^{10}\)B. In Fig. 28 we show the effects of the eliminated functions on the \({}^{6}\)Li+\(\alpha\) phase shift.
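The "spread" criterion above can be made quantitative by computing the principal angles between the subspace eliminated by REV and the subspace eliminated by ROF; small angles mean the two methods remove nearly the same states. The norm kernel below is a toy quasi-diagonal matrix, an assumption for illustration, not the paper's kernel.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Principal angles between the subspace eliminated by REV (norm-kernel
# eigenvectors with Lambda < 0.2) and the one eliminated by ROF (the
# oscillator states |0> and |1>). Toy quasi-diagonal norm kernel.
N = 20
norm = np.eye(N)
norm[0, 0], norm[1, 1] = 0.05, 0.1
for n in range(N - 1):
    norm[n, n + 1] = norm[n + 1, n] = 0.02
lam, U = np.linalg.eigh(norm)
eigvec_block = U[:, lam < 0.2]       # eliminated by REV
osc_block = np.eye(N)[:, :2]         # eliminated by ROF
angles = subspace_angles(eigvec_block, osc_block)
print(np.degrees(angles))            # small angles: the two methods agree here
```

For a kernel whose low eigenvectors mix many oscillator states the largest principal angle grows, which is the situation where the REV and ROF phase shifts deviate.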
By eliminating one eigenfunction or one oscillator function, we remove the lowest Pauli resonance state and lower the position of the second resonance by approximately 6.5 MeV. However, the energy of the ground state is only slightly changed after removing one function. When we remove two eigenfunctions or two oscillator functions, both Pauli resonance states disappear. Two removed eigenfunctions increase the energy of the bound state by 0.9 MeV, while two removed oscillator functions increase the energy by \(\approx\)1.3 MeV. As we pointed out above, oscillator functions with small values of the quantum number \(n\) and eigenfunctions with small values of the index \(\alpha\) describe the most compact two-cluster configurations. It is interesting to analyze the effects of their deletion on the energies of bound states and shape resonances, if they appear. For this aim we collected in Table 5 the energies of bound and resonance states.

#### Preliminary conclusions

At the end of this section we draw preliminary conclusions concerning the REV and ROF methods. In all cases presented above, both methods completely remove all detected Pauli resonance states. In many cases, both methods give close results for the phase shifts. In some cases, the phase shifts are somewhat different. Such a difference, as we demonstrated, appears when the eigenfunctions of the norm kernel are spread over a large number of oscillator functions; in other words, when the removed eigenfunctions and removed oscillator functions are quite different. When the results of both methods coincide, the removed eigenfunctions are represented mainly by the removed oscillator functions. We demonstrated that the ROF method, formulated in this paper, is an alternative to the method suggested by Kruglanski and Baye. An advantage of the ROF method is that it does not require an orthogonalization procedure for the matrix of the norm kernel and a subsequent transformation of the matrix of the Hamiltonian to the new representation.
This procedure is time consuming when a large number of basis functions is involved. We also demonstrated that the oscillator representation is an appropriate tool for studying effects of the Pauli principle on the kinematics (matrix of the norm kernel) and dynamics (matrix of the Hamiltonian) of two- and many-cluster systems.

## Conclusions

Properties of the Pauli resonance states in the two-body continuum of the light nuclei \({}^{6}\)Li, \({}^{7}\)Li, \({}^{8}\)Be, \({}^{9}\)Be and \({}^{10}\)B have been investigated within the advanced version of the resonating group method. The advanced version employs a three-cluster configuration which allows us to consider, in the general case, three two-body (binary) channels. One of the constituents of a binary channel is considered as a two-cluster subsystem, which provides a more correct description of nuclei having a distinct two-cluster structure and a small separation energy. Wave functions of the two-cluster subsystem are obtained by solving the appropriate Schrodinger equation. The advanced version we have employed makes use of square-integrable bases: the Gaussian and oscillator bases. The Gaussian basis is used to describe the relative motion of two clusters in the two-cluster subsystem and is very efficient in obtaining wave functions of bound states with a minimal number of basis functions. The oscillator basis is used to study the interaction of the third cluster with the two-cluster subsystem. It allows us to implement proper boundary conditions for discrete and continuous spectrum states. It was demonstrated that the oscillator basis is a suitable tool to study effects of the Pauli principle and to reveal the nature of the Pauli resonance states. It was demonstrated that the advanced form of the two-cluster subsystem is the origin of the Pauli resonance states. More precisely, the advanced form of the wave function of the two-cluster subsystem is responsible for the appearance of the Pauli resonance states.
It has been shown that the Pauli resonance states appear at relatively high energies \(E>\)11 MeV. Some of these are very narrow resonance states; however, the majority are broad. The most populated area of resonance states lies in the interval 16\(<E<\)21 MeV. Two dense areas of resonance widths are located in the intervals 0.008\(<\Gamma<\)0.22 MeV and 0.9\(<\Gamma<\)1.2 MeV. It was found that the oscillator functions with the minimal value of the quantum number \(n\) (the number of radial oscillator quanta) dominate in the resonance wave functions. These basis functions yield very small values of the diagonal matrix elements \(\langle n|n\rangle\) of the norm kernel. It was also demonstrated that the very narrow Pauli resonance states can be detected by using a very small number of oscillator functions: from three to five. We have established that the Pauli principle predetermines the appearance of the Pauli resonance states by creating almost forbidden states; however, the energies and widths of the Pauli resonance states are formed mainly by the nucleon-nucleon forces. We found that the number of Pauli resonance states for a given \(J^{\pi}\) state, discovered within the advanced version of the RGM, coincides with the number of Pauli forbidden states determined in the standard version of the RGM. One of the main conclusions of the present paper is that one needs a proper definition of the Pauli forbidden and Pauli allowed (fully or partially) states. The standard, or formal, definition of Pauli forbidden states is that their eigenvalues vanish, \(\Lambda_{\alpha}=0\); the Pauli allowed states should then have \(\Lambda_{\alpha}>0\). However, the analysis carried out above leads us to the conclusion that, for light nuclei with a two-body clusterization, the border between forbidden and allowed states is \(\Lambda_{\rm min}=0.2\). 
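The proposed empirical border can be phrased as a simple filter on the eigenvalues \(\Lambda_{\alpha}\) of the norm kernel. The sketch below is illustrative only; the sample eigenvalue list is hypothetical and not taken from the paper.

```python
LAMBDA_MIN = 0.2  # empirical border between Pauli forbidden and allowed states

def classify(eigenvalues, border=LAMBDA_MIN):
    """Split norm-kernel eigenvalues into Pauli forbidden and Pauli allowed ones."""
    forbidden = [lam for lam in eigenvalues if lam <= border]
    allowed = [lam for lam in eigenvalues if lam > border]
    return forbidden, allowed

# hypothetical eigenvalues, for illustration only
forbidden, allowed = classify([0.0, 0.05, 0.18, 0.35, 0.9, 1.1])
```

With this convention, the formal definition \(\Lambda_{\alpha}=0\) is recovered by setting the border to zero.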
It was also shown that oscillator functions \(|n\rangle\) which generate diagonal matrix elements of the norm kernel \(\langle n|n\rangle\leq O_{\rm min}=0.2\) can be considered as Pauli forbidden states. By removing the Pauli forbidden states, one eliminates the Pauli resonance states while causing only minor effects on the energies of bound states and on the energies and widths of the shape resonance states, if they exist. We have not found universal values of \(\Lambda_{\rm min}\) and \(O_{\rm min}\) valid for all of the light nuclei considered. As for the perspectives of this work: in the present paper we have restricted ourselves to a single-channel approximation in order to reveal the Pauli resonance states and to find the main factors responsible for the formation of such states. In the future we plan to consider the appearance of the Pauli resonance states in many-channel systems and how the REF and ROF methods can help to eliminate them. Many-channel cases are especially interesting, since small eigenvalues of the norm kernel can appear due to strong overlap of basis functions belonging to different channels. This strong coupling is not directly related to the Pauli principle, which makes the problem more attractive and challenging. ###### Acknowledgements. We would like to thank K. Kato for stimulating discussions and encouraging support. This work was supported in part by the Program of Fundamental Research of the Physics and Astronomy Department of the National Academy of Sciences of Ukraine (Project No. 0122U000889) and by the Ministry of Education and Science of the Republic of Kazakhstan, Research Grant IRN: AP 09259876. V.V. is grateful to the Simons foundation for financial support.
2310.19040
Intertwining operators between subregular Whittaker modules for $\mathfrak{gl}_N$ and non-standard quantizations
In this paper, we study intertwining operators between subregular Whittaker modules of $\mathfrak{gl}_N$ generalizing, on the one hand, the classical exchange construction of dynamical quantum groups, on the other hand, earlier results for principal W-algebras. We explicitly construct them using the generators of W-algebras introduced by Brundan-Kleshchev. We interpret the fusion on intertwining operators in terms of categorical actions and compute the semi-classical limit of the corresponding monoidal isomorphisms which turn out to depend on dynamical-like parameters.
Artem Kalmykov, Brian Li
2023-10-29T15:07:51Z
http://arxiv.org/abs/2310.19040v1
Intertwining operators between subregular Whittaker modules for \(\mathfrak{gl}_{N}\) and non-standard quantizations ###### Abstract. In this paper, we study intertwining operators between subregular Whittaker modules of \(\mathfrak{gl}_{N}\) generalizing, on the one hand, the classical exchange construction of dynamical quantum groups, on the other hand, earlier results for principal W-algebras. We explicitly construct them using the generators of W-algebras introduced by Brundan-Kleshchev. We interpret the fusion on intertwining operators in terms of categorical actions and compute the semi-classical limit of the corresponding monoidal isomorphisms which turn out to depend on dynamical-like parameters. ## 1. Introduction Quantum groups are algebraic objects that arose from the study of the quantum inverse scattering method for solving quantum integrable systems. By now they have become a classical subject with connections to many areas of mathematics: representations in positive characteristic, \(3\)-manifolds and link invariants, \(1\)-dimensional quantum integrable systems. In general, it is hard to construct non-trivial quantum groups. In this paper, we adopt a categorical approach via the _Tannakian reconstruction_[1, 2, 3]: essentially, a quantum group is defined by the collection of its representations and by how one can "multiply" them; mathematically, it is formalized by the notion of a _tensor category_ with a tensor structure on the forgetful functor to vector spaces which is equivalent to a collection of isomorphisms \[J_{UV}\colon U\otimes V\to U\otimes V, \tag{1}\] satisfying the _twist equation_ (see [1]), for all representations \(U,V\). The "quantum" part of quantum groups usually means that these isomorphisms come in families depending on a quantization parameter, and its first order defines a Poisson-Lie structure, see [1]. 
One example of such an approach is the _exchange construction_ of Etingof-Varchenko [1] that gives rise to _dynamical quantum groups_, which are a certain variant of quantum groups depending on additional parameters. Roughly speaking, the authors of _loc. cit._ consider a finite-dimensional analog of _vertex operators_ parameterized by a representation of a reductive Lie algebra: on the one hand, such operators have a natural multiplicative structure given by composition; on the other hand, following the state-field correspondence philosophy, they are in bijection with the elements of the representation itself. In particular, the former induces a non-trivial tensor structure on the collection of all the representations. Finite-dimensional vertex operators are essentially maps between _Verma modules_; the latter are defined by a highest-weight vector. It was observed in [1] that in the case of the general linear group \(\mathrm{GL}_{N}\), there is a version of the state-field correspondence for the _Whittaker_ modules generated by Whittaker vectors. In particular, we can apply a similar exchange construction; it turns out that it produces a certain _non-standard_ quantum group. In fact, the corresponding solution to the _quantum Yang-Baxter equation_, which is closely related to quantum groups, was obtained by Cremmer-Gervais [2] via studying the exchange algebra of the Toda field theory, which can be formulated using _affine W-algebras_; at the same time, the natural setup for Whittaker modules is a _(finite) principal W-algebra_. In general, a W-algebra is associated to a pair of a reductive Lie algebra and a nilpotent element in it; see [1] for an introduction to the subject. So, one may ask: is there a similar result for other W-algebras? In this paper, we study the exchange construction for the so-called _subregular_ nilpotent element \(e\) and the corresponding _subregular_ W-algebra \(\mathcal{W}\); we recall the definition in Section 5. 
We establish a state-field correspondence in this case in Subsection 5.4 using remarkable generators of W-algebras for \(\mathfrak{gl}_{N}\) introduced by Brundan-Kleshchev [10]; we recall the construction of _loc. cit._ in Section 3. Unfortunately, in the subregular case, the exchange construction does _not_ give a quantum group. However, there is still a non-trivial tensor structure analogous to (1). It turns out that the appropriate categorical setup for it is that of _module categories_, see Section 2. More precisely, as in [11] and [12], we formulate the exchange construction in terms of the category of _Harish-Chandra bimodules_ and the _finite Drinfeld-Sokolov reduction_ functor, see Section 4. The finite Drinfeld-Sokolov reduction functor is a direct generalization of the definition of a W-algebra via quantum Hamiltonian reduction to the category of Harish-Chandra bimodules and refers to the quantum Drinfeld-Sokolov reduction, see [10]. Then, the subregular analog of the tensor isomorphisms (1) comes from the induced categorical action of the category of \(\mathrm{GL}_{N}\)-representations \(\mathrm{Rep}(\mathrm{GL}_{N})\) on the category of right \(\mathcal{W}\)-modules, see Theorem 5.9. Unlike in the regular case of [12], they depend on dynamical-like parameters lying in a _non-abelian_ Lie subalgebra of \(\mathfrak{gl}_{N}\), and take the form \[J_{UV}\colon U\otimes V\otimes\mathcal{W}\to U\otimes V\otimes\mathcal{W} \tag{2}\] for \(U,V\) representations of \(\mathrm{GL}_{N}\). The main results of the paper are contained in Subsection 5.5. For instance, we provide an algorithm to compute the monoidal isomorphisms (2). Finally, we compute its semi-classical limit by explicitly constructing the state-field correspondence for the vector representation \(V=\mathbf{C}^{N}\) using the W-algebra generators from Brundan-Kleshchev; see Theorem 5.12 and Theorem 5.13. Denote by \(E_{i,j}\in\mathfrak{gl}_{N}\) the matrix units. 
Consider the two-dimensional subalgebra \(\mathfrak{l}=\mathrm{span}(E_{2,1},E_{1,1})\). Let \(x_{21},x_{11}\in\mathfrak{l}\) be the functions on \(\mathfrak{l}^{*}\) corresponding to \(E_{2,1},E_{1,1}\). The main result of the paper is Theorem 5.23. **Theorem**.: _The semi-classical limit \(\mathbf{j}\) of \(J_{UV}\) from (2) is_ \[\mathbf{j}=\mathbf{j}_{c}+\sum_{j=2}^{N-2}\sum_{i=j+2}^{N}\sum_{r=2}^{i-j}(-1)^{i-j-r}x_{21}x_{11}^{i-j-r}E_{1,r}\otimes E_{i,j}+\sum_{i=4}^{N}\sum_{r=2}^{i-2}(-1)^{i-r}x_{11}^{i-r-1}E_{1,r}\otimes E_{i,1}.\] Here \(\mathbf{j}_{c}\) is constant, and it comes from a Frobenius-like structure given by the trace pairing with the subregular nilpotent element \(e\) on a certain subspace of \(\mathfrak{gl}_{N}\) which we call the _(subregular) wonderbolic_ subspace: \[\mathfrak{w}=\begin{pmatrix}0&*&\dots&*&0\\ 0&*&\dots&*&0\\ *&*&\dots&*&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ *&*&\dots&*&0\end{pmatrix}.\] See Subsection 5.2 for the explicit form of \(\mathbf{j}_{c}\). The wonderbolic subspace plays a role similar to that of the _mirabolic subalgebra_ in the regular case, which was the main motivation for the name. It would be interesting to obtain an invariant description of the "dynamical" part similar to the constant one. ### Organization of the paper In Section 2, we introduce general notions, including that of a module category, that we will use in the paper. In Section 3, we recall Brundan-Kleshchev's construction [10] of W-algebras for \(\mathfrak{gl}_{N}\) in terms of pyramids. In Section 4, we present the exchange construction in categorical terms as a finite Drinfeld-Sokolov reduction functor from the category of Harish-Chandra bimodules. In Section 5, we specialize to the case of subregular W-algebras: in Subsection 5.4, we compute Whittaker vectors for the vector representation of \(\mathfrak{gl}_{N}\), and in Subsection 5.5, we compute the corresponding monoidal structure. 
### Acknowledgments This research was conducted while B.L. was a participant in the MIT PRIMES program; we would like to thank the organizers for making this research opportunity possible. A.K. would like to thank the Department of Mathematics of the Massachusetts Institute of Technology for its hospitality. ## 2. Background ### General setup In this section, we introduce general notations that we use in the paper. We work over the field of complex numbers \(\mathbf{C}\). Throughout the paper, we use \(\hbar\)-versions of the constructions in question. To avoid categorical complications, we treat \(\hbar\) as a _non-formal_ parameter, i.e. \(\hbar\in\mathbf{C}^{\times}\) (for instance, we can still deal with \(\mathbf{C}\)-linear categories). The reader may safely assume that \(\hbar=1\). The only purpose of introducing \(\hbar\) is to compute classical limits of certain formulas, and it will be clear from the context how to make sense of the corresponding \(\hbar\)-family over \(\mathbb{A}^{1}\). Let \(\mathfrak{g}\) be a Lie algebra. **Definition 2.1**.: The _asymptotic universal enveloping algebra_ \(\mathrm{U}_{\hbar}(\mathfrak{g})\) of \(\mathfrak{g}\) is the quotient of the tensor algebra over \(\mathbf{C}\) generated by the vector space \(\mathfrak{g}\) by the relations \[xy-yx=\hbar[x,y],\ x,y\in\mathfrak{g}.\] For any \(x,y\in\mathrm{U}_{\hbar}(\mathfrak{g})\), the _commutator_ \([x,y]\) is \[[x,y]:=\frac{xy-yx}{\hbar}. \tag{2.1}\] Observe that it is well-defined over \(\mathbf{C}[\hbar]\). _Remark 2.2_.: Usually, asymptotic universal enveloping algebras are defined over the polynomial ring \(\mathbf{C}[\hbar]\). Here, as we mentioned at the beginning of the section, we treat \(\hbar\) just as a complex number. In this paper, we will be dealing with the general linear Lie algebra. Let \(\mathrm{GL}_{N}\) be the group of invertible \(N\times N\)-matrices and \(\mathfrak{gl}_{N}\) be its Lie algebra, identified with the space of \(N\times N\)-matrices. 
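As a quick illustration of Definition 2.1 (not taken from the paper), the rescaled defining action \(\xi\mapsto\hbar\xi\) of \(\mathfrak{gl}_{N}\) on \(\mathbf{C}^{N}\) satisfies the defining relation \(xy-yx=\hbar[x,y]\) of \(\mathrm{U}_{\hbar}(\mathfrak{gl}_{N})\), since both sides equal \(\hbar^{2}(xy-yx)\) as matrices; a minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
hbar = 0.7    # hbar is treated as a fixed nonzero number, as in the text
N = 3
x, y = rng.random((N, N)), rng.random((N, N))   # two elements of gl_N

rho = lambda a: hbar * a        # candidate action of the generators on C^N
bracket = x @ y - y @ x         # the Lie bracket [x, y] in gl_N

# defining relation of U_hbar(gl_N): rho(x) rho(y) - rho(y) rho(x) = hbar * rho([x, y])
assert np.allclose(rho(x) @ rho(y) - rho(y) @ rho(x), hbar * rho(bracket))
```

In particular, \(\mathbf{C}^{N}\) with this rescaled action is a \(\mathrm{U}_{\hbar}(\mathfrak{gl}_{N})\)-module for every value of \(\hbar\).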
We choose a natural basis \(\{E_{i,j}|1\leq i,j\leq N\}\) of matrix units with the commutator \[[E_{i,j},E_{k,l}]=\delta_{jk}E_{i,l}-\delta_{li}E_{k,j}.\] ### Module categories In this subsection, we recall the notion of a module category over a tensor category; for instance, see [15, Chapter 7]. Recall that a monoidal category is a plain category \(\mathcal{C}\) equipped with a bifunctor \(\otimes\colon\mathcal{C}\times\mathcal{C}\to\mathcal{C}\) together with a unit object \(\mathbf{1}_{\mathcal{C}}\) and a natural isomorphism (associativity constraint) \[a_{X,Y,Z}\colon(X\otimes Y)\otimes Z\xrightarrow{\sim}X\otimes(Y\otimes Z),\] satisfying the _unit_ and _pentagon_ axioms, see [15, Chapter 2]. Monoidal categories are categorical analogs of algebras. Likewise, there is a categorical analog of modules over an algebra. **Definition 2.3**.: Let \(\mathcal{C}\) be a monoidal category. A _(right) module category_ over \(\mathcal{C}\) is a plain category \(\mathcal{M}\) equipped with a bifunctor \[\otimes\colon\mathcal{M}\times\mathcal{C}\to\mathcal{M}\] with natural isomorphisms \[m_{M,X,Y}\colon M\otimes(X\otimes Y)\xrightarrow{\sim}(M\otimes X)\otimes Y,\] for all \(M\in\mathcal{M},\ X,Y\in\mathcal{C}\), such that the functor \(M\mapsto M\otimes\mathbf{1}_{\mathcal{C}}\) is an autoequivalence of \(\mathcal{M}\) and the associativity constraint \(m\) satisfies the _pentagon axiom_. _Example 2.4_.: Tautologically, any monoidal category \(\mathcal{C}\) is a module category over itself. There is also a generalization of a homomorphism between algebra modules. **Definition 2.5**.: Let \(\mathcal{C}\) be a monoidal category and \(\mathcal{M}_{1},\mathcal{M}_{2}\) be two module categories over \(\mathcal{C}\). 
A _functor of \(\mathcal{C}\)-module categories_ is a plain functor \(F\colon\mathcal{M}_{1}\to\mathcal{M}_{2}\) with a collection of natural isomorphisms \[J_{M,X}\colon F(M\otimes X)\to F(M)\otimes X \tag{2.2}\] for all \(M\in\mathcal{M}_{1},\ X\in\mathcal{C}\), satisfying a compatibility condition (a commutative diagram whose diagonal arrows come from the corresponding unit autoequivalences). ## 3. W-algebras for \(\mathfrak{gl}_{N}\) In general, a _finite W-algebra_ is associated to a pair \((\mathfrak{g},e)\) of a reductive Lie algebra \(\mathfrak{g}\) and a nilpotent element \(e\in\mathfrak{g}\), see [10] for a survey of the subject. From now on, we will be interested in the case \(\mathfrak{g}=\mathfrak{gl}_{N}\); according to [1], the W-algebras in this case admit a description in terms of combinatorial objects called _pyramids_. In what follows, we recall this description; for the details and proofs, we refer the reader to _loc. cit._ **Definition 3.1**.: [1, Section 7] A _pyramid_ \(\pi\) is a sequence of positive integers \((q_{1},\dots,q_{l})\), the _column heights_, such that \(\sum\limits_{i=1}^{l}q_{i}=N\) and \[0<q_{1}\leq\dots\leq q_{k},\quad q_{k+1}\geq\dots\geq q_{l}>0\] for some \(k\leq l\). The _maximal height_ \(n\) is \(\max(q_{1},\dots,q_{l})\). We number the blocks starting from top to bottom and from left to right; for each \(i\), we denote by \(\operatorname{col}(i)\) (resp. \(\operatorname{row}(i)\)) the corresponding column (resp. row) of \(i\), counted from left to right (resp. from top to bottom). 
Here is an example of a pyramid, with column heights \((q_{1},q_{2},q_{3},q_{4})=(1,3,2,1)\) and the seven blocks numbered down the columns, from left to right: \[\pi=\begin{array}{cccc} &2&&\\ &3&5&\\ 1&4&6&7\end{array}\tag{3}\] so that, for instance, \(\operatorname{col}(5)=3\) and \(\operatorname{row}(5)=2\). **Definition 3.2**.: The _\(\boldsymbol{k}\)-th truncated pyramid_ \({}_{k}\pi\) is the pyramid \((q_{1},\dots,q_{k})\) formed by the first \(k\) columns of \(\pi\); it contains \({}_{k}N=q_{1}+\dots+q_{k}\) blocks, numbered as in \(\pi\). We now define the nilpotent and parabolic subalgebras for \(\pi\) as above. Introduce a \(\mathbf{Z}\)-grading on \(\mathfrak{g}=\bigoplus_{j\in\mathbf{Z}}\mathfrak{g}_{j}\) by declaring that \(\deg(E_{i,j})=\operatorname{col}(j)-\operatorname{col}(i)\). Let \[\mathfrak{p}=\bigoplus_{j\geq 0}\mathfrak{g}_{j},\qquad\mathfrak{m}=\bigoplus_{j<0}\mathfrak{g}_{j}. \tag{3.1}\] Similarly to Definition 3.2, we will use an inductive structure on the algebras. **Definition 3.3**.: The _\(\boldsymbol{k}\)-th truncated nilpotent subalgebra_ \({}_{k}\mathfrak{m}\) (resp. _\(\boldsymbol{k}\)-th truncated parabolic subalgebra_ \({}_{k}\mathfrak{p}\)) is the nilpotent subalgebra (resp. the parabolic subalgebra) associated to the truncated pyramid \({}_{k}\pi\). Alternatively, we denote them by \(\mathfrak{m}_{{}_{k}N}\) (resp. \(\mathfrak{p}_{{}_{k}N}\)) if the truncation is clear from the context. 
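The grading and the decomposition (3.1) can be counted directly from \(\deg(E_{i,j})=\operatorname{col}(j)-\operatorname{col}(i)\). A minimal sketch, assuming blocks are numbered down the columns from left to right; the sample column heights \((1,3,2,1)\) are used purely for illustration:

```python
def columns(heights):
    """col(i) for blocks numbered down the columns, from left to right."""
    col, label = {}, 1
    for c, q in enumerate(heights, start=1):
        for _ in range(q):
            col[label] = c
            label += 1
    return col

def graded_split(heights):
    """Dimensions of p (deg >= 0) and m (deg < 0) for deg(E_ij) = col(j) - col(i)."""
    col = columns(heights)
    N = sum(heights)
    degs = [col[j] - col[i] for i in range(1, N + 1) for j in range(1, N + 1)]
    dim_p = sum(1 for d in degs if d >= 0)
    dim_m = sum(1 for d in degs if d < 0)
    return dim_p, dim_m

dim_p, dim_m = graded_split((1, 3, 2, 1))  # sample pyramid with N = 7
assert dim_p + dim_m == 7 ** 2             # p and m span complementary matrix entries
```

Since every matrix unit is homogeneous, \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{m}\) as vector spaces, which the dimension count reflects.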
To a pyramid \(\pi\), we assign a nilpotent element \(e\) defined by \[e:=\sum_{\begin{subarray}{c}1\leq i,j\leq N\\ \operatorname{row}(i)=\operatorname{row}(j)\\ \operatorname{col}(i)=\operatorname{col}(j)-1\end{subarray}}E_{i,j}. \tag{3.2}\] For instance, for pyramid (3), we have \(e=E_{3,5}+E_{1,4}+E_{4,6}+E_{6,7}\). The nilpotent element \(e\) defines a character \(\psi\) of \(\mathfrak{m}\): \[\psi\colon\mathfrak{m}\to\mathbf{C},\qquad x\mapsto\operatorname{Tr}(ex).\] Denote by \[Q:=\operatorname{U}_{\hbar}(\mathfrak{g})\otimes_{\operatorname{U}_{\hbar}(\mathfrak{m})}\mathbf{C}^{\psi}, \tag{3.3}\] where \(\mathfrak{m}\) acts on \(\mathbf{C}^{\psi}\) via the character \(\psi\). It is naturally a left \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module. As a vector space, we can identify \[Q\cong\operatorname{U}_{\hbar}(\mathfrak{p}) \tag{3.4}\] by the PBW theorem. For any \(\xi\in\mathfrak{m}\), denote \(\xi^{\psi}=\xi-\psi(\xi)\), and define the shift \[\mathfrak{m}^{\psi}=\operatorname{span}(\xi^{\psi}|\xi\in\mathfrak{m})\subset\operatorname{U}_{\hbar}(\mathfrak{g}). \tag{3.5}\] **Definition 3.4**.: _[_2_, Section 8]_ A _finite W-algebra_ \(\mathcal{W}\), associated to the nilpotent element \(e\) (3.2), is the space \[\mathcal{W}:=Q^{\mathfrak{m}^{\psi}}=\{w\in Q|\xi^{\psi}w=0\ \forall\xi\in\mathfrak{m}\}\] of \(\mathfrak{m}^{\psi}\)-invariant vectors in \(Q\). In [2], the authors introduced explicit generators of \(\mathcal{W}\), whose construction we recall now. Observe that in _loc. cit._, the authors use the version with \(\hbar=1\); to pass to the "asymptotic" version, one can use the Rees construction with respect to the _Kazhdan filtration_, see [2, (8.3)]. We will explicitly indicate how to modify the corresponding definitions and statements. Let \(\rho_{\pi,r}=n-\sum\limits_{k=r}^{l}q_{k}\). Introduce the modified generators \[\widetilde{E}_{i,j}=(-1)^{\operatorname{col}(j)-\operatorname{col}(i)}(E_{i,j}+\delta_{ij}\hbar\rho_{\pi,\operatorname{col}(i)}) \tag{3.6}\] for all \(1\leq i,j\leq N\). 
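Formula (3.2) can be checked against the example above. The sketch below assumes a bottom-aligned pyramid whose blocks are numbered down the columns from left to right, and takes column heights \((1,3,2,1)\) as a reading of pyramid (3); with these assumptions it reproduces the stated summands of \(e\):

```python
def box_positions(heights):
    """col(i) and row(i) for a bottom-aligned pyramid, blocks numbered down the columns."""
    n = max(heights)                         # maximal height
    col, row, label = {}, {}, 1
    for c, q in enumerate(heights, start=1):
        for r in range(n - q + 1, n + 1):    # rows counted from the top
            col[label], row[label] = c, r
            label += 1
    return col, row

def nilpotent_e(heights):
    """Index pairs (i, j) of the summands E_{i,j} of e, as in (3.2)."""
    col, row = box_positions(heights)
    N = sum(heights)
    return {(i, j) for i in range(1, N + 1) for j in range(1, N + 1)
            if row[i] == row[j] and col[i] == col[j] - 1}

# reproduces e = E_{3,5} + E_{1,4} + E_{4,6} + E_{6,7}
assert nilpotent_e((1, 3, 2, 1)) == {(3, 5), (1, 4), (4, 6), (6, 7)}
```

In words, \(e\) shifts every row of the pyramid one column to the right, which is why it is nilpotent.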
**Definition 3.5**.: The _Kazhdan filtration_ on \(\operatorname{U}_{\hbar}(\mathfrak{g})\) is defined by declaring that \(\deg(E_{i,j})=\operatorname{col}(j)-\operatorname{col}(i)+1\). In particular, assigning \(\deg(\hbar)=1\), we see that the modification (3.6) preserves the filtration. Let \(1\leq x\leq n\). **Definition 3.6**.: _[_2_, Section 9]_ _Consider the set of signs \(\sigma_{1}=\sigma_{2}=\ldots=\sigma_{x}=-,\sigma_{x+1}=\ldots=\sigma_{n}=+\). For any \(1\leq i,j\leq n\), define \(T^{(0)}_{ij;x}:=\delta_{ij}\sigma_{i}\) and, for \(r>0\),_ \[T^{(r)}_{ij;x}=\sum_{s=1}^{r}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{s}\\ j_{1},\ldots,j_{s}\end{subarray}}\sigma_{\operatorname{row}(j_{1})}\cdots\sigma_{\operatorname{row}(j_{s-1})}\widetilde{E}_{i_{1},j_{1}}\cdots\widetilde{E}_{i_{s},j_{s}}, \tag{3.7}\] with the following conditions on \(1\leq i_{1},\ldots,i_{s},j_{1},\ldots,j_{s}\leq N\): 1. \(\operatorname{row}(i_{1})=i,\operatorname{row}(j_{s})=j\) 2. \(\operatorname{row}(j_{k})=\operatorname{row}(i_{k+1})\) for all \(1\leq k\leq s-1\) 3. \(\operatorname{col}(i_{k})\leq\operatorname{col}(j_{k})\) for all \(1\leq k\leq s\) 4. \(\sum\limits_{k=1}^{s}(\operatorname{col}(j_{k})-\operatorname{col}(i_{k})+1)=r\) 5. If \(\sigma_{\operatorname{row}(j_{k})}=+\), then \(\operatorname{col}(j_{k})<\operatorname{col}(i_{k+1})\) for all \(1\leq k\leq s-1\). 6. If \(\sigma_{\operatorname{row}(j_{k})}=-\), then \(\operatorname{col}(j_{k})\geq\operatorname{col}(i_{k+1})\) for all \(1\leq k\leq s-1\). It follows from condition (3) that \(T^{(r)}_{ij;x}\in\operatorname{U}_{\hbar}(\mathfrak{p})\) by (3.4). Condition (4) can be equivalently reformulated as saying that the degree of \(T^{(r)}_{ij;x}\) with respect to the Kazhdan filtration is \(r\). For instance, when \(r=1\), it can be shown that \[T^{(1)}_{ij;x}=\sum\limits_{\begin{subarray}{c}1\leq h,k\leq N\\ \operatorname{col}(h)=\operatorname{col}(k)\\ \operatorname{row}(h)=i,\operatorname{row}(k)=j\end{subarray}}\widetilde{E}_{h,k}. 
\tag{3.8}\] Using these elements, the authors of [1] constructed generators of a W-algebra corresponding to a pyramid \(\pi\). We will give a precise statement for the subregular case in Section 5. We will also need an inductive structure on these elements. Recall Definition 3.2. Let \(\mathfrak{gl}_{{}_{k}N}\) be the Lie algebra corresponding to the truncated pyramid \({}_{k}\pi\). Denote by \({}_{k}\widetilde{E}_{i,j}\) the corresponding modified generators (3.6). Consider a (non-standard) embedding \[\iota\colon\operatorname{U}_{\hbar}(\mathfrak{gl}_{{}_{k}N})\to \operatorname{U}_{\hbar}(\mathfrak{gl}_{N}),\ \iota({}_{k}\widetilde{E}_{i,j})=\widetilde{E}_{i,j}. \tag{3.9}\] Define the truncated analog of elements (3.7) as \[{}_{k}T^{(r)}_{ij;x}:=\iota(T^{(r)}_{ij;x}). \tag{3.10}\] ## 4. Finite Drinfeld-Sokolov reduction ### Harish-Chandra bimodules Let \(G\) be an affine algebraic group over \(\mathbf{C}\) and \(\mathfrak{g}\) be its Lie algebra. Denote by \(\operatorname{U}_{\hbar}(\mathfrak{g})\) the universal enveloping algebra of \(\mathfrak{g}\) as in Definition 2.1. Let \(\operatorname{Rep}(G)\) be the category of \(G\)-representations. Naturally, \(\operatorname{U}_{\hbar}(\mathfrak{g})\) is an object in \(\operatorname{Rep}(G)\). **Definition 4.1**.: A _Harish-Chandra bimodule_ is a left \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module \(X\) in the category \(\operatorname{Rep}(G)\). In other words, it has a structure of a \(G\)-representation and a left \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module such that the action morphism \[\operatorname{U}_{\hbar}(\mathfrak{g})\otimes X\to X\] is a homomorphism of \(G\)-representations. The category of Harish-Chandra bimodules is denoted by \(\operatorname{HC}_{\hbar}(G)\). There is a natural right \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module structure on any Harish-Chandra bimodule \(X\) (justifying the name). 
Namely, for \(\xi\in\mathfrak{g}\), denote by \(\operatorname{ad}_{\xi}\colon X\to X\) the derivative of the \(G\)-action on \(X\) along \(\xi\). Then we can define \[x\xi:=\xi x-\hbar\operatorname{ad}_{\xi}(x),\ x\in X, \tag{4.1}\] and extend it to a right \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-action. Therefore, the category \(\operatorname{HC}_{\hbar}(G)\) is a subcategory of \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-bimodules, hence is equipped with a tensor structure: \[X\otimes^{\operatorname{HC}_{\hbar}(G)}Y:=X\otimes_{\operatorname{U}_{\hbar}(\mathfrak{g})}Y.\] There is a natural functor of the so-called _free Harish-Chandra bimodules_: \[\operatorname{free}\colon\operatorname{Rep}(G)\to\operatorname{HC}_{\hbar}(G),\qquad V\mapsto\operatorname{U}_{\hbar}(\mathfrak{g})\otimes V. \tag{4.2}\] One can check that this functor is monoidal. In fact, all Harish-Chandra bimodules can be "constructed" from the free ones. **Proposition 4.2**.: _[_1_, Proposition 2.7]_ _The category \(\operatorname{HC}_{\hbar}(G)\) is generated by \(\operatorname{free}(V)\) for \(V\in\operatorname{Rep}(G)\)._ ### Drinfeld-Sokolov reduction Now let us restrict to the case \(G=\operatorname{GL}_{N}\) and \(\mathfrak{g}=\mathfrak{gl}_{N}\). We use the notations from Section 3; in particular, we fix a pyramid \(\pi\) and consider the corresponding nilpotent subalgebra \(\mathfrak{m}\) with a character \(\psi\in\mathfrak{m}^{*}\). **Definition 4.3**.: A _Whittaker module_ is a left \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module \(M\) such that the action of \(\mathfrak{m}^{\psi}\) from (3.5) is locally nilpotent. A _Whittaker vector_ is an \(\mathfrak{m}^{\psi}\)-invariant vector \(m\in M\), i.e. one satisfying \[\xi^{\psi}m=0\ \text{ for all }\xi\in\mathfrak{m}.\] The space of Whittaker vectors is denoted by \(M^{\mathfrak{m}^{\psi}}\). 
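As a minimal sanity check of the character \(\psi(x)=\operatorname{Tr}(ex)\) in the smallest case \(\mathfrak{gl}_{2}\) (two columns of height one, so \(e=E_{1,2}\) and \(\mathfrak{m}=\mathrm{span}(E_{2,1})\)), the value \(\psi(E_{2,1})=1\) explains the shift \(\xi^{\psi}=E_{2,1}-1\) appearing in the Whittaker condition:

```python
import numpy as np

def E(i, j, N=2):
    """Matrix unit E_{i,j} in gl_N (1-indexed)."""
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

e = E(1, 2)                  # nilpotent element of the two-column gl_2 pyramid
psi = np.trace(e @ E(2, 1))  # psi(E_{2,1}) = Tr(e E_{2,1}) = Tr(E_{1,1})
assert psi == 1.0            # hence xi^psi = E_{2,1} - psi(E_{2,1}) = E_{2,1} - 1
```

This is consistent with the series in the example that follows.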
_Example 4.4_.: In \(\mathfrak{gl}_{2}\), the series \[P^{\psi}=\sum_{k=0}^{\infty}(-1)^{k}\frac{E_{1,1}(E_{1,1}+1)\cdots(E_{1,1}+k-1)}{k!}(E_{2,1}-1)^{k}\] is \((E_{2,1}-1)\)-invariant under the left action and generates the Whittaker vectors; see [11] for a version in the left quotient. Denote by \(\operatorname{Wh}_{\hbar}\) the category of \((\operatorname{U}_{\hbar}(\mathfrak{g}),\mathcal{W})\)-bimodules that are Whittaker with respect to the \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-action. Naturally, the quotient \(Q\) from (3.3) is an object of \(\operatorname{Wh}_{\hbar}\). In particular, it defines an action functor \[\operatorname{act}_{\mathfrak{g}}^{\psi}\colon\operatorname{HC}_{\hbar}(G)\to\operatorname{Wh}_{\hbar},\qquad X\mapsto X\otimes_{\operatorname{U}_{\hbar}(\mathfrak{g})}Q.\] Likewise, there is an action \[\operatorname{act}_{\mathcal{W}}\colon{}_{\mathcal{W}}\mathrm{BiMod}_{\mathcal{W}}\to\operatorname{Wh}_{\hbar},\qquad Y\mapsto Q\otimes_{\mathcal{W}}Y. \tag{4.3}\] Consider the functor \[(-)^{\mathfrak{m}^{\psi}}\colon\operatorname{Wh}_{\hbar}\to{}_{\mathcal{W}}\mathrm{BiMod}_{\mathcal{W}}\] of Whittaker invariants sending a Whittaker module to its space of Whittaker vectors. The following result is a direct consequence of _Skryabin's equivalence_ [13]. **Theorem 4.5**.: _The functor \((-)^{\mathfrak{m}^{\psi}}\) is an equivalence._ This motivates the following definition.
**Definition 4.6**.: The _(finite) Drinfeld-Sokolov reduction_ is the functor \[\operatorname{res}^{\psi}\colon\operatorname{HC}_{\hbar}(\operatorname{GL}_{N} )\to\operatorname{\mathcal{W}}\operatorname{BiMod}_{\mathcal{W}},\qquad X \mapsto(X\otimes_{\operatorname{U}_{\hbar}(\mathfrak{g})}Q)^{\mathfrak{m}^{ \psi}}.\] In what follows, for any Harish-Chandra bimodule \(X\), we denote \[X/\mathfrak{m}^{\psi}:=X\otimes_{\operatorname{U}_{\hbar}(\mathfrak{g})}Q.\] _Remark 4.7_.: There is an equivalent presentation of the Drinfeld-Sokolov reduction that we will use later in the paper. Namely, recall the adjoint \(\mathfrak{gl}_{N}\)-action from Subsection 4.1. For any Harish-Chandra bimodule \(X\), define \[\operatorname{ad}_{m}([x]):=[\operatorname{ad}_{m}(x)]\in X/\mathfrak{m}^{ \psi},\ m\in\mathfrak{m},[x]\in X/\mathfrak{m}^{\psi}.\] We will also use the notation \[[m,x]:=\operatorname{ad}_{m}([x]),\ m\in\mathfrak{m},x\in X,\] if the quotient is clear from the context. Since \(\psi\) is a character, this action is well-defined. One can easily see that \[\hbar\cdot\operatorname{ad}_{m}([x])=m^{\psi}\cdot[x].\] In particular, the space of Whittaker vectors in \(X/\mathfrak{m}^{\psi}\) can be identified with the space of \(\operatorname{ad}_{\mathfrak{m}}\)-invariant vectors in \(X/\mathfrak{m}^{\psi}\). As in [16, Corollary 4.18], we obtain the following. **Theorem 4.8**.: _The Drinfeld-Sokolov reduction is colimit-preserving and monoidal._ Explicitly, the monoidal structure is given by the usual product on quantum Hamiltonian reductions: \[(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\otimes_{\mathcal{W}}(Y/\mathfrak{m} ^{\psi})^{\mathfrak{m}^{\psi}}\xrightarrow{\sim}(X\otimes_{\mathrm{U}_{ \mathfrak{h}}(\mathfrak{g})}Y/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}},\ [x]\otimes[y]\mapsto[x \otimes y]. 
\tag{4.4}\] In particular, composing with the monoidal functor of free Harish-Chandra bimodules (4.2), we get a monoidal functor: \[\mathrm{Rep}(G)\to{}_{\mathcal{W}}\mathrm{BiMod}_{\mathcal{W}}. \tag{4.5}\] We study its properties in the next section. ## 5. Subregular case In this section, we apply the finite Drinfeld-Sokolov reduction to subregular W-algebras and study its tensor properties. ### Pyramid Recall from Section 3 that W-algebras for \(\mathfrak{gl}_{N}\) are described by pyramids. In the subregular case, the corresponding pyramid is \[\pi=\begin{array}{ccccc}\cline{1-1}\multicolumn{1}{|c|}{1}&&&&\\ \hline\multicolumn{1}{|c|}{2}&\multicolumn{1}{c|}{3}&\multicolumn{1}{c|}{\cdots}&\multicolumn{1}{c|}{N-1}&\multicolumn{1}{c|}{N}\\ \hline\end{array} \tag{5.1}\] and by (3.2), the subregular nilpotent is given by \[e=E_{2,3}+\ldots+E_{N-1,N}. \tag{5.2}\] The nilpotent algebra \(\mathfrak{m}\) is \[\mathfrak{m}=\mathrm{span}(E_{i,j}|3\leq i\leq N,j<i),\] namely, \[\mathfrak{m}=\begin{pmatrix}0&0&0&\ldots&0&0\\ 0&0&0&\ldots&0&0\\ *&*&0&\ldots&0&0\\ *&*&*&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ *&*&*&\ldots&*&0\end{pmatrix}. \tag{5.3}\] The parabolic subalgebra \(\mathfrak{p}\) is \[\mathfrak{p}=\begin{pmatrix}*&*&*&\ldots&*&*\\ *&*&*&\ldots&*&*\\ 0&0&*&\ldots&*&*\\ 0&0&0&\ldots&*&*\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&0&*\end{pmatrix}. \tag{5.4}\] ### Semi-classical limit It turns out that the semi-classical limit of the tensor structure on (4.5) is intrinsically related not to the whole Lie algebra \(\mathfrak{gl}_{N}\), but to a certain almost parabolic subspace of it.
**Definition 5.1**.: The _subregular wonderbolic subspace_ \(\mathfrak{w}\) (for the rest of the paper, simply _wonderbolic subspace_) is the subspace of matrices of the form \[\mathfrak{w}=\begin{pmatrix}0&*&\ldots&*&0\\ 0&*&\ldots&*&0\\ *&*&\ldots&*&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ *&*&\ldots&*&0\end{pmatrix}.\] While the subregular nilpotent element \(e\) from (5.2) does not lie in \(\mathfrak{w}\), it defines the following 2-form on \(\mathfrak{w}\): \[\omega\colon\mathfrak{w}\wedge\mathfrak{w}\to\mathbf{C},\ x\wedge y\mapsto\operatorname{Tr}(e\cdot[x,y]).\] Observe that the nilpotent subalgebra \(\mathfrak{m}\) from (5.3) lies in \(\mathfrak{w}\); moreover, it is isotropic with respect to \(\omega\). A natural complement is given by the Borel subalgebra \[\mathfrak{b}=\operatorname{span}(E_{k,l}|1\leq k\leq l\leq N-1,2\leq l).\] Namely, \[\mathfrak{b}=\begin{pmatrix}0&*&*&\dots&*&0\\ 0&*&*&\dots&*&0\\ 0&0&*&\dots&*&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\dots&*&0\\ 0&0&0&\dots&0&0\end{pmatrix}. \tag{5.5}\] Similarly to Definition 3.3, we will use the following. **Definition 5.2**.: The _\(\boldsymbol{k}\)-th truncated Borel subalgebra_ \({}_{k}\mathfrak{b}\) is the Borel subalgebra as in (5.5) associated to the truncated pyramid \({}_{k}\pi\) from Definition 3.2. Alternatively, we denote it by \(\mathfrak{b}_{{}_{k}N}\) if the truncation is clear from the context. It turns out that \(\omega\) is symplectic and both spaces are Lagrangian.
**Proposition 5.3**.: _The form \(\omega\) is non-degenerate with inverse \(r_{\mathfrak{w}}=\mathbf{j}_{c}-\mathbf{j}_{c}^{21}\), where \(\mathbf{j}_{c}^{21}\) is uniquely defined by_ \[\mathbf{j}_{c}^{21}(E_{i,j}^{*})=\begin{cases}\delta_{i>2}E_{j,i-1},\ j=1,2\\ E_{j,i-1}-\mathbf{j}_{c}^{21}(E_{i-1,j-1}^{*}),\ j\geq 3\end{cases} \tag{5.6}\] _for any \(E_{i,j}^{*}\in\mathfrak{m}^{*}\), where \(E_{i,j}^{*}\) is the dual basis and we consider \(\mathbf{j}_{c}^{21}\) as a map \(\mathfrak{m}^{*}\to\mathfrak{b}\)._ Proof.: Observe that both \(\mathfrak{m}\) and \(\mathfrak{b}\) are isotropic subspaces with respect to \(\omega\). Since the latter is skew-symmetric, it is enough to construct an inverse \(-\mathbf{j}_{c}^{21}\) of only one of the two maps, say \(\omega\colon\mathfrak{b}\to\mathfrak{m}^{*}\). Observe that \[\omega(E_{j,i-1})=\begin{cases}-\delta_{i>2}E_{i,j}^{*},\ j=1,2\\ -E_{i,j}^{*}+E_{i-1,j-1}^{*},\ j\geq 3\end{cases}\] for \(i\geq j+1\). Then (5.6) follows. Note that these equations allow us to construct \(\mathbf{j}_{c}^{21}\) inductively, starting from \(j=1\) and \(j=2\). In particular, they define the inverse. _Remark 5.4_.: The subregular wonderbolic subspace is an analog of the _mirabolic subalgebra_ in the case of the regular nilpotent element, in the same way as \(r_{\mathfrak{w}}\) is an analog of the rational Cremmer-Gervais \(r\)-matrix, see [11]. One main difference is that it is not a subalgebra, thus \(r_{\mathfrak{w}}\) does not satisfy the classical Yang-Baxter equation. However, it turns out \(\mathbf{j}_{c}\) is the constant part of the semi-classical limit of the tensor structure on Whittaker vectors. As the reader will see in Subsection 5.5, in addition to the constant part \(r_{\mathfrak{w}}\), the semi-classical limit of the tensor structure also involves certain "dynamical" parameters lying in the subalgebra spanned by \(\{E_{11},E_{21}\}\).
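The linear algebra behind Proposition 5.3 is easy to test numerically. The following sketch (our sanity check, not part of the text; \(N=5\) is an arbitrary small choice) verifies that \(e\) from (5.2) is subregular, i.e. of Jordan type \((N-1,1)\), that \(\mathfrak{m}\) and \(\mathfrak{b}\) are \(\omega\)-isotropic of equal dimension, and that \(\omega\) pairs them perfectly, hence is non-degenerate on \(\mathfrak{w}=\mathfrak{m}\oplus\mathfrak{b}\).

```python
import numpy as np
from itertools import product

N = 5

def E(i, j):
    """Elementary matrix E_{i,j} (1-indexed, as in the text)."""
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

def bracket(x, y):
    return x @ y - y @ x

# Subregular nilpotent e = E_{2,3} + ... + E_{N-1,N}, see (5.2).
e = sum(E(i, i + 1) for i in range(2, N))

# Jordan type (N-1, 1): e^{N-2} != 0 but e^{N-1} = 0.
assert np.any(np.linalg.matrix_power(e, N - 2) != 0)
assert np.all(np.linalg.matrix_power(e, N - 1) == 0)

# Bases of m (5.3) and b (5.5).
m_basis = [E(i, j) for i in range(3, N + 1) for j in range(1, i)]
b_basis = [E(k, l) for l in range(2, N) for k in range(1, l + 1)]
assert len(m_basis) == len(b_basis) == N * (N - 1) // 2 - 1

def omega(x, y):
    """The 2-form omega(x, y) = Tr(e [x, y])."""
    return np.trace(e @ bracket(x, y))

# m and b are isotropic subspaces of w.
assert all(abs(omega(x, y)) < 1e-12 for x, y in product(m_basis, m_basis))
assert all(abs(omega(x, y)) < 1e-12 for x, y in product(b_basis, b_basis))

# omega pairs m and b perfectly, so it is non-degenerate on w = m + b.
gram = np.array([[omega(x, y) for y in b_basis] for x in m_basis])
assert abs(np.linalg.det(gram)) > 1e-9
print("omega is a perfect pairing between m and b for N =", N)
```

The Gram matrix computed here is exactly the map \(\omega\colon\mathfrak{b}\to\mathfrak{m}^{*}\) inverted in the proof above; its triangular structure with \(\pm 1\) entries is what makes the recursion (5.6) terminate.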
### Whittaker vectors: general setup In this subsection, we show that the Drinfeld-Sokolov reduction functor (4.5) admits a canonical "trivialization." Recall the elements \(T^{(r)}_{ij;x}\) from (3.7) and their truncated analogs \({}_{k}T^{(r)}_{ij;x}\) from (3.10). As we mentioned in Section 3, the authors of [1] considered the case \(\hbar=1\); however, all the proofs can be translated _mutatis mutandis_ to their \(\hbar\)-versions, and we will not mention this explicitly below. Recall also that a W-algebra is defined as the quantum Hamiltonian reduction \((\operatorname{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\). Since \(T^{(r)}_{ij;x}\in\operatorname{U}_{\hbar}(\mathfrak{p})\) by construction, we may treat them as elements in the quotient \(\operatorname{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi}\). Combining a particular case of the fundamental result [1, Theorem 10.1], which identifies W-algebras with _truncated shifted Yangians_, with [1, Corollary 6.3], one obtains an explicit presentation of subregular W-algebras. **Theorem 5.5**.: _The monomials in the elements_ \[T^{(1)}_{11;0},\qquad T^{(1)}_{21;1},\qquad T^{(N-1)}_{12;1},\qquad\{T^{(r)}_{22;1}\}_{1\leq r\leq N-1},\] _taken in any fixed order, form a basis of the subregular W-algebra \(\mathcal{W}\)._ Observe that \(T^{(1)}_{11;0}=E_{1,1}-(N-2)\hbar\) and \(T^{(1)}_{21;1}=-E_{2,1}\). We will consider the following subalgebra of \(\mathfrak{p}\): \[\mathfrak{l}:=\operatorname{span}(E_{2,1},E_{1,1}). \tag{5.7}\] For any left \(\operatorname{U}_{\hbar}(\mathfrak{g})\)-module \(X\), denote by \(\mathfrak{b}\backslash X:=\mathbf{C}\otimes_{\operatorname{U}_{\hbar}(\mathfrak{b})}X\), where \(\mathfrak{b}\) acts on \(\mathbf{C}\) trivially.
Consider the composition \[\mathcal{W}=(\operatorname{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi})^{ \mathfrak{m}^{\psi}}\hookrightarrow\operatorname{U}_{\hbar}(\mathfrak{g})/ \mathfrak{m}^{\psi}\to\mathfrak{b}\backslash\operatorname{U}_{\hbar}( \mathfrak{g})/\mathfrak{m}^{\psi}. \tag{5.8}\] **Proposition 5.6**.: _The map (5.8) is an isomorphism of right \(\mathcal{W}\)-modules._ Proof.: Consider the map between the associated graded spaces with respect to the filtration induced from the Kazhdan grading on both sides (recall that \(\hbar\in\mathbf{C}^{*}\)). It is clear from the formula (3.7) that it sends \[T^{(1)}_{11;0}\mapsto E_{1,1},\qquad T^{(1)}_{21;1}\mapsto E_{2,1},\qquad T^{ (N-1)}_{22;1}\mapsto E_{N,N}.\] It is also clear that it sends \[T^{(r)}_{22;1}\mapsto E_{r+1,N}+x,\ 1\leq r<N-1,\] where \(x\) is expressible in terms of \(E_{11},E_{21},E_{s+1,N}\) for \(s>r\). Likewise, \[T^{(N-1)}_{12;1}=E_{1,N}+x,\] where \(x\) is expressible in terms of \(E_{1,1},E_{2,1},E_{s+1,N}\) for \(s\geq 1\). In particular, we see that this map sends generators to generators. Since this is an algebra homomorphism, we conclude by Theorem 5.5 that it is an isomorphism. In particular, the map (5.8) is an isomorphism as well. Recall the setting of Section 4. **Corollary 5.7**.: _For any Harish-Chandra bimodule \(X\), there is a natural isomorphism of right \(\mathcal{W}\)-modules_ \[\operatorname{res}^{\psi}(X)=(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}} \xrightarrow{\sim}\mathfrak{b}\backslash X/\mathfrak{m}^{\psi}.\] Proof.: Recall that by Skryabin's theorem, the natural action map \[\operatorname{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi}\otimes_{\mathcal{ W}}(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\to X/\mathfrak{m}^{\psi}\] is an isomorphism. 
Therefore, by Proposition 5.6 we have \[\mathfrak{b}\backslash X/\mathfrak{m}^{\psi}\cong\mathfrak{b}\backslash\operatorname{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi}\otimes_{\mathcal{W}}(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\cong\mathcal{W}\otimes_{\mathcal{W}}(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}=(X/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\] as required. In particular, it implies that we can "trivialize" the Drinfeld-Sokolov reduction (4.5) on free Harish-Chandra bimodules. **Proposition 5.8**.: _For any \(V\in\operatorname{Rep}(G)\), there is a natural isomorphism of right \(\mathcal{W}\)-modules_ \[\operatorname{triv}_{V}\colon V\otimes\mathcal{W}\xrightarrow{\sim}\mathfrak{b}\backslash\operatorname{U}_{\hbar}(\mathfrak{g})\otimes V/\mathfrak{m}^{\psi}, \tag{5.9}\] _i.e. for every \(v\in V\), there exists a unique Whittaker vector \(v^{\psi}\) of the form_ \[v^{\psi}=1\otimes v+\sum x_{i}\otimes v_{i},\ x_{i}\in\mathfrak{b}\cdot\operatorname{U}_{\hbar}(\mathfrak{g}). \tag{5.10}\] _In particular, together with the isomorphism_ \[\mathfrak{b}\backslash\operatorname{U}_{\hbar}(\mathfrak{g})\otimes V/\mathfrak{m}^{\psi}\to\operatorname{res}^{\psi}(\operatorname{U}_{\hbar}(\mathfrak{g})\otimes V),\] _we have a commutative diagram_ \[\begin{array}{ccc}\operatorname{Rep}(G)&\xrightarrow{\ \operatorname{res}^{\psi}\circ\,\operatorname{free}\ }&{}_{\mathcal{W}}\mathrm{BiMod}_{\mathcal{W}}\\ &{\scriptstyle\operatorname{free}_{\mathcal{W}}}\searrow&\big\downarrow\\ &&\mathrm{RMod}_{\mathcal{W}}\end{array}\] _where \(\mathrm{RMod}_{\mathcal{W}}\) is the category of right \(\mathcal{W}\)-modules, the vertical arrow is the forgetful functor, and_ \[\mathrm{free}_{\mathcal{W}}\colon\operatorname{Rep}(G)\to\mathrm{RMod}_{\mathcal{W}},\qquad V\mapsto V\otimes\mathcal{W}\] _is the functor of free right \(\mathcal{W}\)-modules._ Proof.: Follows from the PBW theorem and Proposition 5.6. Observe that \(\mathrm{RMod}_{\mathcal{W}}\) is naturally a right module category over \({}_{\mathcal{W}}\mathrm{BiMod}_{\mathcal{W}}\), see Definition 2.3.
Since \(\mathrm{res}^{\psi}\) is a monoidal functor, \(\mathrm{RMod}_{\mathcal{W}}\) becomes a right module category over \(\operatorname{Rep}(G)\) as well. Likewise, the category \(\operatorname{Rep}(G)\) is tautologically a right module category over itself. Also, using Skryabin's theorem, we obtain a natural isomorphism \[X/\mathfrak{m}^{\psi}\otimes_{\mathcal{W}}(Y/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\cong X\otimes_{\mathrm{U}_{\hbar}(\mathfrak{g})}\mathrm{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi}\otimes_{\mathcal{W}}(Y/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\xrightarrow{\sim}X\otimes_{\mathrm{U}_{\hbar}(\mathfrak{g})}Y/\mathfrak{m}^{\psi}.\] In particular, for every \(U,V\in\operatorname{Rep}(G)\), we have \[\mathrm{free}_{\mathcal{W}}(U)\otimes_{\mathcal{W}}\mathrm{res}^{\psi}(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes V)\xrightarrow{\mathrm{triv}_{U}\otimes\mathrm{id}}\mathfrak{b}\backslash(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes U)/\mathfrak{m}^{\psi}\otimes_{\mathcal{W}}(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes V/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\cong\mathfrak{b}\backslash(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes U\otimes V)/\mathfrak{m}^{\psi}\] canonically. At the same time, since \(\mathrm{free}_{\mathcal{W}}(U)\) is a free \(\mathcal{W}\)-module, we also have a canonical isomorphism \[\mathrm{free}_{\mathcal{W}}(U)\otimes_{\mathcal{W}}\mathrm{res}^{\psi}(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes V)=(U\otimes\mathcal{W})\otimes_{\mathcal{W}}\mathrm{res}^{\psi}(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes V)\cong U\otimes V\otimes\mathcal{W}\] of right \(\mathcal{W}\)-modules. Combining this with Proposition 5.8 and Theorem 4.8, we get a "matrix" form of the monoidal structure on the Drinfeld-Sokolov reduction. **Theorem 5.9**.: _The functor \(\mathrm{free}_{\mathcal{W}}\colon\operatorname{Rep}(G)\to\mathrm{RMod}_{\mathcal{W}}\) is a functor of right \(\operatorname{Rep}(G)\)-module categories in the sense of Definition 2.5. 
In particular, there is a collection of natural isomorphisms_ \[J_{UV}\colon U\otimes V\otimes\mathcal{W}\to U\otimes V\otimes\mathcal{W},\] _for all \(U,V\in\operatorname{Rep}(G)\)._ In what follows, we will compute its semi-classical limit. ### Whittaker vectors: vector representation We explicitly compute the generating Whittaker vectors for \(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi}\), where \[\mathbf{C}^{N}=\mathrm{span}(v_{i}|1\leq i\leq N),\qquad\mathrm{ad}_{E_{ij}}(v_{k})=\delta_{jk}v_{i}\] (we use the notation \(\mathrm{ad}_{E_{ij}}\) from Subsection 4.1). Recall the truncated generators (3.10). The next proposition gives a relation between \({}_{k}T\) for different values of \(k\). **Proposition 5.10**.: _[_2_, Lemma 10.4]_ _Suppose that \(r>0\). Then_ \[{}_{1}T^{(r)}_{i,2;1}={}_{2}T^{(r)}_{i,2;1}+{}_{2}T^{(r-1)}_{i,2;1}\widetilde{E}_{N-1,N-1}+[{}_{2}T^{(r-1)}_{i,2;1},\widetilde{E}_{N-2,N-1}] \tag{5.11}\] _for \(i=1,2\), where \([,]\) refers to the adjoint action from Remark 4.7._ We will need the following lemma. **Lemma 5.11**.: _For \(i=1,2\), we have_ \[[E_{N,N-1},{}_{1}T^{(r)}_{i,2;1}]={}_{2}T^{(r-1)}_{i,2;1}. \tag{5.12}\] Proof.: By Proposition 5.10, it suffices to compute \[\left[E_{N,N-1},\ {}_{2}T^{(r)}_{i2;1}+{}_{2}T^{(r-1)}_{i2;1}\widetilde{E}_{N-1,N-1}+[{}_{2}T^{(r-1)}_{i2;1},\widetilde{E}_{N-2,N-1}]\right]. \tag{5.13}\] Since \({}_{2}T^{(s)}_{i,2;x}\in\operatorname{U}_{\hbar}(\mathfrak{gl}_{{}_{2}N})\) and \({}_{2}N<N-1\), we have \([E_{N,N-1},{}_{2}T^{(s)}_{i,2;x}]=0\) for any \(s\). Thus, (5.13) becomes \[\left[E_{N,N-1},\ {}_{2}T^{(r-1)}_{i,2;1}\widetilde{E}_{N-1,N-1}+[{}_{2}T^{(r-1)}_{i,2;1},\widetilde{E}_{N-2,N-1}]\right]={}_{2}T^{(r-1)}_{i,2;1}+\left[E_{N,N-1},[{}_{2}T^{(r-1)}_{i,2;1},\widetilde{E}_{N-2,N-1}]\right]\] since \(\widetilde{E}_{N-1,N-1}=E_{N-1,N-1}+\hbar\cdot c\) for some constant \(c\) by (3.6).
From Jacobi's identity, we have \[\left[E_{N,N-1},[{}_{2}T_{i,2;1}^{(r-1)},\widetilde{E}_{N-2,N-1}]\right]=-\left[{}_{2}T_{i,2;1}^{(r-1)},[\widetilde{E}_{N-2,N-1},E_{N,N-1}]\right]-\left[\widetilde{E}_{N-2,N-1},[E_{N,N-1},{}_{2}T_{i,2;1}^{(r-1)}]\right]=0,\] which proves the lemma. We go on to the main theorem. **Theorem 5.12**.: _For \(N-j\neq 1\), the following vectors in \(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi}\)_ \[\tilde{v}_{N-j}^{\psi}=1\otimes v_{N-j}+\sum_{i=0}^{j-1}(-1)^{j-i}{}_{i+1}T_{22;1}^{(j-i)}\otimes v_{N-i} \tag{5.14}\] _are Whittaker._ Proof.: We proceed by strong induction on the subregular pyramid \(\pi\) from (5.1). The base case is \[\pi=\begin{array}{|c|}\hline 1\\ \hline 2\\ \hline\end{array}\] and the corresponding nilpotent element and nilpotent subalgebra are trivial. In particular, \[1\otimes v_{2}\in\mathrm{U}_{\hbar}(\mathfrak{gl}_{2})\otimes\mathbf{C}^{2}\] is automatically Whittaker. For the inductive step, let us assume that \(N>2\) and that the statement is proved for the truncated pyramid \({}_{1}\pi\). It is clear that \(1\otimes v_{N}\) is Whittaker. Also, the vector \[{}_{1}\tilde{v}_{N-j}^{\psi}=1\otimes v_{N-j}+\sum_{i=1}^{j-1}(-1)^{j-i}{}_{i+1}T_{22;1}^{(j-i)}\otimes v_{N-i} \tag{5.15}\] is invariant under the truncated subalgebra \({}_{1}\mathfrak{m}\) (recall Definition 3.3). Indeed: while the coefficients of (5.15) are different from the ones of (5.14) for \(\mathfrak{gl}_{{}_{1}N}\) by definition of the truncated generators (3.10), the non-standard embeddings \(\mathrm{U}_{\hbar}(\mathfrak{gl}_{{}_{k}N})\to\mathrm{U}_{\hbar}(\mathfrak{gl}_{N})\) from (3.9) are homomorphisms for all \(1\leq k\leq N-1\), so the Whittaker property is preserved. Hence, let us rewrite equation (5.14) in a recursive form: \[\tilde{v}_{N-j}^{\psi}={}_{1}\tilde{v}_{N-j}^{\psi}+(-1)^{j}\cdot{}_{1}T_{22;1}^{(j)}\otimes v_{N}. 
\tag{5.16}\] To show that this vector is Whittaker, it suffices to check \[(E_{N,N-1}-1)\cdot\tilde{v}_{N-j}^{\psi}=\hbar[E_{N,N-1},\tilde{v}_{N-j}^{\psi}]=0\] (recall Remark 4.7). Indeed: * For all \(x\in{}_{1}\mathfrak{m}\), we have \[x^{\psi}\cdot{}_{1}\tilde{v}_{N-j}^{\psi}=0\] by the induction hypothesis, and \[x^{\psi}\cdot{}_{1}T_{22;1}^{(j)}\otimes v_{N}=0\] by Theorem 5.5 and because any element of \({}_{1}\mathfrak{m}\) commutes with \(v_{N}\). * For any \(1\leq k<N-1\), there exists \(x\in{}_{1}\mathfrak{m}\) such that \(E_{N,N-k}=[E_{N,N-1}^{\psi},x^{\psi}]\). Therefore, assuming we have proved invariance under \(E_{N,N-1}^{\psi}\), we have \[E_{N,N-k}^{\psi}\cdot\tilde{v}_{N-j}^{\psi}=\hbar^{-1}(E_{N,N-1}^{\psi}x^{\psi}-x^{\psi}E_{N,N-1}^{\psi})\tilde{v}_{N-j}^{\psi}=0.\] By construction, \[{}_{1}\tilde{v}_{N-j}^{\psi}={}_{2}\tilde{v}_{N-j}^{\psi}+(-1)^{j-1}\cdot{}_{2}T_{22;1}^{(j-1)}\otimes v_{N-1}.\] Since \({}_{2}\tilde{v}_{N-j}^{\psi}\in\mathrm{U}_{\hbar}(\mathfrak{gl}_{N-2})\otimes\mathbf{C}^{N-2}\), we have \([E_{N,N-1},{}_{2}\tilde{v}_{N-j}^{\psi}]=0\). Likewise, \[[E_{N,N-1},{}_{2}T_{22;1}^{(j-1)}\otimes v_{N-1}]={}_{2}T_{22;1}^{(j-1)}\otimes v_{N},\] and therefore, \[[E_{N,N-1},{}_{1}\tilde{v}_{N-j}^{\psi}]=(-1)^{j-1}\cdot{}_{2}T_{22;1}^{(j-1)}\otimes v_{N}.\] By Lemma 5.11, we get \[[E_{N,N-1},(-1)^{j}\cdot{}_{1}T^{(j)}_{22;1}\otimes v_{N}]=(-1)^{j}{}_{2}T^{(j-1)}_{22;1}\otimes v_{N}.\] Summing up these equalities and recalling (5.16), we conclude that \[[E_{N,N-1},\widetilde{v}^{\psi}_{N-j}]=0,\] and the induction is complete. **Theorem 5.13**.: _The remaining vector_ \[\tilde{v}^{\psi}_{1}=1\otimes v_{1}+\sum_{i=0}^{N-3}(-1)^{N-i-2}\cdot{}_{i+1}T^{(N-i-2)}_{12;1}\otimes v_{N-i} \tag{5.17}\] _is also Whittaker in \(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi}\)._ Proof.: Similar to the proof of Theorem 5.12. Observe that these vectors do not quite satisfy the assumption of (5.10).
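For orientation (assuming \(N\geq 4\), so that both cases are allowed by Theorem 5.12), the two smallest instances of (5.14) read \[\tilde{v}_{N-1}^{\psi}=1\otimes v_{N-1}-{}_{1}T_{22;1}^{(1)}\otimes v_{N},\qquad\tilde{v}_{N-2}^{\psi}=1\otimes v_{N-2}+{}_{1}T_{22;1}^{(2)}\otimes v_{N}-{}_{2}T_{22;1}^{(1)}\otimes v_{N-1}.\] The coefficients lie in \(\mathrm{U}_{\hbar}(\mathfrak{p})\) but may carry \(\mathfrak{l}\)-constant parts, so they need not lie in \(\mathfrak{b}\cdot\mathrm{U}_{\hbar}(\mathfrak{g})\) as (5.10) requires.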
However, they are not far away from the canonical form. Recall the definition of the algebra \(\mathfrak{l}\) from (5.7). Observe that \(\mathfrak{l}\subset\mathcal{W}\). **Definition 5.14**.: Consider the natural PBW basis of \(\mathrm{U}_{\hbar}(\mathfrak{g})\) induced from the basis \(\{E_{ij}\}\) of \(\mathfrak{g}\). An element \(x\in\mathrm{U}_{\hbar}(\mathfrak{g})\) is called \(\mathfrak{l}\)-_constant_ if \(x\in\mathrm{U}_{\hbar}(\mathfrak{l})\). Thanks to the upper-triangular form of (5.14) and (5.17), we can apply an upper-triangular matrix with unit diagonal (hence invertible) with coefficients in \(\mathrm{U}_{\hbar}(\mathfrak{l})\) to the constructed generators to bring them to the necessary form. Namely, denote by \(c^{N-i}_{N-j}\) the \(\mathfrak{l}\)-constant term of \((-1)^{j-i}\cdot{}_{i+1}T^{(j-i)}_{22;1}\). Then we can perform the following inductive operation: \[\widetilde{v}^{\psi}_{N-j}\mapsto\widetilde{v}^{\psi}_{N-j}-\sum_{i=0}^{j-1}v^{\psi}_{N-i}c^{N-i}_{N-j}. \tag{5.18}\] Removing all the \(\mathfrak{l}\)-constant terms step by step, we eventually get the canonical generators, which we denote by \[v^{\psi}_{i}=1\otimes v_{i}+\sum_{j>i}x^{j}_{i}\otimes v_{j},\ x^{j}_{i}\in\mathfrak{b}\cdot\mathrm{U}_{\hbar}(\mathfrak{g}), \tag{5.19}\] for \(1\leq i\leq N\). Moreover, this form is actually more refined: we have \[v^{\psi}_{i}=1\otimes v_{i}+\sum_{j>i}x^{j}_{i}\otimes v_{j},\ x^{j}_{i}\in\mathfrak{b}_{j-1}\cdot\mathrm{U}_{\hbar}(\mathfrak{g}), \tag{5.20}\] where \(\mathfrak{b}_{j-1}\subset\mathfrak{gl}_{j-1}\) is the truncated Borel subalgebra as in Definition 5.2. ### Tensor structure The goal of this section is to compute the semi-classical limit of the monoidal isomorphism from Theorem 5.9 for the tensor product of \(\mathbf{C}^{N}\) with itself. From now on, we will treat \(\hbar\) as a _variable_; in particular, we consider the asymptotic universal enveloping algebra \(\mathrm{U}_{\hbar}(\mathfrak{gl}_{N})\) over \(\mathbf{C}[\hbar]\).
We will need some definitions regarding "asymptotic" behavior of elements of \(\mathrm{U}_{\hbar}(\mathfrak{g})\). **Definition 5.15**.: Consider the natural PBW basis of \(\mathrm{U}_{\hbar}(\mathfrak{g})\) induced from the basis \(\{E_{ij}\}\) of \(\mathfrak{g}\). We call an element \(x\in\mathrm{U}_{\hbar}(\mathfrak{g})\) _constant_ if it has degree zero with respect to this basis. It is called _asymptotically linear_ if the PBW degree of \(x\) is one and it is constant in \(\hbar\). We call \(x\) _asymptotically \(\mathfrak{l}\)-linear_ if it is constant in \(\hbar\) and has the form \(x\in y\cdot\mathrm{U}_{\hbar}(\mathfrak{l})^{>0}\), where \(y\in\mathfrak{b}\). Theorem 5.9 in this particular case can be reformulated as follows. Let \(\{v_{i}\otimes v_{j}\}\) be a natural basis of \(\mathbf{C}^{N}\otimes\mathbf{C}^{N}\). We have two natural choices of generating vectors in \((\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\): one is provided by Proposition 5.8, and we denote it by \[(v_{i}\otimes v_{j})^{\psi}=v_{i}\otimes v_{j}+\sum_{k,l}x^{kl}_{ij}\otimes v_{k}\otimes v_{l},\ x^{kl}_{ij}\in\mathfrak{b}\cdot\mathrm{U}_{\hbar}(\mathfrak{p}). 
\tag{5.21}\] Another is given by the monoidal structure (4.4) on the Drinfeld-Sokolov reduction: under canonical trivialization (5.19), we set \[v_{i}^{\psi}\otimes v_{j}^{\psi}:=v_{i}^{\psi}\otimes_{\mathrm{U}_{ \hbar}(\mathfrak{g})}v_{j}^{\psi}\in (\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m} ^{\psi})^{\mathfrak{m}^{\psi}}\otimes_{\mathcal{W}}(\mathrm{U}_{\hbar}( \mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}\cong\] \[\cong (\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}\otimes \mathbf{C}^{N}/\mathfrak{m}^{\psi})^{\mathfrak{m}^{\psi}}.\] **Proposition 5.16**.: _The monoidal isomorphisms \(J_{\mathbf{C}^{N},\mathbf{C}^{N}}\) from Theorem 5.9 are of the form_ \[J_{\mathbf{C}^{N},\mathbf{C}^{N}}\in\mathrm{id}_{\mathbf{C}^{N}\otimes\mathbf{ C}^{N}\otimes\mathcal{W}}+\hbar\mathrm{U}_{\hbar}(\mathfrak{b})^{>0}\otimes \mathrm{U}_{\hbar}(\mathfrak{m})^{>0}\otimes\mathrm{U}_{\hbar}(\mathfrak{l}),\] _where \(\mathrm{U}_{\hbar}(\mathfrak{l})\subset\mathcal{W}\)._ Proof.: Recall that under identification \(\mathrm{U}_{\hbar}(\mathfrak{g})/\mathfrak{m}^{\psi}\cong\mathrm{U}_{\hbar}( \mathfrak{p})\), the generating vectors have the form (5.19), so, \[v_{i}^{\psi}\otimes v_{j}^{\psi}=\left(1\otimes v_{i}+\sum_{k>i}x_{i}^{k} \otimes v_{k}\right)\otimes\left(1\otimes v_{j}+\sum_{l>j}x_{j}^{l}\otimes v_{ l}\right). \tag{5.22}\] It follows from construction that for all \(j,l\), \[x_{j}^{l}=(x_{(1)})_{j}^{l}\cdot(x_{(2)})_{j}^{l}\] for some \((x_{(1)})_{j}^{l}\in\mathrm{U}_{\hbar}(\mathfrak{b})\) and \((x_{(2)})_{j}^{l}\in\mathrm{U}_{\hbar}(\mathfrak{l})\) (here, we use Sweedler's sum notation). Moreover, observe that \[(x_{(2)})_{j}^{l}\otimes v_{l}=(1\otimes v_{l})(x_{(2)})_{j}^{l}\] (recall the right action from Subsection 4.1). In particular, we get \[(1\otimes v_{k})\cdot x_{j}^{l}=\sum_{a\leq k}((x_{(1)})_{jk}^{la}\otimes v_{a })(x_{(2)})_{j}^{l}\] for some \((x_{(1)})_{jk}^{la}\in\mathrm{U}_{\hbar}(\mathfrak{b})\). 
Therefore, \[v_{i}^{\psi}\otimes v_{j}^{\psi}=1\otimes v_{i}\otimes v_{j}+\sum_{k>i}x_{i}^{k}\otimes v_{k}\otimes v_{j}+\sum_{\begin{subarray}{c}l>j\\ k>i\end{subarray}}\sum_{b\leq k}(x_{i}^{k}(x_{(1)})_{jk}^{lb}\otimes v_{b}\otimes v_{l})(x_{(2)})_{j}^{l}+\] \[+\sum_{\begin{subarray}{c}a\leq i\\ l>j\end{subarray}}((x_{(1)})_{ji}^{la}\otimes v_{a}\otimes v_{l})(x_{(2)})_{j}^{l}. \tag{5.23}\] Observe that the second line already has the form (5.10), and the third line almost satisfies this condition as well except for the case when \((x_{(1)})_{ji}^{la}\) is constant; in this case, denote \((x_{(1)})_{ji}^{la}\cdot(x_{(2)})_{j}^{l}=:c_{ij}^{al}\in\mathrm{U}_{\hbar}(\mathfrak{l})\). We see that the map \[v_{i}\otimes v_{j}\otimes 1\mapsto v_{i}\otimes v_{j}\otimes 1+\sum_{\begin{subarray}{c}a\leq i\\ l>j\end{subarray}}v_{a}\otimes v_{l}\otimes c_{ij}^{al}\] is unitriangular, hence invertible. In particular, the canonical generators \((v_{i}\otimes v_{j})^{\psi}\) of (5.21) can be constructed inductively by taking \[(v_{i}\otimes v_{j})^{\psi}\otimes 1:=v_{i}^{\psi}\otimes v_{j}^{\psi}\otimes 1-\sum(v_{a}\otimes v_{l})^{\psi}\otimes c_{ij}^{al}.\] Then the tensor isomorphism is given by a \(\mathrm{U}_{\hbar}(\mathfrak{l})\)-valued matrix \[J_{\mathbf{C}^{N},\mathbf{C}^{N}}=\mathrm{id}_{\mathbf{C}^{N}\otimes\mathbf{C}^{N}\otimes\mathcal{W}}+(c_{ij}^{al}). \tag{5.24}\] Moreover, since every commutation produces a power of \(\hbar\) by (4.1), the second part of the proposition follows. We will compute the coefficient \(\mathbf{j}\) of the first power of \(\hbar\) in the matrix (5.24). In other words, we need to compute the first-order terms in \(\hbar\) of the coefficients \((x_{j}^{l})_{i}^{a}\) in (5.23).
**Proposition 5.17**.: _The only non-trivial contribution to the first \(\hbar\)-power from \((x_{j}^{l})_{i}^{a}\) comes from asymptotically linear and asymptotically \(\mathfrak{l}\)-linear terms of \(x_{j}^{l}\) in (5.22)._ Proof.: Indeed: in a PBW basis, the element \(x_{j}^{l}\) is a sum of products of the form \(\hbar^{b}y_{1}\cdots y_{a}\cdot x\) with \(a\geq 1\) for some \(\{y_{m}\}\subset\mathfrak{g}\), some \(\hbar\)-power \(b\), and \(x\in\mathrm{U}_{\hbar}(\mathfrak{l})\). If it is not asymptotically \(\mathfrak{l}\)-linear, then there are two cases: 1. It is not linear, i.e. \(a\geq 2\); for simplicity, we demonstrate it for \(a=2\), but the general argument is the same: \[(1\otimes v_{i})\cdot\hbar^{b}y_{1}y_{2}x=\hbar^{b}(y_{1}\otimes v_{i})y_{2}x-(\hbar^{b+1}\otimes[y_{1},v_{i}])y_{2}x\] \[=\hbar^{b}(y_{2}y_{1}\otimes v_{i})x-\hbar^{b+1}([y_{2},y_{1}]\otimes v_{i})x-\hbar^{b+1}(y_{1}\otimes[y_{2},v_{i}])x-\hbar^{b+1}(y_{2}\otimes[y_{1},v_{i}])x+(\hbar^{b+2}\otimes[y_{2},[y_{1},v_{i}]])x.\] So, we see that there is no contribution to the first \(\hbar\)-power of constant terms for \(a\geq 2\). 2. It is linear, but divisible by \(\hbar\), i.e. \(b\geq 1\). Then \[(1\otimes v_{i})\hbar^{b}y_{1}x=\hbar^{b}(y_{1}\otimes v_{i})x-(\hbar^{b+1}\otimes[y_{1},v_{i}])x,\] and there is also no contribution to the first \(\hbar\)-power of constant terms. Thus, if \(x_{j}^{l}\) is not asymptotically \(\mathfrak{l}\)-linear, then it must be asymptotically linear for there to be a non-trivial contribution to the first power of \(\hbar\). At the same time, the calculation above shows that for \(b=0\), \(\mathfrak{l}\)-linear terms may contribute to the first power, and the proposition follows. Unfortunately, the only explicit form of Whittaker vectors in \(\mathrm{U}_{\hbar}(\mathfrak{g})\otimes\mathbf{C}^{N}/\mathfrak{m}^{\psi}\) available so far is (5.14) and (5.17), which are not canonical; however, thanks to the next lemmas, this does not affect the calculations too much.
**Lemma 5.18**.: _For any parameters, the \(\mathfrak{l}\)-constant part of \({}_{a}T_{ij;x}^{(r)}\) is divisible by \(\hbar\)._ Proof.: From Definition 3.6, we note that a constant term exists in \(T_{ij;x}^{(r)}\) if and only if \(i_{k}=j_{k}\) for all \(1\leq k\leq s\). But from condition (4), we note that \(\mathrm{col}(j_{k})-\mathrm{col}(i_{k})+1=1\) for all \(k\). Thus, \(s=r\) and constant terms only come from summands of the form \[\widetilde{E}_{i_{1}i_{1}}\cdot\ldots\cdot\widetilde{E}_{i_{r}i_{r}}.\] By (3.6), it is clear that the constant term is proportional to \(\hbar^{r}\). As for general \(\mathfrak{l}\)-constant terms, it follows from the formula (3.7) that \(T_{22;1}^{(r)}\) is the sum of elements of the form \[\widetilde{E}_{2,1}\widetilde{E}_{1,1}^{k}\cdot x,\ x\in\mathrm{U}_{\hbar}(\mathfrak{b})\] for some \(k\). Note that \(x\) must commute with \(E_{1,1}\) by condition (2). Thus, commuting \(x\) to the left produces some elements of \(\mathrm{U}_{\hbar}(\mathfrak{l})\), but they are divisible by \(\hbar\) because of commutation. The same is true for \(T_{12;1}^{(r)}\), where the terms have the form \(\widetilde{E}_{1,1}^{k}\cdot x\) for some \(k\) and \(x\in\mathrm{U}_{\hbar}(\mathfrak{b})\). Recall the coefficients \(x_{N-j}^{N-i}\) from (5.19). **Lemma 5.19**.: _The asymptotically \(\mathfrak{l}\)-linear terms in \((-1)^{j-i}\cdot{}_{i+1}T_{22;1}^{(j-i)}\) are the same as in \(x_{N-j}^{N-i}\) for \(N-j\neq 1\). Likewise, the asymptotically \(\mathfrak{l}\)-linear terms in \({}_{i+1}T_{12;1}^{(N-i-1)}\) are the same as in \(x_{1}^{N-i}\)._ Proof.: Recall that the canonical generators (5.19) can be constructed from \(\{\widetilde{v}_{i}^{\psi}\}\) by inductively removing \(\mathfrak{l}\)-constant terms. But, according to Lemma 5.18, they are all divisible by \(\hbar\), and the statement follows. Therefore, by Proposition 5.17, it is enough to consider only the asymptotically linear and asymptotically \(\mathfrak{l}\)-linear terms of the \(T\)-generators.
**Proposition 5.20**.: _The explicit forms for the asymptotically linear and \(\mathfrak{l}\)-linear terms are given below._ * _The asymptotically linear part of_ \(T_{ij;x}^{(r)}\) _is_ \[\sum_{\begin{subarray}{c}\mathrm{row}(i_{1})=i,\\ \mathrm{row}(j_{1})=j,\\ \mathrm{col}(j_{1})-\mathrm{col}(i_{1})+1=r\end{subarray}}(-1)^{r-1}E_{i_{1},j _{1}}.\] * _The asymptotically_ \(\mathfrak{l}\)_-linear terms of_ \({}_{i+1}T_{22;1}^{(j-i)}\) _are_ \[\sum_{r=2}^{j-i}(-1)^{r}E_{1,r}E_{2,1}E_{1,1}^{j-i-r}.\] * _The asymptotically_ \(\mathfrak{l}\)_-linear terms of_ \({}_{i+1}T_{12;1}^{(N-i-1)}\) _are_ \[\sum_{r=2}^{N-i-2}(-1)^{r}E_{1,r}E_{1,1}^{N-i-r-1}.\] Proof.: Recall the formula (3.7) of \(T\)-generators. Observe that in order to have a (not necessarily asymptotically) linear term in a summand with \(s\) terms, we need at least \(s-1\) of those \(\widetilde{E}_{i_{l},j_{l}}\) to carry a constant term. Hence, we must have \(i_{l}=j_{l}\) for at least \(s-1\) values of \(l\) where \(1\leq l\leq s\). But then \(\hbar\) divides each of these constant terms by (3.6), so we require \(s-1=0\). Therefore, only the linear part \[\sum_{\begin{subarray}{c}\operatorname{row}(i_{1})=i,\\ \operatorname{row}(j_{1})=j,\\ \operatorname{col}(j_{1})-\operatorname{col}(i_{1})+1=r\end{subarray}}\tilde{E }_{i_{1},j_{1}}\] can contribute to the first power of \(\hbar\), which is precisely the formula from the statement. Now let us study the asymptotically \(\mathfrak{l}\)-linear terms of \({}_{i+1}T_{22;1}^{(j-i)}\). Consider a summand of (3.7). By condition (1), \(\operatorname{row}(i_{1})=2\). Assume that \(\operatorname{col}(i_{1})>1\). Then by condition (3), \(\operatorname{col}(j_{1})>1\), and so \(\operatorname{row}(j_{1})=2\). In particular, \(\sigma_{\operatorname{row}(j_{1})}=+\) meaning that \(\operatorname{col}(i_{2})>1\) by condition (5). Continuing, we obtain that this summand cannot contain \(E_{1,1}\) or \(E_{2,1}\). Consider \(\operatorname{col}(i_{1})=1\), i.e. 
\(\widetilde{E}_{i_{1},j_{1}}=E_{2,1}\). By condition (2), \(\operatorname{row}(j_{1})=1=\operatorname{row}(i_{2})\). If \(\operatorname{row}(j_{2})>1\), we can apply the previous arguments to conclude that the corresponding summand is \(E_{2,1}\widetilde{E}_{1,r}\) (recall that we are interested only in \(\mathfrak{l}\)-linear terms). Otherwise, by condition (6), we see that \(i_{3}=1\), and we can repeat the argument. Summing over all cases, we obtain a summand of the form (observe that we drop all the \(\hbar\)-factors) \[(-1)^{r}E_{2,1}E_{1,1}^{k}E_{1,r}.\] Now we commute the \(\mathfrak{l}\)-part to the right. Observe that it produces powers of \(\hbar\), so, \[(-1)^{r}E_{2,1}E_{1,1}^{k}E_{1,r}=(-1)^{r}E_{1,r}E_{2,1}E_{1,1}^{k}+O(\hbar).\] The relation between \(r\) and \(k\) follows from the degree condition (4). The analysis for \({}_{i+1}T_{12;1}^{(N-i-1)}\) is similar and will be omitted. Combining all preliminary results, we can compute the tensor isomorphism of Theorem 5.9 for the vector representation. **Proposition 5.21**.: _The monoidal isomorphism \(J_{\mathbf{C}^{N},\mathbf{C}^{N}}\) from Proposition 5.16 has the form_ \[J_{\mathbf{C}^{N},\mathbf{C}^{N}}=\operatorname{id}_{\mathbf{C}^{N}\otimes\mathbf{C}^{N}\otimes\mathfrak{W}}+\hbar\mathbf{j}_{\mathbf{C}^{N},\mathbf{C}^{N}}+O(\hbar^{2}),\] _where \(\mathbf{j}_{\mathbf{C}^{N},\mathbf{C}^{N}}\) is_ \[\mathbf{j}_{\mathbf{C}^{N},\mathbf{C}^{N}}=\mathbf{j}_{c}+\sum_{j=2}^{N-2}\sum_{i=j+2}^{N}\sum_{r=2}^{i-j}(-1)^{i-j-r}x_{21}x_{11}^{i-j-r}E_{1,r}\otimes E_{i,j}+\sum_{i=4}^{N}\sum_{r=2}^{i-2}(-1)^{i-r}x_{11}^{i-r-1}E_{1,r}\otimes E_{i,1}\] _with \(x_{21},x_{11}\) the coordinate functions on \(\mathfrak{l}^{*}\) corresponding to \(E_{21},E_{11}\in\mathfrak{l}\).
Here,_ \[\mathbf{j}_{c}=\sum_{j=2}^{N-1}\sum_{i=j+1}^{N}\left(\sum_{l=2}^{j}E_{l,l+i-j-1}\right)\otimes E_{i,j}+\sum_{i=3}^{N}E_{1,i-1}\otimes E_{i,1}\] _is a map \(\mathfrak{b}^{*}\to\mathfrak{m}\) from Proposition 5.3._ Proof.: Denote by \(L_{N-j}^{N-k}\) the asymptotically linear part of \[(-1)^{j-k}{}_{k+1}T_{22;1}^{(j-k)},\,N-j\neq 1,\] \[(-1)^{N-k-2}\cdot{}_{k+1}T_{12;1}^{(N-k-2)},\,N-j=1.\] It follows from Proposition 5.20 that \[\begin{split}L_{N-j}^{N-k}&=-\sum_{l=2}^{N-j}E_{l,l+j-k-1},\\ L_{1}^{N-k}&=-E_{1,N-k-1}.\end{split} \tag{5.25}\] Combining Eq. (5.23), Lemma 5.18, and Lemma 5.19, the constant part of the first \(\hbar\)-power of \(J_{\mathbf{C}^{N},\mathbf{C}^{N}}\) is given by the action of \[\sum_{j=1}^{N-2}\sum_{k=0}^{j-1}\left(\sum_{l=2}^{N-j}E_{l,l+j-k-1}\right)\otimes E_{N-k,N-j}+\sum_{k=0}^{N-3}E_{1,N-k-1}\otimes E_{N-k,1}.\] Relabeling the indices (\(i=N-k\) and replacing \(j\) with \(N-j\)): \[\sum_{j=2}^{N-1}\sum_{i=j+1}^{N}\left(\sum_{l=2}^{j}E_{l,l+i-j-1}\right)\otimes E_{i,j}+\sum_{i=3}^{N}E_{1,i-1}\otimes E_{i,1}.\] Now let us consider the "dynamical" part. It follows from Proposition 5.20 that the asymptotically \(\mathfrak{l}\)-linear part of the coefficient \((-1)^{j-i}\cdot{}_{i+1}T_{22;1}^{(j-i)}\) of \(v_{N-i}\) in \(\widetilde{v}_{N-j}^{\psi}\) is \[\sum_{r=2}^{j-i}(-1)^{j-i+r}E_{1,r}E_{2,1}E_{1,1}^{j-i-r}.\] Recall from the construction of the tensor structure in Proposition 5.16 that we need to compute the first \(\hbar\)-power of the right action \[\sum_{r=2}^{j-i}(-1)^{j-i+r-1}(1\otimes v_{k})\cdot E_{1,r}E_{21}E_{11}^{j-i-r}\] modulo \(\mathfrak{b}\) for every \(k\).
It follows that it is equal to \[\sum_{r=2}^{j-i}(-1)^{j-i+r}\left(1\otimes\mathrm{ad}_{E_{1,r}}(v_{k})\right)E_{2,1}E_{1,1}^{j-i-r},\] which, for all admissible \(i,j\), gives the contribution \[\sum_{j=2}^{N-2}\sum_{i=j+2}^{N}\sum_{r=2}^{i-j}(-1)^{i-j-r}x_{21}x_{11}^{i-j-r}E_{1,r}\otimes E_{i,j}.\] Likewise, by considering the coefficients of \(v_{1}^{\psi}\), we get the contribution \[\sum_{i=4}^{N}\sum_{r=2}^{i-1}(-1)^{i-r}x_{11}^{i-r}E_{1,r}\otimes E_{i,1},\] and the proposition follows. In fact, the constant part \(\mathbf{j}_{c}\) is related to the form \(\omega\) from Proposition 5.3. **Proposition 5.22**.: _The inverse \(\omega^{-1}\) is equal to \(\mathbf{j}_{c}-\mathbf{j}_{c}^{21}\)._ Proof.: One can easily see that the conditions (5.6) are satisfied. Finally, by the Schur-Weyl duality, any representation of \(\mathrm{GL}_{N}\) can be canonically obtained as a subrepresentation of \((\mathbf{C}^{N})^{\otimes k}\otimes\det^{l}\) for some \(k,l\), where \(\det\) is the one-dimensional determinant representation. By the naturality of the construction, we obtain the main result of the paper. **Theorem 5.23**.: _The semi-classical limit of the monoidal isomorphisms \(J_{UV}\) from Theorem 5.9 is given by the action of the universal element_ \[\mathbf{j}=\mathbf{j}_{c}+\sum_{j=2}^{N-2}\sum_{i=j+2}^{N}\sum_{r=2}^{i-j}(-1)^{i-j-r}x_{21}x_{11}^{i-j-r}E_{1,r}\otimes E_{i,j}+\sum_{i=4}^{N}\sum_{r=2}^{i-2}(-1)^{i-r}x_{11}^{i-r-1}E_{1,r}\otimes E_{i,1},\] _where_ \[\mathbf{j}_{c}=\sum_{j=2}^{N-1}\sum_{i=j+1}^{N}\left(\sum_{l=2}^{j}E_{l,l+i-j-1}\right)\otimes E_{i,j}+\sum_{i=3}^{N}E_{1,i-1}\otimes E_{i,1}\] _defines an inverse of \(\omega\) from Proposition 5.3._
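As a quick sanity check of the summation ranges in the theorem (our own verification, not part of the paper), one can realize the \(E_{i,j}\) as elementary matrices and \(\otimes\) as the Kronecker product. For \(N=3\) both "dynamical" sums are empty (\(j\) ranges over \(2,\dots,N-2\) and \(i\) over \(4,\dots,N\)), so \(\mathbf{j}\) reduces to \(\mathbf{j}_{c}=E_{2,2}\otimes E_{3,2}+E_{1,2}\otimes E_{3,1}\):

```python
def E(i, j, N):
    """Elementary N x N matrix E_{i,j} (1-indexed), as nested lists."""
    return [[1 if (r == i - 1 and c == j - 1) else 0 for c in range(N)]
            for r in range(N)]

def kron(A, B):
    """Kronecker product of square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[r // m][c // m] * B[r % m][c % m] for c in range(n * m)]
            for r in range(n * m)]

def madd(A, B):
    """Entry-wise sum of two equal-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def j_constant(N):
    """Constant part j_c of the universal element in Theorem 5.23."""
    out = [[0] * (N * N) for _ in range(N * N)]
    for j in range(2, N):                 # j = 2, ..., N-1
        for i in range(j + 1, N + 1):     # i = j+1, ..., N
            inner = [[0] * N for _ in range(N)]
            for l in range(2, j + 1):     # l = 2, ..., j
                inner = madd(inner, E(l, l + i - j - 1, N))
            out = madd(out, kron(inner, E(i, j, N)))
    for i in range(3, N + 1):             # second sum: E_{1,i-1} (x) E_{i,1}
        out = madd(out, kron(E(1, i - 1, N), E(i, 1, N)))
    return out

jc3 = j_constant(3)
expected = madd(kron(E(2, 2, 3), E(3, 2, 3)), kron(E(1, 2, 3), E(3, 1, 3)))
assert jc3 == expected
```

The check confirms that for \(N=3\) the formula produces exactly two nonzero entries, matching the hand evaluation above.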
2308.03252
Video2Action: Reducing Human Interactions in Action Annotation of App Tutorial Videos
Tutorial videos of mobile apps have become a popular and compelling way for users to learn unfamiliar app features. To make the video accessible to the users, video creators always need to annotate the actions in the video, including what actions are performed and where to tap. However, this process can be time-consuming and labor-intensive. In this paper, we introduce a lightweight approach Video2Action, to automatically generate the action scenes and predict the action locations from the video by using image-processing and deep-learning methods. The automated experiments demonstrate the good performance of Video2Action in acquiring actions from the videos, and a user study shows the usefulness of our generated action cues in assisting video creators with action annotation.
Sidong Feng, Chunyang Chen, Zhenchang Xing
2023-08-07T02:08:43Z
http://arxiv.org/abs/2308.03252v1
# Video2Action: Reducing Human Interactions in Action Annotation of App Tutorial Videos ###### Abstract. Tutorial videos of mobile apps have become a popular and compelling way for users to learn unfamiliar app features. To make the video accessible to the users, video creators always need to annotate the actions in the video, including what actions are performed and where to tap. However, this process can be time-consuming and labor-intensive. In this paper, we introduce a lightweight approach Video2Action, to automatically generate the action scenes and predict the action locations from the video by using image-processing and deep-learning methods. The automated experiments demonstrate the good performance of Video2Action in acquiring actions from the videos, and a user study shows the usefulness of our generated action cues in assisting video creators with action annotation. app tutorial video, user action, deep learning + Footnote †: journal: Computer Vision and Pattern Recognition
Annotating the actions in the video helps users follow the tutorial and can free up mental resources allocated to understanding the content. However, manually annotating the app tutorial videos can be time-consuming and labor-intensive for video creators, including watching the video frame-by-frame, extracting the action clips, recalling the specific action locations, and annotating the actions. There are many studies that attempt to facilitate the annotation of natural videos [27, 59, 66, 75, 85], but rarely related to annotating the actions in mobile app videos. Some researchers model mobile app UIs and UI interactions based on a single static UI [18, 34, 51, 68]; however, those approaches do not apply to modeling semantic interactions on video artifacts (sequences of UIs). To retrieve the action execution information, software instrumentation [58, 61] is widely used, i.e., adding extra code to an app for monitoring UI interactions. However, instrumentation requires sophisticated accessibility or UI automation APIs [39, 40] and continuous updates along with the app and different operating systems [48, 76]. In addition, the intrusive techniques cannot reliably and accurately acquire information from apps [47, 49], e.g., a misaligned runtime view hierarchy. Some studies work on extracting the actions from app usage videos based on extra recording apparatus, such as developer-mode touch indicators [14], third-party screen recorders [5], or external cameras [60]. These works add extra work for video creators, but not all non-developer or non-tester creators have such domain knowledge and are willing to use it according to our empirical study in Section 3.
In this paper, we present Video2Action, a lightweight non-intrusive approach that only requires an app tutorial video as the input and automatically acquires the actions from the video, enabling human-AI collaboration to reduce the burdens of video creators in action annotation. Our approach consists of two main phases: 1) Action Scene Generation and 2) Action Location Prediction. First, we propose a heuristic image-processing method to segment the app video into action scenes. Given the action scenes, we then develop a novel deep-learning method to infer the action locations. Based on the actions acquired by our approach, we further implement a proof-of-concept user interface to offer an opportunity for video creators to navigate to specific frames of actions in the video, identify action locations, and effectively create annotations. We evaluate our approach Video2Action on the large-scale crowdsourced Rico dataset [26]. Results show that our approach achieves the best performance (81.6% Video F1-score and 86.4% Levenshtein score) in action scene generation from the videos compared with six commonly-used baselines. Our approach also achieves on average 50.1% and 81.9% accuracy in inferring top-1 and top-5 action locations, significantly outperforming three state-of-the-art baselines and three ablation studies. We further carry out a user study to evaluate the usefulness of Video2Action in assisting action annotation of app tutorial videos in a real-world environment. Results show that participants save 85% of the time in annotating the actions with the help of the actions generated by our approach, in comparison to annotating from scratch. The feedback from the participants also confirms the usefulness and helpfulness of Video2Action in the social media community.
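The first phase above segments the video into action scenes with heuristic image processing. As a minimal illustration of the underlying idea only, consecutive-frame differencing with a change threshold, and not the authors' actual algorithm, a sketch might look like this (the function and threshold names are our own):

```python
def mean_abs_diff(f1, f2):
    """Mean absolute pixel difference between two equal-size frames,
    each given as a flat list of gray values in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def segment_action_scenes(frames, diff_threshold=0.05):
    """Split a frame sequence into steady 'scenes' separated by frames of
    large inter-frame change (a stand-in for an action/UI transition).

    Returns (start, end) frame-index pairs of the steady segments. This is
    a deliberately simplified frame-differencing heuristic, not the
    paper's pipeline.
    """
    scenes, start = [], 0
    for k in range(len(frames) - 1):
        if mean_abs_diff(frames[k], frames[k + 1]) > diff_threshold:
            if k >= start:                 # close the current steady scene
                scenes.append((start, k))
            start = k + 1
    scenes.append((start, len(frames) - 1))
    return scenes

# toy clip: four identical dark frames, an abrupt UI change, three bright frames
dark, bright = [0.0] * 64, [1.0] * 64
clip = [dark] * 4 + [bright] * 3
print(segment_action_scenes(clip))  # -> [(0, 3), (4, 6)]
```

In a real setting the frames would come from a video decoder and the threshold would have to be tuned to tolerate animations and scrolling, which is exactly where the paper's heuristics go beyond plain differencing.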
Finally, we discuss the generality of our approach and show two potential applications that could interact or collaborate with our approach, including bug recording replay and video captioning. The contributions of this paper are as follows: * We present a lightweight non-intrusive approach Video2Action for automatically acquiring actions from the app tutorial video to reduce human interaction burdens in action annotation. * We conduct an empirical study to investigate the action annotation problems of the app tutorial videos and understand the characteristic of actions. * A comprehensive evaluation including automated experiments and a user study to demonstrate the performance and usefulness of Video2Action. ## 2. Related Work ### Annotating UI-based Videos The advance of machine learning has provided new opportunities to reduce the cognitive and interaction burdens of users in video annotation, such as an adaptive video playback tool to assist the quick review of long video clips [11, 75], a mobile application to support real-time, precise emotion annotation [85], an interaction pipeline for the annotation of objects and their relations [66], and a novel method to acquire tracking data for sports videos [27, 59]. These prior studies focus on the videos of natural scenes or virtual scenes, and cannot easily transfer to our domain of digital scenes, UI screen-casting videos. A few researchers work on desktop-based screencasting to assist software development [12, 13, 19]. In contrast, we focus on recordings of the more compact and denser mobile screen. Most of the work for mobile videos is to facilitate automated app testing by bug record-and-replay, which aims to capture the screens that triggered the bug and play them back on the device. For example, Nurmuradov et al. [58] developed a program analysis tool to dump the action data during the recording process and then replay the actions on an Android emulator.
The underlying technique of these works is software instrumentation by adding extra code to an app for monitoring action behavior. However, it relies on sophisticated accessibility or UI automation APIs (i.e., Accessibility, replaykit) [39, 40] and continuous updates along with the app and different operating systems [48, 76]. In many cases, the intrusive techniques cannot reliably and accurately acquire information from the apps [47, 49], e.g., a misaligned runtime view hierarchy. Bernal et al. [14] introduced a lightweight record-and-replay tool V2S, but it required testers to access the Android developer setting to enable the touch indicator for action identification. A similar work is RoScript [60], which required testers to use an external camera to record the screen and finger movement. These works add extra work for video creators to record the screen, but not all non-developer or non-tester creators have such domain knowledge and are willing to spend that much effort according to our empirical study in Section 3. In this study, we propose a purely image-based approach to acquire the actions from the app tutorial videos, without any requirement of heavy testing framework installation, developer configuration setup, or extensive app instrumentation. In detail, we first leverage the image-processing method to segment the video into action scenes and then adopt deep-learning models to infer the action locations. With the rich action information acquired by our approach, we support efficient video content exploration, thus significantly reducing the burdens of video creators in action annotation. ### Modeling UI Interactions Video2Action is related to prior research on computationally modeling app UIs and UI interactions (Swearngin et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). For example, Swearngin et al.
(Swearngin et al., 2019) proposed a machine learning method, TapShoe, which leveraged the tappability signifiers of UI (e.g., type, size, text) to model whether the UI elements are tappable. Unlike these works, which model UI interactions using information from a single static UI image, we model interactions based on UI response, i.e., recognize the actions triggered from one UI to the next, allowing for more advanced semantic interactions, such as scrolling the UI or returning to the previous UI, etc. Lee et al. (Lee et al., 2019) developed a method to predict the UI element that the user is likely to tap on the current screen based on the previous screens. The underlying technique was a sequence model LSTM that treats actions as a sequence of tokens derived from the UI elements. A similar work is Humanoid (Hummer et al., 2017), which modeled UI interactions as automated UI testing. With the emergence of the Transformer (Vaswani et al., 2017), Chen et al. (Chen et al., 2020) leveraged the element information from the UI hierarchy to train a Transformer to recommend the next tap location in the shopping app. Given the multi-modal information such as the user's action history and time of the day, Zhou et al. (Zhou et al., 2020) further improved the performance of tap location prediction. In contrast, our work focuses on the screenshots from app tutorial videos, which are just UI images without additional information. He et al. (He et al., 2020) introduced an image-processing method ActionBert to predict the tap location between UI images. The pipeline of ActionBert was to first detect the elements in the UI, then extract their information, and finally predict the tap location. However, this step-by-step method can lead to a "garbage in and garbage out" problem, i.e., imprecise UI element detection will result in incorrect tap location prediction.
To that end, we propose an end-to-end differentiable model to detect the potential regions of interest (i.e., tappable elements) in the UI, and extend them to infer the specific tap location to trigger the next UI. Considering the human knowledge of UI and UI interaction, we further develop a tailored data-augmentation method to enhance the robustness of our model. The results in Section 6.2 demonstrate that our model with deep human knowledge can achieve better performance in modeling UI interaction. ## 3. Empirical Study of App Tutorial Videos and Related Problems In this section, we carry out a small empirical study to gain insight into the app tutorial videos and find implicit characteristics of user interaction behaviors for motivating the required tool support. We select YouTube (Zhou et al., 2020) as our study subject as it is the most widely-used social media sharing platform. We randomly collect 500 videos as our experimental dataset, showcasing a variety of tutorial topics, including instructional content and app demo walkthroughs, recorded through screencasting. Among these, 69% feature narration, while 44% are in English. To gain an understanding of app tutorial videos, we recruit three labelers from an online posting to label the actions from our experimental dataset. All labelers have more than two years of labeling experience and have labeled at least one UI/UX-related dataset (i.e., UI element bounding box, UI animation linting). We first give them an introduction to our study and also a real example to try. Then, we assign the experimental set of app tutorial videos to them to label the actions from a comprehensive list of user interactions (Hummer et al., 2017) independently without any discussion. After the initial labeling, the labelers meet and correct the subtle discrepancies. In total, we obtain 9,764 actions from 500 experimental app tutorial videos, on average 19.5 actions per 3.2-minute video.
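The per-video average quoted above follows directly from the stated totals (a quick consistency check):

```python
total_actions = 9_764   # labeled actions across the experimental dataset
n_videos = 500          # app tutorial videos in the dataset
print(round(total_actions / n_videos, 1))  # -> 19.5 actions per video
```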
Based on the action labels, the following research questions emerge.

### Are the actions in the app tutorial videos clearly annotated?

We observe that more than 62% of the videos are without action cues. To confirm this phenomenon and understand the limitations of action annotation, we further conduct informal interviews with three professional app tutorial creators, including two creators (C1, C2) from the Alibaba app development team and one creator (C3) with more than 10k followers on YouTube. All of them mention that they do annotate the actions for some important tutorials (i.e., app releases, key feature introductions, etc.) and acknowledge that doing so can engage users and gain more attention from the community. However, they may not annotate every video for the following practical reasons. First, annotating actions in a video is a time-consuming and tedious process. As C2 says: _"To create a great tutorial video with action annotations, I need to first split the video into action clips based on the timing of each action. Since the actions in the video may play too fast, I always need to pause and replay the video multiple times. And sometimes I need to watch the video frame-by-frame to split the action clips precisely. After getting the clips, I need to further recall the action attributes such as the tapping location, the scrolling offset, etc., and finally, use the video editing tools to annotate the actions."_ C1 also confirms the challenges of adding action annotations in industry due to budget constraints and market pressures. Second, developers and testers have developed built-in touch indicators (Hummer et al., 2017) or third-party screen recorders (Hummer et al., 2017) to annotate the actions performed on the screen for automated app testing.
However, the creators may not have the domain knowledge to set it up, as C3 says: _"I tried following the developers' instructions to enable the default touch indicator on the device, but I found it too difficult, requiring opening developer settings, rebooting the device, etc."_ In addition, C3 explains the inadequacy of such touch indicators for users replaying the videos and emphasizes the necessity of manual action annotation: _"Annotation is meant to guide the user's attention to the key elements or locations on the UI. However, existing touch indicators, such as built-in indicators or third-party cursors, are too small (less than 1% of the UI), inconspicuous (low contrast with the UI), and unclear (they don't show action semantics). As a result, they are not very helpful for users to learn and follow. To help users perceive the key points of actions more easily, I will use high-contrast colors and well-sized annotations (such as arrows, bounding boxes, and action illustrations) to explicitly highlight the actions as shown in Figure 1."_

### What are the actions in the app tutorial video?

Although the set of possible actions is fairly large (Krishna et al., 2017), there is often only a limited set of actions that is appropriate for a given app and device. To give all users the opportunity to learn and replicate, video creators often make the actions in the tutorial semantically clear and simple (Krishna et al., 2017). For instance, the swipe action that moves from left to right to return to the previous UI is not supported on older devices, and one simple alternative is to tap the system's backward button. Therefore, we investigate the actions in the app tutorial videos to gain insight into common user interaction behaviors. Across all the labeled actions (9,764) in the experimental dataset, we find the three most commonly used actions:

* **TAP (80.4%)** Allows users to interact with elements and access additional functionality.
It usually transits to a very different UI.
* **SCROLL (10.7%)** Allows users to slide screens vertically or horizontally to move continuously through content.
* **BACKWARD (7.5%)** A semantic tap action that returns to the previous screen. It is often used to return to the app's landing page to demonstrate the next app functionality. It can be done by tapping the backward button in the system navigation bar at the bottom of the screen.
* **Others (1.4%)** There are also some other actions such as pinch, fling, etc. However, they rarely appear in our experimental dataset (\(<2\%\)).

### What are the potential patterns in TAP actions?

As the dominant action, tapping involves more diverse elements and responses than other actions, as shown in Figure 2. To further understand TAP actions, we ask the three labelers to code the categories of tapping patterns using existing UI/UX design knowledge documented in books and websites such as The Design of Everyday Things (Things, 2018) and Mobile Design Pattern Gallery (Krishna et al., 2018). According to the background of the UI and the transitions triggered, we organize the key characteristics of tapping actions into two main categories, as shown in Table 1. First, we identify the TAPPING AREA to describe the tapping location that triggers the UI transition. Second, we define the TAPPING RESPONSE to describe the rendering effect after tapping. Each of these main categories has a subset of specific categories, which jointly describe a tapping interaction. For example, as shown in the upper right of Figure 2, when the user taps the menu **icon**, it transits to a UI with a **pop-up** menu list view. Another example, shown in the upper left of Figure 2, illustrates an interaction of tapping a **text** view to reach a **new page** UI with different content and layout. **Summary**: By analyzing 500 app tutorial videos from YouTube, we find that 62% of them are without action annotation.
Although the set of possible actions is fairly large, there are three most commonly used actions in the tutorial videos, i.e., TAP, SCROLL, and BACKWARD. As the most common action (80.4%), TAP involves diverse tapping areas and corresponding responses, making it difficult to identify the tap location on the screen, even for a human.

## 4. Video2Action Approach

The findings in Section 3 confirm the necessity and difficulty of annotating actions in app tutorial videos and motivate the development of our approach for automatic action acquisition, which can significantly reduce the cognitive and interaction burdens of video creators in action annotation. The overview of Video2Action is shown in Figure 3; it consists of two main phases: **Action Scene Generation** and **Action Location Prediction**. For **Action Scene Generation**, since people perceive a sequence of graphical changes as motion, consecutive images are perceptually dissimilar if people recognize any motion (i.e., UI transitions) from the image frames (Yin et al., 2018). In the human perception (a.k.a. human vision) system, a majority of visual information is conveyed by patterns of contrast arising from brightness changes (Yin et al., 2019). Inspired by biological vision, we propose a heuristic image-processing method based on brightness computation to segment action scenes from the video. That is, we first compute the luminance similarity between consecutive frames and cut the video into shots. Given the shots and the consecutive-frame similarity sequence, we then classify the action types (i.e., TAP, SCROLL, BACKWARD) and semantically correlate the shots into scenes. For **Action Location Prediction**, we aim to infer the action locations between scenes. For the SCROLL action, we adopt template matching (Krishna et al., 2018) to calculate the moving distances; for the BACKWARD action, we utilize the built-in system backward button.
Since these methods are well known and well implemented, we omit the details for brevity in this paper.

Figure 2. An illustration example of tapping areas and the corresponding responses in the UI.

For the TAP action, considering the diversity of tapping areas and responses observed in Section 3.3, it would require significant effort to manually build a complete set of rules to detect action positions in all different situations. Therefore, we propose a novel deep-learning model to automatically learn the tappable areas in the UI and predict the tapping coordinates. To improve the robustness and performance of the model, we further apply a tailored data augmentation method and a post-processing technique.

### Action Scene Generation

#### 4.1.1. Shot Detection

Different from natural scene videos, UI videos have clear shot boundaries of user actions, i.e., the start and end frames of a fully rendered UI. To detect the shots, we leverage image-processing techniques to build a perceptual similarity score for consecutive-frame comparisons based on the luminance difference Y-Diff in YUV color space. Consider a video \(\left\{f_{0},f_{1},..,f_{N-1},f_{N}\right\}\), where \(f_{N}\) is the current frame and \(f_{N-1}\) is the previous frame. To calculate the Y-Diff of the current frame \(f_{N}\) with the previous frame \(f_{N-1}\), we first obtain the luminance masks \(Y_{N-1},Y_{N}\) by splitting the YUV color space converted from the RGB color space. Then, we apply the perceptual comparison metric SSIM (Structural Similarity Index) (Srivastava et al., 2017) to produce a per-pixel similarity value related to the local difference in the average value, the variance, and the correlation of luminance. An SSIM score is a number between 0 and 1, and a higher value indicates a stronger level of similarity. Figure 4 shows a consecutive-frame similarity sequence of a UI video.
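As a reference, the Y-Diff computation and shot detection can be sketched as below. This is a simplification: the BT.601 luma weights stand in for the full YUV conversion, `ssim_global` is a single-window SSIM rather than the windowed variant used in practice (e.g., `skimage.metrics.structural_similarity`), and the threshold and steady-run length are illustrative parameters.

```python
import numpy as np

def luminance(rgb):
    # BT.601 luma (the Y channel of YUV) from an H x W x 3 RGB frame
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def ssim_global(a, b, L=255.0):
    # simplified single-window SSIM over two luminance images
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (va + vb + C2))

def ydiff_series(frames):
    # similarity between every pair of consecutive frames
    ys = [luminance(f.astype(float)) for f in frames]
    return [ssim_global(ys[i - 1], ys[i]) for i in range(1, len(ys))]

def detect_shots(scores, threshold=0.95, min_steady=30):
    # a shot = run of at least min_steady consecutive high-similarity scores
    shots, run_start = [], None
    for i, s in enumerate(scores):
        if s >= threshold:
            run_start = i if run_start is None else run_start
        else:
            if run_start is not None and i - run_start >= min_steady:
                shots.append((run_start, i))
            run_start = None
    if run_start is not None and len(scores) - run_start >= min_steady:
        shots.append((run_start, len(scores)))
    return shots
```

At 30 fps, `min_steady=30` corresponds to the 1-second steady duration chosen in the pilot study.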
A shot is selected to be a fully rendered UI, that is, a steady state in which consecutive frames remain similar for a relatively long duration. We require a long duration because short steady periods also occur, as in Figure 4A: while the UI layout renders quickly, resource loading may take time. For example, rendering images from the web depends on device bandwidth, image loading efficiency, etc. Based on a small pilot study, we set a duration of 1 second as a relatively long duration.

#### 4.1.2. Scene Segmentation

Videos such as movies, documentaries, and TV series follow production rules (Krause et al., 2017) to arrange shots into semantically correlated scenes. To derive analogous rules for UI videos, we look into the similarity scores of consecutive frames and their corresponding shots, as shown in Figure 4. As we notice, the semantics of the scenes strongly match the UI transition patterns observed in Section 3.2. Therefore, we develop a heuristic approach to identify the semantics of scenes following the matching patterns: (1) TAP: usually transits instantly to a very different UI, as discussed in Section 3.2, revealing a drastically low similarity score during the transition, as in Figure 4A. (2) SCROLL: implies a continuous transition from one UI to another; consequently, the similarity score starts with a drastic drop and then increases slightly over a period of time, as in Figure 4B. (3) BACKWARD: depicts a semantic transition from the current UI back to a previous UI, as shown in Figure 4C. However, the similarity score cannot reliably detect BACKWARD actions, as they may coincide with TAP actions. Because BACKWARD actions are palindromic, e.g., UI-1 \(\xrightarrow{\text{Tap}}\) UI-2 \(\xrightarrow{\text{Tap}}\) UI-1, we develop a stack that follows the LIFO principle (last in, first out) (Krause et al., 2017) to check whether the palindromic UI shots are identical.
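The palindrome check can be sketched with such a stack; here `same` (the shot-equivalence test, e.g., an SSIM score above a cutoff) and the default TAP label for non-returning transitions are illustrative assumptions.

```python
def detect_backward(shot_uis, same):
    # shot_uis: representative frames of fully rendered UI shots, in order
    # same(a, b): True if two shots show the same UI (e.g., high SSIM)
    stack, actions = [], []
    for ui in shot_uis:
        if len(stack) >= 2 and same(ui, stack[-2]):
            stack.pop()                # returning to the previous UI pops the top
            actions.append("BACKWARD")
        else:
            if stack:
                actions.append("TAP")  # placeholder; refined by the score pattern
            stack.append(ui)
    return actions
```

For the palindromic sequence UI-1, UI-2, UI-1 this yields a TAP followed by a BACKWARD, while three distinct UIs yield two TAPs.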
\begin{table} \begin{tabular}{p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline **Tapping Category** & **Specific Transition** & **Description** \\ \hline \hline \multirow{8}{*}{TAPPING AREA} & Text & The transition is triggered by the text content of an element, such as text, text field. \\ \cline{2-3} & Image & The transition is triggered by the image view of an element, such as image, icon. \\ \cline{2-3} & Button & The transition is triggered by the essential interactive elements, including button, toggle button, radio button, multi-tab button, spinner, switch, and checkbox. \\ \cline{2-3} & Others & Some infrequently seen elements can also trigger UI transitions, such as rating bar, seek bar, etc. \\ \hline \multirow{8}{*}{TAPPING RESPONSE} & New Page & It transits to a new UI. They may have some similar content, but they are visually different. \\ \cline{2-3} & Pop Up & It pops up a modal or dialog that appears on top of the previous UI with the background dimmed, such as tapping on the menu icon. \\ \cline{2-3} & Dropdown Menus & Different from the pop-up response, it reveals a list of options or commands and keeps the context of the information being requested visible, such as tapping on the spinner button. \\ \cline{2-3} & Selection Control & It responds to the users with visual feedback when they control certain options, settings, or states in the selection buttons, such as radio button, switch, and checkbox. \\ \cline{2-3} & Others & There are also some potential responses for UI transitions, such as text input, video play, etc. We categorize them as Others as they rarely appear in our empirical dataset (\(<1\%\)). \\ \hline \end{tabular} \end{table} Table 1. The categorization of tapping actions.

### Action Location Prediction

Different from the SCROLL and BACKWARD actions, the TAP action is sensitive to the action location, as tapping different buttons triggers different functionalities of the app. To handle this, we propose a novel deep-learning model that first recognizes the potential tappable areas in the first UI and then predicts the tapping location that is perceived to transit to the second UI. To increase the robustness of the deep-learning model, we propose UI-specific data augmentation methods to integrate human knowledge into the model, and a post-processing method to further improve the model predictions.

#### 4.2.1. Model Architecture

Consider a UI transition (UI-1 \(\rightarrow\) UI-2), where UI-1 is the current UI that transits to the next UI (UI-2). The overview of our tapping location inference model is shown in Figure 5; it consists of three main components: _Visual Encoding_, _Region Proposal Network_, and _Location Prediction Network_. For the _Visual Encoding_ of the feature map of the images, we adopt the most commonly applied approach, ResNet-101 (Residual Neural Network) (Wang et al., 2017), with skip connections among the convolutional layers to capture more features in the image. To accelerate the training process, we apply fine-tuning (Wang et al., 2017) on a model pre-trained on ImageNet (Wang et al., 2017), which already distills visual features from more than 14 million natural images. Specifically, we freeze the top few blocks of layers that store useful low-level features that also apply to UIs (e.g., edges, curves, etc.), but train the last block of layers to learn higher-order UI feature representations (e.g., element, layout, etc.). Inspired by the object detection task of detecting instances of objects of a certain class within a natural image, we exploit the neat network design of the _Region Proposal Network (RPN)_ (Wang et al., 2017) to narrow down the feature maps by recognizing the perceived tappable areas in the UI. In detail, given the feature map, RPN generates a set of region proposals (a.k.a. anchor boxes) and computes region of interest (RoI) scores to determine whether the regions contain tappable elements or not.
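A sketch of how such anchor templates can be enumerated, keeping each anchor's area near `scale**2` while matching a width:height ratio; the UI-oriented scales and ratios below are the ones used in this work.

```python
def make_anchors(scales=(32, 64, 128, 256, 512),
                 ratios=(1.0, 2.0, 4.0, 8.0)):
    # One (w, h) template per scale/ratio pair, with w * h == scale ** 2
    # and w / h == ratio (wide boxes suit UI rows, bars, and tabs).
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * r ** 0.5
            h = s / r ** 0.5
            anchors.append((round(w), round(h)))
    return anchors
```

Five scales times four ratios yields 20 anchor templates, which the RPN slides over every position of the feature map.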
As the sizes and aspect ratios of elements in a UI differ from those of objects in natural scenes, we define five anchor-box scales (i.e., 32, 64, 128, 256, and 512) and four width:height aspect ratios (i.e., 1:1, 2:1, 4:1, and 8:1), which have been empirically validated for UI element detection (Wang et al., 2017). Once we obtain the potential tappable areas in UI-1, we propose the _Location Prediction Network_ to predict the specific tap locations that transit to UI-2. Considering the UI transition, we first jointly combine the feature map of the potential tappable areas detected in UI-1 and the feature map of UI-2. Then, these features are given as input to a fully connected layer, whose output goes into two branches. One branch performs location regression to predict the coordinate, and the other performs classification, applying a Softmax activation layer to compute the probability that tapping the coordinate transits to UI-2. In the end, we output the inferred tapping coordinates in confidence-ranked order.

#### 4.2.2. Loss Function

To train our proposed deep-learning model, we introduce a tailored loss function, which consists of a _classification loss_ and a _regression loss_. The _classification loss_ penalizes the model when the probability of the predicted coordinate diverges from the ground truth. To achieve this, we leverage CrossEntropyLoss (Wang et al., 2017) to calculate the classification loss over 2 classes, where 0 indicates that the predicted coordinate cannot trigger the UI transition and 1 indicates that it can. The _regression loss_ penalizes the model when the predicted coordinate (\(x,y\)) lies outside the ground-truth bound (\(x_{lower},y_{lower},x_{upper},y_{upper}\)). It is composed of the horizontal loss (x dimension) and the vertical loss (y dimension): \(Loss_{reg}=Loss_{reg_{x}}+Loss_{reg_{y}}\).
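This composite regression loss can be sketched numerically as follows (a framework implementation, e.g., in PyTorch, would be analogous; `beta` is the standard Smooth L1 knee and is an assumed default).

```python
def smooth_l1(d, beta=1.0):
    # standard Smooth L1: quadratic near zero, linear beyond beta
    d = abs(d)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def reg_loss_1d(v, lower, upper):
    # zero inside the ground-truth bound; otherwise pull toward its center
    if lower <= v <= upper:
        return 0.0
    return smooth_l1(v - (lower + upper) / 2.0)

def reg_loss(x, y, box):
    # box: (x_lower, y_lower, x_upper, y_upper) of the ground-truth element
    x_lower, y_lower, x_upper, y_upper = box
    return reg_loss_1d(x, x_lower, x_upper) + reg_loss_1d(y, y_lower, y_upper)
```

A prediction anywhere inside the ground-truth box incurs no regression penalty; one outside it is regressed toward the box center.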
The regression loss for the x dimension (and likewise for the y dimension) is calculated as \(Loss_{reg_{x}}=\mathds{1}^{x}_{\notin[x_{lower},x_{upper}]}\,smooth_{L1}\big(x-\frac{x_{lower}+x_{upper}}{2}\big)\), where \(\mathds{1}^{x}_{\notin[x_{lower},x_{upper}]}\) is an indicator whose value is 1 if \(x\) is out of the bound \([x_{lower},x_{upper}]\), and \(smooth_{L1}\) is the robust regression loss function Smooth L1 (Wang et al., 2017). Usually, the boundary is loose and the key content is centered; therefore, we regress the coordinate towards the middle of the bound, \(\frac{x_{lower}+x_{upper}}{2}\).

Figure 4. An illustration of Y-Diff similarity scores of consecutive frames in the UI video.

Figure 3. Overview of Video2Action to acquire actions from the video, consisting of two main phases. The Action Scene Generation phase takes a video as input and segments it into a scene transition graph with UI actions (e.g., Tap, Scroll, Backward). For each action, the Action Location Prediction phase infers specific locations by adopting image-processing and deep-learning methods.

#### 4.2.3. Data Augmentation

The foundation of training deep-learning models is big data. Although we label some actions in the app tutorial videos in our empirical study (Section 3), the set of actions is not sufficient, and manual labeling is prohibitively expensive. Therefore, we adopt one of the largest UI transition datasets, Rico (Zhou et al., 2018). The Rico dataset contains 55k unique transition traces from 9.3k Android apps. A transition trace is represented as a sequence of UI screens, together with information about the interactive coordinates and elements. Rico also captures a video recording of each transition trace. While Rico has a large amount of UI transition data, it may not cover the abundant tapping patterns discovered in our empirical study (Section 3.3).
To integrate human knowledge into the model, we apply data augmentation, a technique used to create new synthetic data from existing data based on heuristic patterns. Specifically, we apply two UI-specific data augmentation methods, i.e., _Element Exchange_ and _Metamorphic Augmentation_. _Element Exchange_: A UI is not merely a collection of individual and unrelated elements, such as texts, images, buttons, etc. Instead, it is designed with high-level semantics, forming perceptual groups such as tabs, menus, cards, or lists. To keep the UI design consistent, the elements in a perceptual group often look similar (Zhou et al., 2018). Based on this observation, we apply _Element Exchange_ to generate a number of synthetic samples by switching the positions of similar UI elements in a perceptual group, without affecting the nature of the UI. In detail, we first search the UIs in the Rico dataset that use certain Android layout classes that may contain a group of elements (e.g., ListView, FrameLayout, Card, TabLayout). Then, we heuristically examine the elements in the group to filter out those that are not similar in width, height, element class, etc. For example, as shown in Figure 6(a), UI-Aug is artificially generated by switching the element "GENRES" with "PODCASTS" in the UI. _Metamorphic Augmentation_: Apart from augmenting the dataset based on a single UI, our task aims to predict the tapping location from one UI to another, prompting us to develop a tailored data augmentation method for pairs of UIs. Inspired by metamorphic testing (Zhou et al., 2018), we observe that some UI transitions can be reversed by tapping on the same location. For example, as shown in Figure 6(b), tapping on the "play" button in UI-1 transits to the "pause" button in UI-2, and vice versa.
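Both augmentations can be sketched on array-encoded screenshots as below; the box layout, the same-size similarity filter, and the reversible-label set are simplified stand-ins for the heuristics described above.

```python
import numpy as np

def element_exchange(ui, box_a, box_b):
    # Swap the pixel regions of two sibling elements; boxes are (x, y, w, h).
    # Only same-sized siblings are exchanged, mirroring the similarity filter.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    if (aw, ah) != (bw, bh):
        return None
    out = ui.copy()
    patch_a = ui[ay:ay + ah, ax:ax + aw].copy()
    out[ay:ay + ah, ax:ax + aw] = ui[by:by + bh, bx:bx + bw]
    out[by:by + bh, bx:bx + bw] = patch_a
    return out

def metamorphic_augment(samples, reversible):
    # samples: (ui_before, ui_after, tap_xy, label) tuples; transitions whose
    # label has opposite-state semantics (play/pause, on/off, ...) are
    # duplicated with the two UIs reversed and the tap location kept.
    extra = [(after, before, xy, lbl)
             for before, after, xy, lbl in samples if lbl in reversible]
    return samples + extra
```

The element swap leaves the rest of the screenshot untouched, so the synthetic UI remains a plausible layout; the metamorphic reversal yields additional (UI-2, UI-1) pairs with the same tap coordinate.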
To achieve this, we search the UI transitions that tap on certain elements that have opposite semantics (e.g., "play-pause", the "on-off" switch, the "selected-unselected" checkbox, etc.), and add the reversed samples to the training dataset to help the model learn deep human knowledge.

Figure 5. The model architecture to predict tapping locations.

Figure 6. An illustration of the data augmentation method.

Figure 7. The top-5 candidate tapping positions with and without clustering.

#### 4.2.4. Post-processing

The model predicts many tapping coordinates, and some of them are very close to each other, which may dilute the effectiveness of action location recommendation, as shown in Figure 7. We therefore post-process the inferred tapping coordinates with a clustering algorithm, density-based spatial clustering (DBSCAN) (Wang et al., 2017), to derive more effective predictions. In detail, DBSCAN finds nearby coordinates by Euclidean distance to form clusters and iteratively expands a cluster if its neighbors are close. Two parameters are required: the minimum number of points \(min_{pts}\) a point needs to have within a certain radius \(\epsilon\) in order to be included in a cluster. We set \(min_{pts}\) to 1 and \(\epsilon\) to 40, chosen empirically through a small-scale experiment. Within each cluster, we choose the most confident coordinate as the representative of the cluster, yielding the tap locations shown in Figure 7.

## 5. Implementation of Video2Action

With the acquired action scenes and action locations, we build a proof-of-concept tool, Video2Action, to allow video creators to interactively access the generated action scenes, visualize the predicted action locations, and eventually create annotations. The overview of the user interface is shown in Figure 8, including four interactive components. **(1) Video Playback Screen:** We allow the user to watch and annotate the app tutorial video in Figure 8A.
We also provide a playback slider for the user to navigate to arbitrary frames in the video. **(2) Annotation Box**: With the annotation box in Figure 8B, the user can interactively create and modify the annotation of an action. We implement the annotation box using the Drag & Drop API (Drag and Kern, 2017), so that the user can easily annotate the action by dragging an action kit and dropping it onto the video playback screen, as shown in Figure 8A. **(3) Action Scene**: As shown in Figure 8C, the action scenes are automatically generated by our approach in Section 4.1. Each UI frame illustrates an action scene in the video. By clicking on a UI frame, the video jumps directly to the point in the timeline where the action takes place. **(4) Action Location**: To help the user efficiently identify the actions that trigger the next UI, we automatically predict the potential action locations in Figure 8D. For the TAP action, many locations are predicted by our approach, as discussed in Section 4.2. We provide the user with the top-k predictions to calibrate the final action location. Note that we only consider k in the range 1-5, as users rarely check a long recommendation list. With the help of our Video2Action, the action annotation process is straightforward. The video creator first positions the video at a frame of interest (where an action is performed) by clicking the UI frame in the action scene box (Figure 8C). Guided by the action locations in the recommendation list (Figure 8D), the video creator can examine frames back and forth in the video (Figure 8A) to quickly identify the real action location. Finally, the video creator can leverage the action kits in the annotation box (Figure 8B) to annotate the action in the video frame.

## 6. Automated Evaluation

In this section, we describe the procedure used to automatically evaluate the performance of each phase of our approach.

### Action Scene Generation

#### 6.1.1.
Testing Data

To evaluate the ability of our approach to accurately segment UI videos into action scenes, we utilize 6k UI videos from the Rico dataset (Rendle et al., 2017). Each video provides a sequence of actions as the ground truth. In total, we collect 30k TAP actions, 3k SCROLL actions, and 2k BACKWARD actions. On average, a 30s UI video contains 6.29 actions.

#### 6.1.2. Baselines

To demonstrate the advantage of using SSIM to segment scenes from UI videos, we compare it with 4 widely used image similarity metrics as baselines, including 2 pixel-level metrics (i.e., **absolute differences** (Vaswani et al., 2017) and **color histogram** (Vaswani et al., 2017)) and 2 structural-level metrics (i.e., **SIFT** (Wang et al., 2017) and **ORB** (Wang et al., 2017)). In addition, we set up 3 state-of-the-art methods (based on image processing and machine learning) that are commonly used for video segmentation as baselines to compare with our method. **PySceneDetect** (Drag and Kern, 2017) is a practical Python library to detect shot boundaries by analyzing color, intensity, and motion estimation between frames. **Hecate** (Wang et al., 2017) is a tool developed by Yahoo to generate shot boundaries by estimating frame quality and using machine learning to cluster frames and aggregate them into shots. **Scene Edit Detection** (Chen et al., 2017) is a feature in Adobe Premiere Pro CC that leverages machine learning to automatically detect cut points and scene changes in a video.

#### 6.1.3. Evaluation metrics

We employ two widely used evaluation metrics, i.e., the Video F1-score and the Levenshtein score. To evaluate the precision of detecting shots in UI videos, we adopt the Video F1-score (Vaswani et al., 2017), which is a standard video shot boundary metric that measures the difference between two sequences of shots and properly accounts for the relative amount of overlap between corresponding shots.
Consider the shots detected by our approach (\(c_{our}\)) and the ground truth (\(c_{gt}\)); the Video F1-score is computed as \(\frac{2|c_{our}\cap c_{gt}|}{|c_{our}|+|c_{gt}|}\), where \(|c|\) denotes the duration of the shots. The higher the score value, the more precisely the approach detects the shots. To evaluate the accuracy of generating action scenes, we adopt the Levenshtein score (Levenshtein and Kern, 2017), which compares the sequence of ground-truth actions with the sequence of generated actions. We express the score value as a percentage. The higher the score value, the more similar the generated action scenes are to the ground truth. If the action scenes generated by our approach exactly match the ground truth, the score value is 100%.

#### 6.1.4. Results

Table 2 shows the overall performance of all baselines.

\begin{table} \begin{tabular}{l|c|c} \hline \hline **Methods** & **Video F1-score** & **Levenshtein** \\ \hline Absolute (Vaswani et al., 2017) & 63.41\% & 72.18\% \\ Histogram (Vaswani et al., 2017) & 73.77\% & 76.34\% \\ SIFT (Wang et al., 2017) & 54.65\% & 63.33\% \\ ORB (Wang et al., 2017) & 53.92\% & 62.61\% \\ PySceneDetect (Drag and Kern, 2017) & 38.28\% & - \\ Hecate (Wang et al., 2017) & 32.64\% & - \\ Scene Edit Detection (Chen et al., 2017) & 41.02\% & - \\ **Video2Action** & **81.67\%** & **86.41\%** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance comparison of action scene generation.

The performance of our approach, Video2Action, is much better than that of the other baselines, i.e., a 10.7% and 13.1% boost in Video F1-score and Levenshtein score, respectively, even compared with the best baseline (Histogram). We observe that the state-of-the-art methods do not work well on our task, i.e., they only achieve 38.28%, 32.64%, and 41.02% in Video F1-score for PySceneDetect, Hecate, and Scene Edit Detection, respectively. The issue with these methods is that they are designed for general videos, which contain more natural scenes such as humans, plants, animals, etc.
Different from those videos, UI videos are artificial artifacts with different image motion (i.e., UI rendering). We also observe that the pixel-level similarity methods (Absolute, Histogram) perform better than the structural-level methods (SIFT, ORB), i.e., on average a 14.3% and 11.3% improvement in Video F1-score and Levenshtein score, respectively. This is because, unlike images of natural scenes, the keypoints/features in UIs may not be distinct. For example, a UI may contain multiple identical checkboxes, and the duplicate keypoints of the checkboxes can significantly affect the similarity computation. Although the method based on the pixel-level metric (Histogram) achieves the best performance among the baselines, it does not perform well compared to our approach, i.e., 73.77% vs 81.67% in Video F1-score, and 76.34% vs 86.41% in Levenshtein score. This is because the color histogram is sensitive to pixel values. UI videos can often have image noise due to fluctuations of color or luminance, which may significantly affect pixel measurements. In contrast, our approach using SSIM achieves better performance, as it takes similarity measurements over many aspects, both spatial and pixel-level, which allows for a more robust measurement. Despite the good performance of Video2Action, it still generates wrong action scenes for some UI videos, for two main reasons. First, some UIs may contain animated app elements such as advertisements or playing videos, which change dynamically, resulting in inaccurate shot detection. Second, some UI videos start with a BACKWARD action, which limits our approach, as we detect BACKWARD by comparing with previous UIs.

### Action Location Prediction

Different from the SCROLL and BACKWARD actions, which are not sensitive to the action location, the TAP location directly determines the response of the action. Therefore, we systematically evaluate the performance of tapping location prediction in this section.

#### 6.2.1.
Testing Data

Since our approach employs a deep-learning model (Section 4.2) to predict the tapping location, we train and test our model using the Rico dataset, as discussed in Section 4.2.3. Note that a simple random split cannot evaluate the model's generalizability, as screens in the same app may have very similar visual appearances. To avoid this data leakage problem (Sandel et al., 2018), we split the screens in the dataset by app. The resulting split has 26k (85%) tapping actions in the training dataset, 2k (8%) in the validation dataset, and 2k (7%) in the testing dataset. In addition, we apply the data augmentation methods in Section 4.2.3 to further enhance our training dataset. In total, we create 26% additional data, resulting in 33k samples in the training dataset.

#### 6.2.2. Baselines & Ablations

We set up 3 state-of-the-art methods that are widely used for tap location prediction as baselines to compare with our method. **ActionBert** (Krizhevsky et al., 2014) first detects tappable elements in the UI and then trains a classification model to identify which element is likely to trigger the tapping action. Since ActionBert is not publicly released, we follow the original paper to replicate the approach. **Humanoid** (Humans et al., 2017) is another baseline, which proposes a recurrent neural network to predict how the user will tap the UI step by step. Since our task predicts the tapping location based on a pair of UIs, we utilize the widely used **Siamese** network (Sandel et al., 2018) as a baseline, which encodes the visual information from a pair of images to yield predictions. Specifically, we use the state-of-the-art ResNet-101 architecture to capture the visual information (the same as our model) and predict two numeric variables corresponding to the tap location \((x,y)\).

Figure 8. The implementation of Video2Action.
The interface includes four major components: a video playback screen (A), an annotation box (B), an action scene box (C), and an action location box (D). To further demonstrate the advantage of our approach, we set up 3 ablation studies. Since we propose a tailored loss function to optimize the model training in Section 4.2.2, we consider a variant of our approach without the tailored loss function Video2Action w/o loss to see the impact of the loss optimization. We further investigate the contribution of our data augmentation methods in Section 4.2.3, namely Video2Action w/o augmentation, to see the performance of our model trained without 6,865 (26%) additional data. As discussed in Section 4.2.4, we propose a post-processing method to cluster the model predictions and filter the redundant ones. Therefore, we consider a variant of Video2Action w/o post-processing to compare the performance of our approach with and without post-processing. #### 6.2.3. Evaluation metrics We formulate the problem of tap location prediction as a searching task (i.e., search the most likely location to tap to the next UI), so we employ Precision@k to evaluate the performance. As one UI element occupies a certain area, tapping any specific point within that area can successfully trigger the action. Therefore, Precision@k is the proportion of the top-k predicted locations within the ground-truth UI element. The higher value of the metric is, the better a search method performs. Note we only consider k in the range 1-5, as users rarely check a long recommendation list. #### 6.2.4. Results Table 3 shows the overall performance of all methods. The performance of our model Video2Action is much better than that of other baselines in all metrics, i.e., 13.78%, 27.72%, 30.65% higher in Precision@1, Precision@3, and Precision@5 even compared with the best baseline (ActionBert). 
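To make the metric concrete, here is a minimal sketch of Precision@k under one plausible reading of the definition above: a sample counts as a hit at k when at least one of its top-k predicted points falls inside the ground-truth element's bounding box. The coordinates and boxes below are hypothetical illustration data, not the paper's.

```python
def inside(point, bbox):
    """True if point (x, y) lies within bbox = (left, top, right, bottom)."""
    x, y = point
    left, top, right, bottom = bbox
    return left <= x <= right and top <= y <= bottom

def precision_at_k(samples, k):
    """samples: list of (ranked_predictions, gt_bbox) pairs, where
    ranked_predictions is a list of (x, y) points ordered by confidence.
    Returns the fraction of samples whose top-k list hits the element."""
    hits = sum(
        any(inside(p, bbox) for p in preds[:k])
        for preds, bbox in samples
    )
    return hits / len(samples)

# Toy example with two samples (hypothetical coordinates).
samples = [
    ([(50, 60), (200, 300)], (40, 40, 100, 100)),    # hit already at k=1
    ([(10, 10), (220, 310)], (200, 300, 260, 360)),  # hit only at k=2
]
print(precision_at_k(samples, 1))  # 0.5
print(precision_at_k(samples, 3))  # 1.0
```

Since tapping anywhere inside an element triggers the same action, containment in the element's box, rather than exact coordinate match, is the natural success criterion.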
Among the baselines, the Siamese network only achieves 22.32% in Precision@1, which confirms the difficulty of predicting a specific tapping coordinate in the UI with a simple regression model. ActionBert adopts more advanced models to predict the tapping locations, but it still does not perform well compared to our model (36.36% vs 50.14% in Precision@1). This is because ActionBert applies a multi-phase pipeline that first detects the elements in the UI, then extracts their information, and finally predicts the tap locations. This pipeline may lead to a "garbage in and garbage out" problem, i.e., imprecise UI element detection will result in incorrect tap location predictions. In contrast, our model is end-to-end differentiable, which makes it more robust in predicting the specific tap location. We further demonstrate the advantage of our model with ablation studies in Table 3. We can see that applying the post-processing method significantly improves the performance of our model, i.e., by 3.93%, 20.98%, and 30.12% in Precision@1, Precision@3, and Precision@5, respectively. This is because many of the predicted locations are very close to each other (as shown in Figure 7), resulting in redundant predictions. Compared to traditional loss functions, our model with the tailored loss optimization achieves better performance, i.e., on average 7.27% improvement in precision. In addition, augmenting more training data improves model performance by 4.68%, 5.4%, and 7.88% in Precision@1, Precision@3, and Precision@5, respectively. This suggests that our human knowledge-based data augmentation methods can further improve the model's ability to capture the characteristics of UI transitions. To assess the trustworthiness of the model and interpret how it arrives at a given prediction, we visualize the features used to infer the final tap location as a heatmap using the visualization technique GradCAM (Gordord et al., 2017).
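The post-processing step mentioned above can be sketched as a simple greedy clustering that keeps the highest-confidence prediction in each neighborhood. The 30-pixel radius and the confidence ordering are our assumptions for illustration, not the paper's exact procedure:

```python
import math

def filter_redundant(predictions, radius=30.0):
    """Greedily drop predicted tap locations that lie within `radius`
    pixels of an already-kept, higher-confidence prediction.

    predictions: list of (x, y, confidence) triples.
    Returns the kept triples ordered by descending confidence."""
    kept = []
    for x, y, c in sorted(predictions, key=lambda p: -p[2]):
        if all(math.hypot(x - kx, y - ky) > radius for kx, ky, _ in kept):
            kept.append((x, y, c))
    return kept

# Three near-duplicate points collapse to the strongest one;
# the far-away point survives.
preds = [(100, 100, 0.9), (105, 98, 0.8), (110, 103, 0.7), (400, 250, 0.6)]
print(filter_redundant(preds))  # [(100, 100, 0.9), (400, 250, 0.6)]
```

Such a filter directly addresses the redundancy visible in Figure 7: nearby predictions almost always target the same UI element, so only one representative per cluster is informative.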
Figure 9 presents examples of the resulting feature heatmaps from our model. We can see that most of the predicted tap locations fall on tappable elements, indicating the reliability and interpretability of our model. Figure 10 shows some predicted tap locations for UI transitions. We can see that our model accurately predicts the locations in different complex transitions, including handling content in different styles, being sensitive to tiny features, and being robust to non-homology features. In our failure analysis, we identify three main reasons why our model fails. First, some UI designs in low-quality mobile apps are not reasonable. For example, in Figure 11(a), the designer uses a "Play" icon to transition to a "Playlist" UI, while our predicted "Playlist" button in the multi-tab is more aligned with user expectations. Second, the contents can be ambiguous: both "Logout" and "Delete my account" in Figure 11(b) may transition to the next UI. Note that "Logout" is the second predicted tap location, suggesting this issue can potentially be solved by expanding the location search. Third, some UI transitions require a deep semantic understanding of UIs. For example, to predict the tap location in Figure 11(c), we would need to first understand the contents in the UIs, and then, through a difference analysis, recognize that it is a structural change, so we would have to infer the semantic text "Switch layout" as the tap location. In the future, we will improve the performance of our model by adding more semantic information, such as the contents, layout structures, etc.

## 7.
Usefulness evaluation We demonstrate the effectiveness of our approach in the last section, and we continue to show its usefulness with a user study to see if \begin{table} \begin{tabular}{l|c|c|c} \hline **Methods** & **Prec@1** & **Prec@3** & **Prec@5** \\ \hline ActionBert (Wang et al., 2018) & 36.36\% & 41.60\% & 56.24\% \\ Humanoid (Wang et al., 2018) & 29.72\% & 34.23\% & 47.22\% \\ Siamese (Wang et al., 2018) & 22.32\% & - & - \\ \hline Video2Action w/o loss & 46.58\% & 61.63\% & 71.33\% \\ Video2Action w/o augmentation & 45.46\% & 63.92\% & 74.01\% \\ Video2Action w/o post-processing & 46.21\% & 48.34\% & 51.77\% \\ **Video2Action** & **50.14\%** & **69.32\%** & **81.89\%** \\ \hline \end{tabular} \end{table} Table 3. Performance comparison of TAP location prediction. “Prec” denotes the precision of the predicted tap locations. Figure 9. The heatmap of tappable locations in the UI predicted by our model. it can really assist video creators to annotate the actions in the app tutorial videos. ### Dataset of User Study We randomly select 8 app tutorial videos from Youtube, covering different app usage scenarios (i.e., financial, booking, systematic). The details of the tutorial videos are shown in Table 4, which consists of 4 short videos and 4 long videos. On average, each short video is of 2.85 minutes and contains 15 actions; each long video is of 10.79 minutes and contains 49 actions. Our approach achieves an average accuracy of 68.6%, 82.9%, and 93.1% in predicting top 1, 3, and 5 actions, respectively. ### Experimental Design We recruit 12 video creators (8 females, 4 males) with experience in annotating app tutorial videos from an online posting. 5 from the app development team in the industry, 2 from the movie industry, and 5 freelancers who regularly post on video-sharing platforms. Their ages range from 24 to 36 years (M = 28.7, SD = 3.8). Each participant will receive a 550 shopping card as a reward after the experiment. 
At the beginning of the study, we first give them an introduction to our study and also a demo tutorial video (not in the experimental dataset) to try. We also conduct a follow-up survey among the participants regarding their annotation experience. Participants are then asked to annotate 8 app tutorial videos in the experimental dataset individually in a quiet room, such as a lab or home, to minimize distractions. The study involves two groups of six participants: the experimental group from \(P_{1}\) to \(P_{6}\) who gets help with the actions inferred by our approach, and the control group \(P_{7}\) to \(P_{12}\) who starts from scratch. Each pair of participants \(\langle P_{x}\,, P_{x+6}\rangle\) has comparable annotation experience, so the experimental group has similar capability to the control group in total. Our approach can produce some action prediction errors, but we do not carefully correct these errors or tell the participants which predictions are incorrect. This is done to investigate the practical usefulness of our approach. Participants are asked to finish each annotation as fast as they can while ensuring annotation accuracy. To reduce stress bias, we allow them to take short breaks between each tutorial. We record the time used to annotate the tutorial videos. At the end of the session, we provide a cumulative questionnaire with 5-point Figure 11. Examples of three prediction errors. Blue color represents the ground-truth, and red color represents the prediction by our model. Figure 10. Examples of three kinds of accurately predicted tap locations. Blue color represents the prediction by our model. Likert scale questions and a 5-minute open-ended interview to collect their feedback, in terms of the ease of annotations and the helpfulness of Video2Action. ### Results Overall, participants appreciate the usefulness of our Video2Action for revealing the actions performed in the app tutorial videos, so that they can easily annotate them. 
We present the annotation time and questionnaire results in Figure 12. The detailed questionnaire results for the experimental group are in Table 5. To further understand the significance of the differences, we carry out the Mann-Whitney U test (Mann and Whitney, 2018) on the annotation time and questionnaire results between the experimental and the control group respectively. The test results suggest that our approach does significantly outperform the baseline in terms of these metrics with \(p<0.01\). #### 7.3.1. Participant Behaviors As shown in Figure 12(a) and Figure 12(b), participants in the experimental group can annotate the actions much faster than the control group (with an average of 12.11 minutes vs 22.39 minutes). That is the biggest strength of our approach, helping video creators annotate the actions in the app tutorial videos efficiently. Specifically, with the help of our approach, 72% and 90% of the time are saved for annotating short videos and long videos, respectively. This indicates the time savings become more evident for more actions and longer videos. We also analyze the event logs from the study to gain a better understanding of participants' annotation processes. We pay special attention to the falsely generated actions and find that none of these false actions were annotated by the participants, suggesting that participants can easily discern the correctness of predictions. #### 7.3.2. Easy to Annotate Overall, participants respond that our tool Video2Action is easier to annotate the videos, e.g., 4.33 vs 1.66 compared to the annotation from scratch. The questionnaire in Figure 5 shows that participants in the experimental group enjoyed the experience (Q1.1). Five (83%) of the participants agree that our interface is easy to understand and the annotation process is straightforward. All of the participants in the experimental group are able to focus on the video (Q1.2). 
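As a reference for the significance analysis above, the Mann-Whitney U test on annotation times can be sketched in pure Python (rank-sum U statistic with a normal approximation for the two-sided p-value; the tie correction is omitted for brevity, and the times below are hypothetical, not the study's data):

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Ties receive average ranks; the tie correction to the variance is
    omitted for brevity (a simplification of the full test)."""
    values = sorted(list(a) + list(b))
    rank = {}
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        rank[values[i]] = (i + j + 1) / 2  # average of 1-based ranks i+1..j
        i = j
    r1 = sum(rank[v] for v in a)
    n1, n2 = len(a), len(b)
    u = r1 - n1 * (n1 + 1) / 2
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u, p

# Hypothetical per-video annotation times (minutes) for the two groups.
experimental = [10.2, 11.5, 12.0, 12.4, 13.0, 13.5]
control = [20.1, 21.4, 22.0, 23.3, 23.6, 24.0]
u, p = mann_whitney_u(experimental, control)
print(u, p < 0.01)  # prints: 0.0 True
```

A rank-based test like this makes no normality assumption about the timing data, which is why it suits small per-group samples such as six participants.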
One participant in the experimental group (P3) explains that _"Typically, I have to check the video frame by frame to find a specific event. Longer videos require more effort, which can easily lead to distraction. With the help of the interactive action scenes in the tool, I can navigate directly to each event, saving me a lot of effort."_ Participants in the experimental group also report lower mental effort (Q1.3) when annotating the actions in the videos, while three (50%) of the participants in the control group express difficulty. P7 in the control group says _"The videos are too fast. It is really difficult to recognize the action in a short time. As a result, I have to replay the action many times, and carefully identify where the action performs."_ #### 7.3.3. Helpfulness Participants find our tool helpful (4.00) in navigating actions (Q2.1), informing action types (Q2.2), and indicating action locations (Q2.3). Five (83%) of the participants in the experimental group praise the helpfulness of our generated action scenes (Q2.1). On the one hand, it helps video creators to navigate directly to the frames when performing actions, improving annotation efficiency. On the other hand, it can be helpful to understand the key actions in the video. P4 and P6 mention that _"The generated action scenes are particularly useful to me. This allows me to know in advance how many actions need to be annotated, and roughly the action flow of the video."_ P3 further supports the usefulness of action scene generation in practice, _"Youtube recommends the video creators add timestamps to their videos, representing the key moments. For the app tutorial videos, the key moments are just the action scenes, which can be automatically generated by Video2Action."_ Most participants find that the action locations predicted by our tool contribute to their positive experience (Q2.3). 
P2 mentions that _"Sometimes, the action locations are not inconspicuous to realize, especially those that don't have any animation effects like ripple, expand, etc. It leaves me guessing the action locations. In contrast, the action locations predicted by the approach can potentially provide hints for locating regions of interest."_ \begin{table} \begin{tabular}{l|l c c|c c c} \hline \hline **Video Group** & **Title** & **Length(min)** & **\# TAP/SCROLL/BACK** & **Prec@1** & **Prec@3** & **Prec@5** \\ \hline \multirow{4}{*}{Short Video} & How to check screen time on Android? & 1.55 & 6 / 2 / 0 & 62.5\% & 87.5\% & 100\% \\ & How to use Fitness \& Bodybuilding app? & 2.35 & 10 / 3 / 3 & 75\% & 87.5\% & 93.8\% \\ & How to fix stopped Android apps? & 2.81 & 14 / 2 / 4 & 75\% & 90\% & 95\% \\ & How to track usage in Edge app? & 4.68 & 13 / 4 / 1 & 61.1\% & 72.2\% & 83.3\% \\ \hline \multirow{4}{*}{Long Video} & How to book an Airbnb? & 8.50 & 14 / 15 / 4 & 69.7\% & 78.8\% & 90.9\% \\ & How to save Android battery life? & 9.62 & 34 / 14 / 11 & 61.1\% & 74.6\% & 86.4\% \\ \cline{1-1} & How to manage budgets in Mint app? & 10.28 & 21 / 13 / 6 & 75\% & 92.5\% & 100\% \\ \cline{1-1} & How to become a driver in Doordash app? & 14.78 & 47 / 4 / 14 & 69.2\% & 80\% & 95.4\% \\ \hline \hline \end{tabular} \end{table} Table 4. The details of our experimental app tutorial videos. \begin{table} \begin{tabular}{l c} \hline \hline Statement & Median (IQR) \\ \hline 1.1 enjoyed the experience. & 4.0(0.25) \\ 1.21 was able to focus on the video. & 4.5(1.0) \\ 1.3 The mental effort required to annotate the actions was low. & 4.5(1.0) \\ 2.4.**Helpfulness** \\ 2.1 it was helpful to reveal the action scenes. & 4.0(0.75) \\ 2.2.1 it was helpful to reveal what kinds of actions. & 4.0(0.75) \\ 2.3 It was helpful to reveal where to touch. & 4.5(1.75) \\ \hline \hline \end{tabular} \end{table} Table 5. Results for the questionnaires (Median, Interquantile Range). ## 8. 
Discussion Video2Action has several opportunities for improvement. First, while our approach saves 81% of the time for annotating app tutorial videos, it still requires some manual effort from users, as our approach cannot achieve 100% accuracy in inferring actions, as discussed at the end of each subsection of the evaluation in Section 6. In the future, we aim to further improve the performance of our approach to minimize human interaction in the video annotation process as much as possible. Second, we focus only on the most fundamental and common actions found in app tutorial videos. There are numerous other actions, such as pinch and rotate, which we believe can be addressed with a reasonable engineering effort. For some high-level gesture actions in 3D and AR/VR apps, a systematic study of patterns may be necessary. In addition, we discuss the generality and implications of our approach below and leave them as future work.

### Generalization of Video2Action

Video2Action is designed to assist the action annotation of app tutorial videos to reduce the cognitive and interaction burdens of video creators during annotation. It has achieved satisfactory performance in generating action scenes and predicting action locations from Android app videos, as evaluated in Section 6. In addition to Android, there are many other platforms, such as iOS and the web. Supporting videos from these platforms can bring analogous benefits to video creators. For mobile platforms like iOS, the actions and usage patterns differ little from Android. Therefore, our approach might be easily adapted to them with reasonable engineering effort. For platforms on different devices, such as desktop, the differences from Android can be considerable. In such cases, a detailed empirical study of user behaviors and a customization of our approach would be required to determine how to extend it.
In the future, we will try to extend our approach to help video creators annotate the app tutorial videos of multi-platform. ### Bug Replay Bug recordings of mobile applications are easy to capture and include a wealth of information (i.e., reproduce steps, configurations, etc.), making them a popular mechanism for users to inform developers of the problems encountered in the bug reports. In order to effectively resolve the bugs, the developer has to first understand the action steps performed in the bug recordings and then manually repeat them in the order shown. This process can be time-consuming and error-prone, especially for novice developers (Srivastava et al., 2017). We would expect that our approach can be applied to bug recordings to extract the bug reproduction steps (i.e., a sequence of actions). Once we derive the steps, we could further proceed to generate the testing script using Sikuli (Sikuli, 2017) to automate bug replay. ### Video Captioning Captions or subtitles are provided to add clarity of details, better engage users, and translate the different languages (Srivastava et al., 2017). It is particularly useful for people with vision impairments (e.g., the aged or blind) to access the video content without requiring caregivers (Srivastava et al., 2017). Our approach could be applied to enhance the accessibility of the app tutorial videos by generating clear and concise captions for action steps, enabling people with vision impairments to easily access the information and service of the mobile apps for convenience. To that end, given UI scenes and action locations generated by our Video2Action, we could leverage the existing mature methods (Srivastava et al., 2017) to recognize the interacted UI element and detect any associated text. Then, we could convert these UI elements into easy-to-understand natural language descriptions and embed them as subtitles. 
The combination of video and text should provide a well-rounded and comprehensive learning experience. ## 9. Conclusion & Future Work This paper proposes Video2Action, a lightweight approach to support action annotation for app tutorial videos. This approach uses image-processing and deep-learning methods to automatically generate the action scenes and predict the action locations. We set up automated evaluations to demonstrate the performance of our approach, significantly outperforming the commonly-used and state-of-the-art baselines. We further conduct a user study on the proof-of-concept interface to demonstrate the usefulness of Video2Action in helping video creators locate, analyze, and annotate actions more efficiently. In the future, we will work in three directions. First, we will keep improving our approach for better performance, such as incorporating the information of animation between UI transitions. Second, we will develop our approach to support more high-level actions, such as pinch-in, gesture, etc. Third, according to the user feedback, we will integrate our approach into the existing video editing tools, strengthening the collaboration between human and machine computation powers. Figure 12. Performance comparison between the control and experimental group.* denotes \(\boldsymbol{p<0.01}\).
2304.11583
F-12 density matrices and cumulants from the explicitly connected coupled-cluster theory
We present the extension of the expectation value coupled cluster theory (XCC) to wavefunctions that include the interelectronic distances $r_{12}$ explicitly. We have extended our algebraic manipulation code Paldus to deal with the terms arising in the CC-F12 theory. We present the full working expressions for the one-electron density matrix (1RDM) and the cumulant of the two-electron density matrix ($\lambda$-2RDM) in the framework of XCC-F12 theory. We discuss the possible approximations to the expressions.
Aleksandra M. Tucholska, Marcin Modrzejewski, Robert Moszynski
2023-04-23T08:46:28Z
http://arxiv.org/abs/2304.11583v1
# F-12 density matrices and cumulants from the explicitly connected coupled-cluster theory ###### Abstract We present the extension of the expectation value coupled cluster theory (XCC) to wavefunctions that include the interelectronic distances \(r_{12}\) explicitly. We have extended our algebraic manipulation code Paldus to deal with the terms arising in the CC-F12 theory. We present the full working expressions for the one-electron density matrix (1RDM) and the cumulant of the two-electron density matrix (\(\lambda\)-2RDM) in the framework of XCC-F12 theory. We analyze the computational cost and discuss the possible approximations to the expressions.

## 1 Introduction

For the computation of the molecular properties of small- and medium-sized systems the coupled cluster (CC) theory [1, 2, 3] is the leading _ab initio_ approach. The CC method is size extensive and allows for systematic approximation by including selected excitations. Currently CC is routinely used for the computation of ground-state energies, molecular properties, excited states, etc. Still, to obtain chemical accuracy (\(<1\) kcal mol\({}^{-1}\)) without including costly higher excitations, one needs to address the incompleteness of the basis set, which causes the well-known basis set error. It originates from the fact that one-electron orbitals are used to construct two-electron basis sets. It has been known since Kato's 1957 discovery of the cusp condition [4] that the explicit inclusion of the interelectronic distance \(r_{12}\) in the wave function may lead to the construction of an efficient wavefunction. The main obstacle to using such methods is the high-dimension integrals arising in the theory.
So far numerous approaches to deal with this problem have been proposed, from the direct evaluation of the high-dimension integrals, [5, 6] through expanding the correlation factor in terms of Gaussian geminals, [7, 8] to the well-known R12/F12 methods proposed by Kutzelnigg, [9, 10] where through the insertion of the resolution of identity (RI) only two-electron integrals remain. Among them is the standard approximation (SA), proposed by Kutzelnigg and Klopper, which introduces the resolution of identity (RI) into the integrals and allows for a reduction of the three- and four-electron integrals to two-electron terms. Although the SA simplified the integrals, the \(r_{12}\) methods still required the use of large basis sets. [10, 11, 12] This problem was addressed by Klopper and Samson [13] with the introduction of the ABS basis, an additional basis set for the RI. Valeev [14] proposed a robust modification to this approach, called the complementary auxiliary basis set (CABS) method, which involves expansion in the orthogonal complement to the span of the orbital basis set (OBS) and which will be utilized in this work. The large CABS basis is used only for the RI terms and the normal orbital basis set is retained for the rest of the terms, making the \(r_{12}\) methods feasible. Within the standard approximation the CC-F12 theory was first presented by Noga and collaborators.[15] The exponential form generates highly nonlinear, complicated expressions, therefore it is a common practice to further approximate the expressions for the amplitudes, see e.g. Fliegl,[16, 17] Tew[18] or Ten-no.[19] Shiozaki[20] presented the full form of the CC method up to quadruple excitations for the ground state (CC-R12), excited states (EOM-CC-R12) and for the \(\Lambda\) equation (\(\Lambda\)-CC-R12) of the CC analytical gradient theory.
In this work we propose introducing the explicitly correlated wavefunction into the computation of the one-electron density matrix (1RDM) and the cumulant of the two-electron density matrix (\(\lambda\)-2RDM) in the framework of the expectation value coupled cluster theory (XCC).[21, 22, 23] In this way we propose a more accurate method for the computation of one- and two-electron properties of the ground state, while making use of the XCC ability to make highly controllable approximations, at relatively low cost.

## 2 The CC-F12 theory

In the CC-F12 theory the wavefunction \(\Psi_{0}\) is represented by the usual coupled cluster expansion \[\Psi_{0}=e^{T}\Phi_{0} \tag{1}\] where \(\Phi_{0}\) is the reference determinant, usually the Hartree-Fock determinant, and the cluster operator \(T\) is a sum of \(n\)-tuple excitation operators \[T=\sum_{n=1}^{N}T_{n} \tag{2}\] where \(N\) is the number of electrons. Each of the cluster operators can be represented by products of singlet excitation operators \(E_{ai}\)[24] \[T_{n}=\frac{1}{n!}\sum_{\mu_{n}}^{N}t_{\mu_{n}}\mu_{n}=\frac{1}{n!}\sum_{\mu_{n}}^{N}t_{\mu_{n}}E_{ai}E_{bj}\ldots E_{fm}, \tag{3}\] where \(\mu_{n}\) denotes the \(n\)-th excitation level. The indices \(a,b,c\ldots\), \(i,j,k\ldots\) and \(p,q,r\ldots\) denote virtual, occupied and general orbitals, respectively, see Table 1. When we restrict the excitations to singles and doubles, the cluster operator is composed of the standard part supplemented by the explicitly correlated component, \[\begin{split}& T=T_{1}+T_{2}+T_{2}^{\prime},\\ & T_{2}^{\prime}=\frac{1}{2}\sum_{ijkl}(t_{2}^{\prime})_{ij}^{kl}\left[\sum_{\alpha\beta}\left\langle\alpha\beta\right|f_{12}\left|kl\right\rangle E_{\alpha i}E_{\beta j}\right.\\ &\left.-\sum_{ab}\left\langle ab\right|f_{12}\left|kl\right\rangle E_{ai}E_{bj}\right]\end{split} \tag{4}\] where \(\alpha,\beta\ldots\) denote the complete set of orbitals, and \(f_{12}\) is the \(r_{12}\)-dependent correlation factor.
The new operator \(T_{2}^{\prime}\) should satisfy the condition \[T_{2}^{\prime}=\hat{Q}_{12}T_{2}^{\prime} \tag{5}\] in order to assure that \(T_{2}^{\prime}\) is strongly orthogonal to products of occupied orbitals. This ensures that \(T_{2}^{\prime}\) produces only two-electron correlation effects. The \(\hat{Q}_{12}\) operator can take several forms, among which is the so-called ansatz-3 proposed by Valeev[14] \[\begin{split}&\hat{Q}_{12}=(1-\hat{O}_{1})(1-\hat{O}_{2})-\hat{V}_{1}\hat{V}_{2}\\ &=\hat{V}_{1}(1-\hat{P}_{2})+(1-\hat{P}_{1})\hat{V}_{2}+(1-\hat{P}_{1})(1-\hat{P}_{2})\end{split} \tag{6}\] where \(\hat{O}_{i}\), \(\hat{V}_{i}\) and \(\hat{P}_{i}\) are the projections onto the occupied, virtual, and all orbital basis orbitals, respectively, and \((1-\hat{P}_{i})\) projects on the set of virtual orbitals of the complete basis that does not include virtual orbitals from the orbital basis, see Table 2 and Fig. 1. This particular form of the \(\hat{Q}_{12}\) operator allows us to approximate the \((\hat{1}-\hat{P})\) subspace instead of approximating the whole space \(\hat{1}\). The operator \(T_{2}^{\prime}\) is a product of the geminal amplitudes \((t_{2}^{\prime})_{ij}^{kl}\) and molecular integrals involving an explicit \(r_{12}\)-dependent factor \[F^{\alpha\beta}_{kl}=\int\int d\mathbf{r}_{1}d\mathbf{r}_{2}\,\phi_{\alpha}(\mathbf{r}_{1})^{*}\phi_{\beta}(\mathbf{r}_{2})^{*}f_{12}\left(\phi_{k}(\mathbf{r}_{1})\phi_{l}(\mathbf{r}_{2})-\phi_{l}(\mathbf{r}_{1})\phi_{k}(\mathbf{r}_{2})\right) \tag{7}\] with \(\hat{P}_{1}\phi_{\alpha}=0\) or \(\hat{P}_{2}\phi_{\beta}=0\), and \(F^{\alpha\beta}_{kl}=0\) otherwise.

Figure 1: Partition of orbital space in CABS R12 with corresponding indices.

For the \(f_{12}\) correlation factor we used the Slater-type function of \(r_{12}\) \[f_{12}=\exp(-\gamma r_{12}). \tag{9}\] The Ansatz Eq. (1), with thus defined \(T\) amplitudes, Eq.
(4), is incorporated into the Schrodinger equation \[\hat{H}\Psi_{0}=E\Psi_{0} \tag{10}\] with Hamiltonian \(\hat{H}\) defined as \[\hat{H}=\sum_{\kappa\lambda}h_{\kappa\lambda}E_{\kappa\lambda}+\frac{1}{2} \sum_{\kappa\lambda\zeta\tau}g_{\kappa\lambda\zeta\tau}E^{\kappa\lambda}_{ \zeta\tau}, \tag{11}\] where \(\kappa,\lambda,\zeta\ldots\) denote general indices in a complete basis, Table 1. The equation is then multiplied from the left by \(e^{-T}\) and projected into the excited manifold producing the full CCSD-F12 expression for the energy, and CC-F12 amplitudes \[\left\langle\Phi_{0}\right|\bar{H}\left|\Phi_{0}\right\rangle \tag{12}\] \[\left\langle\Phi_{i}^{a}\right|\bar{H}\left|\Phi_{0}\right\rangle\] (13) \[\left\langle\Phi_{ij}^{ab}\right|\bar{H}\left|\Phi_{0}\right\rangle\] (14) \[\left\langle\Phi_{ij}^{kl}\right|\bar{H}\left|\Phi_{0}\right\rangle. \tag{15}\] \(\bar{H}\) denotes the similarity transformed Hamiltonian \(e^{-T}\hat{H}e^{T}\). ## 3 Elimination of the complete basis set In the CC-F12 theory, the elimination of indices \(\kappa,\lambda\ldots\) which run through the complete basis set is necessary in order to obtain expressions that are computationally manageable. Table 1 and Fig. 1 summarize how the space is divided in the CABS approach, and associates the specified indices with the corresponding subspace. 
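To make the index bookkeeping concrete, the approximate resolution of identity underlying the CABS construction can be written (in the notation of Table 1 and Eq. (17)) as \[\hat{1}=\sum_{\kappa}\left|\phi_{\kappa}\right\rangle\left\langle\phi_{\kappa}\right|\approx\sum_{p}\left|\phi_{p}\right\rangle\left\langle\phi_{p}\right|+\sum_{a^{\prime}}\left|\phi_{a^{\prime}}\right\rangle\left\langle\phi_{a^{\prime}}\right|,\] so that summations over the formally complete index \(\kappa\) are replaced by summations over the finite OBS index \(p\) and the finite CABS index \(a^{\prime}\).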
The choice of the operator \(\hat{Q}_{12}\) allows for an efficient approximation of the special intermediates of the F12 theory \[\begin{split}\mathcal{V}^{pq}_{ij}&=\frac{1}{2}g_{pq\alpha\beta}F^{\alpha\beta}_{ij}\\ \mathcal{X}^{kl}_{ij}&=\frac{1}{2}F^{kl}_{\alpha\beta}F^{\alpha\beta}_{ij}\\ \mathcal{B}^{kl}_{ij}&=\frac{1}{2}F^{kl}_{\alpha\beta}f_{\alpha\gamma}F^{\beta\gamma}_{ij}\\ \mathcal{P}^{kl}_{ij}&=\frac{1}{2}F^{kl}_{\alpha\beta}f_{\alpha\beta\gamma\delta}F^{\gamma\delta}_{ij}\end{split} \tag{16}\] which are rewritten in terms of products of two-electron integrals expressed in either the OBS basis or the part of the complete basis belonging to the \(\hat{1}-\hat{P}\) subspace. The complete basis is further approximated by the finite CABS basis belonging to the \(\hat{P}^{\prime}\) subspace \[\phi_{\alpha^{\prime}}\approx\phi_{a^{\prime}}. \tag{17}\] The special intermediates are identified in the orbital expressions, before any approximations take place, and are marked for evaluation by an external integral engine during the computation stage. All other terms involving summation over the complete basis are approximated by replacing \(\alpha^{\prime}\) by \(a^{\prime}\).

## 4 XCC approach to the computation of properties

In the literature, there are several rigorous approaches that can be extended to calculate the molecular properties in the CC-F12 theory. The first approach, based on the differentiation of CC energy expressions, was introduced by Monkhorst [25, 26] in 1977 and later extended by Bartlett et al., [27, 28, 29] and is known as the \(\Lambda\) vector technique.
\begin{table} \begin{tabular}{l l l} \hline \(\hat{O}_{i}\) & \(i,j,k,l\ldots\) & occupied \\ \(\hat{V}_{i}\) & \(a,b,c,d\ldots\) & virtual in OBS \\ \(\hat{P}_{i}\) & \(p,q,r,s\ldots\) & general in OBS \\ \(\hat{1}\) & \(\kappa,\lambda,\mu,\nu\ldots\) & general in complete \\ \(\hat{1}-\hat{O}\) & \(\alpha,\beta,\gamma,\delta\ldots\) & virtual in complete \\ \(\hat{1}-\hat{P}\) & \(\alpha^{\prime},\beta^{\prime},\gamma^{\prime},\delta^{\prime}\ldots\) & virtual in complete - OBS \\ \(\hat{P}^{\prime}_{i}\) & \(a^{\prime},b^{\prime},c^{\prime},d^{\prime}\ldots\) & virtual in CABS \\ \hline \end{tabular} \end{table} Table 1: Projectors on spaces and corresponding indices

Koch and Jorgensen [30, 31, 32] proposed the time-averaged quasi-energy Lagrangian technique TD-CC. In these approaches a set of linear response equations must be solved to obtain the \(\Lambda\) vector. With this quantity at hand the CC expectation value can be calculated from a non-symmetric expression like \[\bar{X}=\left\langle(1+\Lambda)e^{-T}Xe^{T}\right\rangle, \tag{18}\] where \(\left\langle A\right\rangle\) denotes the expectation value of an operator \(A\) with the reference wavefunction \(\Phi_{0}\). The second approach, called the XCC theory, is based on the computation of molecular properties directly from the average value of an operator [21, 22] \[\bar{X}=\frac{\left\langle\Psi_{0}\right|X\left|\Psi_{0}\right\rangle}{\left\langle\Psi_{0}|\Psi_{0}\right\rangle}. \tag{19}\] The wavefunction \(\Psi_{0}\) is parameterized by the CC ansatz, and an auxiliary operator \(S\) is introduced by means of the following formula \[e^{S}\Phi_{0}=\frac{e^{T^{\dagger}}e^{T}\Phi_{0}}{\left\langle e^{T}|e^{T}\right\rangle}\qquad S=S_{1}+S_{2}+\ldots+S_{N} \tag{20}\] where \(N\) is the number of electrons in the system. With the help of this auxiliary operator the CC expectation value is rewritten as \[\bar{X}=\left\langle e^{S^{\dagger}}e^{-T}Xe^{T}e^{-S^{\dagger}}\right\rangle.
\tag{21}\] The average value of \(X\) can then be expressed as a finite series of commutators in this approach. It is important to distinguish the XCC method discussed in this work from the approach developed by Bartlett and Noga under the same name [33]. The operator \(S\) was also introduced by Arponen and coworkers in the context of the extended coupled cluster theory (ECC) [34, 35, 36]. However, in their work the \(S\) operator was defined by a set of nonlinear equations for which no systematic approximation scheme existed. Later, Jeziorski and Moszynski [21] proposed an expression for \(S\) that can be systematically approximated by satisfying a set of linear equations. The main finding of their work describing the \(S\) operator technique is that the operator \(S\) is related to the operator \(T\) by a relatively simple linear equation. Moreover, this equation does not need to be solved in practice. In fact, the operator \(S\) can be expanded in a combined power series of the cluster operators \(T\) and \(T^{\dagger}\), \[S_{n} =T_{n}-\frac{1}{n}\hat{\mathcal{P}}_{n}[\sum_{k=1}\sum_{p=1}\frac{1}{k!}p[T_{p}^{\dagger},T]_{k}\] \[\quad+\sum_{k=1}\sum_{m=0}\sum_{p=1}\frac{1}{k!}\frac{1}{m!}p[[S_{p},T^{\dagger}]_{k},T]_{m}] \tag{22}\] where \([A,B]_{k}\) is a shorthand for a \(k\)-times nested commutator. The superoperator \(\hat{\mathcal{P}}_{n}(X)\) yields the excitation part of \(X\), \[\hat{\mathcal{P}}_{n}(X)=\frac{1}{n!}\sum_{\mu_{n}}\left\langle\mu_{n}\middle|X\right\rangle\mu_{n}, \tag{23}\] where for simplicity we introduce the notation \(\left\langle A\middle|B\right\rangle=\left\langle A\Phi_{0}\middle|B\Phi_{0}\right\rangle\). It is clear from the expressions above that this expansion can be truncated, e.g., on the basis of perturbation-theory arguments. This also constitutes the biggest advantage of the \(S\) operator technique over the \(\Lambda\) method. 
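The structure of Eqs. (19)-(21) can be illustrated numerically in a minimal two-determinant model (a sketch under simplifying assumptions we introduce here: one reference state, one excited determinant, real amplitudes, and the toy names `phi0`, `E10`; NumPy and SciPy are assumed to be available). In this model the commutator-free XCC average of Eq. (21) reproduces the ordinary expectation value of Eq. (19):

```python
import numpy as np
from scipy.linalg import expm

# Minimal two-determinant model: reference |0>, one excited determinant |1>.
phi0 = np.array([1.0, 0.0])
E10 = np.array([[0.0, 0.0], [1.0, 0.0]])      # excitation operator |1><0|
t = 0.3
T = t * E10
X = np.array([[0.7, 0.2], [0.2, 1.1]])        # an arbitrary Hermitian "property"

psi = expm(T) @ phi0                          # |Psi_0> = e^T |Phi_0>
ref = (psi @ X @ psi) / (psi @ psi)           # Eq. (19)

# Eq. (20): e^S |Phi_0> = e^{T+} e^T |Phi_0> / <e^T|e^T>; in this model
# S = s*E10, and the component ratio of the right-hand side fixes s.
rhs = expm(T.T) @ expm(T) @ phi0 / (psi @ psi)
s = rhs[1] / rhs[0]                           # equals t / (1 + t**2) here
S = s * E10

# Eq. (21): the XCC average, taken in the reference |Phi_0>.
xcc = phi0 @ (expm(S.T) @ expm(-T) @ X @ expm(T) @ expm(-S.T)) @ phi0
assert abs(xcc - ref) < 1e-12
```

The check exploits that \(S^{\dagger}\Phi_{0}=0\), so \(e^{-S^{\dagger}}\Phi_{0}=\Phi_{0}\) and Eq. (21) collapses to Eq. (19).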
The task of solving the response equations to obtain \(\Lambda\) is almost as expensive as the original CC iterations themselves. Computation of the operator \(S\), on the other hand, is a relatively simple one-step (non-iterative) procedure which can be accomplished very efficiently. Since its introduction, the \(S\) operator technique has been applied to the calculation of molecular properties at the conventional CCSD and CC3 levels [37], CC3 transition moments between the ground and excited states [37] and between excited states [38], electrostatic and exchange contributions to the interaction energies of closed-shell systems [22, 39, 40], and others. The \(S\) operator technique has not yet been utilized in the context of explicitly correlated wavefunctions. ## 5 F12 expressions for the operator S In the explicitly correlated version of CC, the \(T_{2}\) amplitudes are supplemented by the explicitly correlated component from Eq. (4). For the purpose of deriving expressions for the \(S\) amplitudes, we rewrite the \(T\) amplitudes in a more compact form \[\begin{split} T_{1}&=\sum_{\alpha i}t_{i}^{\alpha}h(\alpha)E_{\alpha i}\\ T_{2}&=\frac{1}{2}\sum_{\alpha\beta ij}\left(\bar{\bar{t}}_{ij}^{\alpha\beta}p(\alpha)p(\beta)+t_{ij}^{\alpha\beta}h(\alpha)h(\beta)\right)E_{\alpha i}E_{\beta j}\end{split} \tag{24}\] where \[\bar{\bar{t}}_{ij}^{\alpha\beta}=\sum_{kl}(t_{2}^{{}^{\prime}})_{ij}^{kl}F_{kl}^{\alpha\beta} \tag{25}\] and \(h(\alpha)\) and \(p(\alpha)\) are defined as \[\begin{split} h(\alpha)=0&\text{if}\quad\alpha\in(1-\hat{P})\\ h(\alpha)=1&\text{if}\quad\alpha\in\hat{V}\\ p(\alpha)=1&\text{if}\quad\alpha\in(1-\hat{P})\\ p(\alpha)=0&\text{if}\quad\alpha\in\hat{V}.\end{split} \tag{26}\] Expressing the \(T\) amplitudes in a complete basis allows us to re-derive the expressions for the \(S\) amplitudes starting from Eq. (20), \[e^{T^{\dagger}}e^{T}\Phi_{0}=\left\langle e^{T}\middle|e^{T}\right\rangle e^{S}\Phi_{0}. 
\tag{27}\] We will not repeat the full derivation here, as it can be found in the original work [21]; instead we only note the changes necessary to obtain the \(S\)-F12 amplitudes. We act on both sides of Eq. (27) with the \(Q\) operator expressed in the complete basis \[Q=\alpha^{\dagger}\alpha \tag{28}\] to ensure that it satisfies \([Q,T_{n}]=nT_{n}\) and \([Q,S_{n}]=nS_{n}\) with the F12 amplitudes. Next we multiply both sides by \(e^{-T}e^{-T^{\dagger}}e^{S}\), \[e^{-T}e^{-T^{\dagger}}e^{S}Qe^{-S}e^{T^{\dagger}}e^{T}=0. \tag{29}\] In order to obtain the \(S\)-F12 version of the set of linear equations from Eq. (22), we project Eq. (29) onto the \(n\)-tuply excited states in a complete basis, so as to retain the information on \(T_{2}^{{}^{\prime}}\). Only in this way are we able to recover the first approximation to the \(S\) amplitudes, which should be equal to \(T_{2}+T_{2}^{{}^{\prime}}\). This dictates the form of the projection operator \(\hat{\mathcal{P}}_{n}\), which spans the complete basis, i.e., \[\hat{\mathcal{P}}_{n}(X)=\frac{1}{n!}\sum_{\begin{subarray}{c}i_{1}\ldots i_{n}\\ \alpha_{1}\ldots\alpha_{n}\end{subarray}}\left\langle\mu_{\begin{subarray}{c}i_{1}\ldots i_{n}\\ \alpha_{1}\ldots\alpha_{n}\end{subarray}}\middle|X\right\rangle\mu_{\begin{subarray}{c}i_{1}\ldots i_{n}\\ \alpha_{1}\ldots\alpha_{n}\end{subarray}}. \tag{30}\] Eq. (22) is a linear equation that can be solved iteratively, but it has proven more practical to expand \(S_{n}\) either in the MBPT expansion or in powers of \(T\). In this work we obtain the \(T\) amplitudes from the CC-F12 theory, which in fact corresponds to a summation to infinite MBPT order. 
However, to facilitate the discussion and formula verification, one should keep in mind that the \(T\) amplitudes can be expanded into MBPT orders as follows [21, 41]: \[\begin{split} T_{1}^{(2)}&=T_{1}^{\{2\}}+T_{1}^{\{3\}}+\ldots\\ T_{2}^{(1)}&=T_{2}^{\{1\}}+T_{2}^{\{2\}}+\ldots\end{split} \tag{31}\] where the superscript in curly braces indicates the pure MBPT order and the superscript in round parentheses denotes the lowest MBPT order in which the term appears for the first time. When the \(T\) amplitudes are obtained from the CCSD-F12 approximation, \(T=T_{1}+T_{2}+T_{2}^{\prime}\), the leading terms for the operators \(S_{n}^{(m)}\) are \[\begin{split} S_{1}^{(2)}&=T_{1}^{(2)}=T_{1}^{\{2\}}+T_{1}^{\{3\}}+\ldots\\ S_{2}^{(1)}&=T_{2}+T_{2}^{{}^{\prime}}\\ S_{1}^{(3)}&=\hat{\mathcal{P}}_{1}\left([T_{1}^{\dagger},T_{2}+T_{2}^{\prime}]\right)\\ S_{2}^{(3)}&=\frac{1}{2}\hat{\mathcal{P}}_{2}\left([[T_{2}^{\dagger}+(T_{2}^{\dagger})^{\prime},T_{2}+T_{2}^{\prime}],T_{2}+T_{2}^{\prime}]\right).\end{split}\] We stress that, because of the CCSD approximation, we do not include some of the low-order terms that are expressed through \(T_{3}\) or higher amplitudes, e.g. \(S_{3}^{(2)}=T_{3}\). The orbital expressions for the \(S\) amplitudes are derived automatically by the code Paldus, developed by one of us (AT). At this point we do not introduce any intermediates, as the operators \(S\) contain the integrals \(F_{ij}^{\alpha\beta}\) expressed in a complete basis. When the \(S\) operator is used in the computation of properties, one should first analyze the integrals and possible singularities, and only then introduce new special intermediates and, later on, the CABS basis. 
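A two-level toy model (an illustrative assumption we introduce here: one reference determinant, one excited determinant, with the toy names `phi0` and `E10`; NumPy/SciPy assumed) shows how truncating the expansion of \(S\) in powers of \(T\) converges order by order. Solving Eq. (20) with \(T=tE_{10}\) gives the single amplitude \(s=t/(1+t^{2})=t-t^{3}+t^{5}-\ldots\):

```python
import numpy as np
from scipy.linalg import expm

# Two-level toy model: T = t*E10, S = s*E10, with s fixed by
# e^S phi0 = e^{T+} e^T phi0 / <e^T|e^T>  (Eq. (20)).
E10 = np.array([[0.0, 0.0], [1.0, 0.0]])
phi0 = np.array([1.0, 0.0])
for t in (0.1, 0.2, 0.3):
    psi = expm(t * E10) @ phi0
    rhs = expm(t * E10.T) @ psi / (psi @ psi)
    s = rhs[1] / rhs[0]
    assert abs(s - t / (1 + t**2)) < 1e-12
    # Truncating the power series s = t - t^3 + t^5 - ... after t^5
    # leaves an error of order t^7, i.e. the truncation is systematic.
    assert abs(s - (t - t**3 + t**5)) < 2 * t**7
```

This mirrors, in miniature, the practical strategy of expanding \(S_{n}\) in powers of \(T\) rather than solving Eq. (22) iteratively.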
## 6 Density matrix in the XCC-F12 theory The one-electron reduced density matrix (1-RDM) of an \(N\)-electron wavefunction \(\psi\) in configuration space is defined as \[\rho_{1}\left(x_{1};x_{1}^{\prime}\right)=N\int\psi^{*}\left(x_{1}^{\prime},x_{2},\ldots,x_{N}\right)\psi\left(x_{1},\ldots,x_{N}\right)dx_{2}\ldots dx_{N}, \tag{38}\] and the average value of an arbitrary operator \(\hat{X}\) can be obtained as \[\langle\hat{X}\rangle=\int[\hat{X}\rho_{1}(x;x^{\prime})]_{x^{\prime}=x}dx=\int\hat{X}\rho(x)dx, \tag{40}\] where the last equality holds for multiplicative operators and \(\rho(x)=\rho_{1}(x;x)\). In second quantization the 1-RDM is usually denoted as \(\gamma_{\kappa\lambda}\) and in a spin-adapted form is defined through the singlet excitation operators \(E_{\kappa\lambda}\) as \[\gamma_{\kappa\lambda}=\langle\Psi_{0}|\;E_{\kappa\lambda}\;|\Psi_{0}\rangle\,. \tag{41}\] In the XCC theory the density matrix is expressed with the use of the operators \(S\), Eq. (22), \[\gamma_{\kappa\lambda}=\langle e^{S^{\dagger}}e^{-T}E_{\kappa\lambda}e^{T}e^{-S^{\dagger}}\rangle\,. \tag{42}\] Because the expression for the expectation value is of the form \(e^{-Y}Xe^{Y}\), it is easily seen from the Baker-Campbell-Hausdorff expansion formula that it is in fact a sum of multiple commutators of connected quantities, and is therefore explicitly connected. 
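The expression of Eq. (42) can be exercised in a minimal two-determinant model (a self-contained sketch under toy assumptions we introduce here, with \(E_{\kappa\lambda}\to|\kappa\rangle\langle\lambda|\) and the toy names `phi0`, `E10`; NumPy/SciPy assumed): the resulting \(\gamma\) contracts with an operator matrix to the exact average and traces to the particle number of the model.

```python
import numpy as np
from scipy.linalg import expm

# Toy 1-RDM from Eq. (42): two determinants, T = t*E10, E_{kl} -> |k><l|.
phi0 = np.array([1.0, 0.0])
E10 = np.array([[0.0, 0.0], [1.0, 0.0]])
t = 0.25
T = t * E10
psi = expm(T) @ phi0
S = (t / (1 + t**2)) * E10          # S amplitude solving Eq. (20) in this model

def xcc_average(A):
    """Eq. (42)-type average <e^{S+} e^{-T} A e^{T} e^{-S+}> over |Phi_0>."""
    return phi0 @ (expm(S.T) @ expm(-T) @ A @ expm(T) @ expm(-S.T)) @ phi0

basis = np.eye(2)
gamma = np.array([[xcc_average(np.outer(basis[k], basis[l]))
                   for l in range(2)] for k in range(2)])

x = np.array([[0.5, -0.1], [-0.1, 0.9]])
exact = (psi @ x @ psi) / (psi @ psi)
assert abs(np.sum(x * gamma) - exact) < 1e-12   # <X> = sum_kl x_kl gamma_kl
assert abs(np.trace(gamma) - 1.0) < 1e-12       # one particle in the toy model
```

The contraction check is the elementwise version of \(\langle\hat{X}\rangle=\sum_{\kappa\lambda}x_{\kappa\lambda}\gamma_{\kappa\lambda}\).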
For the XCCSD-F12 approximation the explicit form of this equation is \[\begin{split}\gamma_{\kappa\lambda}&=\langle E_{\kappa\lambda}\rangle\\ &+\langle S_{1}|E_{\kappa\lambda}\rangle+\langle[E_{\kappa\lambda},T_{1}]\rangle+\langle S_{2}|[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}]\rangle\\ &+\langle S_{1}|[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}]\rangle\\ &+\langle S_{1}|[E_{\kappa\lambda},T_{1}]\rangle+\langle S_{2}|[[E_{\kappa\lambda},T_{1}],T_{2}+T_{2}^{\prime}]\rangle\\ &+\frac{1}{2}\left\langle S_{1}^{2}\middle|[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}]\right\rangle\\ &+\frac{1}{2}\left\langle S_{1}S_{2}\middle|[[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}],T_{2}+T_{2}^{\prime}]\right\rangle\\ &+\frac{1}{2}\left\langle S_{1}\middle|[[E_{\kappa\lambda},T_{1}],T_{1}]\right\rangle\\ &+\frac{1}{2}\left\langle S_{3}\middle|[[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}],T_{2}+T_{2}^{\prime}]\right\rangle\\ &+\frac{1}{2}\left\langle S_{1}^{2}\middle|[[E_{\kappa\lambda},T_{1}],T_{2}+T_{2}^{\prime}]\right\rangle\\ &+\frac{1}{12}\left\langle S_{1}^{3}\middle|[[E_{\kappa\lambda},T_{2}+T_{2}^{\prime}],T_{2}+T_{2}^{\prime}]\right\rangle\end{split} \tag{43}\] This is the complete expression within the CCSD-F12 approximation. 
The term \(S_{3}\) appearing in one of the terms refers to \(S_{3}^{(4)}=\hat{\mathcal{P}}_{3}\left([[T_{1}^{\dagger},T_{2}+T_{2}^{\prime}],T_{2}+T_{2}^{\prime}]\right)\), which is of leading 4th MBPT order. We do not consider \(S_{3}^{(4)}\) in this work. The overall leading order of this term is 5. Because of the absence of the \(T_{3}\) amplitudes, some of the low-MBPT-order terms are not included in \(\gamma_{\kappa\lambda}\). Specifically, the term \(\langle S_{2}|[X,T_{3}]\rangle\) of leading 3rd order is absent.

\begin{table} \begin{tabular}{c l} \hline \(\hat{S}_{1}^{(2)}=\sum_{ai}(s_{i}^{a})^{(2)}\hat{E}_{ai}\) & \(=\sum_{ai}(t_{i}^{a})^{(2)}\hat{E}_{ai}\) \\ \(\hat{S}_{1}^{(3)}=\sum_{\alpha i}(s_{i}^{\alpha})^{(3)}\hat{E}_{\alpha i}\) & \(=-\sum_{abij}(t_{ji}^{ab})^{(1)}(t_{j}^{b})^{(2)}\hat{E}_{ai}+2\sum_{abij}(t_{ij}^{ab})^{(1)}(t_{j}^{b})^{(2)}\hat{E}_{ai}\) \\ & \(-\sum_{\alpha aijkl}(t_{il}^{jk}F_{jk}^{a\alpha})^{(1)}(t_{l}^{a})^{(2)}\hat{E}_{\alpha i}+2\sum_{\alpha aijkl}(t_{il}^{jk}F_{kj}^{a\alpha})^{(1)}(t_{l}^{a})^{(2)}\hat{E}_{\alpha i}\) \\ \(\hat{S}_{2}^{(1)}=\frac{1}{2}\sum_{\alpha\beta ij}(s_{ij}^{\alpha\beta})^{(1)}\hat{E}_{\alpha i}\hat{E}_{\beta j}\) & \(=\frac{1}{2}\sum_{abij}(t_{ij}^{ab})^{(1)}\hat{E}_{ai}\hat{E}_{bj}+\frac{1}{2}\sum_{\alpha\beta ijkl}(t_{ij}^{kl}F_{kl}^{\alpha\beta})^{(1)}\hat{E}_{\alpha i}\hat{E}_{\beta j}\) \\ \(\hat{S}_{2}^{(3)}=\frac{1}{2}\sum_{\alpha\beta ij}(s_{ij}^{\alpha\beta})^{(3)}\hat{E}_{\alpha i}\hat{E}_{\beta j}\) & \(=\frac{1}{2}\sum_{abij}(W1)_{aibj}\hat{E}_{ai}\hat{E}_{bj}+\frac{1}{2}\sum_{\alpha bij}(W2)_{\alpha ibj}\hat{E}_{\alpha i}\hat{E}_{bj}\) \\ & \(+\frac{1}{2}\sum_{\alpha\beta ij}(W3)_{\alpha\beta ij}\hat{E}_{\alpha i}\hat{E}_{\beta j}\) \\ \hline \end{tabular} \end{table} Table 2: Orbital expressions for the explicitly correlated \(\hat{S}\) operators. Expressions for \((W1)_{aibj}\), \((W2)_{\alpha ibj}\) and \((W3)_{\alpha\beta ij}\) can be found in the supplementary material. 
Therefore, overall, the XCCSD-F12 expression for the 1-RDM is correct through the 2nd MBPT order. The expression for the expectation value depends on the operator used and on the characteristics of the special intermediates \(\mathcal{Z}_{ijkl}=F^{ij}_{\gamma\alpha}x_{\alpha\beta}F^{\beta\gamma}_{kl}\) that arise in the calculation of the average value of an operator due to the presence of the \(F^{\alpha\beta}_{kl}\) integrals. In the subsequent sections, we derive the expectation value of a general operator \(\hat{X}\) in the complete basis, Section 6.1, without making any assumptions about the nature of the special intermediates. In Section 6.2 we assume that the CABS basis can be introduced prior to performing the multiplication of the special intermediates, and derive the corresponding expressions for the density matrix. ### Expression for the expectation value with complete indices The expression for the average value of an operator in the complete basis can be rewritten as \[\begin{split}\hat{X}&=\sum_{\kappa\lambda}x_{\kappa\lambda}\hat{E}_{\kappa\lambda}\\ &=\sum_{\alpha i}x_{\alpha i}\hat{E}_{\alpha i}+\sum_{\alpha i}x_{i\alpha}\hat{E}_{i\alpha}\\ &+\sum_{\alpha\beta}x_{\alpha\beta}\hat{E}_{\alpha\beta}+\sum_{ij}x_{ij}\hat{E}_{ij}.\end{split} \tag{44}\] From Eq. (43) we take only the terms that are quadratic in \(T\), and within them we write only the nonzero contributions. All of the terms are summarized in Table 3. In this expression we identify the special intermediates \(\mathcal{X}^{kl}_{ij}\) and \(\mathcal{Z}_{ijkl}\) defined in the preceding sections. ### Expression for the density matrix in the CABS basis For the operators that do not require special treatment of the integrals \(\mathcal{Z}_{ijkl}\), it is possible to define the density matrix \(\gamma_{\kappa\lambda}\). 
Because \(\kappa,\lambda\) are general indices, we distinguish nine separate blocks of the density matrix, \[\gamma_{ij},\gamma_{ai},\gamma_{ia},\gamma_{ab},\gamma_{iA},\gamma_{Ai},\gamma_{Aa},\gamma_{aA},\gamma_{AB}. \tag{58}\] The expression from Eq. (43) is finite; therefore it is theoretically possible to include all of the terms in calculations. In Table X in the supplementary material we present all of the contributions to the XCCSD-F12 1-RDM. For each contribution we give the leading MBPT order and the cost of the most expensive term. As a practical approximation we propose to take only the terms that are quadratic in \(T\). This implies that we include only \(S_{1}^{(2)}\), \(S_{1}^{(3)}\) and \(S_{2}^{(1)}\). All of the contributions to the 1-RDM approximated in this way are presented in Table 4. The following symmetry should be satisfied \[\gamma_{\kappa\lambda}^{\{m\}}=\gamma_{\lambda\kappa}^{\{m\}} \tag{68}\] where \(\{m\}\) is the pure MBPT order. As an example we show that \(\gamma_{Aa}^{\{2\}}=\gamma_{aA}^{\{2\}}\). From Table 4 we take all of the terms of \(\gamma_{Aa}^{\{2\}}\) and \(\gamma_{aA}^{\{2\}}\) that are of leading 2nd order in MBPT, and for the \(T\) amplitudes we take only their pure MBPT order according to Eq. (31). 
\[\begin{split}\gamma_{aA}^{\{2\}}&=-4\cdot\frac{1}{2}(F^{Ab}_{kl}t^{kl}_{ij})^{\{1\}}(t^{ab}_{ji})^{\{1\}}\\ &+8\cdot\frac{1}{2}(F^{Ab}_{kl}t^{kl}_{ij})^{\{1\}}(t^{ab}_{ij})^{\{1\}}\\ &-4\cdot\frac{1}{2}(F^{AB}_{kl}t^{kl}_{ij})^{\{1\}}(F^{Ba}_{mn}t^{mn}_{ij})^{\{1\}}\\ &+8\cdot\frac{1}{2}(F^{AB}_{kl}t^{kl}_{ij})^{\{1\}}(F^{Ba}_{mn}t^{mn}_{ji})^{\{1\}}\\ \gamma_{Aa}^{\{2\}}&=-2(F^{Ab}_{kl}t^{kl}_{ij})^{\{1\}}(t^{ab}_{ji})^{\{1\}}\\ &+4(F^{Ab}_{kl}t^{kl}_{ij})^{\{1\}}(t^{ab}_{ij})^{\{1\}}\\ &-4\cdot\frac{1}{2}(F^{AB}_{kl}t^{kl}_{ij})^{\{1\}}(F^{Ba}_{mn}t^{mn}_{ij})^{\{1\}}\\ &+8\cdot\frac{1}{2}(F^{AB}_{kl}t^{kl}_{ij})^{\{1\}}(F^{Ba}_{mn}t^{mn}_{ji})^{\{1\}}\end{split} \tag{70}\] ## 7 Cumulants in the XCC theory with F12 Cumulants originate from quantum field theory, where they are the analogs of the connected, size-extensive part of the Green's functions [42, 43]. In quantum chemistry they are formulated as the irreducible parts of the density matrices. The \(n\)-RDM (for \(n>1\)) can be decomposed into the \(n\)th-order cumulant, which is non-separable, and products of 1-RDMs and lower-order cumulants. Cumulants are size-extensive, in contrast to the density matrices, and can be consistently truncated, which is especially important for the 3-RDMs and higher. The cumulants in the coupled cluster framework were extensively studied by Korona [23, 44]. Recently this approach has gathered renewed interest, and the XCC cumulant was used in the computation of corrections to the correlation energy in the adiabatic-connection approach [45]. In this paper we present the 2-RDM cumulant in the XCC-F12 theory. 
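A limiting case worth noting as a consistency check (stated here under the standard closed-shell conventions, with all indices occupied): for a single determinant, \(\gamma_{pq}=2\delta_{pq}\) and \(\Gamma^{pr}_{qs}=4\delta_{pq}\delta_{rs}-2\delta_{ps}\delta_{rq}\), so the spin-free cumulant vanishes,

```latex
\Lambda^{pr}_{qs}
  = \Gamma^{pr}_{qs} - \gamma_{pq}\gamma_{rs} + \tfrac{1}{2}\gamma_{rq}\gamma_{ps}
  = \bigl(4\delta_{pq}\delta_{rs} - 2\delta_{ps}\delta_{rq}\bigr)
    - 4\delta_{pq}\delta_{rs} + \tfrac{1}{2}\,(2\delta_{rq})(2\delta_{ps})
  = 0 ,
```

consistent with the cumulant capturing only the connected (correlation) part of the 2-RDM.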
In second quantization in the spin-free formalism, the cumulant can be written as \[\Lambda_{qs}^{pr}=\Gamma_{qs}^{pr}-\gamma_{pq}\gamma_{rs}+\frac{1}{2}\gamma_{rq}\gamma_{ps} \tag{71}\] where \(\Gamma_{qs}^{pr}\) is the two-electron reduced density matrix and can be expressed using the singlet excitation operators \(E_{pq}\) as \[\Gamma_{qs}^{pr}=\left\langle 0\right|E_{pq}E_{rs}-\delta_{rq}E_{ps}\left|0\right\rangle. \tag{72}\] Therefore the cumulant in this formalism is \[\begin{split}\Lambda_{qs}^{pr}&=\left\langle 0\right|E_{pq}E_{rs}\left|0\right\rangle-\delta_{rq}\left\langle 0\right|E_{ps}\left|0\right\rangle\\ &-\left\langle 0\right|E_{pq}\left|0\right\rangle\left\langle 0\right|E_{rs}\left|0\right\rangle+\frac{1}{2}\left\langle 0\right|E_{rq}\left|0\right\rangle\left\langle 0\right|E_{ps}\left|0\right\rangle\end{split} \tag{73}\] Introducing the XCC parametrization we arrive at the following expression \[\begin{split}\Lambda^{\kappa\zeta}_{\lambda\tau}&=\left\langle e^{S^{\dagger}}e^{-T}E_{\kappa\lambda}E_{\zeta\tau}e^{T}e^{-S^{\dagger}}\right\rangle\\ &-\delta_{\lambda\zeta}\left\langle e^{S^{\dagger}}e^{-T}E_{\kappa\tau}e^{T}e^{-S^{\dagger}}\right\rangle\\ &-\left\langle e^{S^{\dagger}}e^{-T}E_{\kappa\lambda}e^{T}e^{-S^{\dagger}}\right\rangle\left\langle e^{S^{\dagger}}e^{-T}E_{\zeta\tau}e^{T}e^{-S^{\dagger}}\right\rangle\\ &+\frac{1}{2}\left\langle e^{S^{\dagger}}e^{-T}E_{\zeta\lambda}e^{T}e^{-S^{\dagger}}\right\rangle\left\langle e^{S^{\dagger}}e^{-T}E_{\kappa\tau}e^{T}e^{-S^{\dagger}}\right\rangle\end{split}\]

\begin{table} \begin{tabular}{|l l|} \hline \(\sum_{ij}x_{ij}\left\langle\hat{E}_{ij}\right\rangle\) & \(=2\sum_{i}x_{ii}\) \\ \(\sum_{ij}x_{ij}\left\langle S_{1}^{(2)}\middle|\left[\hat{E}_{ij},T_{1}\right]\right\rangle\) & \(=-2\sum_{aij}x_{ij}t_{i}^{a}t_{j}^{a}\) \\ \(\sum_{ij}x_{ij}\left\langle S_{2}^{(1)}\middle|\left[\hat{E}_{ij},T_{2}+T_{2}^{\prime}\right]\right\rangle\) & \(=2\sum_{abkij}x_{ij}t_{ik}^{ab}t_{jk}^{ba}-4\sum_{abkij}x_{ij}t_{ik}^{ab}t_{jk}^{ab}+\sum_{ijoklmn}x_{ij}t_{jl}^{mn}\mathcal{X}_{ko}^{mn}(2t_{il}^{ok}-4t_{il}^{ko})\) \\ \(\sum_{i\alpha}x_{i\alpha}\left\langle\left[\hat{E}_{i\alpha},T_{1}\right]\right\rangle\) & \(=2\sum_{ai}x_{ia}t_{i}^{a}\) \\ \(\sum_{i\alpha}x_{i\alpha}\left\langle S_{1}^{(2)}\middle|\left[\hat{E}_{i\alpha},T_{2}+T_{2}^{\prime}\right]\right\rangle\) & \(=-2\sum_{abij}x_{ia}t_{ji}^{ab}t_{j}^{b}+4\sum_{abij}x_{ia}t_{ij}^{ab}t_{j}^{b}+\sum_{a\alpha ijkl}x_{i\alpha}F_{jk}^{a\alpha}t_{l}^{a}(4t_{il}^{kj}-2t_{il}^{jk})\) \\ \(\sum_{\alpha i}x_{\alpha i}\left\langle S_{1}^{(2)}+S_{1}^{(3)}\middle|\hat{E}_{\alpha i}\right\rangle\) & \(=2\sum_{ai}x_{ai}t_{i}^{a}-2\sum_{abij}x_{ai}t_{ji}^{ab}t_{j}^{b}+4\sum_{abij}x_{ai}t_{ij}^{ab}t_{j}^{b}+\sum_{a\alpha ijkl}x_{\alpha i}F_{jk}^{a\alpha}t_{l}^{a}(4t_{il}^{kj}-2t_{il}^{jk})\) \\ \(\sum_{\alpha\beta}x_{\alpha\beta}\left\langle S_{2}^{(1)}\middle|\left[\hat{E}_{\alpha\beta},T_{2}+T_{2}^{\prime}\right]\right\rangle\) & \(=-2\sum_{abcij}x_{ab}t_{ij}^{ac}t_{ji}^{bc}+4\sum_{abcij}x_{ab}t_{ij}^{ac}t_{ij}^{bc}+4\sum_{abijkl}x_{ab}F_{ji}^{a\alpha}t_{lk}^{ab}(2t_{kl}^{ij}-t_{lk}^{ij})+\sum_{ijklmn}\mathcal{Z}_{kl}^{ij}(4t_{nm}^{ij}t_{nm}^{kl}-2t_{nm}^{ij}t_{mn}^{kl})\) \\ \(\sum_{\alpha\beta}x_{\alpha\beta}\left\langle S_{1}^{(2)}\middle|\left[\hat{E}_{\alpha\beta},T_{1}\right]\right\rangle\) & \(=2\sum_{abi}x_{ab}t_{i}^{a}t_{i}^{b}\) \\ \hline \end{tabular} \end{table} Table 3: XCCSD-F12 expression for the expectation value of an operator. Only terms up to quadratic in \(T\) are taken.

Table 4: Contributions to the XCCSD-F12 1-RDM, approximated to terms quadratic in \(T\).

Since the cumulant represents the connected part of the reduced density matrix, the aforementioned equation should also be connected. The last two terms are disconnected, but they cancel with all of the disconnected terms that arise from the evaluation of the first term. Therefore we can write \[\Lambda^{\kappa\zeta}_{\lambda\tau}=\left(\left\langle e^{S^{\dagger}}e^{-T}E_{\kappa\lambda}e^{T}e^{-S^{\dagger}}\,\mathcal{P}\!\left(e^{S^{\dagger}}e^{-T}E_{\zeta\tau}e^{T}e^{-S^{\dagger}}\right)\right\rangle\right)_{C} \tag{74}\] where the subscript \(C\) means taking only the connected terms, keeping in mind that the \(T\) and \(S\) operators are themselves connected. The following symmetries hold for the cumulant of the 2-RDM \[\Lambda^{\kappa\zeta}_{\lambda\tau}=\Lambda^{\lambda\tau}_{\kappa\zeta}=\Lambda^{\zeta\kappa}_{\tau\lambda} \tag{75}\] In Table 5 we present the commutator expressions for the XCCSD-F12 cumulant, with terms up to quadratic in \(T\). In Table 6 we present the orbital expressions for the XCCSD-F12 cumulant in the complete basis. The function \(\bar{\delta}_{a\alpha}\) gives 0 if \(\alpha\in(1-\hat{P})\) and 1 if \(\alpha\in\hat{V}\). ## 8 Computational details The derivation of the orbital-level expressions in this work is extremely error-prone. 
We automated this process with the Paldus code, which is designed to derive, simplify, and automatically implement expressions of the type \[\langle[V_{1},\mu_{n}]_{k_{1}}|[V_{2},V_{3}]_{k_{2}}|[V_{4},\nu_{m}]_{k_{3}}\rangle\,, \tag{94}\] where \(k_{1},k_{2},k_{3}\) denote \(k\)-tuply nested commutators. The operators \(V_{1}-V_{4}\) can be any excitation, deexcitation, or general operators represented by products of the \(E_{pq}\) operators. Each of the integrals is approximated within the requested level of theory and integrated using Wick's theorem [46], generalized to the contraction and ordering of \(E_{pq}\) strings. This process can be the limiting step for long \(E_{pq}\) strings, especially in the F12 case; therefore the integration is carried out in parallel. The result of the integration usually contains tens of thousands of terms that need to be compared efficiently. This is done by the standardization of each term to an unambiguous form according to index names and their permutations. Subsequently, each term is translated to a compiled-language representation and the simplification is carried out in this form, which drastically speeds up the process. The next step after the simplification is the identification of the special intermediates \(V\), \(X\), \(B\), \(P\), \(Z\). Finally, the result is translated back and a parallel, ready-to-attach Fortran module is produced. The implementation is optimized in the sense that Paldus automatically computes and selects the best intermediates for each term, considering the ratio of memory usage to computational time. ## 9 Summary In this work we have presented the expressions for the 1-RDM and the 2-RDM cumulant in the framework of the XCC-F12 theory. The reduced density matrices are quantities that are widely used in quantum chemistry. They offer an alternative to the wavefunction approach. 
Since density matrices, like wavefunctions, are not size-extensive and can be further separated, it is useful to work with their irreducible parts, the cumulants. Cumulants are not only connected (and thus size-extensive) but can also be systematically approximated, making them a desirable tool for demanding computations. In order to obtain chemical accuracy in computations that make use of the cumulants (e.g., properties), we proposed to express them in the framework of the expectation-value coupled cluster theory together with the explicitly correlated wavefunction. In this way we obtained expressions for the 1-RDM and the 2-RDM cumulant that are based on the coupled cluster theory, are connected, and can be systematically approximated. On top of that, by using the explicitly correlated wavefunction we are able to obtain expressions that generate more accurate results at the CCSD level without introducing the costly triples amplitudes. We have presented the ready-to-implement expressions for the F12 \(S\) amplitudes, the 1-RDM, and the 2-RDM cumulant. We have also described the technical details of the intermediates needed to lower the computational cost. ## 10 Acknowledgment This research was supported by the National Science Center (NCN) under Grant No. 2017/25/B/ST4/02698.
2310.02306
Collective Thomson scattering in magnetized electron and positron pair plasma and the application to induced Compton scattering
We consider collective Thomson scattering of an incident X-mode wave (with the electric vector perpendicular to the background magnetic field) in magnetized electron and positron pair plasma. The collective effects do not exactly cancel out in contrast to the non-magnetized case. Still, the cross-section is comparable to the non-collective one, with the same suppression by the square of the cyclotron frequency in a strong magnetic field. The comparable cross-section holds even though the net current is nearly zero from the drift motion of electrons and positrons. The plasma response does not also affect the cross-section so much. The spectrum of the scattered wave in finite temperature plasma peaks at cyclotron overtones. Based on these results, we also estimate induced Compton scattering in strongly magnetized pair plasma. Implications for pulsars and fast radio bursts are discussed.
Rei Nishiura, Kunihito Ioka
2023-10-03T18:00:00Z
http://arxiv.org/abs/2310.02306v2
# Collective Thomson scattering in magnetized electron and positron pair plasma ###### Abstract We consider collective Thomson scattering of an incident X-mode wave (with the electric vector perpendicular to the background magnetic field) in magnetized electron and positron pair plasma. The collective effects do not exactly cancel out in contrast to the non-magnetized case. Still, the cross-section is comparable to the non-collective one, with the same suppression by the square of the cyclotron frequency in a strong magnetic field. The comparable cross-section holds even though the net current is nearly zero from the drift motion of electrons and positrons. The plasma response does not also affect the cross-section so much. The spectrum of the scattered wave in finite temperature plasma peaks at cyclotron overtones. Based on these results, we also estimate induced Compton scattering in strongly magnetized pair plasma. Implications for pulsars and fast radio bursts are discussed. ## I Introduction Fast Radio Bursts (FRBs) are the brightest radio transients, first discovered in 2007 [1; 2; 3]; most FRBs originate outside our Galaxy, and their origin is not fully understood. Notably, the observation of FRB 200428 in 2020, coinciding with an X-ray burst from the Galactic magnetar SGR 1935+2154 [4; 5; 6; 7; 8; 9], marked a step towards understanding these phenomena. FRBs are also used as a tool: the traces of their interactions with distant intergalactic material during propagation carry information that can be extracted and applied to cosmology [10; 11; 12] (see the review by Bhandari and Flynn [13] for various studies). Despite advancements in observational and applied studies, the emission mechanism of FRBs remains unresolved. 
The emission region of FRBs has sparked a debate [14; 15; 16; 17] regarding whether they arise within the magnetosphere of a magnetar [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28] or through interactions between relativistic outflows from the magnetar and circumstellar matter located at a distance from the magnetosphere [29; 30; 31; 32; 33; 34]. It has been argued that coherent waves such as FRBs and pulsar radio emission may undergo induced Compton scattering off the electron and positron plasma (\(e^{\pm}\) plasma) in the magnetosphere, and may fail to escape the magnetosphere if the Lorentz factor of the scattered particles is small [35; 36; 37; 38]. To understand the observed FRBs, this paper focuses on three effects that can influence the scattering processes. The first effect is the suppression of Thomson scattering by a strong magnetic field. In the presence of a strong magnetic field, the motion of the scattering particles is constrained by the field, leading to a reduced Thomson scattering cross-section for X-mode electromagnetic waves [39; 40; 41; 42; 43]. Consequently, the rate of induced Compton scattering may also be suppressed [44]. The second effect is the plasma response to radiation. Gil _et al._[45] estimated the curvature radiation from charged particles moving along an infinitely strong curved magnetic field, considering the plasma response. They found that the radiation is significantly suppressed compared to vacuum curvature radiation. This implies that the plasma response should also be considered in Thomson scattering. Regarding the third effect, there is an intuitive argument that the electric field of the incident electromagnetic wave causes electrons and positrons to drift in the same direction, leading to a mutual cancellation of currents and significant suppression of scattering in a strong magnetic field [46]. It is crucial to treat these effects consistently in the scattering processes in the magnetar magnetosphere. 
These effects alter the scattering cross-section and hence the reaction rate of induced Compton scattering. Therefore, we aim to integrate these effects cohesively into the Thomson cross-section for magnetized \(e^{\pm}\) plasma. We expect collective Thomson scattering to provide a unified treatment of both magnetic-field effects and plasma effects. Collective Thomson scattering is a theory that considers particle correlations in plasma. It deals with scattering induced by plasma density fluctuations and considers the interactions among many charged particles in plasma [47; 48; 49]. Collective Thomson scattering has been widely studied in the field of plasma physics [50; 51] and applied to precise measurements of ion temperature in laboratory plasma experiments [52; 53; 54; 55]. However, most of the existing research deals with scattering in ion-electron plasma. Thomson scattering in \(e^{\pm}\) plasma has been studied without a background magnetic field [56]: Sincell and Krolik [56] showed that electrons and positrons completely cancel the collective effect. However, as far as we know, the collective scattering behavior in the presence of a background magnetic field has yet to be studied. In this study, we explore collective Thomson scattering in \(e^{\pm}\) plasma with a background magnetic field for the first time. In Section II, we review single-particle scattering, i.e., scattering of a free particle, in the presence of a background magnetic field. In Section III, we consider Thomson scattering in magnetized \(e^{\pm}\) plasma and the properties of the obtained scattering cross-section. Section IV discusses whether electrons and positrons cancel the Thomson scattering in a strong magnetic field. We also discuss a possible implication for observations of pulsars, taking the obtained spectra of the scattering cross-section into account. 
Furthermore, based on the analysis of the collective Thomson scattering, we estimate the effective optical depth of induced Compton scattering in a strong magnetic field. The discussion of Thomson scattering considering the plasma response is included in Appendix A because it does not affect the main conclusions of this study. Throughout this paper, the notation \(A=10^{n}A_{n}\) and the Centimeter-Gram-Second (CGS) system of units are consistently employed.

## II Thomson scattering by a free particle in a strong magnetic field

We calculate the Thomson cross-section for X-mode waves (linearly polarized perpendicular to the plane of the magnetic field and wave vector) in the presence of a strong magnetic field. We impose the following assumptions in the derivation.

* The scattering particle is a free electron or a free positron, assumed to be at rest at the origin before scattering.
* A uniform magnetic field \(\mathbf{B}_{0}=(B_{0},0,0)\) exists in the \(x\)-axis direction.
* The X-mode wave is incident perpendicular to the magnetic field with wave-number vector \(\mathbf{k}_{0}=(0,0,k_{0})\) and angular frequency \(\omega_{0}\).
* The magnetic field of the incident electromagnetic wave is assumed to be sufficiently small compared to the background magnetic field, that is, \(|\mathbf{B}_{\rm wave}|\ll|\mathbf{B}_{0}|\).
* The motion of a particle in the wave field is approximated as non-relativistic.

The electric field of the X-mode plane wave in pair plasma can be written as \[\mathbf{E}_{\rm X}^{\rm in}(t)=E_{0}e^{-i\omega_{0}t}\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right). \tag{1}\] The equation of motion for particles in the wave field is then \[m_{\rm e}\dot{\mathbf{v}}_{\pm}=\pm e\mathbf{E}_{\rm X}^{\rm in}\pm\frac{e}{c}\mathbf{v}_{\pm}\times\mathbf{B}_{0}, \tag{2}\] where \(e\) is the absolute value of the elementary charge (i.e., the positron charge) and \(c\) is the speed of light. 
From this equation of motion, the motion of the particle is represented by \[\mathbf{v}_{\pm}=\frac{eE_{0}}{m_{\rm e}\omega_{0}}\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\left(\begin{array}{c}0\\ \pm i\\ \frac{\omega_{\rm c}}{\omega_{0}}\end{array}\right)e^{-i\omega_{0}t}, \tag{3}\] where \[\omega_{\rm c}\equiv\frac{eB_{0}}{m_{\rm e}c} \tag{4}\] is the electron cyclotron frequency. A strong magnetic field means that the cyclotron frequency is sufficiently large compared to the angular frequency of the incident electromagnetic wave (\(\omega_{\rm c}\gg\omega_{0}\)). When the background magnetic field is strong, the particle motion is characterized by a dominant drift motion. The particles in the wave field have a figure-8 motion in the plane perpendicular to the background magnetic field. When the background field is strong, the drift velocity is \((\omega_{\rm c}/\omega_{0})\) times larger than the velocity in the direction of the incident electric field. The physical reason is that the particles, which would initially oscillate in the direction of the incident electric field, are immediately bent into the drift direction by the strong background magnetic field. The electromagnetic field produced by the oscillating particles is described by the Lienard-Wiechert potentials \[\begin{split}\mathbf{E}_{\rm rad}&\equiv\frac{q}{cR}\left[\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}})\right]_{\rm ret},\\ \mathbf{B}_{\rm rad}&\equiv\frac{q}{cR}\left[\mathbf{n}\times\left\{\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}})\right\}\right]_{\rm ret},\end{split} \tag{5}\] where \(\mathbf{\beta}\equiv\mathbf{v}/c\) and the retarded time is defined as follows \[t^{\prime}=t-\frac{|\mathbf{R}-\mathbf{r}\left(t^{\prime}\right)|}{c}. \tag{6}\] Let \(\mathbf{R}\) be the observer's position and \(\mathbf{n}\) be the unit vector from the charged particle to the observer at the retarded time. 
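As a sanity check, the drift solution (3) can be substituted back into the equation of motion (2) numerically. The following stdlib-only sketch uses illustrative dimensionless values (\(e=m_{\rm e}=c=E_{0}=1\), \(\omega_{0}=1\), \(\omega_{\rm c}=100\)); it is an independent verification, not part of the derivation:

```python
import cmath

# Illustrative dimensionless values (assumption: e = m_e = c = E0 = 1).
e = m_e = c = E0 = 1.0
w0 = 1.0     # incident angular frequency omega_0
wc = 100.0   # cyclotron frequency omega_c (strong-field regime, wc >> w0)

def v_pm(t, sign):
    """Drift solution (3): v = (e E0 / (m_e w0)) * w0^2/(w0^2 - wc^2) * (0, sign*i, wc/w0) e^{-i w0 t}."""
    amp = (e * E0 / (m_e * w0)) * w0**2 / (w0**2 - wc**2)
    ph = cmath.exp(-1j * w0 * t)
    return [0.0, sign * 1j * amp * ph, (wc / w0) * amp * ph]

def eom_residual(t, sign):
    """Residual of the equation of motion (2): m dv/dt - (±)e E - (±)(e/c) v x B0."""
    v = v_pm(t, sign)
    dv = [-1j * w0 * comp for comp in v]            # d/dt of the e^{-i w0 t} solution
    E = [0.0, E0 * cmath.exp(-1j * w0 * t), 0.0]    # X-mode field (1)
    # (e/c) v x B0 with B0 = (B0, 0, 0) and e B0/(m_e c) = wc:
    lorentz = [0.0, m_e * wc * v[2], -m_e * wc * v[1]]
    return [m_e * dv[i] - sign * e * E[i] - sign * lorentz[i] for i in range(3)]

assert max(abs(r) for r in eom_residual(0.3, +1)) < 1e-12   # positron
assert max(abs(r) for r in eom_residual(1.7, -1)) < 1e-12   # electron

v = v_pm(0.0, +1)
print(abs(v[2]) / abs(v[1]))   # drift-to-oscillation ratio, equal to wc/w0
```

The residual vanishes for both charge signs, and the ratio of the drift component \(v_{z}\) to the oscillating component \(v_{y}\) is exactly \(\omega_{\rm c}/\omega_{0}\), as stated above.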
If the observer is sufficiently far away from the radiation source, the retarded time can be approximated as \[t^{\prime}\simeq t-\frac{R}{c}+\frac{\mathbf{n}\cdot\mathbf{r}}{c}. \tag{7}\] The energy radiated by the oscillating particles per unit time can be calculated by the radiative Poynting flux through a sphere of sufficiently large radius. In the non-relativistic limit, this is expressed as \[P_{\rm NR}=\frac{e^{2}}{4\pi c}\int{\rm d}\Omega|\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}})|^{2}. \tag{8}\] If the angle between \(\mathbf{n}\) and \(\dot{\mathbf{\beta}}\) is \(\theta\), it can be written as \(|\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}})|^{2}=\dot{\beta}^{2}\sin^{2}\theta\). Note that when the plasma density is enormous (i.e., \(\omega_{\rm p}\gg\omega_{0}\)), the response of the plasma must be taken into account, and the Lienard-Wiechert potentials, which assume electromagnetic wave propagation in a vacuum, cannot be used. In Appendix A, we estimate the energy of X-mode electromagnetic waves scattered by particles in the limit of large plasma density and background magnetic field (i.e., the limit of \(\omega_{\rm c},\omega_{\rm p}\gg\omega_{0}\)). However, even considering the plasma response, the scattering cross-section of the X-mode wave is found to be within 50% of that in vacuum for \[\omega_{\rm c}>\omega_{\rm p}\gg\omega_{0}. \tag{9}\] Therefore, we adopt the Lienard-Wiechert potentials to evaluate the order of the scattering cross-section in this study. Substituting the motion of the oscillating particle into equation (8), we obtain the energy per unit time emitted from an electron or a positron in the X-mode waves \[\left\langle\frac{\mathrm{d}P_{\rm X}}{\mathrm{d}\Omega}\right\rangle=\frac{e^{4}E_{0}^{2}}{8\pi m_{\rm e}^{2}c^{3}}\left(\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\right)^{2}\left\{1+\left(\frac{\omega_{\rm c}}{\omega_{0}}\right)^{2}\right\}\sin^{2}\theta. 
\tag{10}\] The scattering cross-section of the X-mode waves is obtained by dividing the scattered energy per unit time by the energy flux of the incident electromagnetic wave \[\sigma_{\rm X}=\frac{8\pi}{cE_{0}^{2}}\left\langle P_{\rm X}\right\rangle=\frac{1}{2}\sigma_{\rm T}\left\{\left(\frac{\omega_{0}}{\omega_{0}+\omega_{\rm c}}\right)^{2}+\left(\frac{\omega_{0}}{\omega_{0}-\omega_{\rm c}}\right)^{2}\right\}, \tag{11}\] where \[\sigma_{\rm T}\equiv\frac{8\pi}{3}r_{\rm e}^{2}\equiv\frac{8\pi}{3}\left(\frac{e^{2}}{m_{\rm e}c^{2}}\right)^{2} \tag{12}\] is the Thomson cross-section. If the background magnetic field is sufficiently strong, the scattering of an X-mode wave is suppressed by a factor of \((\omega_{0}/\omega_{\rm c})^{2}\). The physical interpretation is that although the electric field of the X-mode wave tries to swing the charged particle, the charged particle firmly sticks to the background magnetic field and is hardly shaken by the waves. As a result, the radiation from the particle is suppressed.

## III Thomson scattering in electron-positron magnetized plasma

This section considers Thomson scattering of electromagnetic waves by \(e^{\pm}\) plasma. The following are the differences in the setup from the previous section.

* The scattering medium is \(e^{\pm}\) plasma.
* For simplicity, we assume that only longitudinal wave components are produced by density fluctuations in the \(e^{\pm}\) plasma (electrostatic approximation). It has been argued that without accounting for the full electromagnetic fluctuations, resonance peaks due to electromagnetic waves would not be visible in the scattering spectra [57; 51].

### Basic equations

This section formally derives the energy radiated per unit time by \(e^{\pm}\) plasma when it scatters electromagnetic waves. 
First, the equation of motion of an electron or a positron perturbed by an X-mode electromagnetic wave is given by \[m_{\rm e}\dot{\mathbf{v}}_{\pm}=\pm e\mathbf{E}_{i0}e^{i\left(\mathbf{k}_{0}\cdot\mathbf{r}\left(t^{\prime}\right)-\omega_{0}t^{\prime}\right)}\pm\frac{e}{c}\mathbf{v}_{\pm}\times\mathbf{B}_{0}. \tag{13}\] Solving this equation yields the velocity of the oscillating particles as follows \[\mathbf{v}_{\pm}=\frac{eE_{0}}{m_{\rm e}\omega_{0}}\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\left(\begin{array}{c}0\\ \pm i\\ \frac{\omega_{\rm c}}{\omega_{0}}\end{array}\right)e^{i\left(\mathbf{k}_{0}\cdot\mathbf{r}-\omega_{0}t\right)}. \tag{14}\] In considering plasma scattering, it has been argued that scattering from the uniform density component can be neglected [58]. As a simple physical interpretation, for a scattered wave emitted in a specific direction and wavelength, imagine a pair of thin uniform plasma plates, separated by half the wavelength, aligned perpendicular to the direction of wave travel. Since the phases of the scattered waves from the two plates differ by \(\pi\), the scattered waves cancel each other perfectly. Considering similar pairs over the entire scattering region, all scattered waves from the uniform density component can be neglected. In other words, all the scattering of electromagnetic waves in the plasma is caused by statistical density fluctuations. 
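The plate-pair argument can be made quantitative with a toy one-dimensional model: for particles on a perfectly uniform grid the scattered amplitudes \(\sum_{j}e^{ikx_{j}}\) cancel exactly, while for randomly placed particles the residual power is set by the density fluctuations, \(\langle|\sum_{j}e^{ikx_{j}}|^{2}\rangle\simeq N\). A minimal stdlib-only sketch (all parameter values are illustrative):

```python
import cmath
import math
import random

random.seed(0)
N, L = 2000, 1.0
xs_uniform = [L * j / N for j in range(N)]
xs_random = [random.uniform(0.0, L) for _ in range(N)]

def power(xs, m):
    """Scattered power per particle |sum_j e^{i k_m x_j}|^2 / N at mode k_m = 2*pi*m/L."""
    amp = sum(cmath.exp(2j * math.pi * m * x / L) for x in xs)
    return abs(amp) ** 2 / N

# Uniform density: pairwise phase cancellation is exact, nothing is scattered.
print(max(power(xs_uniform, m) for m in range(1, 21)))       # ~ 0

# Random positions: residual power per particle is of order unity,
# i.e. scattering comes entirely from statistical density fluctuations.
print(sum(power(xs_random, m) for m in range(1, 21)) / 20)   # ~ 1
```

The uniform grid scatters nothing at any probe mode, while the Poisson-like arrangement leaves a fluctuation signal proportional to the particle number, exactly as argued above.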
The scattered electric field by the electron or positron population can be evaluated by the Lienard-Wiechert potentials produced by the density fluctuations \[\mathbf{E}_{\pm}(\mathbf{R},t) =\pm\frac{e}{cR}\int_{V}\mathrm{d}^{3}\mathbf{r}\int\mathrm{d}^{3}\mathbf{v}\ \delta F_{\pm}\left(\mathbf{r},\mathbf{v},t^{\prime}\right) \tag{15}\] \[\times\left[\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}}_{\pm})\right]_{\rm ret}.\] The relationship between the first-order perturbation of the distribution function in the scattering region, \(\delta F_{\pm}\), and the density fluctuations \(\delta n_{\pm}(\mathbf{r},t)\) is \[\delta n_{\pm}(\mathbf{r},t)=\int\mathrm{d}^{3}\mathbf{v}\ \delta F_{\pm}(\mathbf{r},\mathbf{v},t). \tag{16}\] The total electric field scattered by the plasma is the sum of the contributions from the electron and positron populations \[\mathbf{E}_{\mathrm{tot}}(\mathbf{R},t) =\frac{e}{cR}\int_{V}\mathrm{d}^{3}\mathbf{r}\int\mathrm{d}^{3}\mathbf{v} \tag{17}\] \[\times\left\{\delta F_{+}\left(\mathbf{r},\mathbf{v},t^{\prime}\right)\left[\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}}_{+})\right]_{\mathrm{ret}}\right.\] \[\left.-\delta F_{-}\left(\mathbf{r},\mathbf{v},t^{\prime}\right)\left[\mathbf{n}\times(\mathbf{n}\times\dot{\mathbf{\beta}}_{-})\right]_{\mathrm{ret}}\right\}.\] The energy radiated per unit time and unit solid angle is obtained by time-averaging the radiative Poynting flux through a sufficiently large sphere \[\frac{\mathrm{d}P_{\mathrm{s}}}{\mathrm{d}\Omega}(\mathbf{R})=\frac{cR^{2}}{4\pi}\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}\mathrm{d}t\ \left|\mathbf{E}_{\mathrm{tot}}(\mathbf{R},t)\right|^{2}. \tag{18}\] Here the time component of the scattered electric field is Fourier-transformed into its frequency component \[\widetilde{\mathbf{E}_{\mathrm{tot}}}\left(\mathbf{R},\omega_{1}\right)=\int_{-\infty}^{+\infty}\mathrm{d}t\ \mathbf{E}_{\mathrm{tot}}(\mathbf{R},t)e^{-i\omega_{1}t}. 
\tag{19}\] Using Parseval's identity for the absolute square of the radiative electric field, the radiation power per solid angle can be expressed as \[\frac{\mathrm{d}P_{\mathrm{s}}}{\mathrm{d}\Omega}(\mathbf{R})=\frac{cR^{2}}{4\pi} \lim_{T\rightarrow\infty}\frac{1}{\pi T}\int_{0}^{\infty}\mathrm{d}\omega_{1 }\ \left|\widetilde{\mathbf{E}_{\mathrm{tot}}}\left(\omega_{1}\right)\right|^{2}. \tag{20}\] In the subsequent section, we will calculate the scattered electric field from \(e^{\pm}\) plasma in an X-mode wave specifically. ### Radiative electric field In this section, we calculate the electric field radiated from \(e^{\pm}\) plasma in an X-mode wave and show that it can be written as a combination of density fluctuations. The density fluctuation is evaluated in the next section. From equations (16), (17), and (19), the Fourier-transformed scattered electric field is obtained by adding up the electric fields created by the electron and positron density fluctuations at the retarded time over the scattering region as follows \[\widetilde{\mathbf{E}_{\mathrm{tot}}}\left(\omega_{1}\right) =\frac{e}{cR}\int_{-\infty}^{\infty}\mathrm{d}t\int_{V}\mathrm{d} ^{3}\mathbf{r}\ e^{-i\omega_{1}\left(t^{\prime}+\frac{R}{c}-\frac{\mathbf{n}\cdot\bm {r}}{c}\right)} \tag{21}\] \[\times\left\{\delta n_{+}\left(\mathbf{r},t\right)\left[\mathbf{n}\times (\mathbf{n}\times\dot{\mathbf{\beta}}_{+})\right]_{\mathrm{ret}}\right.\] \[\left.-\delta n_{-}\left(\mathbf{r},t\right)\left[\mathbf{n}\times(\mathbf{n }\times\dot{\mathbf{\beta}}_{-})\right]_{\mathrm{ret}}\right\},\] where \(V\) is the scattering region. The scattered wave's travel direction is expressed in spherical coordinates as \(\mathbf{n}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)\). The polar angle \(\theta\) is defined as the angle between the direction of the incident electromagnetic wave and the direction of the scattered wave. 
Substituting equation (14) into equation (21), we find \[\widetilde{\mathbf{E}_{\mathrm{tot}}}\left(\omega_{1}\right) =\frac{e^{2}E_{0}}{c^{2}Rm_{\mathrm{e}}}\frac{\omega_{0}^{2}}{ \omega_{0}^{2}-\omega_{\mathrm{c}}^{2}}\int_{-\infty}^{\infty}\mathrm{d}t\int _{V}\mathrm{d}^{3}\mathbf{r}\ e^{-i\omega_{1}\left(t^{\prime}+\frac{R}{c}-\frac{ \mathbf{n}\cdot\mathbf{r}}{c}\right)} \tag{22}\] \[\times\left[\left(\begin{array}{c}\sin^{2}\theta\sin\varphi \cos\varphi\\ -(1-\sin^{2}\theta\sin^{2}\varphi)\\ \sin\theta\cos\theta\sin\varphi\end{array}\right)\cos\left(\mathbf{k}_{0}\cdot\mathbf{r }-\omega_{0}t\right)\right.\] \[\times\left\{\delta n_{+}\left(\mathbf{r},t\right)+\delta n_{-} \left(\mathbf{r},t\right)\right\}-\frac{\omega_{\mathrm{c}}}{\omega_{0}}\left( \begin{array}{c}\sin\theta\cos\theta\cos\varphi\\ \sin\theta\cos\theta\sin\varphi\\ -\sin^{2}\theta\end{array}\right)\] \[\left.\times\sin\left(\mathbf{k}_{0}\cdot\mathbf{r}-\omega_{0}t\right) \left\{\delta n_{+}\left(\mathbf{r},t\right)-\delta n_{-}\left(\mathbf{r},t\right) \right\}\right].\] The density fluctuations of electrons and positrons are Fourier-transformed with respect to space and time as \[\delta n_{\pm}\left(\mathbf{r},t\right)=\frac{1}{(2\pi)^{4}}\int\mathrm{d}^{3}\bm {k}\ \mathrm{d}\omega\ e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\widetilde{\delta n_{\pm}}( \mathbf{k},\omega). 
\tag{23}\] Then the Fourier-transformed scattered electric field is expressed by \[\left|\widetilde{\mathbf{E}_{\mathrm{tot}}}\left(\omega_{1}\right) \right|^{2}=\left(\frac{e^{2}E_{0}}{2c^{2}Rm_{\mathrm{e}}}\frac{\omega_{0}^{2}} {\omega_{0}^{2}-\omega_{\mathrm{c}}^{2}}\right)^{2} \tag{24}\] \[\times\left\{\left|\widetilde{\delta n_{+}}+\widetilde{\delta n_{- }}\right|^{2}\left(1-\sin^{2}\theta\sin^{2}\varphi\right)\right.\] \[\left.+\left(\frac{\omega_{\mathrm{c}}}{\omega_{0}}\right)^{2} \left|\widetilde{\delta n_{+}}-\widetilde{\delta n_{-}}\right|^{2}\sin^{2} \theta\right\}.\] Here, the argument for the wave vector and frequency of the density fluctuations is described by the difference between the scattered waves (\(\mathbf{k}_{1}=\omega_{1}\frac{\mathbf{n}}{c},\omega_{1}\)) and incident waves as follows \[\widetilde{\delta n_{\pm}}=\widetilde{\delta n_{\pm}}(\mathbf{k}_{1}-\mathbf{k}_{0}, \omega_{1}-\omega_{0}). \tag{25}\] In the next section, we evaluate the combinations of density fluctuations that characterize the magnitude of the scattered electric field. ### Spectral density functions Spectral density functions for density fluctuations are defined as a physical quantity that characterizes the intensity of plasma scattering. 
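The two quadratic combinations entering equation (24), \(|\widetilde{\delta n_{+}}+\widetilde{\delta n_{-}}|^{2}\) (total density) and \(|\widetilde{\delta n_{+}}-\widetilde{\delta n_{-}}|^{2}\) (charge density), behave very differently depending on how the two species are correlated. A toy numerical illustration with real-valued Gaussian stand-ins for the fluctuation amplitudes (hypothetical samples, not derived from the plasma model):

```python
import random

random.seed(2)
M = 5000
dn_plus = [random.gauss(0.0, 1.0) for _ in range(M)]
dn_minus = [random.gauss(0.0, 1.0) for _ in range(M)]

def channels(dn_p, dn_m):
    """Mean-square total-density and charge-density fluctuations,
    i.e. <|dn+ + dn-|^2> and <|dn+ - dn-|^2> as combined in equation (24)."""
    total = sum((p + m) ** 2 for p, m in zip(dn_p, dn_m)) / M
    charge = sum((p - m) ** 2 for p, m in zip(dn_p, dn_m)) / M
    return total, charge

# Comoving species (dn+ = dn-): the charge channel, which carries the
# (wc/w0)^2 factor in (24), vanishes identically.
print(channels(dn_plus, dn_plus))    # (about 4, exactly 0.0)

# Independent species: both channels contribute equally (about 2 each).
print(channels(dn_plus, dn_minus))
```

When electrons and positrons fluctuate together the charge-density channel vanishes, so whether the magnetically enhanced term survives hinges on the degree of correlation between the species, which the spectral density functions below quantify.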
The following four types of spectral density functions characterize the scattering of \(e^{\pm}\) plasma: \[S_{\pm\pm}(\mathbf{k},\omega) \equiv\lim_{V,T\rightarrow\infty}\frac{\left\langle\left| \widetilde{\delta n_{\pm}}(\mathbf{k},\omega)\right|^{2}\right\rangle_{\mathrm{ ensemble}}}{VTn_{\mathrm{e}}}, \tag{26}\] \[S_{\pm\mp}(\mathbf{k},\omega) \equiv\lim_{V,T\rightarrow\infty}\frac{\left\langle\widetilde{ \delta n_{\pm}}(\mathbf{k},\omega)\widetilde{\delta n_{\mp}}^{*}(\mathbf{k},\omega) \right\rangle_{\mathrm{ensemble}}}{VTn_{\mathrm{e}}}.\] Here \(\left\langle\cdots\right\rangle_{\mathrm{ensemble}}\) denotes taking the statistical mean according to the plasma distribution function, and \[n_{\mathrm{e}}\equiv n_{0+}=n_{0-} \tag{27}\] represents the electron or positron uniform density. Using equations (20), (24), and (26), the energy radiated by the plasma per unit time, unit solid angle, and unit frequency can be expressed by \[\left\langle\frac{\mathrm{d}P_{\mathrm{s}}}{\mathrm{d}\Omega\mathrm{d}\omega_{1}} \right\rangle_{\mathrm{ensemble}}=\frac{Vn_{\mathrm{e}}}{\pi}\frac{cR^{2}}{4 \pi}\left(\frac{e^{2}E_{0}}{2c^{2}Rm_{\mathrm{e}}}\frac{\omega_{0}^{2}}{\omega_ {0}^{2}-\omega_{\mathrm{c}}^{2}}\right)^{2} \tag{28}\] \[\times\left[(S_{++}+S_{+-}+S_{-+}+S_{--})\left(1-\sin^{2}\theta \sin^{2}\varphi\right)\right.\] \[\left.+\left(\frac{\omega_{\mathrm{c}}}{\omega_{0}}\right)^{2}(S _{++}+S_{--}-S_{+-}-S_{-+})\sin^{2}\theta\right].\] The total scattering cross-section for \(2Vn_{\mathrm{e}}\) particles in the scattering region \(V\) is determined by dividing the scattering energy by the energy flux of the incident electromagnetic wave: \[\frac{\mathrm{d}\sigma^{(2Vn_{\mathrm{e}})}}{\mathrm{d}\Omega\mathrm{d}\omega _{1}}=\left\langle\frac{\mathrm{d}P_{\mathrm{s}}}{\mathrm{d}\Omega\mathrm{d} \omega_{1}}\right\rangle_{\mathrm{ensemble}}\cdot\frac{8\pi}{cE_{0}^{2}}. 
\tag{29}\] The differential cross-section for scattering into \(\mathrm{d}\Omega\) and \(\mathrm{d}\omega_{1}\) by the \(2Vn_{\mathrm{e}}\) particles is then \[\frac{\mathrm{d}\sigma^{(2Vn_{\mathrm{e}})}}{\mathrm{d}\Omega\mathrm{d}\omega_{1}}=Vn_{\mathrm{e}}\frac{e^{4}}{2\pi m_{\mathrm{e}}^{2}c^{4}}\left(\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\mathrm{c}}^{2}}\right)^{2} \tag{30}\] \[\times\left[(S_{++}+S_{+-}+S_{-+}+S_{--})\left(1-\sin^{2}\theta\sin^{2}\varphi\right)\right.\] \[\left.+\left(\frac{\omega_{\mathrm{c}}}{\omega_{0}}\right)^{2}(S_{++}+S_{--}-S_{+-}-S_{-+})\sin^{2}\theta\right].\] The argument of the spectral density function is \[S=S(\mathbf{k}_{1}-\mathbf{k}_{0},\omega_{1}-\omega_{0}). \tag{31}\] Equation (30) is a general expression describing Thomson scattering in \(e^{\pm}\) plasma. The spectral density function is evaluated by taking the statistical mean of the plasma distribution function that gives the initial conditions of the position and velocity of each particle before scattering, as shown below. Given the appropriate initial plasma conditions, the scattering cross-section can be obtained, considering the correlation between particles. We derive the density fluctuations of \(e^{\pm}\) plasma in the presence of a background magnetic field. The density fluctuations for ion-electron plasma in a magnetic field were derived by Fejer [47], Dougherty and Farley [48], and Salpeter [59]. The density fluctuations for the case where the constituent particles of the plasma are electrons and positrons can be obtained by replacing the ion mass with the electron mass. In the following, the derivation of the plasma density fluctuation is briefly described; the detailed derivation is given in Appendix C. First, the Fourier transform of the density fluctuations in time must be replaced by the Laplace transform to incorporate the initial conditions of the particles before scattering into the equations. 
Assuming that \(t=0\) is the time when the incident electromagnetic wave first enters the scattering region, the Fourier-Laplace transform of the density fluctuation can be described by \[\widetilde{\delta n_{\pm}}(\mathbf{k},\omega) =\int_{0}^{\infty}\mathrm{d}t\ e^{-i(\omega-i\varepsilon)t}\int\mathrm{d}^{3}\mathbf{r}\ \delta n_{\pm}(\mathbf{r},t)e^{i\mathbf{k}\cdot\mathbf{r}} \tag{32}\] \[=\int\mathrm{d}^{3}\mathbf{v}\ \widetilde{\delta F}_{\pm}(\mathbf{k},\mathbf{v},\omega).\] Here \(\varepsilon\) is a positive infinitesimal quantity, a regularization factor that removes contributions from the infinite future. From equation (32), the density fluctuations of electrons and positrons are represented by the first-order perturbation of the plasma's distribution function. This first-order perturbation can be obtained by using the Vlasov equation for the distribution function and Gauss's law, with the non-perturbed components taken as the zeroth-order distribution functions \(F_{0\pm}\), particle velocities \(\mathbf{v}_{\pm}\), and the background magnetic field \(\mathbf{B}_{0}\). 
The perturbed components are expressed by the first-order distribution functions \(\delta F_{\pm}\) and the fluctuating electric field \(\mathbf{E}\) generated by the plasma as follows \[\frac{\partial F_{0\pm}}{\partial t}+\mathbf{v}_{\pm}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{r}_{\pm}}\pm\frac{e}{m_{\mathrm{e}}c}\left(\mathbf{v}_{\pm}\times\mathbf{B}_{0}\right)\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}_{\pm}}=0, \tag{33}\] \[\frac{\partial\delta F_{\pm}}{\partial t}+\mathbf{v}_{\pm}\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{r}_{\pm}}\pm\frac{e}{m_{\mathrm{e}}c}\left(\mathbf{v}_{\pm}\times\mathbf{B}_{0}\right)\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{v}_{\pm}}\pm\frac{e}{m_{\mathrm{e}}}\mathbf{E}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}_{\pm}}=0,\] \[\nabla\cdot\mathbf{E}=4\pi\rho=4\pi e\int\mathrm{d}^{3}\mathbf{v}\ \left(\delta F_{+}-\delta F_{-}\right).\] Let \(f_{\pm}(\mathbf{v})\) be a one-particle distribution function; we can write \(F_{0\pm}\equiv n_{\mathrm{e}}f_{\pm}(\mathbf{v})\). Before scattering, each plasma particle is in cyclotron motion in the background magnetic field. The initial velocity, position and phase of a particle are given by \[\mathbf{v}_{\pm}(t) =\left(\begin{array}{c}v_{\parallel}\\ v_{\perp}\cos\varphi_{\pm}(t)\\ v_{\perp}\sin\varphi_{\pm}(t)\end{array}\right), \tag{34}\] \[\mathbf{r}_{\pm}(t) =\mathbf{r}_{\pm}(0)+\left(\begin{array}{c}v_{\parallel}t\\ \pm r_{\mathrm{L}}\sin\varphi_{\pm}(t)\\ \mp r_{\mathrm{L}}\cos\varphi_{\pm}(t)\end{array}\right),\] \[\varphi_{\pm}(t) \equiv\pm\omega_{\mathrm{c}}t+\phi_{0},\] where \(v_{\parallel}\) and \(v_{\perp}\) are the velocities in the directions parallel and perpendicular to the background magnetic field, \(\phi_{0}\) is the initial phase of the gyration, and \[r_{\mathrm{L}}\equiv\frac{v_{\perp}}{\omega_{\mathrm{c}}} \tag{35}\] is the so-called Larmor radius. 
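The unperturbed orbit (34) can be checked numerically: differentiating the position must return the velocity for both charge signs. A stdlib-only sketch with arbitrary illustrative values (the guiding-center offset \(\mathbf{r}_{\pm}(0)\) is set to zero here):

```python
import math

# Arbitrary illustrative values; r(0) is taken to be the origin.
wc, v_par, v_perp, phi0 = 3.0, 0.4, 2.0, 0.7
rL = v_perp / wc   # Larmor radius, equation (35)

def orbit(t, sign):
    """Position and velocity of a positron (sign=+1) or electron (sign=-1), eq. (34)."""
    phi = sign * wc * t + phi0
    v = (v_par, v_perp * math.cos(phi), v_perp * math.sin(phi))
    r = (v_par * t, sign * rL * math.sin(phi), -sign * rL * math.cos(phi))
    return r, v

# dr/dt must reproduce v; check with a central finite difference.
h, t0 = 1e-6, 0.9
for sign in (+1, -1):
    (rp, _), (rm, _) = orbit(t0 + h, sign), orbit(t0 - h, sign)
    _, v = orbit(t0, sign)
    drdt = tuple((a - b) / (2 * h) for a, b in zip(rp, rm))
    assert max(abs(a - b) for a, b in zip(drdt, v)) < 1e-6
print("orbit (34) is consistent: dr/dt = v for both charge signs")
```

The check confirms that the two species gyrate in opposite senses (\(\varphi_{\pm}=\pm\omega_{\rm c}t+\phi_{0}\)) while sharing the same Larmor radius for equal \(v_{\perp}\).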
\(\mathbf{r}_{\pm}(0)\) represents the particle position just before scattering, and each particle follows a canonical distribution in the electrostatic potential created by the plasma. From equation (33), we can evaluate the density fluctuations for \(e^{\pm}\) plasma satisfying the initial condition (34). The density fluctuation for \(e^{\pm}\) plasma in a background magnetic field can be written as follows, referring to the calculations of Salpeter [59], who derived the density fluctuation for ion-electron plasma; the detailed derivation is given in Appendix C: \[\widetilde{\delta n_{\pm}}(\mathbf{k},\omega)=-i\left[\left(1-\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\right)\sum_{j=1}^{N_{\pm}}e^{i\mathbf{k}\cdot\mathbf{r}_{\pm j}(0)}\right. \tag{36}\] \[\times\sum_{l,m=-\infty}^{+\infty}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}e^{i(l-m)\phi_{0j}}+\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\] \[\times\sum_{h=1}^{N_{\mp}}e^{i\mathbf{k}\cdot\mathbf{r}_{\mp h}(0)}\sum_{l,m=-\infty}^{+\infty}\frac{J_{l}\left(\mp k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\mp k_{\perp}r_{\mathrm{L}}\right)}{\omega-i\varepsilon-k_{x}v_{\parallel}\pm l\omega_{\mathrm{c}}}e^{i(l-m)\phi_{0h}}\right],\] where \(J_{l}(z)\) is a Bessel function, and \[k_{\perp}\equiv\sqrt{k_{y}^{2}+k_{z}^{2}}. 
\tag{37}\] Here, \(\varepsilon_{\mathrm{L}}\) and \(H_{\pm}\) are the longitudinal dielectric function and the positron/electron electric susceptibility, respectively, and can be described by \[\begin{split}&\varepsilon_{\mathrm{L}}(\mathbf{k},\omega)\equiv \frac{\mathbf{k}\cdot\mathbf{\varepsilon}(\mathbf{k},\omega)\cdot\mathbf{k}}{k^{2}},\\ & H_{\pm}(\mathbf{k},\omega)\equiv\frac{4\pi i}{\omega}\frac{\mathbf{k} \cdot\mathbf{\sigma}_{\pm}(\mathbf{k},\omega)\cdot\mathbf{k}}{k^{2}},\end{split} \tag{38}\] where \(\mathbf{\varepsilon}\) is the dielectric tensor of a magnetized plasma and \(\mathbf{\sigma}_{\pm}\) is the positron/electron electrical conductivity tensor, which are related to each other by \(\mathbf{\varepsilon}=\mathbf{I}+\frac{4\pi i}{\omega}(\mathbf{\sigma}_{+}+\mathbf{\sigma}_{-})\). The density fluctuation equation (36) is divided into three terms, each with a physical interpretation. Specifically, we focus on the electron density fluctuations \(\widetilde{\delta n_{-}}(\mathbf{k},\omega)\): the first term without \(H_{\pm}/\varepsilon_{\mathrm{L}}\) and summed over the electron indices is called the non-collective term, which means that each electron is in cyclotron motion in the background magnetic field; the second term with \(H_{\pm}/\varepsilon_{\mathrm{L}}\) and summed over the electron indices is the effect of each electron on the rest of the electron population; the third term with \(H_{\pm}/\varepsilon_{\mathrm{L}}\) and summed over the positron indices is the effect of each positron on the electron population. The second and third terms are called the collective term, which is the effect of the particles that make up the plasma being distributed and correlated. 
Depending on whether the time variable of the density fluctuation is Fourier-transformed or Laplace-transformed, the expression of the spectral density function differs as \[\begin{split} S(\mathbf{k},\omega)&\equiv\lim_{\begin{subarray}{c}\varepsilon\to 0\\ V\rightarrow\infty\end{subarray}}\frac{2\varepsilon}{V}\frac{\left\langle\left|\widetilde{\delta n}(\mathbf{k},\omega)\right|^{2}\right\rangle_{\text{ensemble}}^{\text{Laplace}}}{n_{\text{e}}}\\ &=\lim_{\begin{subarray}{c}T\rightarrow\infty\\ V\rightarrow\infty\end{subarray}}\frac{1}{TV}\frac{\left\langle\left|\widetilde{\delta n}(\mathbf{k},\omega)\right|^{2}\right\rangle_{\text{ensemble}}^{\text{Fourier}}}{n_{\text{e}}}.\end{split} \tag{39}\] The two expressions in equation (39) are equivalent according to the ergodic hypothesis that taking a long-time average is equivalent to taking a multi-particle statistical average. Using the expression for density fluctuations, the four spectral density functions can be evaluated. 
The equation below is one example, substituting the density fluctuations (36) into the definition of \(S_{--}(\mathbf{k},\omega)\), expressed as \[\begin{split}& S_{--}(\mathbf{k},\omega)=\lim_{\begin{subarray}{c}\varepsilon\to 0\\ V\rightarrow\infty\end{subarray}}\frac{2\varepsilon}{Vn_{\text{e}}}\left\langle\left\{\left(1-\frac{H_{-}}{\varepsilon_{\mathrm{L}}}\right)\right.\right.\\ &\times\sum_{j=1}^{N_{-}}e^{i\mathbf{k}\cdot\mathbf{r}_{-j}(0)}\sum_{l,m}\frac{J_{l}\left(-k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(-k_{\perp}r_{\mathrm{L}}\right)}{\omega-k_{x}v_{\parallel}+l\omega_{\mathrm{c}}-i\varepsilon}e^{i(l-m)\phi_{0j}}\\ &\left.\left.+\frac{H_{-}}{\varepsilon_{\mathrm{L}}}\sum_{h=1}^{N_{+}}e^{i\mathbf{k}\cdot\mathbf{r}_{+h}(0)}\sum_{l,m}\frac{J_{l}\left(k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(k_{\perp}r_{\mathrm{L}}\right)}{\omega-k_{x}v_{\parallel}-l\omega_{\mathrm{c}}-i\varepsilon}e^{i(l-m)\phi_{0h}}\right\}\\ &\left\{\left(1-\frac{H_{-}^{*}}{\varepsilon_{\mathrm{L}}^{*}}\right)\sum_{s=1}^{N_{-}}e^{-i\mathbf{k}\cdot\mathbf{r}_{-s}(0)}\sum_{l^{\prime},m^{\prime}}\frac{J_{l^{\prime}}\left(-k_{\perp}r_{\mathrm{L}}\right)J_{m^{\prime}}\left(-k_{\perp}r_{\mathrm{L}}\right)}{\omega-k_{x}v_{\parallel}+l^{\prime}\omega_{\mathrm{c}}+i\varepsilon}\right.\\ &\left.\left.\times e^{-i(l^{\prime}-m^{\prime})\phi_{0s}}+\frac{H_{-}^{*}}{\varepsilon_{\mathrm{L}}^{*}}\sum_{g=1}^{N_{+}}e^{-i\mathbf{k}\cdot\mathbf{r}_{+g}(0)}\right.\\ &\left.\left.\times\ \sum_{l^{\prime},m^{\prime}}\frac{J_{l^{\prime}}\left(k_{\perp}r_{\mathrm{L}}\right)J_{m^{\prime}}\left(k_{\perp}r_{\mathrm{L}}\right)}{\omega-k_{x}v_{\parallel}-l^{\prime}\omega_{\mathrm{c}}+i\varepsilon}e^{-i(l^{\prime}-m^{\prime})\phi_{0g}}\right\}\right\rangle.\end{split} \tag{40}\] The spectral density function is a product of terms describing the motion of each of the \(N_{-}\) electrons and \(N_{+}\) positrons in the scattering region. 
If we take a statistical average over the population of particles, only the products involving identical particles survive; that is, only the terms with indices \(j=s\) and \(h=g\) remain. The physical interpretation is that the phase factors in equation (40), such as \(e^{i\mathbf{k}\cdot\mathbf{r}_{-j}(0)}\), cancel out completely only between identical particles, while the terms involving different particles do not cancel exactly but converge to zero when the statistical average is taken (see, e.g., [49; 51]). This is because the position of each particle before scattering is randomly distributed according to the canonical distribution. Furthermore, in the infinite sums of Bessel functions over \(l\), \(l^{\prime}\), \(m\), and \(m^{\prime}\), only the terms with \(l=l^{\prime}\) and \(m=m^{\prime}\) remain; all other terms vanish under the statistical average. Using the infinite sum formula for Bessel functions \(\sum_{m=-\infty}^{+\infty}J_{m}^{2}(z)=1\), the four spectral density functions are expressed as \[\begin{split} S_{\pm\pm}(\mathbf{k},\omega)=&\lim\limits_{\begin{subarray}{c}\varepsilon\to 0\\ V\to\infty\end{subarray}}2\varepsilon\\ &\times\left[\left|1-\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\right|^{2}\sum\limits_{l=-\infty}^{+\infty}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}^{2}\left(\pm k_{\perp}r_{\mathrm{L}}\right)f_{\pm}(\mathbf{v})}{\left(\omega-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\right.\\ &\left.+\left|\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\right|^{2}\sum\limits_{l=-\infty}^{+\infty}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}^{2}\left(\mp k_{\perp}r_{\mathrm{L}}\right)f_{\mp}(\mathbf{v})}{\left(\omega-k_{x}v_{\parallel}\pm l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\right],\end{split} \tag{41}\] \[\begin{split} S_{\pm\mp}(\mathbf{k},\omega)=&\lim\limits_{\begin{subarray}{c}\varepsilon\to 0\\ V\to\infty\end{subarray}}2\varepsilon\\ &\times\left[\left(1-\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\right)\frac{H_{\mp}^{*}}{\varepsilon_{\mathrm{L}}^{*}}\sum\limits_{l=-\infty}^{+\infty}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}^{2}\left(\pm k_{\perp}r_{\mathrm{L}}\right)f_{\pm}(\mathbf{v})}{\left(\omega-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\right.\\ &\left.+\left(1-\frac{H_{\mp}^{*}}{\varepsilon_{\mathrm{L}}^{*}}\right)\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\sum\limits_{l=-\infty}^{+\infty}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}^{2}\left(\mp k_{\perp}r_{\mathrm{L}}\right)f_{\mp}(\mathbf{v})}{\left(\omega-k_{x}v_{\parallel}\pm l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\right].\end{split} \tag{42}\] The spectral density function and Thomson cross-section for \(e^{\pm}\) plasma in a background magnetic field can be obtained from these expressions. In the next section, we assume a Maxwellian distribution for the plasma distribution function and discuss the behavior of the scattering.

### Maxwellian distributions

In this section, we investigate the behavior of Thomson scattering for a Maxwellian distribution of the \(e^{\pm}\) plasma. First, we evaluate the electric susceptibility and the longitudinal dielectric function of magnetized electron and positron plasma that appear in the spectral density functions. According to plasma kinetic theory (e.g., [51]), the expression for the electric susceptibility \(H_{\pm}\), (38), is given by \[\begin{split} H_{\pm}(\mathbf{k},\omega)=&\int\mathrm{d}^{3}\mathbf{v}\frac{4\pi e^{2}n_{\mathrm{e}}}{m_{\mathrm{e}}k^{2}}\sum\limits_{l=-\infty}^{+\infty}\left(k_{x}\frac{\partial f_{0\pm}}{\partial v_{x}}\pm\frac{l\omega_{\mathrm{c}}}{v_{\perp}}\frac{\partial f_{0\pm}}{\partial v_{\perp}}\right)\\ &\times\frac{J_{l}^{2}\left(\pm\frac{k_{\perp}v_{\perp}}{\omega_{\mathrm{c}}}\right)}{\omega-i\varepsilon-k_{x}v_{x}\mp l\omega_{\mathrm{c}}}. 
\end{split} \tag{43}\] We assume the following Maxwellian distribution as the \(e^{\pm}\) plasma distribution function \[\begin{split} f_{\pm}(\mathbf{v})&=\left(\frac{m_{\mathrm{e}}}{2\pi k_{\mathrm{B}}T_{\mathrm{e}}}\right)^{\frac{3}{2}}\exp\left(-\frac{m_{\mathrm{e}}v^{2}}{2k_{\mathrm{B}}T_{\mathrm{e}}}\right)\\ &=\frac{1}{\left(\pi v_{\mathrm{th}}^{2}\right)^{\frac{3}{2}}}\exp\left(-\frac{v_{x}^{2}+v_{\perp}^{2}}{v_{\mathrm{th}}^{2}}\right),\end{split} \tag{44}\] where the thermal velocity is denoted by \[v_{\mathrm{th}}\equiv\left(\frac{2k_{\mathrm{B}}T_{\mathrm{e}}}{m_{\mathrm{e}}}\right)^{\frac{1}{2}} \tag{45}\] and the thermal velocities parallel and perpendicular to the magnetic field are assumed to be equal. Substituting the Maxwellian distribution into equation (43) and performing the velocity integration, the electric susceptibility can be expressed in terms of the modified Bessel function \(I_{l}(x)\) and the plasma dispersion function \(Z(\xi)\). Using the special function formulae \[\int_{0}^{\infty}J_{l}^{2}(at)\exp\left(-b^{2}t^{2}\right)t\ \mathrm{d}t=\frac{1}{2b^{2}}\exp\left(-\frac{a^{2}}{2b^{2}}\right)I_{l}\left(\frac{a^{2}}{2b^{2}}\right), \tag{46}\] \[Z(\xi)\equiv\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{1}{z-\xi}e^{-z^{2}}\ \mathrm{d}z, \tag{47}\] the velocity integral of the electric susceptibility can be written as \[\begin{split} H_{\pm}(\mathbf{k},\omega)=&\ \frac{\omega_{\mathrm{p}}^{2}}{k^{2}v_{\mathrm{th}}^{2}}\left\{1+\frac{\omega}{k_{x}v_{\mathrm{th}}}\sum_{l}I_{l}\left[\frac{1}{2}\left(\frac{k_{\perp}v_{\mathrm{th}}}{\omega_{\mathrm{c}}}\right)^{2}\right]\right.\\ &\times\exp\left[-\frac{1}{2}\left(\frac{k_{\perp}v_{\mathrm{th}}}{\omega_{\mathrm{c}}}\right)^{2}\right]Z\left(\frac{\omega\mp l\omega_{\mathrm{c}}}{k_{x}v_{\mathrm{th}}}-i\varepsilon\right)\right\},\end{split} \tag{48}\] where \[\omega_{\mathrm{p}}\equiv\sqrt{\frac{4\pi\cdot 
2n_{\mathrm{e}}e^{2}}{m_{\mathrm{e}}}} \tag{49}\] is defined as the \(e^{\pm}\) plasma frequency. From the symmetry \(I_{l}(x)=I_{-l}(x)\) of the modified Bessel function, we see that the electric susceptibilities of electrons and positrons are equal for the Maxwellian distribution, \[H_{+}(\mathbf{k},\omega)=H_{-}(\mathbf{k},\omega)\equiv H(\mathbf{k},\omega). \tag{50}\] Next, we evaluate the four spectral density functions (41) and (42) under the assumption of the Maxwellian distribution. Using the formula (46), the integral appearing in the spectral density functions is calculated as \[\begin{split}&\lim\limits_{\varepsilon\to 0}2\varepsilon\sum\limits_{l=-\infty}^{+\infty}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}^{2}\left(\pm k_{\perp}r_{\mathrm{L}}\right)f_{\pm}(\mathbf{v})}{\left(\omega-k_{x}v_{x}\mp l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\\ =&\lim\limits_{\varepsilon\to 0}2\varepsilon\sum\limits_{l}\int_{0}^{2\pi}\frac{\mathrm{d}\varphi}{\left(\pi v_{\mathrm{th}}^{2}\right)^{\frac{3}{2}}}\int_{0}^{\infty}v_{\perp}\mathrm{d}v_{\perp}J_{l}^{2}\left(k_{\perp}r_{\mathrm{L}}\right)\exp\left(-\frac{v_{\perp}^{2}}{v_{\mathrm{th}}^{2}}\right)\\ &\times\int_{-\infty}^{+\infty}\mathrm{d}v_{x}\frac{\exp\left(-v_{x}^{2}/v_{\mathrm{th}}^{2}\right)}{\left(\omega-k_{x}v_{x}\mp l\omega_{\mathrm{c}}\right)^{2}+\varepsilon^{2}}\\ =&\ 2\sqrt{\pi}\sum\limits_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\mathrm{th}}k_{\perp}}{\omega_{\mathrm{c}}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\mathrm{th}}k_{\perp}}{\omega_{\mathrm{c}}}\right)^{2}\right]\\ &\times\frac{\exp\left[-\left(\frac{\omega\mp l\omega_{\mathrm{c}}}{k_{x}v_{\mathrm{th}}}\right)^{2}\right]}{k_{x}v_{\mathrm{th}}}.\end{split} \tag{51}\] From the symmetry of the modified Bessel function \(I_{l}(x)=I_{-l}(x)\) and equation (50), we obtain the spectral density functions as \[\begin{split} S_{++}&=S_{--}=2\sqrt{\pi}\left(\left|1-\frac{H}{\varepsilon_{\rm 
L}}\right|^{2}+\left|\frac{H}{\varepsilon_{\rm L}}\right|^{2}\right)\\ &\times\sum_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right]\\ &\times\frac{\exp\left[-\left(\frac{\omega-l\omega_{\rm c}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}},\end{split} \tag{52}\] \[\begin{split} S_{+-}&=S_{-+}=2\sqrt{\pi}\left\{\left(1-\frac{H}{\varepsilon_{\rm L}}\right)\frac{H^{*}}{\varepsilon_{\rm L}^{*}}+\left(1-\frac{H^{*}}{\varepsilon_{\rm L}^{*}}\right)\frac{H}{\varepsilon_{\rm L}}\right\}\\ &\times\sum_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right]\\ &\times\frac{\exp\left[-\left(\frac{\omega-l\omega_{\rm c}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}}.\end{split} \tag{53}\] Hence, the linear combinations of the spectral density functions appearing in the differential scattering cross-section (30) can be obtained as follows \[\begin{split}& S_{++}+S_{--}+S_{-+}+S_{+-}\\ =& 4\sqrt{\pi}\sum_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right]\\ &\times\frac{\exp\left[-\left(\frac{\omega-l\omega_{\rm c}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}},\end{split} \tag{54}\] \[\begin{split}& S_{++}+S_{--}-S_{-+}-S_{+-}\\ &=4\sqrt{\pi}\left\{1-4\operatorname{Re}\left(\frac{H}{\varepsilon_{\rm L}}\right)+4\left|\frac{H}{\varepsilon_{\rm L}}\right|^{2}\right\}\\ &\times\sum_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right]\\ 
&\times\frac{\exp\left[-\left(\frac{\omega-l\omega_{\rm c}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}}.\end{split} \tag{55}\] Concerning equation (54), the linear combination of the spectral density functions due to the oscillation of particles in the direction of the electric field of the incident electromagnetic wave is independent of the electric susceptibility \(H\), so this part of the scattering is strictly non-collective. On the other hand, in equation (55), the term due to the drift motion depends on the electric susceptibility \(H\), so this part of the scattering has a collective effect. In Figure 1, we plot the spectra of the differential scattering cross-section (30) at two different temperatures, \(T_{\rm e}=153\) keV and \(T_{\rm e}=5\) keV. The other parameters we use are shown in Table 1. The differential cross-section is non-dimensionalized by the Thomson cross-section \(\sigma_{\rm T}\) and the angular frequency of the incident electromagnetic wave \(\omega_{0}\) as \[\begin{split}&\frac{4\pi\omega_{0}}{\sigma_{\rm T}}\frac{{\rm d}\sigma^{(1)}}{{\rm d}\Omega{\rm d}\omega_{1}}=\frac{3\omega_{0}}{8\pi}\left(\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\right)^{2}\\ &\times\left[(S_{++}+S_{+-}+S_{-+}+S_{--})\left(1-\sin^{2}\theta\sin^{2}\varphi\right)\right.\\ &\left.+\left(\frac{\omega_{\rm c}}{\omega_{0}}\right)^{2}(S_{++}+S_{--}-S_{+-}-S_{-+})\sin^{2}\theta\right]\\ &\equiv\hat{\sigma}_{\rm Electric}+\hat{\sigma}_{\rm Drift},\end{split} \tag{56}\] where the contribution due to the oscillation of particles in the direction of the electric field of the incident electromagnetic wave is defined as \(\hat{\sigma}_{\rm Electric}\) and the contribution due to the drift motion is defined as \(\hat{\sigma}_{\rm Drift}\). We consider the case where the scattering wave vector lies in the plane of the incident electromagnetic wave and the magnetic field (\(\varphi=0\)). 
That is, the angle between the background magnetic field and the direction of the scattered wave is defined as \(\theta_{\rm B}\equiv\pi/2-\theta\). Figure 1 shows the differential cross-sections at two plasma temperatures for the cases where the scattered wave is nearly perpendicular to the background magnetic field (\(\theta_{\rm B}=84^{\circ}\)) and nearly parallel to it (\(\theta_{\rm B}=1^{\circ}\))1. Footnote 1: The reason for not choosing the scattered wave to be exactly perpendicular or parallel is that this would result in the differential cross-sections having delta-function-like peaks. \begin{table} \begin{tabular}{l c c} \hline Parameter & Value 1 & Value 2 \\ \hline \(\omega_{0}/2\pi\) [GHz] & 1 & \\ \(\omega_{\rm c}/2\pi\) [GHz] & 2 & \\ \(n_{\rm e}\) [cm\({}^{-3}\)] & \(6\times 10^{5}\) & \\ \(T_{\rm e}\) [keV] & 153 & 5 \\ \hline \end{tabular} \end{table} Table 1: Parameters used for the differential cross-sections plotted at two different temperatures. First, in \(\hat{\sigma}_{\rm Electric}\), when the direction of the scattered wave is nearly perpendicular to the background magnetic field (see the left side of Figure 1), the first significant peak appears at the angular frequency \(\omega_{0}\) of the incident electromagnetic wave, and the subsequent peaks at frequencies \(\omega_{0}+n\omega_{\rm c}\) (\(n\) a natural number) decrease in height as \(n\) grows. Moreover, as the temperature decreases, the width and height of the peaks on the high-frequency side become smaller. As will be shown later, the Thomson scattering spectrum in the cold plasma limit has a delta-function peak localized at the frequency \(\omega_{0}\) of the incident electromagnetic wave. This corresponds to the case where single-particle scattering is considered. 
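The cyclotron-harmonic comb of equation (54) obeys a simple normalization that can serve as a numerical sanity check. In the sketch below the values of \(k_{x}v_{\rm th}\), \(\omega_{\rm c}\), and \(\lambda\equiv\frac{1}{2}(k_{\perp}v_{\rm th}/\omega_{\rm c})^{2}\) are arbitrary assumed parameters (not those of Table 1); the frequency integral of the sum equals \(4\pi\) regardless, since \(\sum_{l}e^{-\lambda}I_{l}(\lambda)=1\) and each Gaussian integrates to \(\sqrt{\pi}\,k_{x}v_{\rm th}\).

```python
import math

import numpy as np

def bessel_i(l, x, nodes=4001):
    """Modified Bessel function I_l(x) via its integral representation
    I_l(x) = (1/pi) * int_0^pi exp(x cos t) cos(l t) dt (trapezoid rule)."""
    theta = np.linspace(0.0, np.pi, nodes)
    vals = np.exp(x * np.cos(theta)) * np.cos(l * theta)
    dtheta = theta[1] - theta[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dtheta / np.pi

def spectral_sum(omega, kx_vth, lam, omega_c, lmax=40):
    """Right-hand side of Eq. (54) for a Maxwellian; parameters are assumed."""
    s = np.zeros_like(omega)
    for l in range(-lmax, lmax + 1):
        weight = math.exp(-lam) * bessel_i(abs(l), lam)  # I_{-l} = I_l
        s += weight * np.exp(-(((omega - l * omega_c) / kx_vth) ** 2))
    return 4.0 * np.sqrt(np.pi) * s / kx_vth

omega = np.linspace(-80.0, 80.0, 160001)
S = spectral_sum(omega, kx_vth=1.5, lam=2.0, omega_c=3.0)

# The harmonic comb integrates to 4*pi independently of the plasma parameters.
integral = S.sum() * (omega[1] - omega[0])
assert abs(integral - 4.0 * np.pi) < 1e-3
```

Narrowing `kx_vth` sharpens the harmonics at \(l\omega_{\rm c}\) without changing the integral, which is the mechanism behind the temperature dependence of the peak widths seen in Figure 1.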
The peaks separated by the cyclotron frequency arise for finite-temperature plasma because the plasma supports density fluctuations at integer multiples of the cyclotron frequency, which are excited by the incident electromagnetic wave. As the thermal motion of the particles becomes smaller, the peaks on the high-frequency side are suppressed, and the \(\omega_{0}\) peak becomes dominant. Next, in \(\hat{\sigma}_{\rm Electric}\), when the scattered wave propagates nearly parallel to the background magnetic field (see the right side of Figure 1), the peaks separated by the cyclotron frequency become less distinguishable. As the temperature decreases, the intensity on the high-frequency side diminishes. Finally, for \(\hat{\sigma}_{\rm Drift}\), the overall structure is similar to that of \(\hat{\sigma}_{\rm Electric}\), but the shape is more complicated owing to its dependence on the electric susceptibility. The spectrum shows a spiky shape near the zeros of the longitudinal dielectric function (\(\varepsilon_{\rm L}=0\)), i.e., near the eigenmodes of electron and positron plasma in a magnetic field. The dependence of the scattering spectrum on the electric susceptibility, i.e., the collective effect, is suppressed at higher frequencies. This is because the electric susceptibility has the wavelength dependence \[H(\mathbf{k},\omega)\propto\frac{\omega_{\rm p}^{2}}{k^{2}v_{\rm th}^{2}}\sim\left(\frac{\lambda}{\lambda_{\rm De}}\right)^{2}, \tag{57}\] from equation (48), and becomes smaller at shorter wavelengths, i.e., higher frequencies. Here \[\lambda_{\rm De}\equiv\left(\frac{k_{\rm B}T_{\rm e}}{8\pi e^{2}n_{\rm e}}\right)^{\frac{1}{2}}=\frac{v_{\rm th}}{\sqrt{2}\omega_{\rm p}} \tag{58}\] is the Debye length.

Figure 1: Thomson scattering spectra from electron and positron plasma in a magnetic field. The solid red line represents the contribution from particles oscillating in the direction of the electric field of the incident electromagnetic wave, and the blue dashed line represents the contribution from the drift motion of the particles. The green dot-dashed line represents the spectrum in the cold plasma limit (see section III.5 for details). The analysis is conducted using the parameters listed in Table 1. (a) \(k_{\rm B}T_{\rm e}=153\) keV, scattered wave almost perpendicular to the magnetic field. (b) \(k_{\rm B}T_{\rm e}=153\) keV, scattered wave almost parallel to the magnetic field. (c) \(k_{\rm B}T_{\rm e}=5\) keV, scattered wave almost perpendicular to the magnetic field. (d) \(k_{\rm B}T_{\rm e}=5\) keV, scattered wave almost parallel to the magnetic field.

### Cold plasma limit

In this section, we estimate the spectral density function and calculate the scattering cross-section per particle in the limit where the thermal motion of the plasma particles is negligible and the background magnetic field is large. First, we evaluate the part of the spectral density function that depends on the electric susceptibility \(H\). The electric susceptibility (48) is Taylor-expanded in the small parameters \(k_{\perp}v_{\rm th}/\omega_{\rm c}\) and \(k_{x}v_{\rm th}/(\omega\mp l\omega_{\rm c})\), assuming that the thermal velocity is sufficiently small. The infinite sum of the modified Bessel functions converges quickly enough for \(k_{\perp}v_{\rm th}/\omega_{\rm c}\ll 1\). Then the leading orders of the electric susceptibility \(H\) and the longitudinal dielectric function \(\varepsilon_{\rm L}=1+2H\) involve only \(l=-1,0,1\) and are expressed as \[\begin{split} H(\mathbf{k},\omega)=&-\frac{1}{2}\frac{k_{x}^{2}}{k^{2}}\left(\frac{\omega_{\rm p}}{\omega}\right)^{2}+\frac{1}{2}\frac{k_{\perp}^{2}}{k^{2}}\frac{\omega_{\rm p}^{2}}{\omega_{\rm c}^{2}}\left(1-\frac{\omega^{2}}{\omega^{2}-\omega_{\rm c}^{2}}\right)\\ &+\mathcal{O}\left(\frac{kv_{\rm th}}{\omega}\right). 
\end{split} \tag{59}\] Then, the following Taylor expansion can be made to leading order in the cyclotron frequency \[\frac{H_{\pm}}{\varepsilon_{\rm L}}\bigg{|}_{\omega_{1}\to\omega_{0}}=1+\mathcal{O}\left(\left(\frac{\omega_{1}-\omega_{0}}{\omega_{\rm c}}\right)^{2}\right). \tag{60}\] Next, we evaluate the linear combinations (54) and (55) of the spectral density functions appearing in the differential cross-section (30). If the angular frequency of the scattered wave is sufficiently small compared to the cyclotron frequency, i.e., if the magnetic field is sufficiently strong, only the \(l=0\) term needs to be considered. Furthermore, in the cold plasma limit, the approximation \(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\ll 1\) holds. Given the above approximations, the infinite sum part of the spectral density function is evaluated as \[\begin{split}&\sum_{l=-\infty}^{+\infty}\exp\left\{-\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right\}I_{l}\left[\frac{1}{2}\left(\frac{v_{\rm th}k_{\perp}}{\omega_{\rm c}}\right)^{2}\right]\\ &\times\frac{\exp\left[-\left(\frac{\omega-l\omega_{\rm c}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}}\sim\frac{\exp\left[-\left(\frac{\omega}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}}.\end{split} \tag{61}\] Finally, the sums of the spectral density functions are represented by the following delta functions in the cold plasma limit \[\begin{split}&\left(\left.S_{++}+S_{--}+S_{+-}+S_{-+}\right)\right|_{k_{x},k_{y},k_{z}-k_{0},\omega-\omega_{0}}\\ \sim& 4\sqrt{\pi}\frac{\exp\left[-\left(\frac{\omega-\omega_{0}}{k_{x}v_{\rm th}}\right)^{2}\right]}{k_{x}v_{\rm th}}\xrightarrow{v_{\rm th}\to 0}4\pi\delta(\omega-\omega_{0}),\end{split} \tag{62}\] \[\begin{split}&\left(\left.S_{++}+S_{--}-S_{+-}-S_{-+}\right)\right|_{k_{x},k_{y},k_{z}-k_{0},\omega-\omega_{0}}\\ &\xrightarrow{v_{\rm th}\to 0}4\pi\delta(\omega-\omega_{0}). 
\end{split} \tag{63}\] The physical meaning of the spectral density function in the cold plasma limit is that the thermal motion of the plasma is so small that the spectrum of the scattered wave is localized at the frequency of the incident electromagnetic wave. By substituting equations (62) and (63) into equation (30), we obtain the cold plasma limit of the differential cross-section per particle, \[\begin{split}&\frac{\mathrm{d}\sigma_{\rm cold}^{(1)}}{\mathrm{d}\Omega\mathrm{d}\omega_{1}}=r_{\rm e}^{2}\left[\left(\frac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\right)^{2}\left(1-\sin^{2}\theta\sin^{2}\varphi\right)\right.\\ &\left.+\left(\frac{\omega_{0}\omega_{\rm c}}{\omega_{0}^{2}-\omega_{\rm c}^{2}}\right)^{2}\sin^{2}\theta\right]\delta(\omega_{1}-\omega_{0}).\end{split} \tag{64}\] Then, the total cross-section per particle can be obtained by integrating over the frequency and solid angle of the scattered wave \[\sigma_{\rm cold}^{(1)}=\frac{1}{2}\sigma_{\rm T}\left[\left(\frac{\omega_{0}}{\omega_{0}+\omega_{\rm c}}\right)^{2}+\left(\frac{\omega_{0}}{\omega_{0}-\omega_{\rm c}}\right)^{2}\right]. \tag{65}\] This coincides with the result obtained for single-particle scattering, equation (11). ## IV Discussion ### Difference from the case without background magnetic fields In the case without a magnetic field, Thomson scattering in \(e^{\pm}\) plasma has been studied by Sincell and Krolik [56]. Their study concluded that the collective effects completely cancel out. To understand this, one must consider the physical interpretation of density fluctuations. When combining the density fluctuations of electrons and positrons, focusing on a single electron reveals that the electron cloud moves away from the electron while the positron cloud approaches it2. Collective Thomson scattering represents radiation caused by the Lienard-Wiechert potential generated by density fluctuations. 
The electric field produced by the Lienard-Wiechert potential is given by equation (5). Without a magnetic field, the charge \(q\) and the acceleration of the radiating particle \(\mathbf{\dot{\beta}}\) both depend on the charge sign, but their product, and hence \(\mathbf{E}_{\rm rad}\) itself, does not. As the electron and positron clouds move in opposite directions, their radiative currents cancel out. Thus, the radiation from the collective effect is entirely canceled3. Footnote 3: The cancellation of the collective effect is guaranteed even when the electric field produced by the longitudinal eigenmode is incorporated into the equation of motion of the scattered particles (13), and even when one considers the nonlinear ponderomotive force arising from the coupling of the incident electric field and the eigenmode. This is because the ponderomotive force acts similarly on both electrons and positrons, leading to a cancellation within the pair plasma. As a result, it does not generate an electric field in the plasma and thus is believed not to contribute to the radiation. On the other hand, when a magnetic field is present, the collective effects in the scattering within \(e^{\pm}\) plasma do not entirely cancel out. From equation (54), it can be seen that the density fluctuations due to the motion of the scattering particles in the direction of the incident electric field are entirely canceled by the collective effects, similar to the case without a background magnetic field. However, according to equation (55), the density fluctuations associated with the drift motion retain some collective effects. This is because the acceleration of the radiating particle \(\mathbf{\dot{\beta}}\) has both a component dependent on the charge sign in the direction of the incident electric field and a component independent of the charge sign in the drift direction (see equations (3)). Thus, the radiation originating from the drift does not cancel out. 
In any case, as shown in the previous section, in the strongly magnetized cold plasma limit, the scattering cross-section per particle is equal to that of single-particle scattering (see equations (11) and (65)), i.e., the collective effect can be neglected. The reason why the collective effect is negligible can be interpreted by focusing on the behavior of the density fluctuations. In the case of a large magnetic field, equation (57), which shows the dependence of the electric susceptibility on the Debye length and the plasma fluctuation wavelength, implies that in the limit of small Debye length (the zero-temperature limit), or in the limit where the wavelength of the plasma fluctuation is large, \(H_{\pm}/\varepsilon_{\rm L}\to 1\). For example, focusing on the electron density fluctuation, equation (36), in these limits: when considering the motion of a single electron, the other electrons respond by shielding that motion, i.e., the non-collective term cancels part of the collective terms. Eventually, the electron density fluctuations are dominated by the collective term describing the response of the positron cloud to the motion of a single electron, and the net scattering effect turns out to be exactly the same as for a single particle. ### Why is the scattering not canceled out by the drift motion? When considering the scattering of electrons and positrons in a background magnetic field, one might think that the particles drift in the same direction as the incident electromagnetic wave vector, leading to a cancellation of currents, so that the scattering is almost negligible (see Appendix B). However, when taking into account the scattering from density fluctuations in \(e^{\pm}\) plasma (see Section III), it becomes evident that the cross-section per particle remains unchanged compared to the case of single-particle scattering. Physically, the latter description is more accurate. 
The difference in scattering intensities between the two pictures (Appendix B vs. Section III) arises from the distinct initial conditions of particles just before scattering at \(t=0\). For instance, we focus on the differential cross-section attributed to the drift motion of particles in cold plasma. In the first scenario (Appendix B), it is assumed that an electron and a positron are at rest and separated by a distance \(d\) prior to scattering. The differential cross-section is estimated from equation (B1) as \[\begin{split}\frac{\mathrm{d}\sigma_{\rm drift}^{(2)}}{\mathrm{d} \Omega}&\propto\left|\widetilde{\mathbf{E}_{\rm rad}}\right|^{2}\\ &\propto(e^{i\mathbf{k}\cdot\mathbf{r}_{+}}-e^{i\mathbf{k}\cdot\mathbf{r}_{-}})(e ^{-i\mathbf{k}\cdot\mathbf{r}_{+}}-e^{-i\mathbf{k}\cdot\mathbf{r}_{-}})\\ &\propto 2\left[1-\cos(k_{x}d)\right]\frac{\mathrm{d}\sigma_{\rm drift }^{(1)}}{\mathrm{d}\Omega}.\end{split} \tag{66}\] The term involving different particles, i.e., the cross-term \(e^{i\mathbf{k}\cdot(\mathbf{r}_{+}-\mathbf{r}_{-})}+\mathrm{c.c.}\), takes a finite value and significantly alters the scattering intensity. 
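The contrast between the two estimates can be sketched numerically (a toy model with assumed wavenumber, separation, and particle count, not values from the text): for two particles at a fixed separation the interference factor \(2[1-\cos(k_{x}d)]\) of equation (66) survives, whereas for randomly distributed particles the ensemble-averaged cross terms vanish, so \(\langle|\sum_{j}e^{i\mathbf{k}\cdot\mathbf{r}_{j}}|^{2}\rangle\) scales as \(N\) rather than the coherent \(N^{2}\).

```python
import cmath
import math

import numpy as np

# --- Fixed pair: the cross term survives, cf. Eq. (66) ---
def two_particle_factor(kx, d):
    """|e^{i kx x_+} - e^{i kx x_-}|^2 for a fixed separation x_+ - x_- = d."""
    return abs(cmath.exp(1j * kx * d) - 1.0) ** 2

kx, d = 2.0, 0.7  # assumed illustrative values
assert abs(two_particle_factor(kx, d) - 2.0 * (1.0 - math.cos(kx * d))) < 1e-12

# --- Random ensemble: cross terms average to zero, cf. Eq. (67) ---
rng = np.random.default_rng(0)
N, trials, k = 200, 4000, 7.0

vals = np.empty(trials)
for t in range(trials):
    x = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial positions
    vals[t] = abs(np.exp(1j * k * x).sum()) ** 2

# The N diagonal (j = s) terms give N; the N(N-1) cross terms average out,
# so the ensemble mean scales as N rather than N^2.
assert abs(vals.mean() / N - 1.0) < 0.1
```

The fixed-pair factor also vanishes whenever \(k_{x}d\) is a multiple of \(2\pi\), which is the artificial cancellation that disappears once random initial conditions are averaged over.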
On the other hand, when considering scattering from density fluctuations (Section III), the squared absolute value of the radiative electric field is represented as in equation (24), and the differential cross-section is estimated as \[\begin{split}&\frac{\mathrm{d}\sigma_{\rm drift}^{(N_{+}+N_{-})}}{\mathrm{d}\Omega}\propto\left\langle\left|\widetilde{\delta n_{+}}-\widetilde{\delta n_{-}}\right|^{2}\right\rangle_{\rm ensemble}\\ &\propto\left\langle\left\{\sum_{j=1}^{N_{-}}e^{i\mathbf{k}\cdot\mathbf{r}_{j}(t=0)}(\cdots)-\sum_{h=1}^{N_{+}}e^{i\mathbf{k}\cdot\mathbf{r}_{h}(t=0)}(\cdots)\right\}\right.\\ &\left.\times\left\{\sum_{s=1}^{N_{-}}e^{-i\mathbf{k}\cdot\mathbf{r}_{s}(t=0)}(\cdots)-\sum_{g=1}^{N_{+}}e^{-i\mathbf{k}\cdot\mathbf{r}_{g}(t=0)}(\cdots)\right\}\right\rangle_{\rm ensemble}\\ &\propto\left\langle\sum_{j=s}^{N_{-}}(\cdots)+\sum_{h=g}^{N_{+}}(\cdots)\right\rangle_{\rm ensemble}\\ &-\left\langle\sum_{j\neq s}(\cdots)+\sum_{j,g}(\cdots)+\sum_{h,s}(\cdots)+\sum_{h\neq g}(\cdots)\right\rangle_{\rm ensemble}\\ &\propto\left(N_{+}+N_{-}\right)\frac{\mathrm{d}\sigma_{\rm drift}^{(1)}}{\mathrm{d}\Omega}+0.\end{split} \tag{67}\] Here, the cross terms in the differential cross-section (the second row from the bottom) become zero when averaging over position and velocity, providing the correct depiction. The mistake in the first scenario is the failure to consider the statistical nature of the plasma particles, which are distributed randomly according to a particular distribution prior to scattering. ### Observational implications of scattering spectra In their observations of the Crab pulsar, Hankins and Eilek [60] showed that the interpulse exhibits several spectral bands. They also showed that the spacing between adjacent peak frequency bands of the interpulse increases in proportion to frequency, i.e., \(\Delta\nu\propto\nu\). The emission mechanism responsible for such a spectrum is not fully understood. 
Collective Thomson scattering from plasma may explain the spectral bands of the pulsar. As seen from Figure 1, when electromagnetic waves are scattered in magnetized thermal plasma, the scattered spectrum has peaks separated by the cyclotron frequency. Also, from equation (65), if the magnetic field is large enough, the total scattering cross-section scales as \(\sigma\propto(\nu/\nu_{\rm c})^{2}\). Hence, higher-frequency electromagnetic waves are scattered more in the larger magnetic field region, resulting in larger peak separations at higher frequencies, which qualitatively agrees with their observation. ### Optical depth for fast radio bursts to induced Compton scattering Based on the discussion of Thomson scattering in magnetized plasma and the collective effect, we will evaluate the effective optical depth for induced Compton scattering of X-mode electromagnetic waves in \(e^{\pm}\) plasma in a strong magnetic field. It should be noted that the scattering cross-section derived from collective Thomson scattering is valid only within the range where plasma density fluctuations can be treated perturbatively. In situations with strong nonlinear effects, such as induced Compton scattering of large-amplitude electromagnetic waves, there is no guarantee that the differential scattering cross-section for collective Thomson scattering, given by equation (30), can be applied. However, we will proceed with the discussion assuming its applicability. For simplicity, we limit our discussion to a rough order-of-magnitude estimate of the effective optical depth for induced Compton scattering. It is necessary to consider electron recoil due to Compton scattering and the quantum correction to the Thomson scattering cross-section [61; 62]. However, if one attempts to retain the full angular dependence, the calculations become significantly more complex. Therefore, the dependence on the angle between the incident and scattered waves is neglected in the final expression. 
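The two limits of the cold-plasma cross-section (65) quoted above, including the \(\sigma\propto(\nu/\nu_{\rm c})^{2}\) suppression, are easy to check numerically (a minimal sketch; the frequencies are in arbitrary units):

```python
def sigma_cold_over_sigma_T(w0, wc):
    """Total cross-section per particle, Eq. (65), in units of sigma_T."""
    return 0.5 * ((w0 / (w0 + wc)) ** 2 + (w0 / (w0 - wc)) ** 2)

# No magnetic field: the ordinary Thomson cross-section is recovered.
assert abs(sigma_cold_over_sigma_T(1.0, 0.0) - 1.0) < 1e-12

# Strongly magnetized limit: suppression approaching (w0/wc)^2.
ratio = sigma_cold_over_sigma_T(1.0, 100.0)
assert abs(ratio - 1.0e-4) < 1e-6
```

The strong suppression of low-frequency scattering is what makes the higher-frequency waves, for which \(\nu/\nu_{\rm c}\) is larger, scatter more efficiently.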
We briefly derive the effective cross-section for induced Compton scattering without a magnetic field. The Boltzmann equation for photons, with Compton scattering in the collision term, is expressed as \[\frac{\partial}{\partial t}N(\omega,\mathbf{\Omega})+c(\mathbf{\Omega}\cdot\nabla)N(\omega,\mathbf{\Omega}) \tag{68}\] \[=n_{\rm e}\int{\rm d}^{3}\mathbf{k}^{\prime}{\rm d}^{3}\mathbf{k}^{\prime\prime}\left[P\left(\mathbf{k}^{\prime}\rightarrow\mathbf{k}^{\prime\prime}\right)-P\left(\mathbf{k}^{\prime\prime}\rightarrow\mathbf{k}^{\prime}\right)\right]\delta\left(\mathbf{k}-\mathbf{k}^{\prime\prime}\right).\] Here \(N(\omega,\mathbf{\Omega})\) is the photon occupation number at angular frequency \(\omega\) in the direction \(\mathbf{\Omega}=(\theta,\varphi)\), and \(P\left(\mathbf{k}^{\prime}\rightarrow\mathbf{k}^{\prime\prime}\right)\) is the probability that an electromagnetic wave with wavenumber \(\mathbf{k}^{\prime}\) transitions to \(\mathbf{k}^{\prime\prime}\). The transition probability can be formulated in terms of Compton scattering as \[P\left(\mathbf{k}\rightarrow\mathbf{k}^{\prime}\right){\rm d}^{3}\mathbf{k}{\rm d}^{3}\mathbf{k}^{\prime} \tag{69}\] \[=c\ {\rm d}\sigma N(\omega,\mathbf{\Omega})\left[1+N\left(\omega^{\prime},\mathbf{\Omega}^{\prime}\right)\right]\delta\left(\omega^{\prime}-\omega+\Delta\omega\right){\rm d}\omega^{\prime}{\rm d}^{3}\mathbf{k},\] where \(c\ {\rm d}\sigma N(\omega,\mathbf{\Omega})\) represents the scattered photon number flux, \({\rm d}\sigma\) the differential cross-section, and \(\Delta\omega=\omega-\omega^{\prime}\) the frequency change due to Compton scattering. 
First, we consider the low-energy limit of unpolarized Compton scattering without a background magnetic field, \[\Delta\omega=\frac{\hbar\omega\omega^{\prime}}{m_{\rm e}c^{2}}(1-\cos\theta), \tag{70}\] \[\frac{{\rm d}\sigma}{{\rm d}\Omega}=\frac{1}{2}r_{\rm e}^{2}\left(1+\cos^{2}\theta\right)+\mathcal{O}\left(\left(\frac{\hbar\omega}{m_{\rm e}c^{2}}\right)^{2}\right), \tag{71}\] where the low-energy expansion of the Klein-Nishina formula is employed as the differential cross-section. Then the evolution equation for photons due to induced Compton scattering is derived as [35; 37] \[\frac{{\rm d}N}{{\rm d}t}\simeq\frac{3\sigma_{\rm T}}{8\pi}n_{\rm e}N\int{\rm d}\Omega^{\prime}\left(1+\cos^{2}\theta\right)\frac{\hbar}{m_{\rm e}c}(1-\cos\theta)\frac{\partial}{\partial\omega}\left(\omega^{2}N^{\prime}\right). \tag{72}\] If the radiation is collimated, as in the case far from the source, the small factor \((1-\cos\theta)\) reduces the scattering within the beam. On the other hand, when the scattering from the beam into the background radiation outside the beam dominates, the scattering enhances the initially weak background exponentially [62]. We define the effective optical depth as the amplification factor of the scattered light and substitute \(N=N_{0}e^{\tau_{\rm ind}}\) as a formal solution into the basic equation (72). An order-of-magnitude evaluation then reveals \[\frac{\tau_{\rm ind}}{\Delta t}\sim\frac{3\sigma_{\rm T}}{8\pi}n_{\rm e}\Delta\Omega\frac{\hbar}{m_{\rm e}c}\frac{1}{\omega}\omega^{2}N^{\prime}, \tag{73}\] where \(\Delta t\) and \(\Delta\Omega\) are the pulse width and opening angle of the incident electromagnetic wave, respectively. 
The spectral flux at the scattering point and the isotropic luminosity are expressed by \[\begin{split}& F_{\nu}\sim 2\times\frac{2\pi\hbar}{c^{2}}\Delta\Omega\ \nu^{3}N^{\prime},\\ & L_{\gamma}\sim 4\pi r^{2}\nu F_{\nu}.\end{split} \tag{74}\] Here \(r\) is the distance from the center to the scattering point, and \(\omega=2\pi\nu\). From the above, the effective optical depth of induced Compton scattering is evaluated as \[\tau_{\rm ind}\ \sim n_{\rm e}\sigma_{\rm T}c\Delta t\frac{3\pi L_{\gamma}}{4r^{2}m_{\rm e}\omega^{3}}. \tag{75}\] Next, we consider induced Compton scattering of X-mode waves in a strong magnetic field. We refer to the discussion by Gonthier _et al._[43] for the differential cross-section and frequency shift of Compton scattering in a strong magnetic field. We impose the following assumptions in the derivation.

* A uniform magnetic field \(\mathbf{B}_{0}=(B_{0},0,0)\) exists in the \(x\)-axis direction.
* Consider an electron or positron in its rest frame, and assume that the wave vector of the incident electromagnetic wave is oriented along the magnetic field, i.e., the \(x\) direction. This orientation is realized, via relativistic aberration, in the rest frame of a particle moving ultra-relativistically in the laboratory frame.
* Assume that the electron or positron is in the lowest Landau level. 
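Before turning to the magnetized case, the scale of the unmagnetized estimate (75) can be checked with a short numerical sketch. All parameter values below are hypothetical, chosen only for illustration; the constants are in CGS units.

```python
import math

# Physical constants (CGS)
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]

def tau_induced(n_e, dt, L_gamma, r, nu):
    """Order-of-magnitude effective optical depth for induced Compton
    scattering without a magnetic field, Eq. (75):
    tau ~ n_e sigma_T c dt * 3 pi L_gamma / (4 r^2 m_e omega^3)."""
    omega = 2.0 * math.pi * nu
    return (n_e * SIGMA_T * C * dt
            * 3.0 * math.pi * L_gamma / (4.0 * r**2 * M_E * omega**3))

# Hypothetical FRB-like parameters (illustration only)
tau = tau_induced(n_e=1.0e3,       # electron density [cm^-3]
                  dt=1.0e-3,       # pulse width [s]
                  L_gamma=1.0e42,  # isotropic luminosity [erg/s]
                  r=1.0e13,        # distance to scattering point [cm]
                  nu=1.0e9)        # radio frequency [Hz]
print(f"tau_ind ~ {tau:.2f}")
```

The strong \(\omega^{-3}\) dependence is the main message: for these (assumed) numbers the optical depth is of order unity, and lowering the frequency by a factor of a few makes induced scattering dominant.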
In the strong magnetic field, the frequency shift of photons due to Compton scattering and the differential cross-section for photons polarized perpendicular to the background magnetic field can be expressed by [43] \[\Delta\omega=\omega-\frac{2\omega}{1+\frac{\hbar\omega}{m_{\rm e}c^{2}}\left(1-\cos\theta_{\rm B}\right)+\sqrt{\left\{1+\frac{\hbar\omega}{m_{\rm e}c^{2}}\left(1-\cos\theta_{\rm B}\right)\right\}^{2}-2\frac{\hbar\omega}{m_{\rm e}c^{2}}\sin^{2}\theta_{\rm B}}}, \tag{76}\] \[\frac{\mathrm{d}\sigma_{\perp}}{\mathrm{d}\Omega}\approx\frac{3\sigma_{\rm T}}{32\pi}\frac{\omega\omega^{\prime 2}}{\left(2\omega-\omega^{\prime}\right)}\left(1+\cos^{2}\theta_{\rm B}\right)\left[\frac{1}{\left(\omega-\omega_{\rm c}\right)^{2}}+\frac{1}{\left(\omega+\omega_{\rm c}\right)^{2}}\right], \tag{77}\] where \(\omega\), \(\omega^{\prime}\), and \(\theta_{\rm B}\) represent the angular frequency of the incident photon, that of the scattered photon, and the angle between the background magnetic field and the direction of the scattered photon, respectively. The frequency shift due to Compton scattering can be expanded up to first order in the quantum correction parameter \(\hbar\omega/(m_{\rm e}c^{2})\) as \[\Delta\omega=\frac{\hbar\omega^{2}}{m_{\rm e}c^{2}}\left\{\frac{1}{2}\left(1+\cos^{2}\theta_{\rm B}\right)-\cos\theta_{\rm B}\right\}+\mathcal{O}\left(\left(\frac{\hbar\omega}{m_{\rm e}c^{2}}\right)^{2}\right). \tag{78}\] Depending on whether a magnetic field is present, there is a significant difference in the treatment of the differential cross-section \(\mathrm{d}\sigma\) in equations (68) and (69). Without a magnetic field, the quantum correction from the Klein–Nishina formula appears at second order, as seen in equation (71). With a magnetic field, however, the quantum correction to the differential cross-section given in equation (77) emerges at first order, the same order as the frequency shift. 
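The first-order expansion (78) can be checked numerically against the exact shift (76). In the sketch below the dimensionless expansion parameter is written as \(x=\hbar\omega/(m_{\rm e}c^{2})\); everything else follows the two formulas directly. Note that the angular factor in (78) is just \((1-\cos\theta_{\rm B})^{2}/2\), so the shift vanishes quadratically in the forward direction.

```python
import numpy as np

def exact_shift(omega, theta_B, x):
    """Exact frequency shift of Eq. (76); x = hbar*omega/(m_e c^2)."""
    a = x * (1.0 - np.cos(theta_B))
    root = np.sqrt((1.0 + a)**2 - 2.0 * x * np.sin(theta_B)**2)
    return omega - 2.0 * omega / (1.0 + a + root)

def first_order_shift(omega, theta_B, x):
    """First-order expansion, Eq. (78); the brace equals (1-cos)^2/2."""
    return omega * x * (0.5 * (1.0 + np.cos(theta_B)**2) - np.cos(theta_B))

omega = 1.0                          # arbitrary units
theta = np.linspace(0.3, np.pi, 50)  # avoid theta -> 0 where both shifts vanish
x = 1.0e-4                           # deep low-energy regime
rel_err = (np.abs(exact_shift(omega, theta, x) - first_order_shift(omega, theta, x))
           / np.abs(first_order_shift(omega, theta, x)))
print("max relative deviation:", rel_err.max())  # O(x): the neglected term is second order
```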
Therefore, we will consider quantum corrections to the differential cross-section up to the first order in the following discussion. Substituting the differential cross-section (77) and the frequency shift (78) into equation (69), the evolution equation for photons due to induced Compton scattering is derived as \[\begin{split}\frac{\mathrm{d}N}{\mathrm{d}t}\simeq&\frac{3n_{\rm e}}{16\pi}\sigma_{\rm T}\left[\left(\frac{\omega}{\omega-\omega_{\rm c}}\right)^{2}+\left(\frac{\omega}{\omega+\omega_{\rm c}}\right)^{2}\right]N\\ &\times\int\mathrm{d}\Omega^{\prime}\left(1+\cos^{2}\theta_{\rm B}\right)\frac{\hbar}{m_{\rm e}c}\left\{\frac{1}{2}\left(1+\cos^{2}\theta_{\rm B}\right)-\cos\theta_{\rm B}\right\}\frac{1}{\omega}\frac{\partial}{\partial\omega}\left(\omega^{3}N^{\prime}\right).\end{split} \tag{79}\] Focusing on the \(\theta_{\rm B}\) dependence in equation (79), we find that scattering from beam to beam is suppressed. On the other hand, scattering from the beam to the background radiation is exponentially amplified, becoming the dominant scattering, akin to the case without a magnetic field. Therefore, by estimating the order of magnitude, the effective optical depth for photons due to induced Compton scattering can be expressed by \[\tau_{\rm ind}^{\perp}\sim\left(\frac{\omega}{\omega_{\rm c}}\right)^{2}n_{\rm e}\sigma_{\rm T}c\Delta t\frac{3\pi L_{\gamma}}{4r^{2}m_{\rm e}\omega^{3}}. \tag{80}\] The effective cross-section of the induced Compton scattering of X-mode waves is roughly suppressed by \((\omega/\omega_{\rm c})^{2}\) in a strong magnetic field.

## V Summary

In this study, we estimated the Thomson scattering cross-section for X-mode waves in \(e^{\pm}\) plasma, considering the collective effect in the presence of a background magnetic field. The results showed that the order of magnitude of the cross-section per particle remains unchanged compared to the case of single-particle scattering. 
In a strong magnetic field, the motion of electrons and positrons is dominated by drift motion, and one might expect the electron and positron currents to cancel each other, strongly suppressing the scattering. However, since the plasma particles and density fluctuations follow a thermal distribution before the scattering, the correlation between different particles during scattering is negligible, and this cancellation effect is absent. Furthermore, it was revealed that the collective effect in \(e^{\pm}\) plasma is not entirely canceled out when a background magnetic field is present, contrary to what was previously demonstrated in studies without a background magnetic field [56]. The effects of density fluctuations can be separated into contributions arising from the motion of particles in the direction of the incident electric field and from the drift motion. The contribution from the motion in the direction of the incident electric field exhibits complete cancellation of the collective effect, similar to the case without a background magnetic field. In the contribution arising from the drift motion, on the other hand, the behavior of the density fluctuations is independent of the charge sign, so the collective effect is not canceled. In a background magnetic field, the radiation energy response of \(e^{\pm}\) plasma differs significantly between curvature radiation and Thomson scattering of X-mode electromagnetic waves. This distinction is rooted in the directional relationship between the background magnetic field and the plasma's response. While plasma particles can move unrestrictedly parallel to the background magnetic field, their movement is confined within the Larmor radius in the perpendicular direction. For curvature radiation, the plasma's response aligns with the direction of the background magnetic field because the radiating particles travel along the curved field, leading to radiation suppression. 
In contrast, during Thomson scattering of X-mode waves, the plasma's response direction is perpendicular to the background field, limiting its ability to counteract the scattered wave and resulting in less radiation suppression. Plotting the differential cross-section for X-mode waves in \(e^{\pm}\) plasma following the Maxwellian distribution revealed that the scattered-wave spectrum exhibits peaks separated by the cyclotron frequency if the scattered wave propagates nearly perpendicular to the background magnetic field. If observed in pulsars and FRBs, such distinctive spectral features could provide valuable information about the plasma and magnetic field strength in the scattering region. Moreover, considering the effects of background magnetic fields and the collective effect, we investigated the induced Compton scattering of X-mode waves in a strong magnetic field and cold plasma. For the first time, we treat the induced Compton scattering of electromagnetic waves polarized perpendicular to a strong magnetic field from first principles. As a result, the effective optical depth was found to be suppressed by the factor \((\nu/\nu_{\rm c})^{2}\) compared to the case without a magnetic field. When considering FRBs propagating through the magnetosphere of a magnetar, it is found that FRBs propagating as X-mode waves in pair plasma have an expanded region from which they can escape the magnetosphere due to the effects of the background magnetic field. Additionally, by taking into account the relativistic effects of the scattering medium, it is indicated that the required Lorentz factor is smaller than in cases without a magnetic field.

###### Acknowledgements.

We want to express our gratitude to Shuichi Matsukiyo, Ryo Yamazaki, Shuta Tanaka, Masanori Iwamoto, and Shoma F. Kamijima for their invaluable feedback and insightful discussions regarding the physics presented in this paper. 
We also appreciate the constructive comments from Koutarou Kyutoku and Wataru Ishizaki during our regular group meetings. The discussions with Takahiro Tanaka, Hidetoshi Omiya, Takumi Kakehi, and Masaki Nishiura were instrumental in bringing this research to fruition. This work was supported by JST SPRING, Grant Number JPMJSP2110, and MEXT/JSPS KAKENHI Grant Numbers 23H05430, 23H04900, 22H00130, 20H00158.

## Appendix A Thomson scattering in electron-positron plasma when \(\omega\ll\omega_{\rm c},\omega_{\rm p}\)

Gil _et al._[45] estimated the curvature radiation from charged particles moving along an infinitely strong curved magnetic field, taking into account the response of the plasma. The results showed that the radiation is suppressed by a factor of roughly \((\omega/\omega_{\rm p})^{2}\) compared to curvature radiation in vacuum. In this appendix, we estimate how the response of the plasma corrects the radiation intensity in Thomson scattering of X-mode electromagnetic waves in a strong magnetic field. We impose the following assumptions in the derivation.

* The \(e^{\pm}\) plasma is uniformly distributed with a number density \(n_{\rm e}\).
* A uniform magnetic field \(\mathbf{B}_{0}=(B_{0},0,0)\) exists in the \(x\)-axis direction.
* Assume an electron as a scattering particle.
* The X-mode wave is perpendicularly incident on the magnetic field at a wave-number vector \(\mathbf{k}_{0}=(0,0,k_{0})\) and angular frequency \(\omega_{0}\).
* The motion of a particle in the wave field is approximated as non-relativistic.

In plasma, the wave equation for the electromagnetic potential is modified from the vacuum case when the response of the plasma is taken into account, so the Liénard–Wiechert potential, which describes radiation from charged particles in vacuum, is not applicable. Therefore, to estimate the radiation intensity from a charged particle, the work done by the radiative electric field on the emitting particle must be calculated directly. 
The wave equation for the electromagnetic potential is derived by considering the response of the plasma. First, the wave equation for the Fourier-transformed electromagnetic potential can be written as \[\begin{split}&\left(k^{2}-\frac{\omega^{2}}{c^{2}}\right)\widetilde{\phi}(\mathbf{k},\omega)=4\pi\widetilde{\rho}(\mathbf{k},\omega),\\ &\left(k^{2}-\frac{\omega^{2}}{c^{2}}\right)\widetilde{\mathbf{A}}(\mathbf{k},\omega)=\frac{4\pi}{c}\widetilde{\mathbf{j}}(\mathbf{k},\omega).\end{split} \tag{38}\] We substitute the current density and charge density produced by the plasma into the source terms. The plasma current can be written as \[\widetilde{\mathbf{j}}_{\rm plasma}=\mathbf{\sigma}\cdot\widetilde{\mathbf{E}}=i\frac{\omega}{c}\mathbf{\sigma}\cdot\widetilde{\mathbf{A}}-i\mathbf{\sigma}\cdot\mathbf{k}\widetilde{\phi}. \tag{39}\] Here, \(\mathbf{\sigma}\) is the conductivity tensor of the cold \(e^{\pm}\) plasma \[\mathbf{\sigma}\equiv i\frac{\omega_{\rm p}^{2}}{4\pi\omega}\left(\begin{array}{ccc}1&0&0\\ 0&\frac{1}{1-u}&0\\ 0&0&\frac{1}{1-u}\end{array}\right), \tag{40}\] where we define the following \[u\equiv\left(\frac{\omega_{\rm c}}{\omega}\right)^{2},\quad s\equiv\left(\frac{\omega_{\rm p}}{\omega}\right)^{2}. \tag{41}\] The charge density produced by the plasma is obtained from the continuity equation \[\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0\Rightarrow\widetilde{\rho}_{\rm plasma}=\frac{\mathbf{k}}{\omega}\cdot\widetilde{\mathbf{j}}_{\rm plasma}. \tag{42}\] By moving the plasma response terms to the left-hand side and the source terms to the right-hand side, the wave equations can be written as follows \[\begin{split}&\left(k^{2}-\frac{\omega^{2}}{c^{2}}\right)\widetilde{\phi}-4\pi i\left\{\frac{\mathbf{k}\cdot(\mathbf{\sigma}\cdot\widetilde{\mathbf{A}})}{c}-\frac{\mathbf{k}\cdot(\mathbf{\sigma}\cdot\mathbf{k})}{\omega}\widetilde{\phi}\right\}\\ &=4\pi\widetilde{\rho}_{\rm particle}\;,\\ &\left(k^{2}-\frac{\omega^{2}}{c^{2}}\right)\widetilde{\mathbf{A}}-\frac{4\pi i}{c}\left\{\frac{\omega}{c}\mathbf{\sigma}\cdot\widetilde{\mathbf{A}}-\mathbf{\sigma}\cdot\mathbf{k}\widetilde{\phi}\right\}=\frac{4\pi}{c}\widetilde{\mathbf{j}}_{\rm particle}.\end{split} \tag{43}\] The above is a system of four simultaneous equations for the four-potential \(\widetilde{A}^{\alpha}\equiv\left(\widetilde{\phi},\widetilde{A}_{x},\widetilde{A}_{y},\widetilde{A}_{z}\right)\) with the four-current \(\widetilde{j}_{\rm particle}^{\alpha}\equiv\left(c\widetilde{\rho},\widetilde{j}_{x},\widetilde{j}_{y},\widetilde{j}_{z}\right)\) as the source term, and it can be solved algebraically. As the radiating particle, we assume an electron oscillating in the X-mode wave field and the background magnetic field. Then, from equation (3), the four-current produced by the radiating particle is described to first order in \(\omega_{0}/\omega_{\rm c}\) as follows \[\begin{split}&\widetilde{j}_{z}(\mathbf{k},\omega)\simeq\frac{2\pi e^{2}E_{0}}{m_{\rm e}\omega_{\rm c}}\delta\left(\omega-\omega_{0}\right),\\ &\widetilde{j}_{\rm particle}^{\alpha}=\left(\frac{ck_{z}}{\omega},0,0,1\right)\widetilde{j}_{z}.\end{split} \tag{44}\] From the above, by solving for the four-potential, we obtain the electromagnetic field produced by the oscillating particle, taking the response of the plasma into account. 
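Since the system (43) is linear in the four-potential, it can also be solved numerically for any given source. The sketch below (illustrative parameter values, Gaussian-unit form, \(c=1\)) assembles the 4×4 matrix from the conductivity tensor of Eq. (40) and checks the solver against the vacuum limit, where the potentials must reduce to \(\widetilde{A}^{\alpha}\) of the unmodified wave equation (38).

```python
import numpy as np

def solve_four_potential(omega, omega_p, omega_c, k, j4, c=1.0):
    """Solve the linear system (43) for (phi, Ax, Ay, Az), given the
    four-current j4 = (c*rho, jx, jy, jz) and the cold-plasma
    conductivity tensor of Eq. (40)."""
    k = np.asarray(k, dtype=float)
    u = (omega_c / omega)**2
    sigma = 1j * omega_p**2 / (4.0 * np.pi * omega) * np.diag(
        [1.0, 1.0 / (1.0 - u), 1.0 / (1.0 - u)])
    D = k @ k - omega**2 / c**2
    M = np.zeros((4, 4), dtype=complex)
    M[0, 0] = D + 4j * np.pi * (k @ sigma @ k) / omega   # phi coefficient, row 0
    M[0, 1:] = -4j * np.pi / c * (k @ sigma)             # -4*pi*i*k.(sigma.A)/c
    M[1:, 0] = 4j * np.pi / c * (sigma @ k)              # +4*pi*i/c*sigma.k*phi
    M[1:, 1:] = D * np.eye(3) - 4j * np.pi * omega / c**2 * sigma
    b = np.concatenate(([4.0 * np.pi * j4[0] / c],       # 4*pi*rho, with j4[0]=c*rho
                        4.0 * np.pi / c * np.asarray(j4[1:], dtype=complex)))
    return np.linalg.solve(M, b)

# Vacuum check (omega_p -> 0): A_z -> 4*pi*j_z / (c*(k^2 - omega^2/c^2))
k = np.array([0.0, 0.0, 2.0])
omega, jz = 1.0, 1.0
j4 = np.array([k[2] / omega * jz, 0.0, 0.0, jz])  # four-current of the Eq. (44) form
print(solve_four_potential(omega, 1e-8, 5.0, k, j4))
```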
We are interested in the limit where the angular frequency of the incident electromagnetic wave is sufficiently smaller than the cyclotron and plasma frequencies. That is, the following simultaneous limits can be considered \[\frac{1}{s}=\left(\frac{\omega}{\omega_{\rm p}}\right)^{2}\ll 1,\quad\frac{1}{u }=\left(\frac{\omega}{\omega_{\rm c}}\right)^{2}\ll 1.\] Define \(\alpha\) as follows \[\alpha\equiv\left(\frac{\omega_{\rm c}}{\omega_{\rm p}}\right)^{2}=\frac{u}{s}, \tag{45}\] and Taylor expansion in \(1/s\) while keeping \(\alpha\) constant. In the lowest order expansion, the radiative electric field is estimated as follows \[\widetilde{\mathbf{E}}=\left(\begin{array}{c}0\\ i\frac{k_{z}k_{y}\frac{u}{c}\alpha^{2}}{\left\{k^{2}\alpha-\frac{\omega^{2}}{c^ {2}}(1+\alpha)\right\}}\left\{k_{x}^{2}\alpha-\frac{\omega^{2}}{c^{2}}(1+ \alpha)\right\}\\ i\frac{\frac{\omega}{c}\alpha\left\{\left(k_{x}^{2}+k_{z}^{2}\right)\alpha- \frac{\omega^{2}}{c^{2}}(1+\alpha)\right\}}{\left\{k^{2}\alpha-\frac{\omega^{ 2}}{c^{2}}(1+\alpha)\right\}}\left\{k_{x}^{2}\alpha-\frac{\omega^{2}}{c^{2}}(1+ \alpha)\right\}}\end{array}\right)\frac{4\pi}{c}\widetilde{j}_{z}. \tag{46}\] The energy radiated per unit time by a particle in the X-mode electromagnetic waves is equal to the work done by the radiative electric field on the oscillating particle if the radiative energy loss of the particle is ignored. 
In other words, the time-averaged power of the radiation from the oscillating particle is \[\left\langle P\right\rangle_{\rm T}=-\left\langle\int{\rm d}^{3}\mathbf{r}\,{\rm Re}\left[j_{z}^{\rm part}\left(\mathbf{r},t\right)\right]{\rm Re}\left[E_{z}(\mathbf{r},t)\right]\right\rangle_{\rm T}\] \[= -\frac{1}{4(2\pi)^{8}}\int{\rm d}^{3}\mathbf{r}\,{\rm d}^{3}\mathbf{k}\,{\rm d}^{3}\mathbf{k}^{\prime}\,{\rm d}\omega\,{\rm d}\omega^{\prime}\left[\left\langle e^{i\left\{\left(\mathbf{k}^{\prime}-\mathbf{k}\right)\cdot\mathbf{r}-\left(\omega^{\prime}-\omega\right)t\right\}}\right\rangle_{\rm T}\right.\] \[\left.\times\widetilde{j_{z}}\left(\mathbf{k}^{\prime},\omega^{\prime}\right)\widetilde{E}_{z}^{*}(\mathbf{k},\omega)\right.\] \[+ \left.\left\langle e^{-i\left\{\left(\mathbf{k}^{\prime}-\mathbf{k}\right)\cdot\mathbf{r}-\left(\omega^{\prime}-\omega\right)t\right\}}\right\rangle_{\rm T}\widetilde{j}_{z}^{*}\left(\mathbf{k}^{\prime},\omega^{\prime}\right)\widetilde{E}_{z}(\mathbf{k},\omega)\right].\] The integral is calculated using the following procedure: first, apply the delta functions for \(\omega\), \(\omega^{\prime}\), \(x\), \(k_{x}^{\prime}\), \(k_{y}^{\prime}\), \(z\), and \(k_{z}^{\prime}\). Second, perform the complex integration for \(k_{y}\). Finally, carry out the remaining two-dimensional integrals for \(k_{x}\) and \(k_{z}\). A concrete calculation process is shown below. 
Substituting the radiative electric field (14) and source current (13), the integration can be written as follows \[\left\langle\frac{{\rm d}^{2}P}{{\rm d}\mu{\rm d}\nu}\right\rangle_{\rm T} = -i\frac{e^{4}E_{0}^{2}\omega_{0}^{3}}{8\pi^{2}m_{\rm e}^{2}c^{4}\omega_{\rm c}^{2}}\frac{\left(\mu^{2}+\nu^{2}\right)-\frac{\alpha+1}{\alpha}}{\nu^{2}-\frac{\alpha+1}{\alpha}}\int{\rm d}y\delta(y)\] \[\times \int_{-\infty}^{+\infty}{\rm d}k_{y}\left[\frac{e^{ik_{y}y}}{k_{y}^{2}-\left(\frac{\omega_{0}+i\varepsilon}{c}\right)^{2}\left(\frac{\alpha+1}{\alpha}-\mu^{2}-\nu^{2}\right)}\right.\] \[- \left.\frac{e^{ik_{y}y}}{k_{y}^{2}-\left(\frac{\omega_{0}-i\varepsilon}{c}\right)^{2}\left(\frac{\alpha+1}{\alpha}-\mu^{2}-\nu^{2}\right)}\right].\] Here \[\mu\equiv\frac{k_{z}c}{\omega_{0}},\quad\nu\equiv\frac{k_{x}c}{\omega_{0}}, \tag{16}\] are the dimensionless versions of \(k_{z}\) and \(k_{x}\), respectively. Applying the residue theorem to the \(k_{y}\) integral, a case separation arises depending on the magnitudes of \(\mu\) and \(\nu\) as follows \[\left\langle\frac{{\rm d}^{2}P}{{\rm d}\mu{\rm d}\nu}\right\rangle_{\rm T} =-i\frac{e^{4}E_{0}^{2}\omega_{0}^{3}}{8\pi^{2}m_{\rm e}^{2}c^{4}\omega_{\rm c}^{2}}\frac{\left(\mu^{2}+\nu^{2}\right)-\frac{\alpha+1}{\alpha}}{\nu^{2}-\frac{\alpha+1}{\alpha}}\] \[\times\left\{\begin{array}{c}\frac{2\pi ic}{\omega_{0}}\frac{1}{\sqrt{\frac{\alpha+1}{\alpha}-\mu^{2}-\nu^{2}}}\quad\left(\mu^{2}+\nu^{2}\leq\frac{\alpha+1}{\alpha}\right)\\ 0\quad\left(\mu^{2}+\nu^{2}\geq\frac{\alpha+1}{\alpha}\right)\end{array}\right..\] Finally, by performing the remaining \(\mu\) and \(\nu\) integrals, the time-averaged radiative energy per unit time can be calculated as \[\left\langle P\right\rangle_{\rm T} = \frac{e^{4}E_{0}^{2}}{4\pi m_{\rm e}^{2}c^{3}}\left(\frac{\omega_{0}}{\omega_{\rm c}}\right)^{2}\int_{\mu^{2}+\nu^{2}\leq\frac{\alpha+1}{\alpha}}{\rm d}\mu{\rm d}\nu\frac{\sqrt{\frac{\alpha+1}{\alpha}-\mu^{2}-\nu^{2}}}{\frac{\alpha+1}{\alpha}-\nu^{2}}\] \[= \frac{e^{4}E_{0}^{2}}{4m_{\rm e}^{2}c^{3}}\left(\frac{\omega_{0}}{\omega_{\rm c}}\right)^{2}\sqrt{1+\left(\frac{\omega_{\rm p}}{\omega_{\rm c}}\right)^{2}}.\] Next, we evaluate the scattering cross-section of the radiation process under consideration. In \(e^{\pm}\) plasma, the group velocity of X-mode waves deviates from the speed of light, according to the dispersion relation of X-mode waves \[\frac{c^{2}k^{2}}{\omega^{2}}=1-\frac{s}{1-u}\xrightarrow{1/s,1/u\ll 1}1+\frac{1}{\alpha}. \tag{17}\] Therefore, we can evaluate the group velocity of X-mode waves as follows \[v_{\rm g}^{\rm X}=\frac{{\rm d}\omega}{{\rm d}k}\xrightarrow{1/s,1/u\ll 1}c\frac{1}{\sqrt{1+\frac{1}{\alpha}}}. \tag{18}\] Hence, the Thomson scattering cross-section for the X-mode electromagnetic wave considering the plasma response is evaluated as follows \[\sigma =\frac{8\pi}{v_{\rm g}^{\rm X}E_{0}^{2}}\cdot\frac{e^{4}E_{0}^{2}}{4m_{\rm e}^{2}c^{3}}\left(\frac{\omega_{0}}{\omega_{\rm c}}\right)^{2}\sqrt{1+\left(\frac{\omega_{\rm p}}{\omega_{\rm c}}\right)^{2}} \tag{19}\] \[=\frac{3}{4}\sigma_{\rm T}\left(\frac{\omega_{0}}{\omega_{\rm c}}\right)^{2}\left\{1+\left(\frac{\omega_{\rm p}}{\omega_{\rm c}}\right)^{2}\right\}.\] The effect of the plasma response can be evaluated as \[\frac{3}{4}\left\{1+\left(\frac{\omega_{\rm p}}{\omega_{\rm c}}\right)^{2}\right\}\sim 1, \tag{20}\] in the case of \(\omega_{\rm c}>\omega_{\rm p}\gg\omega_{0}\), such as in the magnetar magnetosphere; this value is close to the vacuum case. The effect of the response of \(e^{\pm}\) plasma in the background magnetic field on the radiation energy is quite different between curvature radiation and Thomson scattering of X-mode electromagnetic waves. This can be interpreted physically by looking at the relationship between the direction of the background magnetic field and the direction in which the plasma responds. 
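The last two steps above can be cross-checked numerically: the double integral over the disk \(\mu^{2}+\nu^{2}\leq(\alpha+1)/\alpha\) has the closed form \(\pi\sqrt{(\alpha+1)/\alpha}\) (which is what collapses \(\left\langle P\right\rangle_{\rm T}\) to its final line), and the resulting cross-section carries the factor \(\frac{3}{4}(\omega_{0}/\omega_{\rm c})^{2}\{1+(\omega_{\rm p}/\omega_{\rm c})^{2}\}\). A grid-quadrature sketch with an illustrative value of \(\alpha\):

```python
import numpy as np

def disk_integral(A, n=1500):
    """Midpoint-rule evaluation of
    I(A) = integral over mu^2+nu^2 <= A of sqrt(A-mu^2-nu^2)/(A-nu^2);
    analytically I(A) = pi*sqrt(A), since the inner mu-integral gives
    (pi/2)*(A-nu^2), cancelling the denominator."""
    r = np.sqrt(A)
    h = 2.0 * r / n
    x = -r + h * (np.arange(n) + 0.5)          # cell-centred grid, avoids the rim
    mu, nu = np.meshgrid(x, x, indexing="ij")
    inside = mu**2 + nu**2 < A
    val = np.zeros_like(mu)
    val[inside] = np.sqrt(A - mu[inside]**2 - nu[inside]**2) / (A - nu[inside]**2)
    return val.sum() * h * h

def sigma_over_sigmaT(omega0, omega_c, omega_p):
    """Suppression factor of the final cross-section (3/4)(w0/wc)^2{1+(wp/wc)^2}."""
    return 0.75 * (omega0 / omega_c)**2 * (1.0 + (omega_p / omega_c)**2)

alpha = 4.0                                    # (omega_c/omega_p)^2, illustrative
A = (alpha + 1.0) / alpha
print(disk_integral(A), np.pi * np.sqrt(A))    # the two should agree closely
print(sigma_over_sigmaT(1.0, 1.0e6, 1.0e3))    # strong (omega0/omega_c)^2 suppression
```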
In the presence of a background magnetic field, plasma particles are free to move in the direction parallel to the background magnetic field but only within the Larmor radius in the perpendicular direction. In the case of curvature radiation, the response of the plasma is in the direction of the background magnetic field because the radiating particles move along the curved magnetic field. Hence, the plasma responds so as to cancel the radiation, thereby suppressing it. On the other hand, in the case of Thomson scattering of X-mode electromagnetic waves, the plasma's direction of response is perpendicular to the background magnetic field, so the plasma cannot freely respond to the scattered wave, and the radiation is not suppressed as much.

## Appendix B Misleading idea on Thomson scattering in \(e^{\pm}\) plasma with a strong magnetic field

This section presents a misleading idea about the scattering in \(e^{\pm}\) plasma with a strong magnetic field. Since drift motion dominates the motion of charged particles under the incident electromagnetic field in a strong magnetic field, one could think that electrons and positrons cancel out the scattering everywhere. This incorrect picture arises when initially static free electrons and free positrons simultaneously scatter X-mode waves. Specifically, consider Thomson scattering in \(e^{\pm}\) plasma with a strong magnetic field in the following situation.

* Suppose initially that an electron with charge \(-e\) at \((-\frac{d}{2},0,0)\) and a positron with charge \(+e\) at \((\frac{d}{2},0,0)\) are at rest. That is, suppose that the electron and positron are separated from each other by a microscopic distance \(d\).
* A uniform magnetic field \(\mathbf{B}_{0}=(B_{0},0,0)\) exists in the \(x\)-axis direction.
* The X-mode wave is perpendicularly incident on the magnetic field at a wave-number vector \(\mathbf{k}_{0}=(0,0,k_{0})\) and angular frequency \(\omega_{0}\). 
* The motion of a particle in the wave field is approximated as non-relativistic. As in Appendix A, determine the current density and electric field produced by the radiating particles. The current can be obtained by adding the electron and positron contributions \[\mathbf{j}_{\text{particle}}(\mathbf{r},t)=\left[e\mathbf{v}_{+}\delta\left(x-\frac{d}{2} \right)-e\mathbf{v}_{-}\delta\left(x+\frac{d}{2}\right)\right]\delta(y)\delta(z). \tag{10}\] Then, from equation (3), substituting the motion of the particles, the current density is obtained as follows \[\widetilde{j}_{y}(\mathbf{k},\omega) =i\frac{4\pi e^{2}E_{0}\omega_{0}}{m_{\text{e}}\left(\omega_{0}^ {2}-\omega_{\text{c}}^{2}\right)}\cos\left(\frac{k_{\text{x}}d}{2}\right) \delta\left(\omega-\omega_{0}\right) \tag{11}\] \[\simeq i\frac{4\pi e^{2}E_{0}\omega_{0}}{m_{\text{e}}\left(\omega_{0}^ {2}-\omega_{\text{c}}^{2}\right)}\delta\left(\omega-\omega_{0}\right)\] \[\widetilde{j}_{z}(\mathbf{k},\omega) =-i\frac{4\pi e^{2}E_{0}\omega_{0}}{m_{\text{e}}\left(\omega_{0}^ {2}-\omega_{\text{c}}^{2}\right)}\sin\left(\frac{k_{\text{x}}d}{2}\right) \delta\left(\omega-\omega_{0}\right)\] \[\simeq-i\frac{4\pi e^{2}E_{0}\omega_{0}}{m_{\text{e}}\left( \omega_{0}^{2}-\omega_{\text{c}}^{2}\right)}\frac{k_{\text{x}}d}{2}\delta \left(\omega-\omega_{0}\right).\] Here, we approximated that the wavelength of the incident electromagnetic wave is sufficiently large compared to the average distance between the electrons and positrons, that is \[k_{x}d\sim\frac{d}{\lambda}\ll 1. \tag{12}\] The electromagnetic potential is calculated to find the radiated electric field. 
The electromagnetic potential can be obtained by substituting the source into the following wave equation for the electromagnetic potential \[\left(k^{2}-\frac{\omega^{2}}{c^{2}}\right)\widetilde{\mathbf{A}}(\mathbf{k},\omega)= \frac{4\pi}{c}\widetilde{j}_{\text{particle}}.\] The radiated electric field is obtained as follows \[\widetilde{E_{y}}(\mathbf{k},\omega)= \frac{\left(k_{y}^{2}-\frac{\omega^{2}}{c^{2}}\right)\omega_{0}+k _{y}k_{z}\frac{k_{x}d}{2}\omega_{\text{c}}}{\frac{\omega}{c}\left\{k^{2}- \frac{(\omega+i\varepsilon)^{2}}{c^{2}}\right\}} \tag{13}\] \[\times\frac{16\pi^{2}e^{2}E_{0}}{m_{\text{e}}c\left(\omega_{0}^{ 2}-\omega_{\text{c}}^{2}\right)}\delta\left(\omega-\omega_{0}\right),\] \[\widetilde{E_{z}}(\mathbf{k},\omega)= -\frac{\left(k_{z}^{2}-\frac{\omega^{2}}{c^{2}}\right)\frac{k_{ \text{z}}d}{2}\omega_{\text{c}}+k_{y}k_{z}\omega_{0}}{\frac{\omega}{c}\left\{ k^{2}-\frac{(\omega+i\varepsilon)^{2}}{c^{2}}\right\}} \tag{14}\] \[\times i\frac{16\pi^{2}e^{2}E_{0}}{m_{\text{e}}c\left(\omega_{0}^ {2}-\omega_{\text{c}}^{2}\right)}\delta\left(\omega-\omega_{0}\right).\] The time-averaged energy radiated by an electron-positron pair per unit time can be obtained by estimating the work done by the radiative electric field on the oscillating particle as in Appendix A \[P^{(2)} =P_{y}^{(2)}+P_{z}^{(2)} \tag{15}\] \[=\frac{4e^{4}E_{0}^{2}}{3m_{\text{e}}^{2}c^{3}}\frac{\omega_{0}^ {4}}{\left(\omega_{0}^{2}-\omega_{\text{c}}^{2}\right)^{2}}+\frac{2e^{4}E_{0} ^{2}}{15m_{\text{e}}^{2}c^{3}}\frac{\omega_{0}^{2}\omega_{\text{c}}^{2}}{\left( \omega_{0}^{2}-\omega_{\text{c}}^{2}\right)^{2}}\left(\frac{\omega_{0}d}{c} \right)^{2}.\] Eventually, the scattering cross-section per particle is obtained as follows \[\sigma^{(1)}\simeq 2\left[\sigma_{\text{T}}\left(\frac{\omega_{0}}{\omega_{ \text{c}}}\right)^{4}+\sigma_{\text{T}}\left(\frac{\omega_{0}}{\omega_{\text{ c}}}\right)^{2}\frac{1}{10}\left(\frac{\omega_{0}d}{c}\right)^{2}\right]. 
\tag{16}\] We give a physical interpretation of the scattering cross-section for an electron-positron pair scattering an X-mode electromagnetic wave in a background magnetic field. The first term in equation (16) represents the contribution of particles oscillating in the direction of the incident electric field, and the second term represents the contribution of particles oscillating in the drift direction. Since the motion in the drift direction does not depend on the charge sign, electrons and positrons cancel each other's current, and a factor \(((\omega_{0}d)/c)^{2}\) appears in the scattering cross-section. We estimate the scattering cancellation effect due to drift for FRBs of typical frequencies propagating through the magnetar magnetosphere. Assuming an average interparticle distance between electrons and positrons of \(d\sim n_{\text{e}}^{-1/3}\sim 2\times 10^{-5}\) cm, a cyclotron frequency \(\omega_{\text{c}}\sim 10^{6}\) GHz, and an FRB angular frequency of 1 GHz, the scattering suppression is estimated as follows \[\sigma^{(1)}\sim\sigma_{\text{T}}\left(\frac{\omega_{0}}{\omega_{\text{c}}}\right)^{2}\max\left\{10^{-12}\frac{\omega_{0}^{2}}{\omega_{\text{c,15}}^{2}},10^{-13}\frac{\omega_{0}^{2}}{d_{-5}^{2}}\right\}. \tag{17}\] Thomson scattering in \(e^{\pm}\) plasma thus appears significantly suppressed when the scattering cancellation effect due to drift motion is considered. However, as shown in this paper, when properly accounting for the particle statistics of the plasma, such cancellation effects do not occur (see equation (65)).

## Appendix C Detailed derivation of the positron/electron density fluctuation

In this section, positron/electron density fluctuations in \(e^{\pm}\) plasma are derived from the Vlasov equation. 
### Vlasov equations for charged particles

First, the Vlasov equation for the positron/electron distribution function is written down by separating the equilibrium distribution function \(F_{0\pm}\) and the first-order perturbation of the distribution function \(\delta F_{\pm}\). The Vlasov equation for the positron/electron distribution function can be written as follows \[\frac{\partial F_{0\pm}}{\partial t}+\frac{\partial\delta F_{\pm}}{\partial t}+\mathbf{v}\cdot\left(\frac{\partial F_{0\pm}}{\partial\mathbf{r}}+\frac{\partial\delta F_{\pm}}{\partial\mathbf{r}}\right)\pm\frac{e}{m_{\mathrm{e}}}\left(\mathbf{E}+\frac{\mathbf{v}\times\mathbf{B}_{0}}{c}\right)\cdot\left(\frac{\partial F_{0\pm}}{\partial\mathbf{v}}+\frac{\partial\delta F_{\pm}}{\partial\mathbf{v}}\right)=0. \tag{10}\] Considering the electric field \(\mathbf{E}\) produced by the plasma and the perturbation \(\delta F_{\pm}\) of the distribution function as perturbative quantities, the Vlasov equation can be divided into non-perturbative and perturbative components as follows \[\frac{\partial F_{0\pm}}{\partial t}+\mathbf{v}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{r}}\pm\frac{e}{m_{\mathrm{e}}c}(\mathbf{v}\times\mathbf{B}_{0})\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}=0, \tag{11}\] \[\frac{\partial\delta F_{\pm}}{\partial t}+\mathbf{v}\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{r}}\pm\frac{e}{m_{\mathrm{e}}c}\left(\mathbf{v}\times\mathbf{B}_{0}\right)\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{v}}\pm\frac{e}{m_{\mathrm{e}}}\mathbf{E}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}=0.\] The second equation of (11) is the one to be satisfied by the first-order perturbation of the distribution function. 
The Lorentz force term in the second equation of (11) can be simplified as follows by converting the differential variable from \(\mathbf{v}\) to the angle \(\varphi(t)\) between the velocity component perpendicular to the background magnetic field \(\mathbf{B}_{0}=(B_{0},0,0)\) and the \(y\) axis (see equation (34)) \[(\mathbf{v}\times\mathbf{B}_{0})\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{v}}= v_{z}B_{0}\frac{\partial\delta F_{\pm}}{\partial v_{y}}-v_{y}B_{0}\frac{\partial\delta F_{\pm}}{\partial v_{z}} \tag{12}\] \[= B_{0}\left\{v_{z}\left(\frac{v_{y}}{v_{\perp}}\frac{\partial\delta F_{\pm}}{\partial v_{\perp}}-\frac{v_{z}\cos^{2}\varphi}{v_{y}^{2}}\frac{\partial\delta F_{\pm}}{\partial\varphi}\right)-v_{y}\left(\frac{v_{z}}{v_{\perp}}\frac{\partial\delta F_{\pm}}{\partial v_{\perp}}+\frac{\cos^{2}\varphi}{v_{y}}\frac{\partial\delta F_{\pm}}{\partial\varphi}\right)\right\}\] \[= -B_{0}\frac{\partial\delta F_{\pm}}{\partial\varphi}.\] In the end, the equation for the fluctuations of the distribution function is as follows \[\frac{\partial\delta F_{\pm}}{\partial t}+\mathbf{v}\cdot\frac{\partial\delta F_{\pm}}{\partial\mathbf{r}}\mp\frac{eB_{0}}{m_{\mathrm{e}}c}\frac{\partial\delta F_{\pm}}{\partial\varphi}\pm\frac{e}{m_{\mathrm{e}}}\mathbf{E}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}=0. \tag{13}\]

### Derivation of the density fluctuations

Equation (13) for the fluctuations of the distribution function is Fourier-Laplace transformed in space and time as follows \[\widetilde{\delta F}_{\pm}(\mathbf{k},\mathbf{v},\omega)=\int_{0}^{\infty}\mathrm{d}t\ e^{-i(\omega-i\varepsilon)t}\int\mathrm{d}^{3}\mathbf{r}\ \delta F_{\pm}(\mathbf{r},\mathbf{v},t)e^{i\mathbf{k}\cdot\mathbf{r}}. 
\tag{14}\] Then, the density fluctuations appearing in the spectral density function are represented by fluctuations in the distribution function as follows \[\widetilde{\delta n_{\pm}}(\mathbf{k},\omega) =\int_{0}^{\infty}\mathrm{d}t\ e^{-i(\omega-i\varepsilon)t}\int\mathrm{d}^{3}\mathbf{r}\ \delta n_{\pm}(\mathbf{r},t)e^{i\mathbf{k}\cdot\mathbf{r}} \tag{15}\] \[=\int\mathrm{d}^{3}\mathbf{v}\ \widetilde{\delta F}_{\pm}(\mathbf{k},\mathbf{v},\omega).\] The Fourier-Laplace transform of equation (13) leads to the following first-order differential equation for \(\widetilde{\delta F_{\pm}}\) with \(\varphi\) as a variable \[i(\omega-i\varepsilon-\mathbf{k}\cdot\mathbf{v})\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},\omega)\mp\omega_{\mathrm{c}}\frac{\partial\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},\omega)}{\partial\varphi}=\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)\mp\frac{e}{m_{\mathrm{e}}}\widetilde{\mathbf{E}}(\mathbf{k},\omega)\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}. \tag{16}\] This differential equation can be solved using the method of variation of constants \[\begin{split}&\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},\omega)=\frac{1}{\omega_{\mathrm{c}}}\int\mathrm{d}\varphi^{\prime}\exp\left[\mp\left(i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}}{\omega_{\mathrm{c}}}\varphi^{\prime}-ik_{\perp}r_{\mathrm{L}}\sin\varphi^{\prime}\right)\right]\\ &\times\left\{\mp\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)+\frac{e}{m_{\mathrm{e}}}\widetilde{\mathbf{E}}(\mathbf{k},\omega)\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}\right\}\exp\left[\pm\left(i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}}{\omega_{\mathrm{c}}}\varphi-ik_{\perp}r_{\mathrm{L}}\sin\varphi\right)\right].\end{split} \tag{100}\] In the following, we will perform the \(\varphi^{\prime}\) integrals and represent the fluctuations of the distribution function using special functions.
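The reduction (12) of the magnetic rotation term to a single gyrophase derivative can be checked symbolically on an arbitrary test function, with the convention \(v_{y}=v_{\perp}\cos\varphi\), \(v_{z}=v_{\perp}\sin\varphi\). This is an illustrative sketch; the test function below is an arbitrary choice:

```python
import sympy as sp

vperp, phi, B0 = sp.symbols('v_perp varphi B_0', positive=True)
vy, vz = vperp * sp.cos(phi), vperp * sp.sin(phi)

# Arbitrary smooth test function f(v_y, v_z), then expressed through (v_perp, phi)
w1, w2 = sp.symbols('w1 w2', real=True)
f_cart = w1**2 * w2 + sp.sin(w1) * w2**3
f = f_cart.subs({w1: vy, w2: vz})

# Left-hand side of (12): B0 (v_z dF/dv_y - v_y dF/dv_z), Cartesian derivatives
dy = sp.diff(f_cart, w1).subs({w1: vy, w2: vz})
dz = sp.diff(f_cart, w2).subs({w1: vy, w2: vz})
lhs = B0 * (vz * dy - vy * dz)

# Right-hand side of (12): -B0 dF/dphi at fixed v_perp
rhs = -B0 * sp.diff(f, phi)

print(sp.simplify(lhs - rhs))  # 0
```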
First, the exponential function is expanded using the infinite sum formula of the Bessel function \[e^{iz\sin\varphi}=\sum_{l=-\infty}^{+\infty}J_{l}(z)e^{il\varphi}, \tag{101}\] as follows \[\begin{split}&\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},\omega)= \frac{1}{\omega_{\mathrm{c}}}\int\mathrm{d}\varphi^{\prime}\exp\left[\mp\left( i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}}{\omega_{\mathrm{c}}}\varphi^{ \prime}\right)\right]\sum_{l=-\infty}^{+\infty}J_{l}\left(\pm k_{\perp}r_{ \mathrm{L}}\right)e^{il\varphi^{\prime}}\\ &\times\left\{\mp\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)+ \frac{e}{m_{\mathrm{e}}}\widetilde{\mathbf{E}}(\mathbf{k},\omega)\cdot\frac{\partial F _{0\pm}}{\partial\mathbf{v}}\right\}\exp\left[\pm\left(i\frac{\omega-i\varepsilon- k_{x}v_{\parallel}}{\omega_{\mathrm{c}}}\varphi-ik_{\perp}r_{\mathrm{L}}\sin \varphi\right)\right].\end{split} \tag{102}\] With respect to the second term in (102), we decompose the velocity derivative into directions perpendicular and parallel to the background magnetic field \[\begin{split}&\sum_{l=-\infty}^{+\infty}\int\mathrm{d}\varphi^{ \prime}\exp\left[\mp\left(i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l \omega_{\mathrm{c}}}{\omega_{\mathrm{c}}}\varphi^{\prime}\right)\right] \widetilde{\mathbf{E}}(\mathbf{k},\omega)\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}}J _{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)\\ &=\sum_{l=-\infty}^{+\infty}\int\mathrm{d}\varphi^{\prime}\exp \left[\mp\left(i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{ \mathrm{c}}}{\omega_{\mathrm{c}}}\varphi^{\prime}\right)\right]\left( \widetilde{E}_{\parallel}\frac{\partial F_{0\pm}}{\partial v_{\parallel}}+ \widetilde{E_{\perp}}\cos\varphi\frac{\partial F_{0\pm}}{\partial v_{\perp}} \right)J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)\\ &=\pm i\sum_{l=-\infty}^{+\infty}\frac{\omega_{\mathrm{c}}}{ \omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\widetilde{E}_ {\parallel}\frac{\partial 
F_{0\pm}}{\partial v_{\parallel}}J_{l}\left(\pm k_{ \perp}r_{\mathrm{L}}\right)\exp\left[\mp\left(i\frac{\omega-i\varepsilon-k_{x}v _{\parallel}\mp l\omega_{\mathrm{c}}}{\omega_{\mathrm{c}}}\varphi\right) \right]\\ &+\sum_{l=-\infty}^{+\infty}\int\mathrm{d}\varphi^{\prime}\exp \left[\mp\left(i\frac{\omega-i\varepsilon-k_{x}v_{\parallel}}{\omega_{\mathrm{c }}}\varphi^{\prime}\right)\right]\frac{J_{l+1}\left(\pm k_{\perp}r_{\mathrm{L}} \right)+J_{l-1}\left(\pm k_{\perp}r_{\mathrm{L}}\right)}{2}\widetilde{E}_{ \perp}\frac{\partial F_{0\pm}}{\partial v_{\perp}}e^{il\varphi^{\prime}}.\end{split} \tag{103}\] Here, in the transformation from line 2 to line 3, the trigonometric functions \(\cos\varphi\) were decomposed into exponential functions and absorbed into Bessel functions using equation (101). We now define the following differential operator \[\widetilde{E}_{\parallel}\frac{\partial F_{0\pm}}{\partial v_{\parallel}}\pm \widetilde{E}_{\perp}\frac{\partial F_{0\pm}}{\partial v_{\perp}}\frac{l}{k_{ \perp}r_{\mathrm{L}}}\equiv\widetilde{\mathbf{E}}\cdot\frac{\partial F_{0\pm}}{ \partial\mathbf{v}^{*}}. \tag{104}\] Using the Bessel function formula \[J_{l+1}(z)+J_{l-1}(z)=\frac{2l}{z}J_{l}(z), \tag{105}\] the fluctuations of the distribution function can be organized as follows \[\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},\omega)=-i\sum_{l=-\infty}^{+\infty} \sum_{m=-\infty}^{+\infty}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_ {m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi}}{\omega-i \varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\left\{\widetilde{ \delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)\mp\frac{e}{m_{\mathrm{e}}}\widetilde{\mathbf{E}} \cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}^{*}}\right\}. 
\tag{106}\] Hence, the positron/electron density fluctuation can be written as \[\widetilde{\delta n_{\pm}}(\mathbf{k},\omega)=-i\sum_{l,m}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi}}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\left\{\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)\mp\frac{e}{m_{\mathrm{e}}}\widetilde{\mathbf{E}}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}^{*}}\right\}. \tag{107}\] Then, for the first term of (107), the fluctuations of the distribution function just before the plasma scatters the electromagnetic waves (at time \(t=0\)) can be written in terms of individual particle-like representations as follows \[\begin{split}\widetilde{\delta F_{\pm}}(\mathbf{k},\mathbf{v},t=0)&=\sum_{j=1}^{N_{\pm}}\int\mathrm{d}^{3}\mathbf{r}\ e^{i\mathbf{k}\cdot\mathbf{r}}\delta\left(\mathbf{r}-\mathbf{r}_{\pm j}(0)\right)\delta(\mathbf{v}-\mathbf{v}_{\pm j}(0))\\ &=\sum_{j=1}^{N_{\pm}}e^{i\mathbf{k}\cdot\mathbf{r}_{\pm j}(0)}\delta(\mathbf{v}-\mathbf{v}_{\pm j}(0)).\end{split} \tag{116}\] Substituting this into the expression (107), we can write \[\begin{split}\widetilde{\delta n_{\pm}}(\mathbf{k},\omega)&=-i\sum_{j=1}^{N_{\pm}}\sum_{l,m}e^{i\mathbf{k}\cdot\mathbf{r}_{\pm j}(0)}\frac{J_{l}(\pm k_{\perp}r_{\mathrm{L}})J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi_{j}(0)}}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\\ &\mp\frac{4\pi e}{m_{\mathrm{e}}k^{2}}\sum_{l,m}\int\mathrm{d}^{3}\mathbf{v}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi}}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\widetilde{\rho}(\mathbf{k},\omega)\mathbf{k}\cdot\frac{\partial F_{0\pm}}{\partial\mathbf{v}^{*}}.\end{split} \tag{117}\] Note that the Maxwell equation \(\widetilde{\mathbf{E}}=\frac{4\pi i}{k^{2}}\mathbf{k}\widetilde{\rho}\) is applied to the second term in (107). In the velocity integral in the second term of (117), the following orthogonality relation from the angle integral \[\int_{0}^{2\pi}e^{i(l-m)\varphi}\mathrm{d}\varphi=\left\{\begin{array}{ll}2\pi&l=m\\ 0&l\neq m\end{array}\right. \tag{118}\] can be used to simplify the infinite sum of Bessel functions over \(l\) and \(m\). Using the equation for the electric susceptibility of electrons/positrons in a magnetized plasma (equivalent to equation (43)) \[H_{\pm}(\mathbf{k},\omega)\equiv\int\mathrm{d}^{3}\mathbf{v}\frac{4\pi e^{2}n_{\mathrm{e}}}{m_{\mathrm{e}}k^{2}}\sum_{l=-\infty}^{+\infty}\mathbf{k}\cdot\frac{\partial f_{\pm}}{\partial\mathbf{v}^{*}}\frac{J_{l}^{2}\left(\pm k_{\perp}r_{\mathrm{L}}\right)}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}, \tag{119}\] the density fluctuations can be organized as follows \[\widetilde{\delta n_{\pm}}(\mathbf{k},\omega)=-i\sum_{j=1}^{N_{\pm}}\sum_{l,m}e^{i\mathbf{k}\cdot\mathbf{r}_{\pm j}(0)}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi_{j}(0)}}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}\mp\frac{H_{\pm}(\mathbf{k},\omega)}{e}\widetilde{\rho}(\mathbf{k},\omega). \tag{120}\] Finally, the charge density can be obtained in a self-consistent manner because the charge density is expressed as a linear combination of electron-positron density fluctuations as follows \[\widetilde{\rho}(\mathbf{k},\omega)=e\widetilde{\delta n_{+}}(\mathbf{k},\omega)-e\widetilde{\delta n_{-}}(\mathbf{k},\omega). \tag{121}\] The longitudinal dielectric function is defined by the electric susceptibilities of the magnetized plasma as follows \[\varepsilon_{\mathrm{L}}(\mathbf{k},\omega)=1+H_{+}(\mathbf{k},\omega)+H_{-}(\mathbf{k},\omega).
\tag{122}\] Then the charge density is expressed as follows \[\begin{split}\widetilde{\rho}(\mathbf{k},\omega)&=-i\frac{e}{\varepsilon_{\mathrm{L}}}\left[\sum_{j=1}^{N_{+}}\sum_{l,m}e^{i\mathbf{k}\cdot\mathbf{r}_{+j}(0)}\frac{J_{l}\left(k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi_{j}(0)}}{\omega-i\varepsilon-k_{x}v_{\parallel}-l\omega_{\mathrm{c}}}\right.\\ &\left.-\sum_{h=1}^{N_{-}}\sum_{l,m}e^{i\mathbf{k}\cdot\mathbf{r}_{-h}(0)}\frac{J_{l}\left(-k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(-k_{\perp}r_{\mathrm{L}}\right)e^{i(l-m)\varphi_{h}(0)}}{\omega-i\varepsilon-k_{x}v_{\parallel}+l\omega_{\mathrm{c}}}\right].\end{split} \tag{123}\] Substituting this again into equation (120), we obtain the final expression for the density fluctuation \[\begin{split}\widetilde{\delta n_{\pm}}(\mathbf{k},\omega)&=-i\left[\left(1-\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\right)\sum_{j=1}^{N_{\pm}}e^{i\mathbf{k}\cdot\mathbf{r}_{\pm j}(0)}\sum_{l,m}\frac{J_{l}\left(\pm k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\pm k_{\perp}r_{\mathrm{L}}\right)}{\omega-i\varepsilon-k_{x}v_{\parallel}\mp l\omega_{\mathrm{c}}}e^{i(l-m)\phi_{0j}}\right.\\ &\left.+\frac{H_{\pm}}{\varepsilon_{\mathrm{L}}}\sum_{h=1}^{N_{\mp}}e^{i\mathbf{k}\cdot\mathbf{r}_{\mp h}(0)}\sum_{l,m}\frac{J_{l}\left(\mp k_{\perp}r_{\mathrm{L}}\right)J_{m}\left(\mp k_{\perp}r_{\mathrm{L}}\right)}{\omega-i\varepsilon-k_{x}v_{\parallel}\pm l\omega_{\mathrm{c}}}e^{i(l-m)\phi_{0h}}\right],\end{split} \tag{124}\] where \(\varphi(0)=\phi_{0}\) is the initial phase of particles (see equation (34)).
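The special-function identities used repeatedly above — the Jacobi-Anger expansion (101), the Bessel recurrence (105), and the angular orthogonality (118) — can be spot-checked numerically. The snippet below is an illustrative sketch; the values of \(z\), \(\varphi\) and \(l\) are arbitrary choices, not taken from the text:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_l(z)

z, phi = 1.7, 0.9

# Jacobi-Anger expansion (101): e^{iz sin(phi)} = sum_l J_l(z) e^{il phi}
ls = np.arange(-40, 41)  # |J_l(1.7)| decays rapidly, so +-40 terms suffice
jacobi_anger = np.sum(jv(ls, z) * np.exp(1j * ls * phi))
assert abs(np.exp(1j * z * np.sin(phi)) - jacobi_anger) < 1e-12

# Recurrence (105): J_{l+1}(z) + J_{l-1}(z) = (2l/z) J_l(z)
l = 3
assert abs(jv(l + 1, z) + jv(l - 1, z) - (2 * l / z) * jv(l, z)) < 1e-12

# Orthogonality (118): rectangle-rule quadrature over one full period
N = 4096
grid = 2 * np.pi * np.arange(N) / N
for m in (3, 5):
    integral = np.sum(np.exp(1j * (l - m) * grid)) * (2 * np.pi / N)
    assert abs(integral - (2 * np.pi if l == m else 0.0)) < 1e-9

print("identities (101), (105), (118) verified numerically")
```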
2303.04897
Degenerate complex Monge-Ampère equations with non-Kähler forms in bounded domains
In this paper, we study weak solutions to complex Monge-Amp\`ere equations of the form $(\omega + dd^c \varphi)^n= F(\varphi,.)d\mu$ on a bounded strictly pseudoconvex domain in $\mathbb{C}^n$, where $\omega$ is a smooth $(1,1)$-form, $0\leq F$ is a continuous non-decreasing function, and $\mu$ is a positive non-pluripolar measure. Our results extend previous works of Ko{\l}odziej and Nguyen \cite{KN15,KN23a,KN23b} who study bounded solutions, as well as Cegrell \cite{Ceg98,Ceg04,Ceg08}, Czy\.z \cite{Cz09}, Benelkourchi \cite{Ben09,Ben15} and others who treat the case when $\omega=0$ and/or $F=1$.
Mohammed Salouf
2023-03-08T21:29:43Z
http://arxiv.org/abs/2303.04897v2
# Degenerate complex Monge-Ampere equations with non-Kahler forms in bounded domains ###### Abstract. In this paper, we study weak solutions to complex Monge-Ampere equations of the form \((\omega+dd^{c}\varphi)^{n}=F(\varphi,.)d\mu\) on a bounded strictly pseudoconvex domain in \(\mathbb{C}^{n}\), where \(\omega\) is a smooth \((1,1)\)-form, \(0\leq F\) is a continuous non-decreasing function, and \(\mu\) is a positive non-pluripolar measure. Our results extend previous works of Kolodziej and Nguyen [15, 16, 17] who study bounded solutions, as well as Cegrell [12, 13, 14], Czyz [15], Benelkourchi [1, 1] and others who treat the case when \(\omega=0\) and/or \(F=1\). Key words and phrases: Monge-Ampere type equations, Cegrell's classes 2010 Mathematics Subject Classification: 32W20, 32U05, 32Q15, 35A23 The author is supported by the CNRST within the framework of the Excellence Research Grants Program under grant number 18 UCD2022. ## 1. Introduction Let \(\Omega\) be a bounded strictly pseudoconvex domain in \(\mathbb{C}^{n}\). By definition, there exists a smooth strongly plurisubharmonic function \(\rho\) defined in a neighborhood of \(\bar{\Omega}\) such that \(d\rho\neq 0\) on \(\partial\Omega\) and \(\Omega=\{\rho<0\}\). The Monge-Ampere operator is defined on smooth functions \(u\) by the formula \[(dd^{c}u)^{n}:=4^{n}n!\det\left(\frac{\partial^{2}u}{\partial z_{j}\partial\bar{z}_{k}}\right)dV_{n}.\] When \(u\) is plurisubharmonic, i.e. \(dd^{c}u\) is a positive \((1,1)\)-form, the above defines a smooth volume form. In the foundational work [1], Bedford and Taylor succeeded in defining \((dd^{c}u)^{n}\) as a positive Radon measure for all locally bounded plurisubharmonic functions \(u\). As is well known, extending this operator to unbounded psh functions is a delicate task.
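The normalization \(4^{n}n!\) can be sanity-checked on the model potential \(u(z)=|z|^{2}\), whose complex Hessian \((\partial^{2}u/\partial z_{j}\partial\bar{z}_{k})\) is the identity matrix, so \((dd^{c}u)^{n}=4^{n}n!\,dV_{n}\). Below is a small symbolic check for \(n=2\) (an illustrative sketch, not from the paper), using \(\partial^{2}/\partial z_{j}\partial\bar{z}_{k}=\tfrac{1}{4}(\partial_{x_{j}}\partial_{x_{k}}+\partial_{y_{j}}\partial_{y_{k}})+\tfrac{i}{4}(\partial_{x_{j}}\partial_{y_{k}}-\partial_{y_{j}}\partial_{x_{k}})\) for real-valued \(u\):

```python
import sympy as sp

n = 2
x = sp.symbols(f'x1:{n+1}', real=True)
y = sp.symbols(f'y1:{n+1}', real=True)

u = sum(xj**2 + yj**2 for xj, yj in zip(x, y))  # u = |z1|^2 + |z2|^2

def dz_dzbar(u, j, k):
    # d^2 u / dz_j dzbar_k for real u, with z_j = x_j + i y_j
    return (sp.diff(u, x[j], x[k]) + sp.diff(u, y[j], y[k])
            + sp.I * (sp.diff(u, x[j], y[k]) - sp.diff(u, y[j], x[k]))) / 4

H = sp.Matrix(n, n, lambda j, k: sp.simplify(dz_dzbar(u, j, k)))
print(H)          # identity matrix
print(sp.det(H))  # 1, hence (dd^c u)^n = 4^n * n! * dV_n
```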
Cegrell introduced in [12, 13, 14] several classes of (unbounded) plurisubharmonic functions \(u\), for which the Monge-Ampere operator \((dd^{c}.)^{n}\) is well defined and enjoys natural convergence properties. In this series of papers, he gave a detailed study of the solutions to the Dirichlet problem. In [1, 1], Benelkourchi studied weighted energy classes in the spirit of [1] and provided a complete description of solutions to the Dirichlet problem for weights that are convex or homogeneous. Our first main result provides the same description for concave weights \(\chi\) that have polynomial-like behavior, i.e. satisfying \[-t\chi^{\prime}(t)\leq-M\chi(t)\ \ \forall t\in\mathbb{R}^{-},\] with a uniform positive constant \(M\). We let \(\mathcal{W}_{M}^{+}\) denote the set of such weights. **Theorem A** (Theorem 3.5).: _Let \(\mu\) be a positive Radon measure, and let \(\chi\in\mathcal{W}_{M}^{+}\). The following conditions are equivalent:_ 1. _there exists a unique function_ \(\phi\in\mathcal{E}_{\chi}(\Omega)\) _such that_ \(\mu=(dd^{c}\phi)^{n}\)_;_ 2. \(\chi(\mathcal{E}_{\chi}(\Omega))\subset L^{1}(d\mu)\)_;_ 3. _there exists a constant_ \(C>0\) _such that_ \[\int_{\Omega}-\chi\circ\psi d\mu\leq C,\] _for all_ \(\psi\in\mathcal{E}_{0}(\Omega)\) _with_ \(E_{\chi}(\psi)\leq 1\)_;_ 4. _there exists a positive constant_ \(A\) _such that_ \[\int_{\Omega}-\chi\circ\psi d\mu\leq A\max(1,E_{\chi}(\psi)),\ \ \forall\psi\in\mathcal{E}_{0}(\Omega).\] The equivalence \((1)\Leftrightarrow(2)\) in the last theorem was conjectured in [1] in the case of compact Kahler manifolds. It was recently solved in the affirmative in [13] and [12]. Our proof uses ideas from [12], where plurisubharmonic envelopes play a crucial role.
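A concrete family of admissible weights is \(\chi_{\alpha}(t)=-(-t)^{\alpha}\) with \(0<\alpha\leq 1\): it is increasing, concave, tends to \(-\infty\), and satisfies \(-t\chi_{\alpha}^{\prime}(t)=-\alpha\chi_{\alpha}(t)\), so \(\chi_{\alpha}\in\mathcal{W}_{M}^{+}\) with \(M=\alpha\); the scaling bound \(-\chi(ct)\leq-c^{M}\chi(t)\) of Lemma 3.1 then holds with equality. A numerical spot-check of both inequalities (illustrative only, with an arbitrary choice of \(\alpha\)):

```python
import numpy as np

alpha = 0.5                      # any 0 < alpha <= 1 works the same way
M = alpha
chi = lambda t: -(-t) ** alpha   # chi: R^- -> R^-, concave and increasing

t = -np.logspace(-3, 3, 200)     # sample of negative arguments
dchi = alpha * (-t) ** (alpha - 1)   # chi'(t) = alpha * (-t)^(alpha - 1)

# defining inequality of W_M^+: -t chi'(t) <= -M chi(t) on R^-
assert np.all(-t * dchi <= -M * chi(t) + 1e-9)

# scaling inequality (Lemma 3.1): -chi(c t) <= -c^M chi(t) for c >= 1
for c in (1.0, 2.0, 10.0):
    assert np.all(-chi(c * t) <= -(c ** M) * chi(t) + 1e-9)
print("chi_alpha lies in W_M^+ with M =", M)
```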
Next, we turn our attention to solutions of complex Monge-Ampere type equations: given a positive Radon measure \(\mu\) vanishing on pluripolar sets and a bounded measurable function \(F:\mathbb{R}\times\Omega\to[0,+\infty[\) which is continuous and non-decreasing in the first variable, we are interested in the study of the following equation \[(\omega+dd^{c}\varphi)^{n}=F(\varphi,.)d\mu. \tag{1.1}\] Here \(\omega\) is a smooth \((1,1)\)-form defined on a neighborhood of \(\bar{\Omega}\). We also stress that we do not assume that \(\omega\) is closed. The Dirichlet problem for the Monge-Ampere operator \((dd^{c}.)^{n}\) corresponds to the case when \(F=1\) and \(\omega=0\). Bounded solutions to (1.1) have been studied by S. Kolodziej and N.C. Nguyen in [14, 15, 16]. To study unbounded solutions, we first introduce natural generalizations of Cegrell's classes. Let \(\phi\in\mathcal{E}(\Omega)\cap\mathcal{C}^{0}(\bar{\Omega})\) be a maximal function. For \(\mathcal{K}(\Omega)\in\{\mathcal{E}(\Omega),\,\mathcal{F}(\Omega),\,\mathcal{E}_{p}(\Omega),\,\mathcal{E}_{\chi}(\Omega),\,\mathcal{N}(\Omega)\}\), we define the class \(\mathcal{K}(\Omega,\omega,\phi)\) by: \[u\in\mathcal{K}(\Omega,\omega,\phi)\Leftrightarrow u\in\mathrm{PSH}(\Omega,\omega)\text{ and }u+\rho\in\mathcal{K}(\Omega,\phi).\] Here \(\rho\) is a plurisubharmonic function of class \(\mathcal{C}^{2}\) in a neighborhood of \(\bar{\Omega}\) such that \(\rho=0\) on \(\partial\Omega\) and \(\omega\leq dd^{c}\rho\). The set \(\mathcal{K}(\Omega,\omega)\) corresponds to \(\mathcal{K}(\Omega,\omega,\phi)\) with \(\phi=0\). When \(\omega=dd^{c}\rho\), the sets \(\mathcal{F}(\Omega,\omega)\) and \(\mathcal{E}_{\chi}(\Omega,\omega)\) coincide with their counterparts given in [11]. The following result asserts that the Monge-Ampere operator \((\omega+dd^{c}.)^{n}\) is well defined on the classes \(\mathcal{K}(\Omega,\omega,\phi)\).
**Theorem B** (Theorem 4.10).: _The Monge-Ampere measure \((\omega+dd^{c}u)^{n}\) is well defined as a Radon measure in \(\Omega\) for all \(u\in\mathcal{E}(\Omega,\omega)\). Furthermore, if \((u_{j})_{j}\) is a decreasing sequence in \(\mathcal{E}(\Omega,\omega)\) that converges to \(u\in\mathcal{E}(\Omega,\omega)\), then the sequence \(((\omega+dd^{c}u_{j})^{n})_{j}\) converges weakly to \((\omega+dd^{c}u)^{n}\)._ The direct adaptation of Cegrell's proof breaks down in our context because integration by parts is not available. Indeed, since the reference form is not closed, applying Stokes' theorem produces several torsion terms which are difficult to control even for bounded potentials. Moreover, Example 5.1 below shows that the operator \((\omega+dd^{c}.)^{n}\) may fail to satisfy the basic properties of the operator \((dd^{c}.)^{n}\). To prove Theorem B, we decompose \(\omega\) into a finite sum of \((1,1)\)-forms of the form \(f\alpha\), where \(f\) is a smooth function and \(\alpha\) is a positive closed \((1,1)\)-form. Then \((\omega+dd^{c}.)^{n}\) can be written as the sum of terms involving mixed Monge-Ampere operators. After defining the operator \((\omega+dd^{c}.)^{n}\), a natural question to ask is whether there are solutions to (1.1) in the classes \(\mathcal{K}(\Omega,\omega,\phi)\). This equation appeared for the first time in the problem of constructing Kahler-Einstein metrics in compact Kahler manifolds. When \(\omega=0\), Bedford and Taylor obtained bounded solutions to this equation in strictly pseudoconvex domains of \(\mathbb{C}^{n}\) [1]. This result has been generalized in many contexts [1, 1, 10]. Czyz proved that the equation (1.1) has a solution \(u\in\mathcal{N}^{a}(\Omega,\phi)\) if \(\mu\) is the Monge-Ampere measure of some function in \(\mathcal{N}^{a}(\Omega)\) [1]. Recently, Kolodziej and Nguyen extended this problem to the operator \((\omega+dd^{c}.)^{n}\).
They proved the existence of bounded solutions to (1.1) when the measure \(\mu\) is dominated by the Monge-Ampere measure of some function \(v\in\mathcal{E}_{0}(\Omega)\) [11, Theorem 3.1]. Here we extend this result to more singular measures \(\mu\), for which we seek unbounded weak solutions. **Theorem C** (Theorem 5.8).: _Assume \(\mu\leq(dd^{c}u)^{n}\) is dominated by the Monge-Ampere measure of some function \(u\in\mathcal{K}(\Omega)\). Then there is a uniquely determined \(\varphi\in\mathcal{K}(\Omega,\omega,\phi)\) solving (1.1)._ The uniqueness part in the previous theorem is a consequence of the following comparison principle that generalizes [1, Theorem 5.15], [1, Theorem 4.4], [11, Corollary 3.4] and [11, Proposition 2.2]. **Theorem D** (Corollary 5.7).: _Let \(\mu\leq\nu\) be positive Radon measures vanishing on pluripolar sets. Assume \(u\in\mathcal{N}(\Omega,\omega,\phi)\) and \(v\in\mathcal{E}(\Omega,\omega)\) are such that \(v\leq\phi\) on \(\partial\Omega\),_ \[(\omega+dd^{c}u)^{n}=F(u,.)d\mu\text{ and }(\omega+dd^{c}v)^{n}=F(v,.)d\nu.\] _Then \(u\geq v\)._ Let us briefly explain the idea of the proof of the comparison principle, which is inspired by [11]. We consider the following plurisubharmonic envelope \[P(u-v):=\sup\{\varphi\in\operatorname{PSH}^{-}(\Omega):\varphi\leq u-v\},\] assuming, without loss of generality, that \(u\leq v\). Using that the Monge-Ampere measure of the envelope is concentrated on the contact set \(\{P(u-v)=u-v\}\), we show that \((dd^{c}P(u-v))^{n}=0\), hence \(u=v\). The paper is organized as follows. In Section 2, we recall some properties of the Cegrell classes that we shall use in the sequel. We then move on to the study of plurisubharmonic envelopes. Section 3 is devoted to the study of the Dirichlet problem in the weighted energy class \(\mathcal{E}_{\chi}(\Omega)\) for \(\chi\in\mathcal{W}_{M}^{+}\).
In Section 4, we prove that the operator \((\omega+dd^{c}.)^{n}\) is well defined on the large set \(\mathcal{E}(\Omega,\omega)\) and that it is continuous along decreasing sequences. The proofs of Theorem C and Theorem D will be the subject of Section 5. Throughout this paper, \(\Omega\) is a bounded strictly pseudoconvex domain of \(\mathbb{C}^{n}\) and \(n\geq 1\). ## Acknowledgment I would like to express my gratitude to my supervisors Omar Alehyane and Chinh H. Lu for introducing the subject, for their generosity in sharing knowledge and expertise, and for all the time they devoted. ## 2. Cegrell's classes In this section, we first present a brief introduction to the Cegrell classes, and then we study plurisubharmonic envelopes. ### The classes \(\mathcal{E}_{0}\), \(\mathcal{E}\), \(\mathcal{E}_{p}\), \(\mathcal{F}\) and \(\mathcal{N}\) In [5, 5], Cegrell introduced the following classes which carry his name: \[\mathcal{E}_{0}(\Omega)=\{u\in\mathrm{PSH}(\Omega)\cap L^{\infty}(\Omega):u=0\text{ on }\partial\Omega\text{ and }\int_{\Omega}(dd^{c}u)^{n}<+\infty\},\] \[\mathcal{E}(\Omega)=\{u\in\mathrm{PSH}^{-}(\Omega):\forall z\in\Omega,\exists V\in\mathcal{V}(z),\exists(u_{j})_{j}\subset\mathcal{E}_{0}(\Omega),\] \[u_{j}\searrow u\text{ on }V\text{ and }\sup_{j}\int_{\Omega}(dd^{c}u_{j})^{n}<+\infty\},\] \[\mathcal{F}(\Omega)=\{u\in\mathrm{PSH}^{-}(\Omega):\exists u_{j}\in\mathcal{E}_{0}(\Omega),\;u_{j}\searrow u\text{ and }\sup_{j}\int_{\Omega}(dd^{c}u_{j})^{n}<+\infty\},\] and for \(p>0\), \[\mathcal{E}_{p}(\Omega)=\{u\in\mathrm{PSH}(\Omega):\exists(u_{j})_{j}\subset\mathcal{E}_{0}(\Omega),\;u_{j}\searrow u\text{ and }\sup_{j}\int_{\Omega}|u_{j}|^{p}(dd^{c}u_{j})^{n}<+\infty\}.\] We have the inclusions \[\mathcal{E}_{0}(\Omega)\subset\mathcal{E}_{p}(\Omega)\cap\mathcal{F}(\Omega)\subset\mathcal{E}_{p}(\Omega)\cup\mathcal{F}(\Omega)\subset\mathcal{E}(\Omega).\] If \(u\in\mathcal{E}(\Omega)\), then \((dd^{c}u)^{n}\) defines a positive Radon measure by [5, Theorem 
4.2]. The set \(\mathcal{E}(\Omega)\) is the largest set for which the Monge-Ampere operator \((dd^{c}.)^{n}\) is well defined and continuous along decreasing sequences [5, Theorem 4.5]. We recall the class \(\mathcal{N}(\Omega)\) defined in [5]. Let \(u\in\mathcal{E}(\Omega)\), and let \((\Omega_{j})\) be a fundamental sequence of strictly pseudoconvex subdomains of \(\Omega\). Define \[u_{j}=\sup\{\varphi\in\mathrm{PSH}^{-}(\Omega):\varphi\leq u\text{ on }\mathcal{C}\Omega_{j}\}.\] Note that we have \(u\leq u_{j}\leq u_{j+1}\) for every \(j\). The class \(\mathcal{N}(\Omega)\) is the set of \(u\in\mathcal{E}(\Omega)\) such that \(\tilde{u}:=(\lim u_{j})^{*}=0\). We have \[\mathcal{F}(\Omega)\subset\mathcal{N}(\Omega)\text{ and }\mathcal{E}_{p}(\Omega)\subset\mathcal{N}(\Omega),\;\forall p.\] We denote by \(\mathcal{E}^{a}(\Omega)\) the set of \(u\in\mathcal{E}(\Omega)\) such that \((dd^{c}u)^{n}\) vanishes on pluripolar sets. The following result is known as the comparison principle. **Theorem 2.1** (Theorem 3.12 and Corollary 3.13 in [5]).: _Let \(u\in\mathcal{N}^{a}(\Omega)\) and let \(v\in\mathcal{E}(\Omega)\). We have_ \[\int_{\{u<v\}}(dd^{c}v)^{n}\leq\int_{\{u<v\}}(dd^{c}u)^{n}.\] _In particular, if \((dd^{c}u)^{n}\leq(dd^{c}v)^{n}\) then \(u\geq v\)._ The following theorem gives an idea about the range of the Monge-Ampere operator on \(\mathcal{N}^{a}(\Omega)\). **Theorem 2.2** (Proposition 5.2 in [11]).: _Let \(\mu\) be a positive Radon measure vanishing on pluripolar sets. Suppose there is \(\psi\in\mathcal{E}(\Omega)\) with \(\psi\neq 0\) and \(\int_{\Omega}\psi d\mu>-\infty\). Then there is a uniquely determined \(u\in\mathcal{N}^{a}(\Omega)\) such that_ \[(dd^{c}u)^{n}=\mu.\] Note that the converse of this theorem is not true as Cegrell showed in [11, Example 5.3].
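The pointwise conclusion of Theorem 2.1 — a larger Monge-Ampere measure forces a smaller potential — already has a transparent one-dimensional analogue, since for \(n=1\) the operator \(dd^{c}u\) is, up to a constant, the Laplacian. The finite-difference sketch below is entirely invented for illustration (the grid size and the data \(f\leq g\) are arbitrary); it solves two discrete Dirichlet problems and checks that the potential with the larger right-hand side lies below:

```python
import numpy as np

# Discrete analogue on (0, 1): u'' = f with u(0) = u(1) = 0
# (subharmonic in 1D corresponds to f >= 0).
N = 200
h = 1.0 / N
xs = np.linspace(0, 1, N + 1)[1:-1]  # interior grid points

def solve_dirichlet(f):
    # Tridiagonal second-difference matrix: (u_{i-1} - 2 u_i + u_{i+1}) / h^2 = f_i
    A = (np.diag(-2 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
         + np.diag(np.ones(N - 2), -1)) / h**2
    return np.linalg.solve(A, f)

f = np.ones_like(xs)   # smaller "mass"
g = 1.0 + xs           # larger: f <= g pointwise
u, v = solve_dirichlet(f), solve_dirichlet(g)

# Comparison principle: larger right-hand side gives a smaller potential
assert np.all(u >= v - 1e-9)
print(float(u.min()), float(v.min()))  # u solves u'' = 1, so min(u) = -1/8
```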
### Plurisubharmonic envelopes This subsection is devoted to the study of the following plurisubharmonic envelope: for a measurable function \(f\), the envelope \(P(f)\) is defined by \[P(f)=\left(\sup\{\varphi\in\mathrm{PSH}(\Omega):\varphi\leq f\}\right)^{*}.\] The study of this object has attracted the interest of several authors over the last decade (see [1, 1, 1, 10], and the references therein for more information). We shall use this envelope to prove a general comparison principle (see the proof of Theorem 5.6), and to prove Theorem A in the introduction. We first prove the following proposition. **Proposition 2.3**.: _If \(f\) is a measurable function, then_ \[P(f)=\sup\{\varphi\in\mathrm{PSH}(\Omega):\varphi\leq f\text{ quasi-everywhere }\},\] _where the term quasi-everywhere means outside a pluripolar set. In particular, if \((f_{j})\) is a decreasing sequence of measurable functions converging to \(f\), then \(P(f_{j})\) decreases to \(P(f)\)._ Proof.: Let us denote by \(h\) the function on the right-hand side. We have to prove that \(P(f)=h\). It follows from [1, Proposition 5.1] that \(P(f)\leq f\) quasi-everywhere and hence \(P(f)\leq h\). On the other hand, by Choquet's lemma, there is \(\varphi_{j}\in\mathrm{PSH}^{-}(\Omega)\) such that \(\varphi_{j}\leq f\) quasi-everywhere and \(h^{*}=(\sup\varphi_{j})^{*}\). Since a countable union of pluripolar sets is pluripolar, it follows that \(h^{*}\leq f\) quasi-everywhere and hence \(h=h^{*}\in\mathrm{PSH}(\Omega)\). Since \(h\leq f\) quasi-everywhere, there is \(\phi\in\mathrm{PSH}^{-}(\Omega)\) such that \(h+\varepsilon\phi\leq f\) everywhere for all \(\varepsilon>0\). It follows that \(h+\varepsilon\phi\leq P(f)\). Letting \(\varepsilon\to 0\), we get \(h\leq P(f)\) quasi-everywhere and hence everywhere because these are psh functions. We conclude that \(P(f)=h\). Let \((f_{j})\) be a decreasing sequence of measurable functions converging to \(f\).
Obviously, the sequence \((P(f_{j}))_{j}\) is decreasing and \(P(f)\leq P(f_{j})\) for all \(j\). On the other hand, we have \(P(f_{j})\leq f_{j}\) quasi-everywhere for every \(j\). Therefore \(\lim P(f_{j})\leq f\) quasi-everywhere and hence \(\lim P(f_{j})=P(f)\). The following theorem is a generalization of [1, Corollary 9.2] to quasi-continuous functions \(f\). **Theorem 2.4**.: _Assume \(f\) is quasi-continuous, \(f\leq 0\), and there is \(\psi\in\mathcal{E}^{a}(\Omega)\) such that \(\psi\leq f\). Then \(P(f)\in\mathcal{E}^{a}(\Omega)\) and \((dd^{c}P(f))^{n}\) is concentrated on \(\{P(f)=f\}\)._ Before proving the theorem, we recall the Monge-Ampere capacity defined in [1]: given a Borel subset \(E\subset\Omega\), we set \[\mathrm{Cap}(E):=\sup\left\{\int_{E}(dd^{c}u)^{n}:\ u\in\mathrm{PSH}(\Omega),\ -1\leq u\leq 0\right\}.\] A function \(f\) is called quasi-continuous if for every \(\varepsilon>0\), there is a Borel set \(E\subset\Omega\) such that \(\mathrm{Cap}(E)\leq\varepsilon\) and the restriction of \(f\) on \(\Omega\setminus E\) is continuous. We now proceed to the proof of Theorem 2.4. Proof.: Assume first that \(f\) is bounded from below. For each \(j\geq 1\), there is an open set \(U_{j}\subset\Omega\) such that \(\mathrm{Cap}(U_{j})\leq 2^{-j-1}\) and the restriction of \(f\) on \(\Omega\setminus U_{j}\) is continuous. By taking \(\cup_{k\geq j}U_{k}\) we can assume that the sequence \(U_{j}\) is decreasing. By the Tietze extension theorem, there is a function \(f_{j}\) continuous on \(\Omega\) such that \(f_{j}=f\) on \(D_{j}:=\Omega\setminus U_{j}\). We can assume that there is a constant \(C_{0}\) such that \(-C_{0}\leq f_{j}\leq 0\), for all \(j\). For each \(j\) we define \[g_{j}:=\sup_{k\geq j}f_{k}.\] We observe that \(g_{j}\) is lower-semicontinuous in \(\Omega\), \(g_{j}=f\) on \(D_{j}\), and \(g_{j}\searrow g\) in \(\Omega\).
Since the sequence \((D_{j})\) is increasing, it follows that \(g_{j}=f\) on \(D_{k}\) for all \(k\leq j\). Thus letting \(j\to+\infty\) gives \(g=f\) on \(D_{k}\) for all \(k\). We then infer \(g=f\) quasi-everywhere in \(\Omega\), hence \(P(f)=P(g)\) by Proposition 2.3. Since \(g_{j}\) is lower-semicontinuous in \(\Omega\), by the balayage method [1, Corollary 9.2], we have \[\int_{\Omega}(g_{j}-P(g_{j}))(dd^{c}P(g_{j}))^{n}=0.\] From this we get \[\int_{\Omega}|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}\] \[=\int_{D_{j}}|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}+\int_{U_{j}}|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}\] \[=\int_{D_{j}}(g_{j}-P(g_{j}))(dd^{c}P(g_{j}))^{n}+\int_{U_{j}}|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}\] \[\leq 2C_{0}\int_{U_{j}}(dd^{c}P(g_{j}))^{n}\] \[\leq 2(C_{0})^{n+1}\int_{U_{j}}(dd^{c}P(g_{j})/C_{0})^{n}\] \[\leq 2(C_{0})^{n+1}\mathrm{Cap}(U_{j})\leq(C_{0})^{n+1}2^{-j}.\] The functions \(|f-P(g_{j})|\) are uniformly bounded and converge in capacity to the quasi-continuous function \(f-P(f)\) because \[||f-P(g_{j})|-f+P(f)|\leq|P(g_{j})-P(f)|,\] and the sequence \((P(g_{j}))_{j}\) decreases to \(P(f)\). It thus follows from [1, Theorem 4.26] that \(|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}\) weakly converges to \((f-P(f))(dd^{c}P(f))^{n}\). Hence \[0=\liminf_{j\to+\infty}\int_{\Omega}|f-P(g_{j})|(dd^{c}P(g_{j}))^{n}\geq\int_{\Omega}(f-P(f))(dd^{c}P(f))^{n}\geq 0.\] From this we infer that \((dd^{c}P(f))^{n}\) is concentrated on the contact set \(\{P(f)=f\}\). To prove the general case we approximate \(f\) by \(f_{j}:=\max(f,-j)\). Then \[\int_{\{P(f_{j})<f_{j}\}}(dd^{c}P(f_{j}))^{n}=0.\] Fixing \(C>0\), we have by [13, Theorem 4.1] \[\int_{\{P(f_{j})<f_{j}\}\cap\{P(f)>-C\}}(dd^{c}\max(P(f_{j}),-C))^{n}=0.\] Fixing an integer \(k\in\mathbb{N}\), we have \[\int_{\{P(f_{k})<f\}\cap\{P(f)>-C\}}(dd^{c}\max(P(f_{j}),-C))^{n}=0\ \forall j\geq k,\] because in this case \(\{P(f_{k})<f\}\subset\{P(f_{j})<f_{j}\}\).
Set \[h_{k}=\left(\max(f,P(f_{k}))-P(f_{k})\right)\times\left(\max(P(f),-C)+C\right).\] The function \(h_{k}\) is positive, bounded, quasi-continuous, and satisfies \[\int_{\Omega}h_{k}(dd^{c}\max(P(f_{j}),-C))^{n}=0\ \ \forall j\geq k.\] Letting \(j\to+\infty\) we obtain again by [1, Theorem 4.26] \[\int_{\{P(f_{k})<f\}\cap\{P(f)>-C\}}(dd^{c}\max(P(f),-C))^{n}=0.\] Next, letting \(k\to+\infty\) we arrive at \[\int_{\{P(f)<f\}\cap\{P(f)>-C\}}(dd^{c}\max(P(f),-C))^{n}=0.\] By [13, Theorem 4.1] we then have \[\int_{\{P(f)<f\}\cap\{P(f)>-C\}}(dd^{c}P(f))^{n}=0.\] We finally let \(C\to+\infty\) to obtain the result since \((dd^{c}P(f))^{n}\) does not charge pluripolar sets. **Remark 2.5**.: We use the hypothesis \(\psi\leq f\), for certain \(\psi\in\mathcal{E}^{a}(\Omega)\), to ensure that \(P(f)\in\mathcal{E}^{a}(\Omega)\). If \(P(f)\in\mathcal{E}(\Omega)\) charges the pluripolar set \(\{P(f)=-\infty\}\), then the same proof shows that the measure \((dd^{c}P(f))^{n}\) vanishes on \(\{P(f)<f\}\cap\{P(f)>-\infty\}\). ## 3. High energy classes In this section, we study the existence of solutions to the following Dirichlet problem \[(dd^{c}u)^{n}=\mu\quad u\in\mathcal{E}_{\chi}(\Omega), \tag{3.1}\] where \(\mu\) is a positive Radon measure vanishing on pluripolar sets. The equation (3.1) has been studied by Benelkourchi [1, 1] in the case of convex or homogeneous weights. We extend these results to a special type of concave functions \(\chi\). First, we recall the class \(\mathcal{E}_{\chi}(\Omega)\) defined in [1]; we denote by \(\mathcal{W}^{-}\) (resp \(\mathcal{W}^{+}\)) the set of convex (resp concave) increasing functions \(\chi:\mathbb{R}^{-}\to\mathbb{R}^{-}\) such that \(\chi(-\infty)=-\infty\). The set \(\mathcal{W}^{+}_{M}\) consists of functions \(\chi\in\mathcal{W}^{+}\) with the property \[|t\chi^{\prime}(t)|\leq M|\chi(t)|,\ \ \forall t\in\mathbb{R}^{-}.\] Let \(\chi\in\mathcal{W}:=\mathcal{W}^{-}\cup\mathcal{W}^{+}\). 
The set \(\mathcal{E}_{\chi}(\Omega)\) is defined by \[\mathcal{E}_{\chi}(\Omega)=\left\{u\in\mathrm{PSH}(\Omega):\exists(u_{j})_{j}\subset\mathcal{E}_{0}(\Omega),\;u_{j}\searrow u\;\;\text{and}\;\sup_{j}\int_{\Omega}-\chi\circ u_{j}(dd^{c}u_{j})^{n}<+\infty\right\}.\] For \(u\in\mathcal{E}_{\chi}(\Omega)\), we use the notation \[E_{\chi}(u)=\int_{\Omega}-\chi(u)(dd^{c}u)^{n}.\] The following lemma will be very helpful in the sequel. **Lemma 3.1** (Lemma 2.2 in [10]).: _If \(\chi\in\mathcal{W}^{+}_{M}\), then_ \[-\chi(ct)\leq-c^{M}\chi(t),\] _for all \(t\leq 0\) and all \(c\geq 1\)._ The last lemma allows us to prove the following proposition: **Proposition 3.2**.: _Fix \(\chi\in\mathcal{W}^{+}_{M}\)._ 1. _We have_ \(\chi(t)<0\) _for all_ \(t<0\)_; in particular_ \[\mathcal{E}_{\chi}(\Omega)=\{u\in\mathcal{N}^{a}(\Omega):\chi(u)\in L^{1}((dd^{c}u)^{n})\}.\] 2. _The set_ \(\mathcal{E}_{\chi}(\Omega)\) _is a convex cone._ Proof.: 1. By contradiction, assume that there is \(t_{0}<0\) such that \(\chi(t_{0})=0\). If \(t_{0}\leq-1\) then \(\chi(-1)=0\). If \(t_{0}>-1\) then \[-\chi(-1)=-\chi(t_{0}/|t_{0}|)\leq-\frac{1}{|t_{0}|^{M}}\chi(t_{0})=0.\] It follows that \(\chi(-1)=0\) in both cases. Let \(t\in\mathbb{R}_{-}\). We have \[-\chi(t)=-\chi((-t)\times(-1))\leq-\max(|t|^{M},|t|)\chi(-1)=0.\] We conclude that \(\chi=0\), which is absurd because \(\chi(-\infty)=-\infty\). The second assertion follows from [1, Corollary 3.3]. 2. The set \(\mathcal{E}_{\chi}(\Omega)\) is convex by [1, Proposition 4.3]. It suffices to prove that if \(u\in\mathcal{E}_{\chi}(\Omega)\) then so is \(2u\). 
This statement follows easily from the last lemma: \[E_{\chi}(2u)=2^{n}\int_{\Omega}-\chi(2u)(dd^{c}u)^{n}\leq 2^{n+M}E_{\chi}(u).\] In [1, Definition 4.1], the authors defined the following class for an increasing function \(\chi\) \[\tilde{\mathcal{E}}_{\chi}(\Omega)=\{u\in\mathrm{PSH}^{-}(\Omega):\int_{0}^{+ \infty}t^{n}\chi^{\prime}(-t)\mathrm{Cap}_{\Omega}(u<-t)dt<+\infty\}.\] They proved the inclusions \[\tilde{\mathcal{E}}_{\chi}(\Omega)\subset\mathcal{E}_{\chi}(\Omega)\subset \tilde{\mathcal{E}}_{\tilde{\chi}}(\Omega),\] where \(\tilde{\chi}(t)=\chi(t/2)\)[1, Proposition 4.2]. We have the following observation: **Proposition 3.3**.: _If \(\chi\in\mathcal{W}^{+}_{M}\), then \(\tilde{\mathcal{E}}_{\chi}(\Omega)=\mathcal{E}_{\chi}(\Omega)\)._ Proof.: We have \[\mathcal{E}_{\chi}(\Omega)\subset\tilde{\mathcal{E}}_{\tilde{\chi}}(\Omega)\subset \mathcal{E}_{\tilde{\chi}}(\Omega).\] So it suffices to prove that \(\mathcal{E}_{\tilde{\chi}}(\Omega)\subset\mathcal{E}_{\chi}(\Omega)\). Let \(u\in\mathcal{E}_{\tilde{\chi}}(\Omega)\). We have \[E_{\chi}(u)=\int_{\Omega}-\chi(u)(dd^{c}u)^{n}\leq 2^{M}\int_{\Omega}-\chi(u/2) (dd^{c}u)^{n}=2^{M}E_{\tilde{\chi}}(u)<+\infty.\] **Theorem 3.4**.: _Fix \(\chi\in\mathcal{W}_{M}^{+}\), and let \(u,v\in\mathcal{E}_{\chi}(\Omega)\). We have_ \[\int_{\Omega}-\chi\circ u(dd^{c}v)^{n}\leq\lambda^{-n}E_{\chi}(2\lambda v)+2^{ M}\lambda^{-n}E_{\chi}(u),\] _for all \(\lambda>0\)._ Proof.: The proof uses ideas from [1, Theorem 5.1]. Fix \(\lambda>0\). 
Using the fact that \[(u<-t)\subset(u<\lambda v-t/2)\cup(\lambda v<-t/2),\] we get \[\lambda^{n}\int_{\Omega}-\chi(u)(dd^{c}v)^{n} =\int_{\Omega}-\chi(u)(dd^{c}\lambda v)^{n}\] \[=\int_{0}^{+\infty}(dd^{c}\lambda v)^{n}(\chi\circ u<-t)dt\] \[=\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}\lambda v)^{n}(u<-t)dt\] \[\leq\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}\lambda v)^{n}(u<\lambda v-t/2)dt\] \[+\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}\lambda v)^{n}(\lambda v<-t/2)dt.\] On the one hand, \[\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}\lambda v)^{n}(\lambda v<-t/2)dt\leq\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}2\lambda v)^{n}(2\lambda v<-t)dt=E_{\chi}(2\lambda v).\] On the other hand, we have by [1, Corollary 3.13] \[\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}\lambda v)^{n}(u<\lambda v-t/2)dt \leq\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}u)^{n}(u<\lambda v-t/2)dt\] \[\leq\int_{0}^{+\infty}\chi^{\prime}(-t)(dd^{c}u)^{n}(u<-t/2)dt\] \[\leq\int_{\Omega}-\chi(2u)(dd^{c}u)^{n}\] \[\leq 2^{M}E_{\chi}(u).\] The following result corresponds to [1, Theorem 6] in the case of convex or homogeneous weight \(\chi\) (see also [1, Theorem A] for the case \(\chi(t)=t\)). **Theorem 3.5**.: _Let \(\mu\) be a positive Radon measure, and let \(\chi\in\mathcal{W}_{M}^{+}\). The following conditions are equivalent:_ 1. _there exists a unique function_ \(\phi\in\mathcal{E}_{\chi}(\Omega)\) _such that_ \(\mu=(dd^{c}\phi)^{n}\)_;_ 2. \(\chi(\mathcal{E}_{\chi}(\Omega))\subset L^{1}(d\mu)\)_;_ 3. _there exists a constant_ \(C>0\) _such that_ \[\int_{\Omega}-\chi\circ\psi d\mu\leq C,\] _for all_ \(\psi\in\mathcal{E}_{0}(\Omega)\)_,_ \(E_{\chi}(\psi)\leq 1;\)__ 4. 
_there exists a positive constant_ \(A\) _such that_ \[\int_{\Omega}-\chi\circ\psi d\mu\leq A\max(1,E_{\chi}(\psi)),\ \ \forall\psi\in\mathcal{E}_{0}(\Omega).\] Proof.: The implication \((1)\Rightarrow(2)\) is obvious because \(\mathcal{E}_{\chi}(\Omega)\) is a convex cone and \[\int_{\Omega}-\chi\circ u(dd^{c}v)^{n}\leq E_{\chi}(u+v)<+\infty,\ \ \forall u,v\in\mathcal{E}_{\chi}(\Omega).\] The proof of the implication \((2)\Rightarrow(3)\) is similar to that of \((2)\Rightarrow(3)\) in [1, Theorem 6]; we repeat it for the reader's convenience. By contradiction, let \((u_{j})\subset\mathcal{E}_{0}(\Omega)\) be such that \(E_{\chi}(u_{j})\leq 1\) and \[\int_{\Omega}-\chi(u_{j})d\mu\geq 2^{3Mj}.\] Consider the function \[u:=\sum_{j}\frac{1}{2^{2j}}u_{j}.\] Since \((u<-s)\subset\cup_{j}(u_{j}<-2^{j}s)\), we get \[\mathrm{Cap}_{\Omega}(u<-s)\leq\sum_{j}\mathrm{Cap}_{\Omega}(u_{j}<-2^{j}s)\] and therefore \[\int_{0}^{\infty}s^{n}\chi^{\prime}(-s)\mathrm{Cap}_{\Omega}(u<-s)ds\] \[\leq\int_{0}^{\infty}s^{n}\chi^{\prime}(-s)\sum_{j}\mathrm{Cap}_{\Omega}(u_{j}<-2^{j}s)ds\] \[=\sum_{j}1/2^{nj}\int_{0}^{\infty}(2^{j}s)^{n}\chi^{\prime}(-s)\mathrm{Cap}_{\Omega}(u_{j}<-2^{j}s)ds.\] By the change of variables \(t=2^{j}s\), we obtain \[\int_{0}^{\infty}s^{n}\chi^{\prime}(-s)\text{Cap}_{\Omega}(u<-s)ds\] \[=\sum_{j}1/2^{nj}\int_{0}^{\infty}t^{n}\chi^{\prime}(-2^{-j}t)\text{Cap}_{\Omega}(u_{j}<-t)2^{-j}dt\] \[(*) \leq\sum_{j}1/2^{(n+1)j}\int_{0}^{\infty}t^{n}\chi^{\prime}(-t)\text{Cap}_{\Omega}(u_{j}<-t)dt\] \[\leq\sum_{j}1/2^{(n+1)j}<\infty.\] \((*)\) is justified by the fact that \(\chi\) is concave, so \(\chi^{\prime}\) is non-increasing. That proves \(u\in\tilde{\mathcal{E}}_{\chi}(\Omega)\) and therefore \(u\in\mathcal{E}_{\chi}(\Omega)\) by Proposition 3.3. Since \(u\leq 2^{-2j}u_{j}\), we get \(\chi(u)\leq\chi(2^{-2j}u_{j})\) because \(\chi\) is increasing. 
It follows from Lemma 3.1 that \(\chi(u)\leq 2^{-2Mj}\chi(u_{j})\), and therefore \[\int_{\Omega}-\chi(u)d\mu\geq 2^{-2Mj}\int_{\Omega}-\chi(u_{j})d\mu\geq 2^{Mj}.\] This contradicts (2). We move on to the proof of the implication (3) \(\Rightarrow\) (4). Consider \(\psi\in\mathcal{E}_{0}(\Omega)\) such that \(E_{\chi}(\psi)\geq 1\). If \(E_{\chi}(\psi)\leq 2^{n+1}\), then \(E_{\chi}(1/2\;\psi)\leq 1\) and therefore \[\int_{\Omega}-\chi(\psi)d\mu\leq 2^{M}\int_{\Omega}-\chi(1/2\;\psi)d\mu\leq 2^{M}C.\] Suppose \(E_{\chi}(\psi)\geq 2^{n+1}\), and set \(\varepsilon=1/E_{\chi}(\psi)\). For \(f=\chi^{-1}(\varepsilon\chi(\psi))\), consider the envelope \[P(f)=\sup\{h\in\text{PSH}(\Omega):h\leq f\}.\] It is clear that \(P(f)\in\mathcal{E}_{0}(\Omega)\) since \(f\) is upper-semicontinuous and satisfies \(f\geq\psi\). Theorem 2.4 implies \[I_{\{P(f)<f\}}(dd^{c}P(f))^{n}=0.\] It thus follows that \[E_{\chi}(P(f)) =\int_{\Omega}-\chi(P(f))(dd^{c}P(f))^{n}\] \[=\int_{\{P(f)=f\}}-\chi(f)(dd^{c}P(f))^{n}\] \[=\varepsilon\int_{\Omega}-\chi(\psi)(dd^{c}P(f))^{n}.\] Applying Theorem 3.4 for \(\lambda=1/2\), we get \[E_{\chi}(P(f))\leq 2^{n}\varepsilon E_{\chi}(P(f))+2^{M+n}.\] That implies \[E_{\chi}(P(f))\leq\frac{2^{M+n}}{1-2^{n}\varepsilon}\leq 2^{M+n+1},\] and therefore \[E_{\chi}(1/2^{M+1}\times P(f))\leq 1.\] Furthermore, since \(P(f)\leq f=\chi^{-1}(\varepsilon\chi(\psi))\), we obtain \[\int_{\Omega}-\chi(\psi)d\mu\leq\varepsilon^{-1}\int_{\Omega}-\chi(P(f))d\mu\leq 2^{M(M+1)}CE_{\chi}(\psi).\] We conclude that, for all \(\psi\in\mathcal{E}_{0}(\Omega)\), we have \[\int_{\Omega}-\chi(\psi)d\mu\leq 2^{M}C+2^{M(M+1)}CE_{\chi}(\psi)\leq A\max(1,E_{\chi}(\psi)),\] where we have taken \(A=2^{M(M+1)+1}C\). 
For the implication (4) \(\Rightarrow\) (1), setting \(\tilde{\mu}=1/(2A)\;\mu\), we have \[\int_{\Omega}-\chi(\psi)d\tilde{\mu}\leq 1/2\max(1,E_{\chi}(\psi)),\;\;\forall\psi\in\mathcal{E}_{0}(\Omega).\] Since \[\limsup_{t\to+\infty}\frac{\max(1,t)}{2t}=1/2<1,\] we can construct a function \(\tilde{u}\in\mathcal{E}_{\chi}(\Omega)\) such that \(\tilde{\mu}=(dd^{c}\tilde{u})^{n}\) (the argument is the same as that of the proof of the implication (5) \(\Rightarrow\) (1) in [1, Theorem 6]). The result follows by taking \(u=(2A)^{1/n}\tilde{u}\). ## 4. Definition of the Monge-Ampere operator \((\omega+dd^{c}.)^{n}\) In this section, we study the operator \((\omega+dd^{c}.)^{n}\) for a smooth real \((1,1)\)-form \(\omega\) not necessarily closed. It follows from the work of Bedford and Taylor that this operator is well defined on \(\mathrm{PSH}(\Omega,\omega)\cap L^{\infty}(\Omega)\)[1] (a detailed construction is given in [15]). Here we extend this definition to unbounded functions \(u\in\mathcal{E}(\Omega,\omega)\). ### The current \((dd^{c}.)^{k}\) First, we show that the current \((dd^{c}u)^{k}\) is well defined for any \(u\in\mathcal{E}(\Omega)\) and any \(1\leq k\leq n\). The following proposition will be essential for our work. **Proposition 4.1**.: _Fix \(p\in\{1,..,n\}\), and let \(\alpha\) be a smooth \((p,p)-\)form defined in a neighborhood of \(\bar{\Omega}\). 
One can write_ \[\alpha=\sum_{j\in J}f_{j}T_{j},\] _where_ * \(J\) _is a finite set;_ * \((f_{j})_{j\in J}\) _are smooth functions with complex values;_ * \(T_{j}=dd^{c}u_{1}^{j}\wedge..\wedge dd^{c}u_{p}^{j}\)_, where, for every_ \(j\in J\) _and every_ \(i=1,..,p\)_,_ \(u_{i}^{j}\) _is a smooth negative plurisubharmonic function defined in a neighborhood of_ \(\bar{\Omega}\)_._ Proof.: Write \[\alpha=i^{p^{2}}\sum_{|I|=|K|=p}\alpha_{IK}dz_{I}\wedge d\bar{z}_{K}.\] It is thus enough to show that, for all \(l,k\), the \((1,1)\)-form \(dz_{l}\wedge d\bar{z}_{k}\) can be written as a linear combination of closed positive \((1,1)\)-forms with smooth coefficients. This is clear for \(l=k\). For \(l\neq k\), it follows from [1, Lemma 1.4] that \[4dz_{l}\wedge d\bar{z}_{k} =(dz_{l}+dz_{k})\wedge\overline{(dz_{l}+dz_{k})}-(dz_{l}-dz_{k})\wedge\overline{(dz_{l}-dz_{k})}\] \[+i(dz_{l}+idz_{k})\wedge\overline{(dz_{l}+idz_{k})}-i(dz_{l}-idz_{k})\wedge\overline{(dz_{l}-idz_{k})}.\] Note that \[T_{j}=dd^{c}u_{1}^{j}\wedge...\wedge dd^{c}u_{p}^{j},\] where the functions \(u_{1}^{j},..,u_{p}^{j}\) are taken as \[|z_{k}|^{2}-R,\ |z_{l}\pm z_{k}|^{2}-R\text{ and }|z_{l}\pm iz_{k}|^{2}-R,\] where \(1\leq l,k\leq n\), and \(R>0\) is large enough. This completes the proof. We now define \((dd^{c}u)^{k}\) as a closed positive \((k,k)\)-current when \(u\in\mathcal{E}(\Omega)\). Let \(u\in\mathcal{E}(\Omega)\), and let \(\alpha\) be a smooth \((n-k,n-k)\)-form defined in a neighborhood of \(\bar{\Omega}\). We write \[\alpha=\sum f_{j}T_{j},\] where \(f_{j}\) and \(T_{j}\) are as in Proposition 4.1. By [1, Theorem 4.2], \((dd^{c}u)^{k}\wedge T_{j}\) defines a Radon measure. We define \((dd^{c}u)^{k}\wedge\alpha\) by \[(dd^{c}u)^{k}\wedge\alpha=\sum f_{j}(dd^{c}u)^{k}\wedge T_{j}.\] By [1, Lemma 3.2], if \((v_{s})_{s}\subset\mathcal{E}(\Omega)\), \(v_{s}\searrow u\), then the sequence \(\left((dd^{c}v_{s})^{k}\wedge T_{j}\right)_{s}\) converges weakly to \((dd^{c}u)^{k}\wedge T_{j}\). 
It follows that \[(dd^{c}v_{s})^{k}\wedge\alpha\longrightarrow(dd^{c}u)^{k}\wedge\alpha\quad\text{weakly}.\] Suppose now that \[\alpha=\sum f_{j}T_{j}=\sum g_{l}S_{l},\] where \(f_{j},g_{l}\in\mathcal{C}^{\infty}(\bar{\Omega})\) and \(T_{j},S_{l}\) are as in Proposition 4.1. We prove that \[\sum f_{j}(dd^{c}u)^{k}\wedge T_{j}=\sum g_{l}(dd^{c}u)^{k}\wedge S_{l}.\] Let \((u_{s})_{s}\) be the standard regularization of \(u\). If \(D\subset\subset\Omega\), then \(u_{s}\in\mathcal{E}(D)\) for \(s\) large enough, and \(u_{s}\searrow u\) on \(D\). Since \[\sum f_{j}(dd^{c}u_{s})^{k}\wedge T_{j}=\sum g_{l}(dd^{c}u_{s})^{k}\wedge S_{l},\quad\forall s,\] [1, Lemma 3.2] gives \[\sum f_{j}(dd^{c}u)^{k}\wedge T_{j}=\sum g_{l}(dd^{c}u)^{k}\wedge S_{l}.\] This proves the following theorem. **Theorem 4.2**.: _Let \(u\in\mathcal{E}(\Omega)\). For all \(k\in\{1,..,n\}\), the current \((dd^{c}u)^{k}\) is well defined. Furthermore, if \((u_{j})_{j}\) is a decreasing sequence in \(\mathcal{E}(\Omega)\) that converges to \(u\), then the sequence \(\left((dd^{c}u_{j})^{k}\right)_{j}\) converges weakly to \((dd^{c}u)^{k}\)._ ### The classes \(\mathcal{K}(\Omega,\omega,\phi)\) Let \(\omega\) be a smooth real \((1,1)\)-form defined in a neighborhood of \(\bar{\Omega}\). We denote by \(\mathcal{P}_{\omega}(\Omega)\) the set of \(\rho\in\mathcal{C}^{2}(\bar{\Omega})\cap\mathrm{PSH}(\Omega)\) such that \(\rho=0\) on \(\partial\Omega\) and \(\omega\leq dd^{c}\rho\). In the sequel, we denote by \(\phi\) a psh maximal function in \(\mathcal{E}(\Omega)\cap\mathcal{C}^{0}(\bar{\Omega})\); the term maximal means that \((dd^{c}\phi)^{n}=0\). 
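A concrete instance of such a \(\phi\) (our illustration, not taken from the text): let \(\Omega\) be the unit ball and set \[\phi(z)=\operatorname{Re}z_{1}-1.\] This function is pluriharmonic, so \(dd^{c}\phi=0\) and in particular \((dd^{c}\phi)^{n}=0\), i.e. \(\phi\) is maximal; it is moreover negative on \(\Omega\), continuous on \(\bar{\Omega}\) and bounded, hence \(\phi\in\mathrm{PSH}^{-}(\Omega)\cap L^{\infty}(\Omega)\subset\mathcal{E}(\Omega)\). 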
Recall that for \(\mathcal{K}(\Omega)\in\{\mathcal{E}(\Omega)\), \(\mathcal{F}(\Omega)\), \(\mathcal{E}_{p}(\Omega)\), \(\mathcal{E}_{\chi}(\Omega),\mathcal{N}(\Omega)\}\), the set \(\mathcal{K}(\Omega,\phi)\) is defined by \[u\in\mathcal{K}(\Omega,\phi)\Leftrightarrow u\in\mathrm{PSH}(\Omega)\text{ and }\phi\geq u\geq\phi+\tilde{u},\] for \(\tilde{u}\in\mathcal{K}(\Omega)\)[2, 1, 15]. Fix \(\rho\in\mathcal{P}_{\omega}(\Omega)\). We define the set \(\mathcal{K}(\Omega,\omega,\phi)\) by \[u\in\mathcal{K}(\Omega,\omega,\phi)\Leftrightarrow u\in\mathrm{PSH}(\Omega,\omega)\text{ and }u+\rho\in\mathcal{K}(\Omega,\phi).\] **Remark 4.3**.: Note that the last definition does not depend on \(\rho\). Indeed, let \(\rho^{\prime}\in\mathcal{P}_{\omega}(\Omega)\). If \(u+\rho\in\mathcal{K}(\Omega,\phi)\), then we have \[\phi+\tilde{u}\leq u+\rho\leq\phi\] for some \(\tilde{u}\in\mathcal{K}(\Omega)\). Since \(\phi\) is maximal, we get \(u+\rho^{\prime}\leq\phi\). On the other hand, \[u+\rho^{\prime}\geq u+\rho+\rho^{\prime}\geq\phi+\tilde{u}+\rho^{\prime}.\] That means \(u+\rho^{\prime}\in\mathcal{K}(\Omega,\phi)\). **Remark 4.4**.: Taking \(\phi=0\), the class \(\mathcal{K}(\Omega,\omega):=\mathcal{K}(\Omega,\omega,0)\) is a generalization of the usual Cegrell class \(\mathcal{K}(\Omega)\). We have the following observation: **Proposition 4.5**.: \[\mathcal{K}(\Omega,\omega,\phi)\subset\mathcal{E}(\Omega,\omega).\] Proof.: Let \(\rho\in\mathcal{P}_{\omega}(\Omega)\). If \(u\in\mathcal{K}(\Omega,\omega,\phi)\), then \(u+\rho\in\mathcal{K}(\Omega,\phi)\). It follows that \[0\geq\phi\geq u+\rho\geq\phi+\tilde{u},\] for some \(\tilde{u}\in\mathcal{K}(\Omega)\). Hence \(u\in\mathcal{E}(\Omega,\omega)\). ### Basic properties In this subsection we study some basic properties of the operator \((\omega+dd^{c}.)^{n}\). We first show that the set \(\mathcal{E}(\Omega,\omega)\) is always a non-empty subset of \(\mathrm{PSH}(\Omega,\omega)\). 
**Proposition 4.6**.: \[\mathrm{PSH}^{-}(\Omega,\omega)\cap L^{\infty}_{loc}(\Omega)\subset\mathcal{E}(\Omega,\omega).\] Proof.: Let \(\rho\in\mathcal{P}_{\omega}(\Omega)\). If \(u\in\mathrm{PSH}^{-}(\Omega,\omega)\cap L^{\infty}_{loc}(\Omega)\), then \(u+\rho\in\mathrm{PSH}^{-}(\Omega)\cap L^{\infty}_{loc}(\Omega)\subset\mathcal{E}(\Omega)\) and therefore \(u\in\mathcal{E}(\Omega,\omega)\). Functions in \(\mathcal{E}(\Omega,\omega)\) are not necessarily negative. However, we have the following observation: **Proposition 4.7**.: _There is a constant \(C\) such that \(u\leq C,\) for every \(u\in\mathcal{E}(\Omega,\omega)\)._ Proof.: Let \(\rho\in\mathcal{P}_{\omega}(\Omega)\). If \(u\in\mathcal{E}(\Omega,\omega)\), then \(u+\rho\leq 0\) and therefore \(u\leq\sup(-\rho)\). We take \(C=\sup(-\rho)\). Other basic facts are given in the following proposition: **Proposition 4.8**.: * _\(\mathcal{E}(\Omega,0)=\mathcal{E}(\Omega)\)._ * _Let_ \(\omega^{\prime}\) _be another smooth_ \((1,1)\)_-form defined in a neighborhood of_ \(\bar{\Omega}\)_. We have_ \[\omega\leq\omega^{\prime}\Rightarrow\mathcal{E}(\Omega,\omega)\subset\mathcal{E}(\Omega,\omega^{\prime}).\] _In particular, if_ \(\omega\geq 0\) _then_ \(\mathcal{E}(\Omega)\subset\mathcal{E}(\Omega,\omega)\)_._ Proof.: For the first statement, we take \(\rho=0\) in the definition of \(\mathcal{E}(\Omega,0)\). To prove the second statement, we take \(\rho\in\mathcal{P}_{\omega^{\prime}}(\Omega)\). ### Definition of the Monge-Ampere operator \((\omega+dd^{c}.)^{n}\) on \(\mathcal{E}(\Omega,\omega)\) Let \(u\in\mathcal{E}(\Omega,\omega)\), and let \(\rho\in\mathcal{P}_{\omega}(\Omega)\). Write \[(\omega+dd^{c}u)^{n}=(\omega-dd^{c}\rho+dd^{c}(u+\rho))^{n}=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u+\rho))^{k}\wedge\gamma^{n-k},\] where \(\gamma=dd^{c}\rho-\omega\) is a semi-positive \((1,1)\)-form. From Theorem 4.2, \((dd^{c}(u+\rho))^{k}\wedge\gamma^{n-k}\) defines a positive Radon measure. 
By linearity, we can define the operator \((\omega+dd^{c}u)^{n}\). We have to check that \((\omega+dd^{c}u)^{n}\) is independent of \(\rho\): Let \(\rho^{\prime}\in\mathcal{P}_{\omega}(\Omega)\). It remains then to prove that \[\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u+\rho))^{k}\wedge(dd ^{c}\rho-\omega)^{n-k}\] \[= \sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u+\rho^{\prime}))^{k} \wedge(dd^{c}\rho^{\prime}-\omega)^{n-k}.\] Let \(D\subset\subset\Omega\), and let \((u_{j})\) denote the standard regularization of \(u\). We have \[\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u_{j}+\rho))^{k} \wedge(dd^{c}\rho-\omega)^{n-k}\] \[= \sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u_{j}+\rho^{\prime}) )^{k}\wedge(dd^{c}\rho^{\prime}-\omega)^{n-k}\] on \(D\) for \(j\) large enough. Letting \(j\rightarrow+\infty\), the result follows from Theorem 4.2. **Definition 4.9**.: Let \(u\in\mathcal{E}(\Omega,\omega)\). We define the operator \((\omega+dd^{c}u)^{n}\) by the formula \[(\omega+dd^{c}u)^{n}=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u+\rho))^{k} \wedge(dd^{c}\rho-\omega)^{n-k},\] where \(\rho\in\mathcal{P}_{\omega}(\Omega)\). As a consequence of Theorem 4.2, we obtain the following result: **Theorem 4.10**.: _The Monge-Ampere operator \((\omega+dd^{c}.)^{n}\) is well defined on the class \(\mathcal{E}(\Omega,\omega)\). Furthermore, if \((u_{j})_{j}\subset\mathcal{E}(\Omega,\omega)\) is a decreasing sequence that converges to \(u\in\mathcal{E}(\Omega,\omega)\), then the sequence of Radon measures \(((\omega+dd^{c}u_{j})^{n})_{j}\) converges weakly to \((\omega+dd^{c}u)^{n}\)._ **Remark 4.11**.: The construction that we gave to the operator \((\omega+dd^{c}.)^{n}\) yields more useful information. 
It shows, in particular, that the study of the measure \((\omega+dd^{c}u)^{n}\), for \(u\in\mathcal{E}(\Omega)\), relies on the study of the mixed Monge-Ampere measures \[dd^{c}u_{1}\wedge...\wedge dd^{c}u_{k},\ \ u_{1},...,u_{k}\in\mathcal{E}(\Omega).\] This remark will be very helpful in the sequel. ## 5. Degenerate complex Monge-Ampere equations In this section, we investigate the existence of solutions to the Dirichlet problem \[(\omega+dd^{c}u)^{n}=F(u,.)d\mu,\] for a positive Radon measure \(\mu\) vanishing on pluripolar sets and a bounded measurable function \(F:\mathbb{R}\times\Omega\to[0,+\infty[\) which is continuous and non-decreasing in the first variable. First, we discuss some properties of the operator \((\omega+dd^{c}.)^{n}\). ### Properties of the operator \((\omega+dd^{c}.)^{n}\) on \(\mathcal{E}(\Omega,\omega)\) The following example shows that there is a large difference between the operator \((dd^{c}.)^{n}\) and the operator \((\omega+dd^{c}.)^{n}\). **Example 5.1**.: Let \(\Omega\) denote the unit ball in \(\mathbb{C}^{2}\), and set \(\omega=|z_{1}|^{2}\,dd^{c}|z_{2}|^{2}\). The \((1,1)-\)form \(\omega\) is semi-positive and \[dd^{c}\omega=dd^{c}|z_{1}|^{2}\wedge dd^{c}|z_{2}|^{2}=4/\pi\;dV_{2},\] where \(dV_{2}\) is the Lebesgue measure on \(\mathbb{C}^{2}\). For \[u=\max(\log|z|,-1)\text{ and }v=\max(\log|z|,-1/2),\] we have \(u,v\in\mathcal{E}_{0}(\Omega)\), \(u\leq v\) on \(\Omega\) and \(u=v\) in a neighborhood of \(\partial\Omega\). However \[\int_{\Omega}(\omega+dd^{c}u)^{2}-(\omega+dd^{c}v)^{2}\] \[=\int_{\Omega}(dd^{c}u)^{2}-(dd^{c}v)^{2}+8/\pi\int_{\Omega}(u-v)dV_{2}\] \[=8/\pi\int_{\Omega}(u-v)dV_{2}<0.\] This is contrary to [1, Corollary 4.3] and proves that several inequalities of Cegrell do not extend to the operator \((\omega+dd^{c}.)^{n}\). We denote by \(\mathcal{K}^{a}(\Omega,\omega)\) the set of \(u\in\mathcal{K}(\Omega,\omega)\) such that the measure \((\omega+dd^{c}u)^{n}\) vanishes on pluripolar sets. 
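The first equality in Example 5.1 can be made explicit (a routine verification, spelled out here): since \(\omega\) is a function multiple of the single \((1,1)\)-form \(dd^{c}|z_{2}|^{2}\), we have \(\omega\wedge\omega=0\), hence \[(\omega+dd^{c}u)^{2}-(\omega+dd^{c}v)^{2}=(dd^{c}u)^{2}-(dd^{c}v)^{2}+2\,\omega\wedge dd^{c}(u-v).\] Since \(u=v\) in a neighborhood of \(\partial\Omega\), Stokes' formula gives \[2\int_{\Omega}\omega\wedge dd^{c}(u-v)=2\int_{\Omega}(u-v)dd^{c}\omega=8/\pi\int_{\Omega}(u-v)dV_{2},\] and \(\int_{\Omega}(dd^{c}u)^{2}=\int_{\Omega}(dd^{c}v)^{2}\) for the same reason, which yields the computation in Example 5.1. 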
We have the following proposition: **Proposition 5.2**.: _If \(u\in\mathcal{E}^{a}(\Omega,\omega)\), then \(u+\rho\in\mathcal{E}^{a}(\Omega)\) for all \(\rho\in\mathcal{P}_{\omega}(\Omega)\)._ Proof.: Let \(A\) be a pluripolar subset of \(\Omega\). We have \[0 =\int_{A}(\omega+dd^{c}u)^{n}\] \[=\int_{A}(dd^{c}(u+\rho))^{n}+\sum_{k=1}^{n}(-1)^{k}C_{n}^{k}\int_{A}(dd^{c}\rho-\omega)^{k}\wedge(dd^{c}(u+\rho))^{n-k}.\] Let \(\sigma\in\mathcal{P}_{-\omega}(\Omega)\). [1, Lemma 4.4] gives for all \(1\leq k\leq n\): \[\int_{A}(dd^{c}\rho-\omega)^{k}\wedge(dd^{c}(u+\rho))^{n-k} \leq\int_{A}(dd^{c}(\rho+\sigma))^{k}\wedge(dd^{c}(u+\rho))^{n-k}\] \[\leq\left(\int_{A}(dd^{c}(\rho+\sigma))^{n}\right)^{k/n}\left(\int_{A}(dd^{c}(u+\rho))^{n}\right)^{(n-k)/n}\] \[=0.\] Hence \((dd^{c}(u+\rho))^{n}(A)=0\) and \(u+\rho\in\mathcal{E}^{a}(\Omega)\). The operator \((\omega+dd^{c}.)^{n}\) satisfies a maximum principle similar to that of the operator \((dd^{c}.)^{n}\)[10, Theorem 4.1] (see also [1, Theorem 2.2]). **Theorem 5.3**.: _If \(u,v\in\mathcal{E}(\Omega,\omega)\), then_ \[\mathit{1}_{\{u>v\}}(\omega+dd^{c}u)^{n}=\mathit{1}_{\{u>v\}}(\omega+dd^{c}\max(u,v))^{n}.\] Proof.: Fix \(\rho\in\mathcal{P}_{\omega}(\Omega)\). Write \[(\omega+dd^{c}u)^{n}=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}(dd^{c}(u+\rho))^{k}\wedge(dd^{c}\rho-\omega)^{n-k},\] and \[(dd^{c}\rho-\omega)^{n-k}=\sum f_{j}T_{j},\] where \(f_{j}\) and \(T_{j}\) are as in Proposition 4.1. By linearity, it suffices to prove that \[\mathit{1}_{\{u>v\}}(dd^{c}\max(u+\rho,v+\rho))^{k}\wedge T_{j}=\mathit{1}_{\{u>v\}}(dd^{c}(u+\rho))^{k}\wedge T_{j}.\] But this is exactly [10, Theorem 4.1]. We shall derive several consequences from the previous theorem. **Corollary 5.4**.: _Let \(u,v\in\mathcal{E}(\Omega,\omega)\), and let \(\mu\) be a positive Radon measure vanishing on pluripolar sets. 
If_ \[(\omega+dd^{c}u)^{n}\geq\mu\ \ \mathrm{and}\ (\omega+dd^{c}v)^{n}\geq\mu,\] _then_ \[(\omega+dd^{c}\max(u,v))^{n}\geq\mu.\] Proof.: Since the situation is local, there is no loss of generality in assuming \(\mu(\Omega)\) finite. By Theorem 5.3, we have \[(\omega+dd^{c}\max(u,v))^{n}\geq\mathit{1}_{\{u>v\}}(\omega+dd^{c}u)^{n}+\mathit{1}_{\{u<v\}}(\omega+dd^{c}v)^{n}\geq\mathit{1}_{\{u\neq v\}}\mu.\] If \(\mu(\{u=v\})=0\), then the result follows. The proof of [1, Corollary 1.10] shows that \[\mu(\{u=v+t\})=0,\ \ \forall t\in\mathbb{R}\setminus I,\] where \(I\) is at most countable. Take \(\varepsilon_{j}\in\mathbb{R}\setminus I\), \(\varepsilon_{j}\searrow 0\). We have \[(\omega+dd^{c}\max(u,v+\varepsilon_{j}))^{n}\geq\mu.\] It suffices then to let \(\varepsilon_{j}\to 0\). The following corollary goes back to J.-P. Demailly [1, Proposition 11.9] in the case \(\omega=0\). **Corollary 5.5**.: _Let \(u,v\in\mathcal{E}(\Omega,\omega)\). If \((\omega+dd^{c}v)^{n}\) vanishes on pluripolar sets, then_ \[(\omega+dd^{c}\max(u,v))^{n}\geq\mathit{1}_{\{u>v\}}(\omega+dd^{c}u)^{n}+\mathit{1}_{\{u\leq v\}}(\omega+dd^{c}v)^{n}.\] _In particular, if moreover \(u\geq v\) then_ \[\mathit{1}_{\{u=v\}}(\omega+dd^{c}u)^{n}\geq\mathit{1}_{\{u=v\}}(\omega+dd^{c}v)^{n}.\] Proof.: The proof of the first assertion is the same as that of Corollary 5.4. The second one follows easily from the first. ### The comparison principle in \(\mathcal{N}^{a}(\Omega,\omega,\phi)\) The classical comparison principle (for the operator \((dd^{c}.)^{n}\) acting on bounded psh functions) is proven in [1, Corollary 4.4]. This result has been generalized to the class \(\mathcal{F}^{a}(\Omega)\)[1, Theorem 5.15], to the large class \(\mathcal{N}^{a}(\Omega,\phi)\)[1, Theorem 4.4] and to the operator \((\omega+dd^{c}.)^{n}\)[13, Corollary 3.4]. We propose the following general version: **Theorem 5.6**.: _Let \(u\in\mathcal{N}^{a}(\Omega,\omega,\phi)\), and let \(v\in\mathcal{E}(\Omega,\omega)\). 
If_ \[(\omega+dd^{c}u)^{n}\leq(\omega+dd^{c}v)^{n}\ \text{on}\ \{u<v\},\] _then \(u\geq v\)._ Proof.: Set \(\psi=\max(u,v)\). We have by Corollary 5.5 \[(\omega+dd^{c}\psi)^{n} \geq\mathit{I}_{\{u\geq v\}}(\omega+dd^{c}u)^{n}+\mathit{I}_{\{u <v\}}(\omega+dd^{c}v)^{n}\] \[\geq(\omega+dd^{c}u)^{n}.\] Setting \(f:=u-\psi\), we prove that \(f=0\). Define \[P(f):=\sup\{h\in\mathrm{PSH}(\Omega)\ ;\ h\leq f\}.\] It follows from [1, Proposition 5.1] that \(P(f)^{*}=P(f)\) almost everywhere, hence \(P(f)^{*}\leq u-v\) a.e. From this we get \(P(f)^{*}+v\leq u\) a.e., hence everywhere because these are psh functions. It thus follows that \(P(f)^{*}=P(f)\). Let \(\rho\in\mathcal{P}_{\omega}(\Omega)\). We have \(u+\rho\in\mathcal{N}^{a}(\Omega,\phi)\) by Proposition 5.2. Thus we get \(\tilde{u}+\phi\leq u+\rho\leq\phi\) for some \(\tilde{u}\in\mathcal{N}^{a}(\Omega)\). From this we get \(\tilde{u}\leq u+\rho-\phi\leq f\) because \(\psi+\rho=\max(u+\rho,v+\rho)\leq\phi\) by maximality of \(\phi\). Therefore \(\tilde{u}\leq P(f)\) and \(P(f)\in\mathcal{N}^{a}(\Omega)\) according to [1, Corollary 3.14]. Setting \(D=\{P(f)=f\}\), since \(P(f)\leq f\), we obtain by Corollary 5.5 \[\mathit{I}_{D}(\omega+dd^{c}(\psi+P(f)))^{n}\leq\mathit{I}_{D}(\omega+dd^{c} u)^{n}\leq\mathit{I}_{D}(\omega+dd^{c}\psi)^{n}.\] It follows that \(\mathit{I}_{D}(dd^{c}P(f))^{n}=0\), and hence \((dd^{c}P(f))^{n}=0\) according to Theorem 2.4. It follows from [1, Lemma 3.12] that \(P(f)=0\). Thus, we conclude that \(f=0\) and therefore \(u\geq v\). Theorem D in the introduction follows immediately from the last theorem. **Corollary 5.7** (Theorem D in the introduction).: _Let \(\mu\leq\nu\) be positive Radon measures vanishing on pluripolar sets. 
Assume \(u\in\mathcal{N}(\Omega,\omega,\phi)\) and \(v\in\mathcal{E}(\Omega,\omega)\) are such that \(v\leq\phi\) on \(\partial\Omega\),_ \[(\omega+dd^{c}u)^{n}=F(u,.)d\mu\ \text{and}\ (\omega+dd^{c}v)^{n}=F(v,.)d\nu.\] _Then \(u\geq v\)._ Proof.: It follows from Corollary 5.5 that \[(\omega+dd^{c}\max(u,v))^{n} \geq\mathit{I}_{\{u>v\}}(\omega+dd^{c}u)^{n}+\mathit{I}_{\{u\leq v\}}(\omega+dd^{c}v)^{n}\] \[\geq\mathit{I}_{\{u>v\}}F(u,.)d\mu+\mathit{I}_{\{u\leq v\}}F(v,.)d\mu\] \[=F(\max(u,v),.)d\mu\] \[\geq F(u,.)d\mu=(\omega+dd^{c}u)^{n},\] where the second inequality uses \(\mu\leq\nu\) and the last one follows from the fact that the function \(F\) is non-decreasing in the first variable. Theorem 5.6 implies \(u\geq v\). ### Solution to the Dirichlet problem In this subsection, we study the main question of solving the complex Monge-Ampere type equations in \(\mathcal{K}(\Omega,\omega,\phi)\), where \(\phi\in\mathcal{E}(\Omega)\cap\mathcal{C}^{0}(\bar{\Omega})\) is maximal and \(\mathcal{K}\in\{\mathcal{F}^{a},\,\mathcal{E}_{p},\,\mathcal{E}_{\chi},\,\mathcal{N}^{a}\}\). The following theorem extends the results of Czyz [20] to the operator \((\omega+dd^{c}.)^{n}\) for any smooth real \((1,1)\)-form \(\omega\). **Theorem 5.8** (Theorem C in the introduction).: _Let \(\mu\) be a positive Radon measure vanishing on pluripolar sets, and consider a bounded measurable function \(F:\mathbb{R}\times\Omega\to[0,+\infty[\) which is continuous and non-decreasing in the first variable. If \(\mu\leq(dd^{c}w)^{n}\) for some \(w\in\mathcal{K}(\Omega)\), then there is a uniquely determined \(u\in\mathcal{K}(\Omega,\omega,\phi)\) such that_ \[(\omega+dd^{c}u)^{n}=F(u,.)d\mu.\] Before giving the proof of the last theorem, we need to generalize the subsolution theorem of Kolodziej and Nguyen [23, Theorem 4.1] to the case when \(\omega\) is merely real. **Lemma 5.9** (The subsolution theorem of Kolodziej and Nguyen).: _Let \(\varphi\in\mathcal{C}^{0}(\partial\Omega)\) and let \(\mu\) be a positive Radon measure. 
Suppose \(\mu\leq(dd^{c}v)^{n}\) for a bounded psh function \(v\) such that \(v=0\) on \(\partial\Omega\). Then there exists \(u\in\mathrm{PSH}(\Omega,\omega)\cap L^{\infty}(\Omega)\), \(u=\varphi\) on \(\partial\Omega\) and such that_ \[(\omega+dd^{c}u)^{n}=\mu.\] Proof.: Let \(\sigma\in\mathcal{C}^{\infty}(\bar{\Omega})\) be such that \(dd^{c}\sigma>-\omega\). Applying [23, Theorem 2.3] to \(\omega+dd^{c}\sigma\) and \(F(.+\sigma,.)\) yields \(w\in\mathrm{PSH}(\Omega,\omega+dd^{c}\sigma)\), \(w=\varphi-\sigma\) on \(\partial\Omega\) and such that \[(\omega+dd^{c}\sigma+dd^{c}w)^{n}=F(w+\sigma,.)d\mu.\] Thus, it suffices to take \(u=w+\sigma\). **Remark 5.10**.: The same proof shows that [23, Corollary 3.4] and [23, Proposition 2.2] hold when \(\omega\) is merely real. We move on to the proof of Theorem 5.8. Proof of Theorem 5.8.: By [20, Theorem 5.11], there exist \(\psi\in\mathcal{E}_{0}(\Omega)\) and \(f\in L^{1}_{loc}((dd^{c}\psi)^{n})\) such that \(\mu=f(dd^{c}\psi)^{n}\). It follows from [23, Theorem 2.3] that, for every \(j\), there exists \(u_{j}\in\mathrm{PSH}(\Omega,\omega)\cap L^{\infty}(\Omega)\) such that \(u_{j}=\phi\) on \(\partial\Omega\) and \[(\omega+dd^{c}u_{j})^{n}=F(u_{j},.)\min(f,j)(dd^{c}\psi)^{n}.\] The sequence \((u_{j})_{j}\) is decreasing by [23, Proposition 2.2]. Fix \(\rho\in\mathcal{P}_{\omega}(\Omega)\cap\mathcal{P}_{-\omega}(\Omega)\). It follows from [1, Theorem 4.14] that there is \(h\in\mathcal{K}(\Omega,\phi)\) such that \[F(\phi-\rho,.)d\mu=(dd^{c}h)^{n},\] because \(F\) is bounded and \(\mu\leq(dd^{c}w)^{n}\) for some \(w\in\mathcal{K}(\Omega)\). 
Using the fact that the function \(F\) is non-decreasing in the first variable, we have \[(\omega+dd^{c}u_{j})^{n}\leq F(u_{j},.)d\mu\leq F(\phi-\rho,.)d\mu\leq(\omega+dd^{c}(h+\rho))^{n}.\] Corollary 5.4 gives \[(\omega+dd^{c}u_{j})^{n}\leq(\omega+dd^{c}\max(u_{j},h+\rho))^{n}.\] Since \(u_{j}=\phi\geq h+\rho\) on \(\partial\Omega\), it follows from [15, Corollary 3.4] that \(u_{j}\geq h+\rho\) everywhere in \(\Omega\), and hence the function \(u:=\lim_{j}u_{j}\) belongs to \(\mathcal{K}(\Omega,\omega,\phi)\). Theorem 4.10 gives \[(\omega+dd^{c}u)^{n}=\lim(\omega+dd^{c}u_{j})^{n}\;\;\text{weakly}.\] Since \(F\) is bounded and continuous in the first variable, we have by Lebesgue's dominated convergence theorem \[(\omega+dd^{c}u_{j})^{n}=F(u_{j},.)\min(f,j)(dd^{c}\psi)^{n}\to F(u,.)d\mu,\] in the weak sense of measures. That implies \[(\omega+dd^{c}u)^{n}=F(u,.)d\mu.\] The solution \(u\) is uniquely determined by Corollary 5.7. As a consequence, taking \(F=1\) in the last theorem yields the following description of the range of the operator \((\omega+dd^{c}.)^{n}\). **Corollary 5.11**.: _Let \(\mu\) be a positive Radon measure. The following statements are equivalent:_ 1. _the equation_ \(\mu=(dd^{c}u)^{n}\) _has a solution_ \(u\in\mathcal{K}(\Omega)\)_;_ 2. _the equation_ \((\omega+dd^{c}v)^{n}=\mu\) _has a unique solution_ \(v\in\mathcal{K}(\Omega,\omega)\)_;_ 3. _the equation_ \((\omega+dd^{c}\varphi)^{n}=\mu\) _has a unique solution_ \(\varphi\in\mathcal{K}(\Omega,\omega,\phi)\)_._ Proof.: The implications (i) \(\Rightarrow\) (ii) and (i) \(\Rightarrow\) (iii) follow from Theorem 5.8. The equivalence then follows from [11, Theorem 8.2], [1, Theorem 3.9], [1, Theorem 4.14] and [1, Theorem 11].
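As a concrete illustration of the weighted classes used above (our example, not from the text): for \(p\geq 1\), the weight \[\chi_{p}(t)=-(-t)^{p},\quad t\leq 0,\] is increasing, concave and satisfies \(\chi_{p}(-\infty)=-\infty\) and \(|t\chi_{p}^{\prime}(t)|=p(-t)^{p}=p|\chi_{p}(t)|\), so \(\chi_{p}\in\mathcal{W}^{+}_{M}\) with \(M=p\). For this weight Lemma 3.1 holds with equality, \(-\chi_{p}(ct)=-c^{p}\chi_{p}(t)\) for \(c\geq 1\), and \[E_{\chi_{p}}(u)=\int_{\Omega}(-u)^{p}(dd^{c}u)^{n},\] so \(\mathcal{E}_{\chi_{p}}(\Omega)\) recovers the Cegrell class \(\mathcal{E}_{p}(\Omega)\). 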
2308.08307
Integrating cognitive map learning and active inference for planning in ambiguous environments
Living organisms need to acquire both cognitive maps for learning the structure of the world and planning mechanisms able to deal with the challenges of navigating ambiguous environments. Although significant progress has been made in each of these areas independently, the best way to integrate them is an open research question. In this paper, we propose the integration of a statistical model of cognitive map formation within an active inference agent that supports planning under uncertainty. Specifically, we examine the clone-structured cognitive graph (CSCG) model of cognitive map formation and compare a naive clone graph agent with an active inference-driven clone graph agent, in three spatial navigation scenarios. Our findings demonstrate that while both agents are effective in simple scenarios, the active inference agent is more effective when planning in challenging scenarios, in which sensory observations provide ambiguous information about location.
Toon Van de Maele, Bart Dhoedt, Tim Verbelen, Giovanni Pezzulo
2023-08-16T12:10:23Z
http://arxiv.org/abs/2308.08307v1
# Integrating cognitive map learning and active inference for planning in ambiguous environments

###### Abstract

Living organisms need to acquire both cognitive maps for learning the structure of the world and planning mechanisms able to deal with the challenges of navigating ambiguous environments. Although significant progress has been made in each of these areas independently, the best way to integrate them is an open research question. In this paper, we propose the integration of a statistical model of cognitive map formation within an active inference agent that supports planning under uncertainty. Specifically, we examine the clone-structured cognitive graph (CSCG) model of cognitive map formation and compare a naive clone graph agent with an active inference-driven clone graph agent, in three spatial navigation scenarios. Our findings demonstrate that while both agents are effective in simple scenarios, the active inference agent is more effective when planning in challenging scenarios, in which sensory observations provide ambiguous information about location.

Keywords: Cognitive map · Active inference · Navigation · Planning

## 1 Introduction

Cognitive maps [1] are mental representations of spatial and conceptual relationships. They are considered essential components for intelligent reasoning and planning, as they are often associated with navigation in humans and rodents [2]. For this reason, a lot of recent developments in both neuroscience and computer science have been building computational models of cognitive maps [3]. These advances in the field [4, 5] are very impressive in learning abstract representations and even show that biological patterns such as grid cells [4] or splitter cells [5] can emerge from learning. However, these works typically do not focus on complex planning tasks and only consider naive or greedy strategies. In this paper, we investigate the potential of active inference as a planning mechanism for these cognitive maps.
Active inference is a corollary of the free energy principle, which states that intelligent agents infer actions that minimize their expected free energy. This is a proxy or bound on expected surprise, yielding a natural trade-off between exploration and goal-driven exploitation [6, 7]. We aim to investigate the impact of active inference as a planning mechanism on the performance of cognitive maps in spatial navigation strategies, especially in terms of disambiguating the "mental position" and decision-making efficiency.

In particular, we look at the clone-structured cognitive graph (CSCG) [5]: a unifying model for two essential properties of cognitive maps. First, flexible planning behavior, i.e. if observations are not consistent with the expected observation in the plan, the plan can be adapted. Second, the model is able to disambiguate aliased observations depending on the context in which they are encountered, e.g. in spatial alternation tasks, different decisions are made at the same location depending on context [8]. Given the CSCG's inherent mechanism for disambiguating aliased observations, we hypothesize that coupling it with active inference as a planning system will enable the identification of the optimal sequence that accurately represents the agent's location.

To investigate this hypothesized benefit of active inference, we compare a naive clone graph and an active inference-driven clone graph for navigating toward goals on two separate metrics: the number of steps it takes for an agent to reach the goal and the overall success rate. We design three distinct spatial navigation scenarios, each with a different complexity. First, we consider a slightly ambiguous (open room) environment described by [5], where we evaluate the structure learning mechanism and planning algorithms for both models. We then increase the level of ambiguity in a maze described in [9], where we believe that information-seeking behavior will be crucial for self-localization.
Finally, we evaluate the performance in the T-maze, where an agent is punished for making the wrong choice by ending the episode. To summarize, the contributions of this paper are: (i) we show how to use the learned structure of a CSCG as the generative model within the active inference framework, (ii) we show that active inference agents are significantly faster in disambiguating the state in highly ambiguous environments than greedy planning agents, and (iii) we show that active inference agents make more careful decisions by first gathering evidence, yielding higher success rates for finding the reward in the T-maze environment.

## 2 Methods

In this section, we first describe the mechanisms driving standard clone-structured cognitive graphs for structure learning. Then we provide a brief summary of the active inference framework and how action is driven through Bayesian inference. Finally, we conclude this section by showing how the CSCG can be used as a generative model within the active inference framework.

### Clone-Structured Cognitive Graphs

Clone-structured cognitive graphs (CSCG) [5] are a computational implementation of a cognitive map that models the joint probability of a sequence of action and observation pairs. They are a variation of the action-augmented hidden Markov model, in which the next state is conditioned on the current state and the current action. The crucial difference is that these clone-structured cognitive graphs are able to disambiguate aliased observations based on the context (e.g. the previously visited trajectory), which is a property that is also observed in hippocampal splitter cells. In order for a CSCG to be able to disambiguate observations, it needs distinct states for each observation based on its context, in this case the previous observations and actions. All states corresponding to a single observation are called the clones of this observation, and by design, each state deterministically maps to a single observation.
In essence, a CSCG is a hidden Markov model in which multiple different values of the hidden state predict identical observations, while their corresponding columns in the transition matrix are non-identical. A pair of clone states in a CSCG is therefore a set of two values that a hidden state might take which share identical likelihood contingencies, but differ in their transition probabilities. A depiction of the clone graph, as described in [5], is shown in Figure 1(a).

Figure 1: (a) A mapping of a sequence of observations to distinct clone states in the clone-structured cognitive graph. The color indicates clones belonging to a specific observation, i.e. for each colored observation there are two clone states from which it can transition into either clone state belonging to the next observation. (b) The factor graph describing an active inference driven partially observable Markov decision process (POMDP). \(\pi\) denotes the policy, which is sampled according to the expected free energy \(G\), dependent on the preference matrix \(C\). The hidden states of the agent \(\mathbf{s}_{t}\) are initialized using the prior matrix \(D\). These states are then transitioned according to the \(B\) matrix, conditioned on the selected policy. Finally, the observed outcome variables are generated through the likelihood factor (\(A\) matrix). Observed variables are denoted in light blue circles, while unobserved variables are denoted in white circles. The factors describing the generative model are denoted in a dark blue square.

The CSCGs are optimized by minimizing the variational free energy over a sequence of observation-action pairs using the Baum-Welch algorithm [10], an expectation-maximization scheme for hidden Markov models. Through this optimization and random initialization, the model will converge to use distinct clone states for different sequences in the data.
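The clone mechanism can be illustrated in a few lines of numpy. This is a toy sketch under our own assumptions (3 observations with 2 clones each, a random action-conditioned transition tensor, and a `filter_belief` helper introduced here for illustration), not the implementation of [5]: because the emission matrix is deterministic, conditioning on an observation keeps only that observation's clones, and the action history determines how probability mass is split among them.

```python
import numpy as np

n_obs, n_clones = 3, 2              # hypothetical sizes: 3 observations, 2 clones each
n_states = n_obs * n_clones

# Deterministic emission: clone state s emits observation s // n_clones.
A = np.zeros((n_obs, n_states))
for s in range(n_states):
    A[s // n_clones, s] = 1.0

# Random action-conditioned transition tensor B[a, s_next, s_prev],
# normalized over s_next (stand-in for a learned CSCG transition model).
rng = np.random.default_rng(0)
n_actions = 2
B = rng.random((n_actions, n_states, n_states))
B /= B.sum(axis=1, keepdims=True)

def filter_belief(belief, action, obs):
    """One step of forward filtering: transition, then condition on the observation."""
    pred = B[action] @ belief       # predictive belief over the next state
    post = A[obs] * pred            # keep only the clones of the observed symbol
    return post / post.sum()

belief = np.full(n_states, 1.0 / n_states)     # uniform initial belief
belief = filter_belief(belief, action=0, obs=1)
# Only the clones of observation 1 (states 2 and 3) carry mass; the context
# (here the single action taken) decides how the mass is split between them.
assert np.allclose(belief[[0, 1, 4, 5]], 0.0)
```

Repeated filtering steps of this kind are what lets the belief collapse onto a single clone, even though every clone of an observation looks identical to the likelihood model.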
This distinction between clones is further improved by optimizing the learned model parameters through a Viterbi decoding step, only keeping the states necessary for the maximum likelihood paths in the learned model.

### Clone graph agent

We define a clone graph agent that uses a greedy planning approach to select actions. Planning with clone-structured cognitive graphs is done by setting a fixed target state (or states) and forward propagating the messages starting from the current state. When one of the target states is assigned a non-zero probability, a path is found and the maximum likelihood states are backward propagated to retrieve the corresponding action sequence, or policy. The probability of each policy is computed from the belief over the current state \(Q(\mathbf{s}|\mathbf{\tilde{o}},\mathbf{\tilde{a}})\). Once the agent's belief over the state collapses to a single state, the planning mechanism falls back to the one described in [5], where the current state is known.

### Active inference agent

Actionable agents, whether biological or artificial, are separated from their environment through sensory inputs (perception) and action. The world state is only indirectly observed through the agent's different sensory modalities, and is likewise only indirectly affected by the agent's actions. This separation between the hidden variables (action, observation, agent state, and world state) is commonly referred to as the Markov blanket. The free energy principle proposes that an agent possesses a generative model that describes how outcomes are generated from the world state and how the world state is affected by the agent's actions. The principle states that agents will minimize their surprise, bounded by the variational free energy, by updating the parameters of the generative model (learning) or inferring the hidden state (perception).
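The forward message passing and backward action recovery used by the clone graph agent can be sketched as a max-product search over the transition tensor. The `greedy_plan` helper below is our own simplified reconstruction (assuming the convention `B[a, s_next, s_prev]`), not the authors' code:

```python
import numpy as np

def greedy_plan(B, belief, targets, max_depth=20):
    """Max-product forward propagation until a target state gets non-zero
    probability, then backtrack the maximum-likelihood action sequence."""
    n_actions, n_states, _ = B.shape
    msg, trace = belief.astype(float).copy(), []
    for _ in range(max_depth):
        scores = B * msg[None, None, :]            # scores[a, s_next, s_prev]
        flat = scores.transpose(1, 0, 2).reshape(n_states, -1)
        trace.append(flat.argmax(axis=1))          # best (a, s_prev) per s_next
        msg = flat.max(axis=1)
        hits = [t for t in targets if msg[t] > 0]
        if hits:                                   # a target became reachable
            state = max(hits, key=lambda t: msg[t])
            policy = []
            for back in reversed(trace):           # backward propagation
                a, state = divmod(back[state], n_states)
                policy.append(a)
            return policy[::-1]
    return None                                    # no path found within max_depth

# Toy check: two states, action 0 = stay, action 1 = swap.
B = np.array([[[1., 0.], [0., 1.]],                # B[0]: identity
              [[0., 1.], [1., 0.]]])               # B[1]: swap
print(greedy_plan(B, np.array([1., 0.]), targets=[1]))  # → [1]
```

With a collapsed (one-hot) belief this reduces to a shortest-path search from a known state, matching the fallback behavior described above.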
Active inference agents infer the action that minimizes the "expected free energy" \(G\) (in other words, the free energy of future courses of action) [6]. This means that the posterior probability of a policy decreases with its expected free energy \(G\), which can be computed for each policy. More specifically, the approximate posterior over policies \(Q(\pi)\) is computed as the softmax (\(\sigma\)) over the negative expected free energies of all policies, where \(\gamma\) is a temperature parameter: \[Q(\pi)=\sigma(-\gamma G(\pi)),\] where the expected free energy \(G\) of this model, for a fixed time horizon \(T\), is defined as in [11]: \[G(\pi)=\sum_{\tau=t+1}^{T}G(\pi,\tau)\] \[G(\pi,\tau)\geq-\underbrace{\mathbb{E}_{Q(\mathbf{o}_{\tau}|\pi)}\big{[}D_{KL}[ Q(\mathbf{s}_{\tau}|\mathbf{o}_{\tau},\pi)||Q(\mathbf{s}_{\tau}|\pi)]\big{]}}_{\text{ Epistemic value}}-\underbrace{\mathbb{E}_{Q(\mathbf{o}_{\tau}|\pi)}\big{[}\log P( \mathbf{o})\big{]}}_{\text{Pragmatic Value}}\]

This expression decomposes into two distinct terms: an epistemic value, computing the information gain over the belief over the state, and a pragmatic value (or utility) term with respect to a preferred distribution over the observation \(P(\mathbf{o})\). In active inference, the goal of an agent is encoded in this prior belief as a preference. In a CSCG, planning is done by setting a preferred state, whereas in active inference this is typically done by setting the preferred observation. In order to make both approaches comparable, here we always plan by setting preferred states (and assume an identity mapping between the state and observation). Evaluating the expected free energy \(G\) for all considered policies is exponential in the time horizon \(T\), which limits the tree depth to low values for which this is practically computable.
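Both terms of \(G\) can be evaluated directly from the likelihood and transition matrices. The following one-step numpy sketch uses made-up toy dimensions and our own `expected_free_energy` helper; it illustrates the equations above and is not the PyMDP implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expected_free_energy(A, B, log_C, belief, action, eps=1e-16):
    """One-step G for a single action: -(epistemic value) - (pragmatic value)."""
    qs = B[action] @ belief                 # Q(s_tau | pi)
    qo = A @ qs                             # Q(o_tau | pi)
    post = A * qs[None, :]                  # joint, then normalize per outcome
    post = post / (post.sum(axis=1, keepdims=True) + eps)
    kl = (post * (np.log(post + eps) - np.log(qs[None, :] + eps))).sum(axis=1)
    epistemic = qo @ kl                     # E_Q(o)[ KL[Q(s|o,pi) || Q(s|pi)] ]
    pragmatic = qo @ log_C                  # E_Q(o)[ log P(o) ]
    return -epistemic - pragmatic

rng = np.random.default_rng(1)
n_obs, n_states, n_actions, gamma = 3, 4, 2, 4.0        # toy sizes, our choice
A = rng.random((n_obs, n_states)); A /= A.sum(axis=0, keepdims=True)      # P(o|s)
B = rng.random((n_actions, n_states, n_states)); B /= B.sum(axis=1, keepdims=True)
log_C = np.log(softmax(np.array([2.0, 0.0, 0.0])))      # preference for outcome 0
belief = np.full(n_states, 1.0 / n_states)

G = np.array([expected_free_energy(A, B, log_C, belief, a) for a in range(n_actions)])
q_pi = softmax(-gamma * G)                 # Q(pi): lower G -> higher probability
```

A full planner sums these one-step terms over each candidate policy up to the horizon \(T\), which is exactly where the exponential cost mentioned above comes from.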
To mitigate this limitation, we set the preference for each state proportional to the distance toward the goal state (in the cognitive map). While this simplification makes a depth of one sufficient for computing the utility, the planning mechanism still requires larger depths for achieving (non-greedy) long-term information-seeking behavior.

#### CSCG as the generative model for active inference

We consider active inference in the discrete state space formulation [12], as shown in the factor graph in Figure 1(b). The generative model is therefore described by a set of four specific matrices: the \(A\) matrix defines the likelihood model, or how observations are generated from states: \(P(\mathbf{o}|\mathbf{s})\); the \(B\) matrix defines the transition model, or how the belief over the state changes conditioned on an action \(\mathbf{a}_{t}\): \(P(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})\); the \(C\) matrix describes the preference of the agent \(P(\mathbf{s})\); and finally, the \(D\) matrix describes the prior belief over the initial state \(P(\mathbf{s})\).

First, we learn the world structure using a CSCG through the minimization of the evidence lower bound with respect to the model parameters, as described in [5]. We then map the parameters of the learned hidden Markov model to the four matrices describing the active inference model. First, we reduce the model by only considering the states for which the transition probability marginalized over the previous state and action, \(\sum_{\mathbf{s}}\sum_{\mathbf{a}}p(s_{t}|\mathbf{s},\mathbf{a})\) (assuming a uniform distribution over \(\mathbf{s}\) and \(\mathbf{a}\)), is larger than a threshold of \(0.0001\). The \(A\) matrix can be directly constructed by setting \(P(\mathbf{o}_{i}|\mathbf{s}_{j})=1\) for all remaining clones \(\mathbf{s}_{j}\) of observation \(\mathbf{o}_{i}\). To construct the \(B\) matrix, the transition matrix from the trained CSCG can be taken directly.
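This mapping can be sketched as follows. The `cscg_to_pomdp` helper and the toy tensor are our own illustration, assuming the convention `T[a, s_next, s_prev]`; columns whose probabilities sum to less than 1 encode illegal actions and are completed with an extra absorbing state, following the construction described in this section:

```python
import numpy as np

def cscg_to_pomdp(T, clone_of, n_obs):
    """Map a CSCG transition tensor T[a, s_next, s_prev] and a clone-to-
    observation assignment to discrete active-inference A, B, D matrices.
    Transition columns summing to less than 1 (illegal actions) have their
    missing mass routed to an extra absorbing 'dispreferred' state s_d."""
    n_actions, n_states, _ = T.shape
    A = np.zeros((n_obs + 1, n_states + 1))        # deterministic likelihood
    A[np.asarray(clone_of), np.arange(n_states)] = 1.0
    A[n_obs, n_states] = 1.0                       # extra outcome o_d for s_d
    B = np.zeros((n_actions, n_states + 1, n_states + 1))
    B[:, :n_states, :n_states] = T
    B[:, n_states, :] = np.clip(1.0 - B.sum(axis=1), 0.0, 1.0)  # missing mass -> s_d
    B /= B.sum(axis=1, keepdims=True)              # proper column-stochastic tensor
    D = np.zeros(n_states + 1)                     # uniform prior over real states
    D[:n_states] = 1.0 / n_states
    return A, B, D

# Toy CSCG: 2 clones of one observation; action 1 is illegal from state 0.
T = np.zeros((2, 2, 2))
T[0] = [[0., 1.], [1., 0.]]                        # action 0: swap states
T[1, 0, 1] = 1.0                                   # action 1 defined only from state 1
A, B, D = cscg_to_pomdp(T, clone_of=[0, 0], n_obs=1)
assert np.allclose(B.sum(axis=1), 1.0)             # every column is a distribution
assert B[1, 2, 0] == 1.0                           # illegal action leads to s_d
```

Setting a strongly negative preference on the absorbing state (the `C` vector, omitted here) then makes the planner avoid the illegal actions.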
A crucial difference between the POMDP in discrete time active inference and the CSCG is that in the latter the actions are state-conditioned. This means that, starting in some states, an action cannot be taken: in the learned transition matrix, the condition \(\sum_{\mathbf{s}_{t+1}}P(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})=1\) does not always hold. We convert this transition matrix to proper probabilities by adding a novel dispreferred state \(\mathbf{s}_{d}\), to which we set the transition probability to \(1\) in these illegal cases, and which transitions to itself for each possible action. We then normalize the transition matrix such that the probabilities sum to \(1\). We also add a \(P(\mathbf{o}_{d}|\mathbf{s}_{d})=1\) mapping in the \(A\) matrix.

The preference of the agent, or \(C\) matrix, is not present in the standard formulation of the CSCG. However, the agent is able to plan toward a goal that is set in state space. We model this by setting a preference over this state, or over a set of states in the case of an observation-space preference or multiple target goals. Additionally, for the newly added state \(\mathbf{s}_{d}\) to which the illegal actions are mapped, we set a very low preference value (as if it would drive the agent to a state that is farther away from the goal than the maximum distance) in order to make the agent avoid these actions when planning according to its expected free energy. The prior distribution over the initial state, matrix \(D\), is initialized as a uniform prior over all the states. The agent thus starts with no knowledge about the state it is in and has to gather evidence to change this belief.

## 3 Results

In this work, we compare the behavior of two agents that select their actions using a CSCG: the former ("clone graph", Section 2.2) agent plans using a greedy approach, whereas the latter ("active inference", Section 2.3) agent uses active inference and expected free energy to plan ahead.
We also compare these two agents with a random ("random") agent baseline. In particular, we look at goal-driven behavior in three distinct environments, each requiring a different level of information-seeking behavior. First, we consider an open room as proposed in [5], in which the agent has to reach a uniquely defined corner, for which the goal is provided as a goal observation. Second, we consider a more ambiguous environment in which the agent has to reach the uniquely defined center of a room, but it first needs to localize itself within the room. Finally, we evaluate the approach on the T-maze, where the agent should first observe a cue, as a wrong decision is "fatal". In each experiment, we first train the generative models as CSCGs and then convert them to discrete state space matrices for active inference within the PyMDP framework [11].

### Navigating in an open room environment

In this first experiment, we investigate the performance of all agents in a simple environment where we hypothesize that there is no immediate gain in using the active inference framework for information-seeking behavior. As the clone graph agent is still able to integrate observations to improve its belief over its current state, we expect both agents to gather enough evidence to accurately plan toward the goal.

For this maze, we consider an open room environment based on the one described in [5]. We recreate the environment within the Minigrid [13] framework. The room is defined by a four-by-four grid in which the agent can freely navigate by selecting actions like "turn left", "turn right" or "move forward". The agent observes a three-by-three patch around its current position, as shown in Figure 2(b). Each corner of the environment is uniquely defined by an observable colored patch, as shown in Figure 2(a) and Figure 2(b). Each observed patch is mapped to a unique index as observation. In this environment, this corresponds to 21 observations.
We learn the structure of the room by first training a CSCG, initialized with 20 clones for each observation, as described in Section 2. The model parameters were learned using a random-walk sequence consisting of 100k observation-action pairs. We then set the preference of the agent to the two observations reaching the corner, e.g. for the bottom right corner these are the observations of reaching it from the left and from the top. As described in Section 2, we select the clone states for which the likelihood of this observation is 1 and set the preference for all these states, for both the clone graph and active inference planning schemes.

We run an experiment for all three agents where the agent starts in a random (ambiguous, i.e. looking at the center) pose and has to reach a randomly selected corner as the goal. We run this for 400 separate trials, where each trial was seeded with the same random seed, ensuring that the different agents start with the same starting position and goal. We provide the agents with 25 timesteps to reach the goal and report the success rate and episode length for each of the agents.

Qualitatively, in Figure 2(a), we observe that the behavior of the clone graph agent and the active inference agent is very similar; the agent first moves to a corner, which is either the goal (ending the episode) or an informative landmark, and then moves towards the goal. Quantitatively, we observe that the average episode length shown in Figure 2(c) is significantly larger for the random agent than for both the clone graph agent (2-sample independent t-test, p-value=\(7.6\cdot 10^{-6}\)) and the active inference agent (2-sample independent t-test, p-value=\(3.6\cdot 10^{-5}\)), illustrating that the model has learned the structure of the world and is not moving randomly.
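The significance comparisons here and in the following experiments are 2-sample independent t-tests. A numpy-only sketch of the pooled-variance statistic on hypothetical episode-length data (illustrative numbers, not the paper's measurements):

```python
import numpy as np
from math import sqrt

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic for independent samples."""
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    sp = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (a.mean() - b.mean()) / (sp * sqrt(1 / na + 1 / nb))

rng = np.random.default_rng(0)
# Hypothetical episode lengths over 400 trials per agent (our own numbers).
agent = rng.normal(8.0, 2.0, size=400)           # e.g. a planning agent
random_agent = rng.normal(11.0, 3.0, size=400)   # e.g. the random baseline

t = two_sample_t(agent, random_agent)
# |t| far exceeds the ~1.97 critical value for df=798 at alpha=0.05,
# so the difference in mean episode length is significant.
assert abs(t) > 1.97
```

In practice one would typically call `scipy.stats.ttest_ind`, which also returns the p-values reported in the text.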
Secondly, we observe that the average episode length of the clone graph agent does not significantly differ from that of the active inference agent (2-sample independent t-test, p-value=\(0.237\)), illustrating that for this environment the information-seeking behavior does not benefit performance. This is further evidenced by the success rate shown in Figure 2(d), where the performance of both agents does not significantly differ, as both achieve a 100% success rate. From this experiment, we conclude that in an environment where the agent can quickly find an unambiguous landmark, such as the corners in the open room, both agents have similar performance.

Figure 2: (a) Qualitative results of navigating the open room maze for the different agents with different random seeds. The agent is tasked with reaching a particular corner in the maze. The trajectory of the agent is marked, and the arrow points in the direction in which the agent is looking. (b) The two three-by-three observations defining a goal in a corner of the open room maze. (c) A box plot representing the statistics of the amount of time until the goal is reached (only the success scenarios are considered) over 400 trials. (d) The success rate of the agent in reaching the goal observation (computed over 400 trials).

### Self-localization in an ambiguous maze

In the previous environment, the agent was able to quickly self-localize, as random actions would easily disambiguate where in the environment it is. In this experiment, we increase the level of ambiguity and evaluate whether the active inference agent is able to self-localize faster than the clone graph agent. For this experiment, we consider the highly ambiguous maze from Friston et al. [9] shown in Figure 3(a). In this environment, the agent is only able to observe the one-by-one tile it is currently standing on, i.e. whether it is a red, white, or green tile.
While the red and white tiles are highly ambiguous, there is only a single green tile at the center of the maze. The agent is able to navigate the maze through actions like "up", "down", "left" or "right", and is only limited by a wall around the maze. Unique observation tiles are again mapped to categorical indices.

We construct a CSCG with 40 clones per observation and optimize it over a sequence of 10k steps in the environment until convergence. We then set the preference for this environment to the green tile, in a similar fashion as we did in the experiment in Section 3.1, for both the clone graph agent and the active inference agent. In this environment, the agent's goal is always to go to the green tile in the center of the room. However, the agent starts at a random position on a white tile. We again run this experiment for 400 trials for each agent, seeded over trials such that the starting position is the same for each agent. Each episode has a maximum duration of 25 steps, and we record the episode length and the success rate of the agents.

Figure 3: (a) Qualitative results of navigating the ambiguous maze with the three different agents. The green square marks the goal observation, the trajectory of the agent is marked in black. In this maze, the agent can only observe the current tile, and the color of the tile represents the observation the agent receives. (b) Shows the amount of steps needed for reaching the target, only measured for the success cases. (c) Shows the success rate, computed over 400 trials for the three agents.

Qualitatively, we can see the trajectories taken by the clone and active inference agents in Figure 3(a). We observe that both agents are able to solve the task, seemingly moving randomly in the maze. However, we also observe the random agent navigating in the maze, which typically does not reach the goal.
Quantitatively, we again measure that the clone graph agent (2-sample independent t-test, p-value=\(1\cdot 10^{-99}\)) and the active inference agent (2-sample independent t-test, p-value=\(1\cdot 10^{-168}\)) significantly differ from the random agent, showing goal-directed behavior. However, we now observe that the clone graph agent, with a mean episode duration of 10.92 steps, is significantly slower than the active inference agent, with a mean episode duration of 7.92 steps (2-sample independent t-test, p-value=\(3.46\cdot 10^{-22}\)), even though their success rates are similar, with 98.5% for the clone graph agent and 100% for the active inference agent. From this experiment, we conclude that in highly ambiguous environments, agents using active inference for goal-driven behavior disambiguate their location and reach the goal faster than agents who do not.

### Solving the T-Maze

In this final experiment, we consider an environment where making informed decisions is crucial. We compare the performance of the agents in the quintessential active inference environment: the T-maze [14]. In this environment, the agent must make a choice to go either into the left or the right corridor without being able to observe the location of the reward (we hide it behind a door), and the episode ends when it makes a decision. The agent is, however, able to disambiguate the location of the reward by observing a colored cue behind itself. We create the environment again in the Minigrid environment [13]; the agent has three-by-three patches as observations and can act by either "turning left", "turning right" or "moving forward". The agent always starts in an upwards-looking position, looking away from the cue. Additionally, when the agent wants to walk through a door, it immediately goes to the tile behind the door, ending the episode either with or without reward.
We train a CSCG with 5 clones per observation on 500 distinct episodes with a maximum length of 50 steps; however, these episodes are typically shorter as the agent goes through a door. Similar to the open room environment, we map each three-by-three observation patch to a unique index and, additionally, we also map the reward to a separate observation. This yields 17 unique observations the agent can observe. We then set the preference to the rewarding observation for both the clone and active inference agents, and depending on context, the agent should be able to infer a different path towards the goal. We again conduct 400 random trials, where the seed is again fixed for each trial within an agent, ensuring that for each trial the goal location is the same.

When we evaluate the behavior of the agents qualitatively (Figure 4(a)), we observe that the active inference agent always moves forward, turns around and checks for the cue, and then moves towards the correct goal location. In contrast, the clone graph agent randomly picks a direction, as it has not accurately inferred which state it is currently in. Interestingly, when the stochasticity of the action sampling forces the agent to turn around and it observes the cue, it chooses the correct action. This explains the 56.75% success rate, which is slightly higher than the expected 50% of selecting actions randomly. In this environment, where thoughtless decisions are punished, the active inference agent is significantly more accurate with a success rate of 100% (2-sample independent z-test for proportions, p-value=\(6.25\cdot 10^{-50}\)). Interestingly, the clone graph agent is significantly faster, with an average of 4.5 steps, than the active inference agent, with an average of 5 steps (2-sample independent t-test, p-value=\(2.86\cdot 10^{-5}\)). This is attributed to the fact that the agent does not take the time to observe the cue and moves towards wherever it believes the goal is. From this experiment, we conclude that in information-critical decision-making environments, using active inference provides a significant benefit over greedy planning strategies.

Figure 4: (a) Qualitative results of navigating the T-maze with the three different agents. The green square marks the goal observation, and the black arrows the trajectory followed by the agent. At the bottom of the T, there is a colored cue: blue marks that the goal is on the right, while red marks that the goal is on the left. (b) Shows the number of steps needed for reaching the target, only measured for the success cases. (c) Shows the success rate, computed over 400 trials for the three agents.

## 4 Discussion

We relate our work to representation learning in complex environments. In the context of learning cognitive maps, work has been done that explicitly separates the underlying spatial structure of the environments from the specific items observed [4]. While this model does not entail a generative model, other approaches do consider the hippocampus as a generative model [15] and show that through generative processes novel plans can be created. Model-based reinforcement learning systems learn similar world models directly from pixels [16] and are able to achieve high performance on RL benchmarks. All these approaches typically treat planning as a trivial problem that can be solved through forward rollouts, or by value optimization using the Bellman equation; however, they do not consider the belief over the state as a parameter. Within the active inference community, a lot of work has been done on planning in different types of environments. Casting navigation as inferring the sequence of actions under the generative model using deep neural networks has been done before in [17, 18], where the approximate posterior is implemented through a variational deep neural network. The active inference framework has also been successful in solving various RL benchmarks [19, 20].
These approaches show that inferring action through surprise minimization is powerful in solving a wide range of tasks, although they do not explicitly deal with aliasing in observations. We believe that the combination of both approaches can yield a promising avenue for building cognitive maps in silico that can be used to solve important real-world tasks such as navigation. The CSCG has been shown to be a powerful model for flexible planning and disambiguating aliased observations, making it the perfect candidate for integration within the active inference framework. Through this integration with the inherent uncertainty-resolving behavior of active inference, we have observed significant improvements in terms of success rate or episode length, depending on the specific environment. Another open issue that we plan to resolve in the future is the fact that the CSCG is currently learned in an offline fashion. Therefore, our current approach is not benefiting from the curiosity- or novelty-based schemes of active inference [21, 7], which we hypothesize would improve the training efficiency with respect to the number of required samples.

## 5 Conclusion

We first propose a mechanism for using the clone-structured cognitive graph within the active inference framework. This allows us to combine the naturally context-dependent disambiguation of aliased observations in the generative model with an active inference planner that naturally seeks the sequences best aligned with this purpose. Through evaluation in three distinct environments, we have highlighted the advantages of active inference compared to more simplistic and greedy planning methods. We show that in naturally unambiguous environments, the active inference and clone graph agents perform similarly in both success rate and time to reach the goal. Additionally, we have observed that in highly ambiguous environments, the active inference agent disambiguates its location and reaches the goal significantly faster.
These results corroborate the benefits of using an active inference approach. #### Acknowledgments This research received funding from the Flemish Government (AI Research Program). This research was supported by a grant for a research stay abroad by the Flanders Research Foundation (FWO).
2301.04036
Deep Reinforcement Learning for Autonomous Ground Vehicle Exploration Without A-Priori Maps
Autonomous Ground Vehicles (AGVs) are essential tools for a wide range of applications stemming from their ability to operate in hazardous environments with minimal human operator input. Effective motion planning is paramount for successful operation of AGVs. Conventional motion planning algorithms are dependent on prior knowledge of environment characteristics and offer limited utility in information poor, dynamically altering environments such as areas where emergency hazards like fire and earthquake occur, and unexplored subterranean environments such as tunnels and lava tubes on Mars. We propose a Deep Reinforcement Learning (DRL) framework for intelligent AGV exploration without a-priori maps utilizing Actor-Critic DRL algorithms to learn policies in continuous and high-dimensional action spaces directly from raw sensor data. The DRL architecture comprises feedforward neural networks for the critic and actor representations in which the actor network strategizes linear and angular velocity control actions given current state inputs, that are evaluated by the critic network which learns and estimates Q-values to maximize an accumulated reward. Three off-policy DRL algorithms, DDPG, TD3 and SAC, are trained and compared in two environments of varying complexity, and further evaluated in a third with no prior training or knowledge of map characteristics. The agent is shown to learn optimal policies at the end of each training period to chart quick, collision-free exploration trajectories, and is extensible, capable of adapting to an unknown environment without changes to network architecture or hyperparameters. The best algorithm is further evaluated in a realistic 3D environment.
Shathushan Sivashangaran, Azim Eskandarian
2023-01-10T15:38:59Z
http://arxiv.org/abs/2301.04036v2
# Deep Reinforcement Learning for Autonomous Ground Vehicle Exploration Without A-Priori Maps ###### Abstract Autonomous Ground Vehicles (AGVs) are essential tools for a wide range of applications stemming from their ability to operate in hazardous environments with minimal human operator input. Efficient and effective motion planning is paramount for successful operation of AGVs. Conventional motion planning algorithms are dependent on prior knowledge of environment characteristics and offer limited utility in information poor, dynamically altering environments such as areas where emergency hazards like fire and earthquake occur, and unexplored subterranean environments such as tunnels and lava tubes on Mars. We propose a Deep Reinforcement Learning (DRL) framework for intelligent AGV exploration without a-priori maps utilizing Actor-Critic DRL algorithms to learn policies in continuous and high-dimensional action spaces, required for robotics applications. The DRL architecture comprises feedforward neural networks for the critic and actor representations in which the actor network strategizes linear and angular velocity control actions given current state inputs, that are evaluated by the critic network which learns and estimates Q-values to maximize an accumulated reward. Three off-policy DRL algorithms, DDPG, TD3 and SAC, are trained and compared in two environments of varying complexity, and further evaluated in a third with no prior training or knowledge of map characteristics. The agent is shown to learn optimal policies at the end of each training period to chart quick, efficient and collision-free exploration trajectories, and is extensible, capable of adapting to an unknown environment with no changes to network architecture or hyperparameters. 
## I Introduction Autonomous Ground Vehicles (AGVs) are indispensable tools for mapping uncharted terrain, search & rescue missions, disaster response, military operations, mining, and extraterrestrial planetary exploration owing to their ability to operate in hazardous, unstructured environments reliably with minimal input from a human operator. Conventional AGV navigation algorithms are dependent on specific environmental configurations [1] which limits their effectiveness in adapting to dynamically changing environments such as areas where emergency hazards like fire and earthquake occur, and unexplored subterranean environments such as tunnels, caves and lava tubes on Mars. Recent advancements in Artificial Intelligence (AI), sensors, communication and computer technology facilitate intelligent AGVs capable of high autonomy. Simultaneous Localization And Mapping (SLAM) enables AGVs to simultaneously estimate vehicle state utilizing on-board sensors and construct a model of the environment the sensors perceive [2, 3]. The inclusion of LIDAR-centric SLAM in the perception pipeline is a key enabler for AGV navigation in environments that are GPS-denied with no access to a-priori maps [4]. Mobile robot trajectories require optimization for shortest path, minimum energy consumption and training time [5]. Conventional motion planning algorithms offer limited utility in information poor, dynamically altering environments. 
These comprise graph search algorithms such as Dijkstra, A* and D* [6] that are well-defined and simple to use but are inefficient in complex, dynamic environments and have poor robustness to noise interference and errors in the environment model, random sampling algorithms such as Probability Graph Method (PGM) and Rapid exploration Random Tree (RRT) [7] that select random scatter points in the entire environment space to search for the optimal path between the starting and end points making them susceptible to poor real-time performance, sub-optimal solutions and high computation cost, Artificial Potential Field (APF) [8] that has low computation cost and is efficient but prone to local minima traps, and nature inspired algorithms such as fuzzy logic that is robust, but requires prior knowledge in the form of user defined knowledge based logic and rules, and Genetic Algorithm (GA) [9] which is ideal for the global optimal solution and suitable for complex problems, but has poor local search ability and slow convergence rate. Motion planning models that incorporate Artificial Neural Networks (ANN) and Actor-Critic Reinforcement Learning (RL) enable robotic systems to learn optimal, end-to-end policies in continuous and high-dimensional action spaces directly from characteristics of high-dimensional sensory input data to intelligently select goal driven actions in dynamically changing, obstacle filled unstructured terrain in the absence of prior knowledge and detailed maps [10, 11, 12, 13]. On-policy Actor-Critic Deep Reinforcement Learning (DRL) algorithms such as Trust Region Policy Optimization (TRPO) [14] and Proximal Policy Optimization (PPO) [15] are robust to hyperparameter tuning and straightforward to implement, but are sample inefficient as these require new training samples for every policy update, which makes learning an effective policy for complex tasks computationally exorbitant. 
Off-policy Actor-Critic DRL algorithms such as Deep Deterministic Policy Gradient (DDPG) [16], Twin Delayed Deep Deterministic Policy Gradient (TD3) [17] and Soft Actor-Critic (SAC) [18] reuse past experience for learning, thus have good sample efficiency. Given the potential of DRL for AGV navigation in information poor environments, this paper presents and evaluates a DRL architecture for intelligent AGV navigation, and compares state-of-the-art off-policy DRL algorithms' ability to safely navigate and explore obstacle filled terrain without prior knowledge of environment characteristics. Moreover, this paper answers research questions related to effective policy transfer between environments, and sheds light on the importance, and benefits of simulation training for complex DRL tasks. These questions are answered through multiple simulations and analyses of learning and post-training performances in environments of varying complexity. ## II Background on Deep Reinforcement Learning RL is a Machine Learning (ML) framework inspired by trial-and-error animal learning to train agents that interact with the surrounding environment by promoting or discouraging actions utilizing reward feedback signals designed to gauge effectiveness of executed actions. Deep Learning (DL), a key ML component, utilizes ANNs to form an abstract, distinguishable high-level representation from low-level input features. Deep Reinforcement Learning (DRL) algorithms combine DL and RL to extract unknown environment features from high-dimensional input data utilizing ANNs, and decide control actions using RL. Figure 1 portrays the DRL framework. A RL agent observes its state \(s_{i}\) at each time step \(t\), and selects an action \(a_{i}\) from action space \(A\), conforming to a learned policy \(\pi(a_{i}\mid s_{i})\) that maps states to actions. 
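The observe-act-reward loop just described can be sketched generically. In the sketch below, the one-dimensional toy environment, its reward values, and the always-move-right policy are illustrative assumptions, not part of the paper.

```python
class ToyEnv:
    """Illustrative 1-D environment: the agent tries to reach position 5."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action in {-1, +1}
        self.pos += action
        reward = 1.0 if self.pos == 5 else -0.1
        return self.pos, reward, self.pos == 5

def run_episode(env, policy, max_steps=100):
    """The agent-environment loop of Figure 1: observe s_t, act a_t, collect rw_t."""
    state, total_reward = env.reset(), 0.0
    for _ in range(max_steps):
        action = policy(state)            # a_t selected according to pi(a_t | s_t)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

ret = run_episode(ToyEnv(), policy=lambda s: 1)   # always move right
```

In an actual DRL agent, the fixed policy above is replaced by a neural network whose parameters are updated from the collected rewards.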
The expectation of a discounted, accumulated reward \(R_{i}=\Sigma_{k=0}^{\infty}\gamma^{k}rw_{i+k}\) at each state is maximized during learning, where \(\gamma\in\) (0,1] is the discount factor, and \(rw_{i}\) is the scalar reward signal for selecting action \(a_{i}\)[19]. ### _Actor-Critic Framework_ An actor-critic framework utilizing deep function approximators that combines both value-based and policy-based RL is the preferred method to learn policies in continuous and high-dimensional action spaces, required for robotics applications. This method leverages the joint computing and decision-making abilities of the actor and critic neural networks to yield low variance and fast speeds when updating gradients. Figure 2 illustrates the Actor-Critic framework. The actor network strategizes an action output selected from a continuous action space using policy gradient, utilizing the current state as the input. The critic evaluates the chosen actions and outputs the associated approximate Q-value for the current state and selected action using an approximated value function to counter the large variance in the policy gradients. In off-policy algorithms, sample data accumulated in a replay buffer is utilized to update and approximate the value function yielding higher sample efficiency than on-policy algorithms. The two networks compute the action prediction for the current state at each time step to generate a temporal-difference error signal. ### _Deep Deterministic Policy Gradient_ DDPG is a model-free, off-policy actor-critic RL algorithm that combines ANNs with the actor-critic representation of standard Deterministic Policy Gradient (DPG) [20] to successfully implement control sequences in a continuous action space. The actor, \(\pi(s\mid\theta)\) and critic, \(Q(s,a\mid\phi)\) each comprise fully-connected, two-layer feedforward ANNs with a Rectified Linear Unit (ReLU) activation function. 
The loss \(L\) is minimized across all sampled experiences to update the critic parameters, \(\phi\), \[L=\frac{1}{M}\sum_{i=1}^{M}(y_{i}-Q(s_{i},a_{i}\mid\phi))^{2} \tag{1}\] Here \(M\) is a random mini-batch of experiences, and \(y_{i}\) is the target value function computed as follows, \[y_{i}=R_{i}+\gamma Q_{t}(s_{i+1},\pi_{t}(s_{i+1}\mid\theta_{t})\mid\phi_{t}) \tag{2}\] \(\theta_{t}\) and \(\phi_{t}\) are parameters of the target actor \(\pi_{t}\) and target critic \(Q_{t}\) respectively, that have the same structure and parameterization as \(\pi\) and \(Q\). The agent periodically updates \(\theta_{t}\) and \(\phi_{t}\) using the latest \(\theta\) and \(\phi\) values to improve the stability of the optimization. The actor parameters, \(\theta\) are updated using a sampled policy gradient \(\nabla_{\theta}J\) to maximize the expected discounted reward, \[\nabla_{\theta}J\approx\frac{1}{M}\sum_{i=1}^{M}G_{ai}G_{\pi i} \tag{3}\] Here \(G_{ai}\) is the gradient of the critic output with respect to the action selected by the actor network computed as follows, \[G_{ai}=\nabla_{a}Q(s_{i},\pi(s_{i}\mid\theta)\mid\phi) \tag{4}\] \(G_{\pi i}\) is the gradient of the actor output with respect to its parameters, \[G_{\pi i}=\nabla_{\theta}\pi(s_{i}\mid\theta) \tag{5}\] Fig. 1: Schematic of deep reinforcement learning framework. Fig. 2: Schematic of Actor-Critic framework. ### _Twin-Delayed Deep Deterministic Policy Gradient_ TD3 is designed to improve learned policies by preventing overestimation of the value function. Two Q-value functions are learned simultaneously, and the minimum is used for policy updates. Moreover, the policy is updated less frequently than the Q-value function to further improve learned policies. 
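Returning briefly to DDPG, the target and critic-loss computations of Equations (1)-(2) can be sketched with numpy. The linear actor and critic below, and their weights, are toy stand-ins for the paper's two-layer ReLU networks, chosen only for illustration.

```python
import numpy as np

def ddpg_targets(rewards, next_states, target_actor, target_critic, gamma=0.99):
    # Equation (2): y_i = R_i + gamma * Q_t(s_{i+1}, pi_t(s_{i+1}))
    next_actions = target_actor(next_states)
    return rewards + gamma * target_critic(next_states, next_actions)

def critic_loss(states, actions, targets, critic):
    # Equation (1): mean squared error over the mini-batch of size M
    return np.mean((targets - critic(states, actions)) ** 2)

# Illustrative linear stand-ins for the actor and critic networks.
theta = np.array([0.5, -0.2])            # "actor" weights (hypothetical)
phi = np.array([0.3, 0.1, 0.4])          # "critic" weights (hypothetical)
actor = lambda s: s @ theta
critic = lambda s, a: np.concatenate([s, a[:, None]], axis=1) @ phi

rewards = np.array([1.0, 0.0])
next_states = np.array([[0.1, 0.2], [0.3, -0.1]])
y = ddpg_targets(rewards, next_states, actor, critic, gamma=0.9)
```

In a real implementation the gradient steps of Equations (3)-(5) would then be taken through these networks by automatic differentiation.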
The parameters of the critic, \(Q_{k}(s,a\mid\phi_{k})\), where \(k=2\) is the number of critics, are updated by minimizing the loss \(L_{k}\) as follows, \[L_{k}=\frac{1}{M}\sum_{i=1}^{M}(y_{i}-Q_{k}(s_{i},a_{i}\mid\phi_{k}))^{2} \tag{6}\] The target value function \(y_{i}\) is computed as follows, \[y_{i}=R_{i}+\gamma\min_{k}(Q_{tk}(s_{i+1},clip(\pi_{t}(s_{i+1}\mid\theta_{t}) +\varepsilon)\mid\phi_{tk})) \tag{7}\] Here \(\theta_{t}\) and \(\phi_{tk}\) are parameters of the target actor \(\pi_{t}\) and target critics \(Q_{tk}\), and \(\varepsilon\) is noise added to the computed action to promote exploration. The action is clipped based on the noise limits. The actor parameters are updated similar to DDPG using Equation (3) where \(G_{ai}\) is computed as follows and \(G_{\pi i}\) is computed as in Equation (5). \[G_{ai}=\nabla_{a}\min_{k}(Q_{k}(s_{i},\pi(s_{i}\mid\theta)\mid\phi)) \tag{8}\] ### _Soft Actor-Critic_ SAC, similar to DDPG and TD3, is a model-free, off-policy actor-critic RL algorithm. In addition to maximizing the long-term expected reward, SAC maximizes the entropy of the policy, which is a measure of the policy uncertainty at a given state. A higher policy entropy promotes exploration, hence the learned policy balances exploitation and exploration of the environment. The agent utilizes a stochastic actor that outputs mean and standard deviation, using which an unbounded action is randomly selected from a Gaussian distribution. The entropy of the policy is computed during training for the given observation using this unbounded probability distribution. Bounded actions that comply with the action space are generated from the unbounded action by applying \(tanh\) and scaling operations. The critic parameters are updated at specific time step periods by minimizing the loss function in Equation (6), similar to TD3 for \(k\) critics. 
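The bounded-action sampling just described can be sketched as follows. The Gaussian log-likelihood is written out explicitly to mirror the statement that the entropy term uses the unbounded distribution; the particular mean, standard deviation, and action scale are illustrative assumptions.

```python
import math
import random

def sample_bounded_action(mean, std, action_scale, rng=random):
    """SAC-style action sampling: draw an unbounded Gaussian action,
    score it under the unbounded distribution, then bound it with
    tanh and scaling as described in the text."""
    u = rng.gauss(mean, std)                                  # unbounded action
    logp = -0.5 * ((u - mean) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))
    a = action_scale * math.tanh(u)                           # bounded action
    return a, logp

random.seed(1)
a, logp = sample_bounded_action(mean=0.0, std=0.5, action_scale=2.0)
```

Because of the \(tanh\) squashing, the bounded action always lies in \([-action\_scale, action\_scale]\) and therefore complies with the action space regardless of the sampled value.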
The target value function \(y_{i}\) is computed as the sum of the reward \(R_{i}\), the minimum discounted future value from the target critic networks, and the weighted entropy as follows, \[y_{i}=R_{i}+\gamma\min_{k}(Q_{tk}(s_{i+1},\pi(s_{i+1}\mid\theta)\mid\phi_{tk}))-\alpha\ln\pi(s_{i+1}\mid\theta) \tag{9}\] Here \(\alpha\) is the entropy loss weight. The entropy weight is updated by minimizing the loss function, \(L_{\alpha}\) where \(H\) is the target entropy as follows, \[L_{\alpha}=\frac{1}{M}\sum_{i=1}^{M}(-\alpha\ln\pi(s_{i}\mid\theta)-\alpha H) \tag{10}\] The stochastic actor parameters are updated by minimizing the objective function \(J_{\pi}\), \[J_{\pi}=\frac{1}{M}\sum_{i=1}^{M}(-\min_{k}(Q_{tk}(s_{i},\pi(s_{i}\mid\theta)\mid\phi_{tk}))+\alpha\ln\pi(s_{i}\mid\theta)) \tag{11}\] ## III Methodology This section presents the DRL architecture, reward design, training, and evaluation methodologies for collision-free AGV exploration in unknown environments. The MATLAB Robotics System [21] and Reinforcement Learning [22] Toolboxes, and Simulink are utilized to model the AGV, and train the DRL agent. ### _Network Architecture_ In order to maximize the long-term reward, designed to encourage quick, efficient, and collision-free exploration of the environment, the DRL agent makes strategic linear and angular velocity action decisions for the current time step, \(v_{t}\) and \(\omega_{t}\). These decisions are based on LiDAR range measurements \(r\), the AGV's state \(s=(x,y,\psi)\), the previous time step's action \(a=(v,\omega)\), and the corresponding reward value, \(R\). The proposed DRL architecture for AGV exploration is shown in Figure 3. Fig. 3: Ubiquitous Deep Reinforcement Learning architecture for Autonomous Ground Vehicle exploration. ### _Reward Function_ The reward function is designed to encourage the agent to explore its environment efficiently, quickly and safely, without collisions. 
It computes a scalar reward value as follows, \[R=0.005r^{2}+1.3v^{2}-0.5\omega^{2} \tag{12}\] A positive reward is applied to the square of the minimum measurement obtained by the LiDAR sensor, \(r\) to incentivize obstacle avoidance. This reward is highest when the agent is at a greater distance from obstacles, encouraging the generation of paths devoid of obstacles. The agent is additionally rewarded for swift navigation through positive reinforcement of linear velocity, \(v\). To encourage efficient exploration, a negative reward is applied to angular velocity, \(\omega\) to discourage repeated circular motion in the same vicinity. High coefficients for \(r^{2}\) and \(v^{2}\) lead to a compromise between obstacle avoidance ability and exploratory behavior, hence an optimal balance was determined through experimentation to prioritize both exploration, and collision avoidance. ### _AGV Model_ XTENTH-CAR [23], a proportionally scaled experimental vehicle platform, designed with similar hardware and software architectures as the full-size X-CAR [24] connected autonomous vehicle, is modeled and trained in simulation. The XTENTH-CAR AGV, shown in Figure 4, has a wheelbase of 0.32 \(m\) and utilizes the Ackermann steering mechanism. The AGV's kinematics are computed using a bicycle model, portrayed in Figure 5, where the front and rear wheels are represented by a single wheel located at the center of each axle. This model is accurate for use at low speeds and offers a good balance between model accuracy and computation cost [25] for evaluation of the DRL agent. 
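Equation (12) is simple enough to sketch directly; the sample inputs below are illustrative values, not measurements from the paper.

```python
def exploration_reward(min_range, v, omega):
    """Equation (12): reward clear space and speed, penalize turning."""
    return 0.005 * min_range**2 + 1.3 * v**2 - 0.5 * omega**2

# Driving fast and straight in open space earns a positive reward...
open_straight = exploration_reward(min_range=10.0, v=1.0, omega=0.0)
# ...while spinning in place near an obstacle is penalized.
near_spin = exploration_reward(min_range=0.5, v=0.0, omega=1.0)
```

The relative magnitudes of the three coefficients realize the trade-off between obstacle avoidance and exploratory behavior discussed above.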
The bicycle model is represented by the following equations, \[\dot{x}=v\,cos(\psi+\beta) \tag{13}\] \[\dot{y}=v\,sin(\psi+\beta) \tag{14}\] \[\dot{\psi}=\frac{v}{l_{r}}\,sin(\beta) \tag{15}\] \[\beta=tan^{-1}\left(\frac{l_{r}}{l_{f}+l_{r}}\,tan(\delta)\right) \tag{16}\] Here \(x\) and \(y\) are position coordinates of the AGV's center of mass, \(\psi\) is the angle of the AGV's heading with respect to the inertial reference frame, \(\beta\) is the angle between the velocity vector of the AGV's center of mass and its longitudinal axis, \(l_{f}\) and \(l_{r}\) are distances from the center of mass to the front and rear axles respectively, and velocity, \(v\) and steering angle, \(\delta\) are control inputs. ### _Environments for Training and Evaluation_ The DRL agent is trained in two distinct environments of varying complexity. The first environment, depicted in Figure 6, is a 25 \(m\) x 25 \(m\) space with walls that the agent must steer clear of. The second environment, illustrated in Figure 7, is a more complex 40 \(m\) x 40 \(m\) space with walls and various obstacles, marked in black, that the agent must additionally avoid. The AGV, identified with a red symbol on the training maps, is set to a random starting position at the start of each training episode to enhance policy learning. This reset ensures that the agent is not biased towards any particular initial location. Fig. 4: XTENTH-CAR Ackermann steered AGV platform. Fig. 5: Schematic of kinematic bicycle model. Fig. 6: First training environment with DRL agent marked in red at a randomized initial location. The trained agent is evaluated in a third environment, illustrated in Figure 8 to evaluate the robustness, and performance of the learned policy in a new, unknown environment with the same network architecture and hyperparameters. 
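Equations (13)-(16) can be stepped forward with simple Euler integration. In the sketch below, the equal 0.16 \(m\) front/rear split of the 0.32 \(m\) wheelbase and the time step are assumptions made for illustration.

```python
import math

def bicycle_step(x, y, psi, v, delta, l_f=0.16, l_r=0.16, dt=0.05):
    """One Euler step of the kinematic bicycle model, Equations (13)-(16)."""
    beta = math.atan((l_r / (l_f + l_r)) * math.tan(delta))   # Eq. (16)
    x += v * math.cos(psi + beta) * dt                        # Eq. (13)
    y += v * math.sin(psi + beta) * dt                        # Eq. (14)
    psi += (v / l_r) * math.sin(beta) * dt                    # Eq. (15)
    return x, y, psi

# With zero steering angle the AGV moves in a straight line.
x, y, psi = bicycle_step(0.0, 0.0, 0.0, v=1.0, delta=0.0, dt=1.0)
```

A nonzero steering angle \(\delta\) produces a nonzero slip angle \(\beta\) and hence a change in heading \(\psi\) at the next step.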
### _Training Conditions and Hyperparameters_ A training episode is concluded when the agent encounters an obstacle or completes the maximum number of steps permitted in a single episode. Subsequently, the agent is reset to a randomly determined starting location to initiate the next episode. The DRL agent is trained to a total of 10,000 episodes, each with a maximum of 1000 steps in the first environment, and 20,000 episodes, each with a maximum of 2000 steps in the second, to facilitate rapid iterative learning. Modified hyperparameters with non-default values are listed in Table I. ## IV Results and Discussion In this section, we present DRL training results, including post-training exploration trajectories and corresponding average return and steps achieved by the agent each episode iteration during the training period, utilizing DDPG, TD3 and SAC algorithms. We further evaluate each trained policy in a new environment with no prior knowledge of environment characteristics. ### _Training Results_ An Intel i7 11700K CPU and GeForce RTX 3070 Ti GPU were used for training. Table II summarizes the training times for each DRL algorithm in the evaluated environments. SAC required the longest training time, followed by TD3 and DDPG which required the least. On average, training in the second, more complex environment required 31% longer training time than in the first, over twice the number of training episodes. DDPG required 28.5%, TD3 34.4% and SAC 30.1% longer to train in the second environment. In the first environment, TD3 required 44.4% longer to train than DDPG, and SAC 137.4% longer than DDPG and 64.4% longer than TD3. In the second environment, TD3 required 51.1% longer to train than DDPG, and SAC 140.4% longer than DDPG and 59.2% longer than TD3. On average, TD3 required 47.8% longer training time than DDPG, and SAC 138.9% longer than DDPG and 61.8% longer training time than TD3. 
Training times ranged from 2.75 days in the first environment for DDPG to 8.5 days in the second environment for SAC. More optimal policies require longer training times to accommodate increased episode steps in the first environment, and more training episodes in the second. Fig. 7: Second training environment with DRL agent marked in red at a randomized initial location. Fig. 8: Evaluation environment with DRL agent marked in red at a randomized initial location. #### IV-A1 First Environment The order 50 moving average return and agent steps during training in the first environment are illustrated in Figures 9 and 10. The training results in the first environment are summarized in Table III. DDPG converges first at 170 episodes with an average return of 318 and 865 average steps. TD3 converges last at 960 episodes with an average return of 435 and 1000 average steps, and SAC converges at 390 episodes with an average return of 320 and the maximum 1000 average steps. DDPG learned the least optimal policy with the lowest average return and agent steps. TD3 achieves the highest return, and the maximum 1000 steps, however SAC achieves 1000 exploration steps more consistently post training convergence. Unlike TD3 which solely maximizes the long-term expected reward, SAC additionally maximizes the entropy of the policy to promote exploration. Consequently, TD3 learns a policy with a higher return, but SAC learns the better policy for agent exploration. The trajectories in the first environment for each algorithm after the first training episode are illustrated in Figure 11. All three agents collide having no prior training experience. SAC covers the most ground after one training episode. The trajectories in the first environment for each algorithm post training completion are illustrated in Figure 12. Each algorithm achieves 1000 episode steps without collision. 
SAC covers the most ground, and exhibits the most efficient exploratory behavior which will result in the greatest energy savings. TD3 is next best, followed by DDPG which is the most inefficient, covering the same region multiple times. Fig. 9: Order 50 moving average return during training in the first environment. Fig. 10: Order 50 moving average agent steps during training in the first environment. Fig. 11: Trajectories in the first environment post first training episode. Fig. 12: Trajectories in the first environment post training completion. #### IV-A2 Second Environment The order 50 moving average return and agent steps during training in the second environment are illustrated in Figures 13 and 14. The training results in the second environment are summarized in Table IV. Training for 20,000 episodes is insufficient for the DRL algorithms to learn an optimal policy in the second environment. At the end of the training period, DDPG achieves an average return of 125 and 530 average steps, TD3 obtains an average return of 230 and 715 average steps, and SAC converges to a local maximum at 10,620 episodes with an average return of 210 and 1050 average steps. Training was limited to 20,000 episodes to gauge performance in a reasonable time frame, however, continued training over 75,000 to 100,000 episodes will enable the agents to learn an optimal policy to traverse the more complex terrain over an indefinite number of exploration steps. Training DDPG, TD3 and SAC algorithms in the second environment for 20,000 episodes required a total of 416.8 hours, as such, it is infeasible to evaluate the algorithms for 75,000+ episodes with the existing setup. More powerful computer hardware is required. Similar to the training results in the first environment, DDPG learned the least optimal policy achieving the lowest return and agent steps. TD3 achieved the highest return, however SAC learned a more optimal policy achieving the highest agent steps. 
The trajectories in the second environment for each algorithm after the first training episode are illustrated in Figure 15. DDPG covers the least ground after one training episode. SAC and TD3 cover a similar distance. All three agents collide after travelling a short distance with no prior training experience. The trajectories in the second environment for each algorithm post training completion are illustrated in Figure 16. SAC achieves the best performance, learning a trajectory that covers the most distance. TD3 and DDPG yield similar performance, with TD3 being a marginal improvement. Fig. 14: Order 50 moving average agent steps during training in the second environment. Fig. 15: Trajectories in the second environment post first training episode. Fig. 16: Trajectories in the second environment post training completion. ### _Trained Policy Evaluation_ The six agents, DDPG, TD3 and SAC, trained in two different environments are evaluated in a third unknown environment with no prior training or knowledge of environment characteristics, to evaluate the extensibility of the ubiquitous DRL architecture for AGV exploration in information poor environments. Figure 17 portrays the trajectories for each agent in the third environment. Fig. 17: Trained DRL agents evaluated in the third environment. The evaluation results are summarized in Table V. The SAC agents demonstrate the best performance, covering the most ground, efficiently. The DDPG agent trained in the first environment covers more ground than either TD3 agent, however is more inefficient. DDPG trained in the second environment covers the second highest distance, but yields the worst exploratory behavior, repeatedly traversing a circular trajectory in the same vicinity. TD3 agents cover less ground, and exhibit less efficient exploratory behavior than SAC. 
The DRL agents trained in the first environment performed better than those trained in the second, as the characteristics of the evaluated environment are more similar to the first than the second. The SAC agents are most robust to differences in environment characteristics with both achieving near identical performance. The reward function weights and network hyperparameters can be further engineered for this application, and the agent trained over a longer period with more episode steps each episode iteration to learn an improved policy that explores the surrounding environment indefinitely. Bridging the simulation to reality gap to transfer policies learned in simulation to real-world robotic systems is a current area of active research. The large number of episodes required to sufficiently train the agent renders simulation training an essential component for DRL in robotics applications to minimize cost and possible physical damage caused by collisions during training. Substantial computation cost is required for training, however, post-training implementation of DRL agents is significantly less expensive, which makes DRL a powerful tool for real-time AGV motion planning and control in environments without a-priori maps. ## V Conclusions This paper presented a ubiquitous DRL architecture for intelligent AGV exploration without a-priori maps. Three actor-critic DRL algorithms, DDPG, TD3 and SAC, were trained in two environments of varying complexity, and further evaluated in a third with no prior knowledge of map characteristics. Simulation results demonstrate the effectiveness of the proposed DRL architecture, reward function and training conditions for quick, efficient and collision-free AGV navigation. SAC achieves the best performance, yielding trajectories that cover the highest distance, and demonstrate the most efficient exploratory behavior. 
Learning requires substantial computation cost, requiring up to 8.5 days for SAC in the second, complex environment using an Intel i7 11700K CPU and GeForce RTX 3070 Ti GPU. Improved policies with higher post-training episode steps require greater training times. Despite the high training cost, post-training implementation of DRL agents is significantly less expensive, which makes DRL a powerful tool for real-time AGV exploration in information poor, dynamically altering environments. For future work, the simulation to reality gap will be bridged to transfer policies learned in simulation to the physical AGV.
2304.01790
VC Set Systems in Minor-free (Di)Graphs and Applications
A recent line of work on VC set systems in minor-free (undirected) graphs, starting from Li and Parter, who constructed a new VC set system for planar graphs, has given surprising algorithmic results. In this work, we initialize a more systematic study of VC set systems for minor-free graphs and their applications in both undirected graphs and directed graphs (a.k.a digraphs). More precisely: - We propose a new variant of Li-Parter set system for undirected graphs. - We extend our set system to $K_h$-minor-free digraphs and show that its VC dimension is $O(h^2)$. - We show that the system of directed balls in minor-free digraphs has VC dimension at most $h-1$. - On the negative side, we show that VC set system constructed from shortest path trees of planar digraphs does not have a bounded VC dimension. The highlight of our work is the results for digraphs, as we are not aware of known algorithmic work on constructing and exploiting VC set systems for digraphs.
Hung Le, Christian Wulff-Nilsen
2023-04-04T13:34:13Z
http://arxiv.org/abs/2304.01790v2
# VC Set Systems in Minor-free (Di)Graphs and Applications ###### Abstract A recent line of work on VC set systems in minor-free (undirected) graphs, starting from Li and Parter [1], who constructed a new VC set system for planar graphs, has given surprising algorithmic results [1, 2, 1, 1]. In this work, we initialize a more systematic study of VC set systems for minor-free graphs and their applications in both undirected graphs and directed graphs (a.k.a _digraphs_). More precisely: 1. We propose a new variant of Li-Parter set system for _undirected_ graphs. Our set system settles two weaknesses of Li-Parter set system: the terminals can be anywhere, and the graph can be \(K_{h}\)-minor-free for any fixed \(h\). We obtain several algorithmic applications, and notably: (i) the first exact distance oracle for unweighted and undirected \(K_{h}\)-minor-free graphs that has truly subquadratic space and constant query time, and (ii) the first truly subquadratic time algorithm for computing Wiener index of \(K_{h}\)-minor-free graphs, resolving an open problem posed by Ducoffe, Habib, and Viennot [1]. 2. We extend our set system to \(K_{h}\)-minor-free _digraphs_ and show that its VC dimension is \(O(h^{2})\). We use this result to design the first subquadratic time algorithm for computing (unweighted) diameter and all-vertices eccentricities in \(K_{h}\)-minor-free digraphs. 3. We show that the system of _directed_ balls in minor-free digraphs has VC dimension at most \(h-1\). We then present a new technique to exploit the VC system of balls, giving the first exact distance oracle for unweighted minor-free digraphs that has truly subquadratic space and logarithmic query time. 4. On the negative side, we show that VC set system constructed from shortest path trees of planar digraphs does not have a bounded VC dimension. 
This leaves an intriguing open problem: determine a necessary and sufficient condition for a set system derived from a minor-free graph to have a bounded VC-dimension. The highlight of our work is the results for digraphs, as we are not aware of known algorithmic work on constructing and exploiting VC set systems for digraphs.
2307.09844
Reinforcement Learning for Credit Index Option Hedging
In this paper, we focus on finding the optimal hedging strategy of a credit index option using reinforcement learning. We take a practical approach, where the focus is on realism i.e. discrete time, transaction costs; even testing our policy on real market data. We apply a state-of-the-art algorithm, the Trust Region Volatility Optimization (TRVO) algorithm, and show that the derived hedging strategy outperforms the practitioner's Black & Scholes delta hedge.
Francesco Mandelli, Marco Pinciroli, Michele Trapletti, Edoardo Vittori
2023-07-19T09:03:41Z
http://arxiv.org/abs/2307.09844v1
# Reinforcement Learning for Credit Index Option Hedging

###### Abstract

In this paper, we focus on finding the optimal hedging strategy of a credit index option using reinforcement learning. We take a practical approach, where the focus is on realism, _i.e._ discrete time, transaction costs; even testing our policy on real market data. We apply a state-of-the-art algorithm, the Trust Region Volatility Optimization (TRVO) algorithm (Bisi, Sabbioni, Vittori, Papini, & Restelli, 2019), and show that the derived hedging strategy outperforms the practitioner's Black & Scholes delta hedge. **Keywords:** Credit Default Swap index, option hedging, risk aversion, transaction costs, model misspecification.

## 1 Introduction

Hedging consists in investing to reduce the risk of adverse price movements of financial instruments, and it is one of the main concerns in finance. In this paper we focus on the concept of option hedging, where an option is a contract which offers the buyer the opportunity to buy or sell the underlying asset at a predefined strike price in the future. In particular, the options considered here are credit index options, _i.e._, the underlying is a Credit Default Swap (CDS) index. Option hedging is based on a mathematical theory started with Black & Scholes (B&S) (Black & Scholes, 1973). This theory is motivated by a strong set of assumptions which tend to be unrealistic (Yalincak, 2012). In particular, hedging is assumed to be done costlessly and continuously. Several approaches have been proposed to extend the B&S model to account for transaction costs, starting with (Leland, 1985) and more recently (Gueant & Pu, 2017), which uses stochastic optimal control. The main difference with respect to these approaches is that ours is data-driven and model-free, _i.e._, it does not require any assumptions on the dynamics of the assets. Credit index options market makers have the target of making profit without keeping open risk positions.
The most straightforward strategy is to buy and sell the same amount of the same option in order to have a return from the difference between the two prices (the difference between the buy and sell price of a security is called bid-ask spread). Given the low liquidity of these options, most of the time this is not possible, so the market maker's portfolio results in a combination of different options, and she needs to hedge at least the risk given by the underlying instrument, _i.e._ the delta risk. It is possible to hedge this risk by blindly following the B&S delta hedge, perhaps with automated software connected directly to the market, but, specifically in cases with high transaction costs such as the CDS index we are analyzing, this can quickly become quite expensive; the resulting costs may obfuscate the returns of the market maker. With other asset classes, one could concentrate on optimizing the execution, thus reducing transaction costs, but in the CDS index markets this is not the case, as execution costs and impact are known. So, the only way to reduce transaction costs is to minimize the transaction amount, ideally without increasing the risk related to an open delta exposure. Similar types of behavior can be found in other OTC instruments such as interest rate swaptions, so while focusing on a specific instrument, our approach remains general. Finally, this approach is certainly interesting for XVA traders and more generically bank resource managers, who typically have to deal with portfolios with hybrid and convex risks, and experience high rebalancing costs.

**Contributions.** The contribution of this paper is a robust instrument capable of giving the trader a hedging signal, or even capable of autonomously trading if fitted with market access, which is more accurate than the B&S delta currently in wide use, as it is optimized in discrete time and with transaction costs.
Such an instrument can be created through the use of Reinforcement Learning (RL), specifically by applying TRVO (Bisi et al., 2019), an algorithm capable of jointly optimizing the hedging (i.e. risk reduction) and Profit and Loss (p&l) objectives. By controlling the risk-aversion parameter, we are capable of creating a frontier, thus the job of the trader can be reduced to simply deciding on which point of the frontier to place herself. To our knowledge, this is the first time the problem of hedging credit index options is analyzed from an RL perspective and the first time this approach is tested using real data for the underlying instrument.

**Related Works.** The issue of delta hedging using RL has been analyzed by various authors. Among the most recent approaches we mention (Du et al., 2020; Kolm & Ritter, 2019; Buehler, Gonon, Teichmann, & Wood, 2019; Halperin, 2017, 2019; Cao, Chen, Hull, & Poulos, 2019). These papers can be subdivided into two categories: one addresses the problem from a practitioner's perspective and is focused on the details of the hedging strategies chosen by the agent; the other builds on the formal mathematical structure of option pricing and uses machine learning techniques to overcome the problems posed by realistic features such as transaction costs. The distinction is faint, as a hedging strategy implies a price, and vice versa. The first category includes (Kolm & Ritter, 2019; Cao et al., 2019) and is also pertinent for this paper. The most comparable, regarding the financial environment, are (Du et al., 2020; Kolm & Ritter, 2019), which use the same MDP formulation considered in this paper. The main difference consists in the use of an approximate variance formulation in the RL objective, compared to the full variance used in this paper.
Furthermore, (Kolm & Ritter, 2019) uses a one-step SARSA update, a value-based approach, instead of a policy search method, while (Du et al., 2020) considers both DQN (Mnih et al., 2013) and PPO (Schulman, Wolski, Dhariwal, Radford, & Klimov, 2017). (Cao et al., 2019) also consider an environment very close to ours, but with a transaction cost size larger than what we considered. Regarding the RL algorithm, they use value-function methods and, in particular, risk-averse deep Q-learning. It is an advanced approach taken from the risk-averse reinforcement learning literature (Tamar, Di Castro, & Mannor, 2016). They consider two Q functions, one for the first moment and another for the second moment. The paper then focuses on the agent's efficiency as a function of the rebalancing frequency. Unlike this paper, where we also analyze what happens when changing the risk-aversion parameter, in (Kolm & Ritter, 2019; Cao et al., 2019) only a single value of risk aversion is tested. The second category includes (Halperin, 2017, 2019; Buehler et al., 2019). In (Halperin, 2017, 2019), the problem of option pricing in discrete time has been addressed from a machine learning perspective, neglecting hedging costs. In (Buehler et al., 2019), the option pricing problem is undertaken by considering a class of convex risk measures and embedding them in a deep neural network environment. Initially, the dependence of the option price and hedge on the risk aversion parameter is studied in the absence of transaction costs. Then, a study of the option price dependence on transaction costs is discussed and the functional dependence of the price on the cost parameter is reconstructed. What distinguishes our approach is the algorithm we considered: the risk-averse policy search algorithm TRVO.
One of the advantages of TRVO compared to value-based algorithms like the ones used by (Kolm & Ritter, 2019; Cao et al., 2019; Halperin, 2017) is the fact that, being a policy search method, TRVO is natively compatible with continuous states and actions and thus does not suffer from the problems of using a function approximator. Furthermore, being risk-averse, it is not necessary to apply any transformation to the reward, differently from what is done for example in (Kolm & Ritter, 2019), and it is able to create a policy specific to the risk aversion of the user. Moreover, an advantage of model-free RL algorithms is that the learned policy is independent of the model used to generate the data. Thus TRVO can be used as is in an option hedging framework, and only requires the standard hyperparameter tuning typical of RL algorithms.

**Paper Outline.** In Section 2 we present in detail the financial framework, describing CDS indexes and options before explaining how the hedging problem can be described using Markov Decision Processes (MDPs). In Section 3 we describe which reinforcement learning algorithm was used and for which reasons. Finally, in Section 4 we evaluate the experimental performance.

## 2 Financial Environment

In this section we introduce the relevant financial instruments, the Markit iTraxx Europe Senior Financial index and the credit index options built on it. We show how to price them in a standard Black & Scholes environment and how options can be managed via standard delta hedging. We will consider a financial environment where interest rates are set to zero for simplicity. Since both the options and the underlying are derivatives, trading them does not attract significant cash needs, and we can assume a substantial decoupling between rates and credit without loss of generality.
### Markit iTraxx Europe Senior Financial index

The Markit iTraxx Europe Senior Financial index is a basket of credit default swaps on 30 European financial institutions (banks and insurances), equally weighted, with standardized maturities, coupons and payment dates. Every 6 months, on the 20/09 and 20/03, or the Business Day immediately thereafter if it is not a Business Day, a new Series of the index is originated (or "rolled"). The new Series will be called "on-the-run", until a new one is generated. Different maturities are traded for this CDS index (3, 5 and 10 years), with the maturity date being the 20/12 or 20/06, respectively. For our purposes we consider the CDS index with 5Y maturity because it is the most liquid and there are many more options compared to the other maturities. The index composition may be different from one Series to the other, either in the number of constituents 1 or in the CDS reference entities considered. At the present time the Markit iTraxx Europe Senior Financial index on-the-run is the Series 35, started on 22 March 2021 and with maturity date 20 June 2026. Footnote 1: For index versions originated before March 2015 the number of constituents was 25. Each CDS index has a premium leg and a protection leg. The premium leg has standardized coupon dates: 20/03, 20/06, 20/09 and 20/12 (or the Business Day immediately thereafter if it is not a Business Day).
The coupon equals \(\mathrm{N}\cdot 1\%\cdot\tau(t_{i-1},t_{i})\), where \(\mathrm{N}\) is the notional, expressed in Euro, \(\tau(t_{i-1},t_{i})\) is the year fraction, equal to the number of days between the present \(t_{i}\) and the previous \(t_{i-1}\) coupon date, divided by 360, while 1% is the standardized coupon; the coupon is paid on \(t_{i}\).2 The protection leg pays, in case of a default of the \(j\)-th Series constituent occurring before the Series' maturity, an amount equal to \(\mathrm{LGD}_{j}\cdot\mathrm{N}\cdot 1/n_{j}\), where \(\mathrm{LGD}_{j}\) is the loss given default3 and \(1/n_{j}\) is the constituent weight, with \(n_{j}\) the number of constituents at the default time (at the first default \(n_{1}=30\)). Upon default of a constituent and settlement of the relative protection leg, a new version of the Series is spun off including the surviving constituents, and the notional \(\mathrm{N}\) is rescaled accordingly. Footnote 2: The only caveat is about the last coupon date, which corresponds to the index maturity equal to the 20/06 or 20/12 even in case that day is a holiday, with a year fraction including an extra day. Footnote 3: \(\mathrm{LGD}_{j}\) is equal to \(1-R_{j}\), where \(R_{j}\) is the recovery rate determined at the end of the ISDA CDS auction triggered by the credit event. Since the premium leg has a standardized 1% coupon, the two legs are unbalanced by an amount that is exchanged at inception as a premium; this is referred to as upfront. Even though the upfront amount is precisely the price of the derivative, the market does not quote it directly. Rather, following the standard single name CDS convention, what is traded is the running coupon of a par (i.e. upfront equal to zero) CDS.
The relation between the traded spread \(S\) and the upfront, assuming the latter to be received by the protection buyer from the protection seller, is: \[\mathrm{Upf}(t,S_{t})=(1\%-S_{t})\,A_{S}(t)+1\%\tau(t_{acc},t), \tag{1}\] where \(t\) is the evaluation date, \(\tau(t_{acc},t)\) is the year fraction, \(t_{acc}\) the coupon date immediately before \(t\), and \(A_{S}(t)\) the annuity at time \(t\).4 The latter quantity is defined as: Footnote 4: In the computation of the accrual term the year fraction is modified adding an extra day. \[A_{S}(t)=\sum_{t^{+}<\{t_{i}\}\leq t_{n}}\tau(\max(t_{i-1},t),t_{i})\frac{P_{S}(t,t_{i-1})+P_{S}(t,t_{i})}{2}, \tag{2}\] where \(t^{+}=t+1\) day, \(\{t_{i}\}\) is the strip of index coupon dates, \(t_{n}\) is the index maturity, \(P_{S}(t,\theta)\) the survival probability between the present time \(t\) and any future time \(\theta\), given the current credit spread \(S=S_{t}\) (\(A_{S}(t)\) does not depend on \(S_{t}\) directly but through \(P_{S}(t)\)).5 The survival probability can be approximated as in (Jarrow & Turnbull, 1995): Footnote 5: Notice that if \(t\) is the day before a coupon date, this coupon is excluded from the strip. \[P_{S}(t,\theta)=e^{-S_{t}\tau(t,\theta)\mathrm{LGD}^{-1}}, \tag{3}\] with LGD usually set to 60% by convention. Making trading decisions based on the credit spread is convenient, as the upfront amount has jumps at the coupon dates due to \(\tau(t_{acc},t)\), while the credit spread maintains a smoother behavior. In the following we will consider the traded spread \(S\) as a sort of underlying, having its own specific dynamics, which we will simulate with Geometric Brownian Motion (GBM). The dynamics of the index will be inherited from the dynamics of \(S\).
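As an illustration, the survival probability of Equation (3), the annuity of Equation (2) and the upfront of Equation (1) can be sketched as follows. This is a simplified sketch with hypothetical function names: the coupon strip is given as year fractions from today, and the day-count and accrual subtleties of the footnotes are ignored.

```python
import math

LGD = 0.60  # conventional loss given default

def survival_prob(spread, tau):
    """Eq. (3): P_S(t, t+tau) = exp(-S * tau / LGD); spread in decimals."""
    return math.exp(-spread * tau / LGD)

def annuity(spread, coupon_taus):
    """Eq. (2), simplified: trapezoidal average of survival probabilities
    over the remaining coupon periods (year fractions in coupon_taus)."""
    a, t_prev, p_prev = 0.0, 0.0, 1.0
    for dt in coupon_taus:
        t = t_prev + dt
        p = survival_prob(spread, t)
        a += dt * (p_prev + p) / 2.0
        t_prev, p_prev = t, p
    return a

def upfront(spread, coupon_taus, accrual_tau=0.0):
    """Eq. (1): upfront received by the protection buyer, per unit notional,
    for the standardized 1% running coupon."""
    return (0.01 - spread) * annuity(spread, coupon_taus) + 0.01 * accrual_tau
```

For a spread exactly at the 1% standardized coupon the first term vanishes and the upfront reduces to the accrual, consistent with Equation (1).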
Thus, let \(S_{t}\) be the underlying at time \(t\); then it can be described as: \[\mathrm{d}S_{t}=\mu S_{t}\mathrm{d}t+\sigma S_{t}\mathrm{d}W_{t} \tag{4}\] where \(W_{t}\) is Brownian motion, \(\mu\) the drift (which we assume to be 0 throughout the paper without loss of generality) and \(\sigma\) the volatility. For an initial value \(S_{0}\), the SDE has the analytic solution: \[S_{t}=S_{0}\exp\left(\left(\mu-\frac{\sigma^{2}}{2}\right)t+\sigma W_{t}\right) \tag{5}\] where \(W_{t+u}-W_{t}\sim N(0,u)\), _i.e._, distributed as \(N(0,1)\sqrt{u}\).

### Options on the CDS index

In this section, we consider options on the CDS index. A _receiver_ option gives the buyer the possibility of selling protection on the index at the expiry date at a spread equal to the strike. Conversely, a _payer_ option gives the buyer the choice of buying protection at the expiry date at a spread equal to the strike. Upon exercise in case of a payer (receiver) option, the option seller (buyer) physically delivers the underlying. In terms of the strike \(K\) and the traded spread \(S_{T}\) at expiry, the payoff at expiry is: \[\max\left((S_{T}\,A_{S}(T)-K\,A_{K}(T)),\,0\right) \tag{6}\] \[\max\left((K\,A_{K}(T)-S_{T}\,A_{S}(T)),\,0\right) \tag{7}\] respectively for a _payer_ (_Pay_) and a _receiver_ (_Rec_) option, and where \(A_{K}(T)\) is the same expression as \(A_{S}(T)\) with \(S_{t}=K\) in \(P_{S}(t)\). In this paper, for simplicity we consider the payoffs \[\max\left((S_{T}-K)\,A_{S}(T),\,0\right), \tag{8}\] \[\max\left((K-S_{T})\,A_{S}(T),\,0\right), \tag{9}\] which allow a treatment à la Black & Scholes on \(S_{t}\)6, since the payoff of Equation (8) can be seen as a call on the underlying \(S_{t}\). Footnote 6: We focus on this simplification since the extension to the payoff of Equation (6) and (7), which is trivial from a numerical/RL perspective, complicates the analytical treatment in a way beyond our interest.
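The exact solution in Equation (5) lets us simulate \(S_{t}\) step by step on an arbitrary, possibly uneven, time grid. A minimal sketch, with hypothetical names:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, dts, rng):
    """Exact GBM path from Eq. (5), applied increment by increment:
    S_{t+u} = S_t * exp((mu - sigma^2/2) u + sigma sqrt(u) Z), Z ~ N(0,1).
    `dts` are the (possibly uneven) time increments in years."""
    s = np.empty(len(dts) + 1)
    s[0] = s0
    for i, u in enumerate(dts):
        z = rng.standard_normal()
        s[i + 1] = s[i] * np.exp((mu - 0.5 * sigma**2) * u + sigma * np.sqrt(u) * z)
    return s
```

Because each step uses the closed-form solution rather than an Euler step, the scheme is exact regardless of the step size, which matters for the uneven intraday/overnight grid used later in the experiments.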
Considering an option traded at time \(t\) with expiry \(T\) and strike \(K\), where for ease of notation we may write \(S_{t}\) instead of \(S(t)\): \[\mathrm{Pay}(t,S_{t})=\left[\Phi(d_{t})S_{t}(T)-\Phi(e_{t})K\right]A_{S}(T), \tag{10}\] \[\mathrm{Rec}(t,S_{t})=\left[\Phi(-e_{t})K-\Phi(-d_{t})S_{t}(T)\right]A_{S}(T),\] (11) \[d_{t}=\frac{1}{\sigma\sqrt{\tau(t,T)}}\left[\log\left(\frac{S_{t}(T)}{K}\right)+\left(\frac{\sigma^{2}}{2}\right)\tau(t,T)\right],\] \[e_{t}=d_{t}-\sigma\sqrt{\tau(t,T)},\] where \(S_{t}(T)\) is the forward value of \(S_{t}\), \(\sigma\) is the volatility and \(\tau(t,T)\) the number of days between \(t\) and \(T\) divided by 365\({}^{7,8}\). Footnote 7: ACT/365 convention. Footnote 8: \(T\) for the annuity is the settlement date, \(T\) for \(d_{t}\), \(e_{t}\) is the expiry date. Of course, as is common for options, Equation (10) and Equation (11) are a way of mapping the option price into a volatility surface, which is convenient since the latter is a smoother function of the expiries and the strikes than the price is. When trading a payer option, the buyer pays the option premium upfront to the seller, which delivers the underlying in case of exercise. There are no extra payments. In case a name in the index defaults between \(t\) and \(T\), the option doesn't _knock out_, i.e. there is no automatic close-out and, if exercised, it delivers _both_ the protection leg of the defaulted name and the spun-off index. In this sense the underlying of the option remains unchanged even if a default occurs, so that the relation between the underlying and the option is default-neutral. Hence, one can neglect, as we will do, jump-to-default effects in modeling the underlying and option dynamics.
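Equations (10) and (11) are standard Black formulas on the forward spread, scaled by the annuity. A minimal sketch with hypothetical names, taking the forward and the annuity as inputs:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_payer_receiver(fwd, strike, sigma, tau, annuity):
    """Eqs. (10)-(11): payer/receiver prices on the adjusted forward spread,
    with tau the year fraction to expiry and `annuity` = A_S(T)."""
    d = (math.log(fwd / strike) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    e = d - sigma * math.sqrt(tau)
    pay = (norm_cdf(d) * fwd - norm_cdf(e) * strike) * annuity
    rec = (norm_cdf(-e) * strike - norm_cdf(-d) * fwd) * annuity
    return pay, rec
```

A quick sanity check is put-call parity in this setting: the payer minus the receiver equals \((S_{t}(T)-K)\,A_{S}(T)\), and at the money the two prices coincide.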
Finally, since the buyer of a payer (receiver) index option receives (pays) protection substantially from trading time \(t\) and not from expiry \(T\), the option price needs to be adjusted accordingly in order to consider any losses due to default before \(T\). This is done by a proper adjustment to the forward spread. Assuming zero interest rates for simplicity, the adjusted forward \(S_{t}(T)\) is \[S_{t}(T)=S_{t}+\text{LGD}(1-P_{S}(t,T))\frac{1}{A_{S}(T)}. \tag{12}\] In the limit \(S_{t}\to 0\), assuming the option is traded at time \(t\), \[S_{t}(T)\sim S_{t}\left(1+\frac{\tau(t,T)}{\tau(T,t_{n})}\right). \tag{13}\]

### Trading the Index

The iTraxx Europe Senior Financial index is traded on the so-called "over-the-counter" (OTC) market. One of the most important differences with regulated exchanges is the trade execution; in regulated markets anonymous orders are sorted and matched through an order book managed directly by the exchange. In OTC markets, CDS indexes like the Senior Financial are traded against dealers through a Multilateral Trade Facility (MTF). Dealers contribute continuously a bid and an ask price for a given notional, ranging from 10 up to 400 mln Eur, with most of the contributions ranging between 50 and 200 mln Eur. Trading times are not strictly regulated but generally run between 9:00 and 18:00 CET. Each contributor has a typical bid/ask span, which depends on the market conditions and on the level of the index spread (a wide spread is typically correlated with a larger bid-ask). The market spread published by the dealers can be applied, but acceptance from the dealer is not always ensured: some dealers ensure that the spread level will always be confirmed, some retain the right to review it. Moreover, dealers can also decide to publish bid and ask spreads which cannot be executed at all, and are often off market. This is another difference from the market makers' quotes in the regulated markets, which are binding.
Another consequence is that it is difficult to define an order book for OTC-traded objects, to distinguish whether a quote is applicable or not, and what the maximum executable size would have been.

Figure 1: The evolution of the iTraxx Europe Financial Senior 5y mid spread (in basis points, left axis) and bid/ask (in basis points, right axis) from mid 2020 to mid 2021.

**Defining trading costs.** We approach the trading costs problem from a statistical perspective. The starting point is a dataset containing the most recent bid and ask spread quoted by all the dealers (about 20 in the dataset) every 30 minutes, during the most liquid trading hours (between 9:30 and 17:30 CET). In order to use this data it is necessary to discard quotes that are most likely typos, not executable, or technological problems. Thus, for each time-step and for both the bid and ask, we consider the mean and standard deviation of the quotes of all the dealers and discard from the set the spreads which differ from the mean by more than two standard deviations. Considering the processed data, we define as applicable bid the average of the remaining bid spreads, and as applicable ask the average of the remaining ask spreads. Finally, we obtain the mid spread as the average of the applicable bid and applicable ask, and the bid/ask spread as the difference between them. An alternative approach we considered to calculate trading costs was to use the median of the unfiltered bid and ask quotes. The resulting bid/ask spreads did not differ significantly from the first method, which we ultimately considered robust enough for our purpose. In Figure 1 we show the mid and bid/ask spreads from the cleaned dataset, on a one-year time horizon, considering intra-day data with 30-minute time-steps. In the rest of the paper, we identify the mid with the spread \(S\) introduced in Equation 5. The unit of measurement of the bid-ask spread and mid spread is the Basis Point (bp), _i.e._ \(1\,\text{bp}=\frac{1}{10000}\).
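The quote-cleaning step described above (discard quotes more than two standard deviations from the mean, then average the survivors) can be sketched as follows, with hypothetical function names:

```python
import numpy as np

def applicable_quote(quotes):
    """Mean of the dealer quotes after discarding those more than two
    standard deviations away from the cross-sectional mean."""
    q = np.asarray(quotes, dtype=float)
    mu, sd = q.mean(), q.std()
    keep = q[np.abs(q - mu) <= 2.0 * sd] if sd > 0 else q
    return keep.mean()

def mid_and_bidask(bids, asks):
    """Mid spread and bid/ask spread from the applicable bid and ask."""
    b, a = applicable_quote(bids), applicable_quote(asks)
    return (a + b) / 2.0, a - b
```

The 2-sigma filter removes isolated outliers (likely typos or non-executable quotes) while leaving a tight cluster of consistent quotes untouched.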
The way in which applicable bid and ask spreads are built ensures that notionals up to hundreds of millions of the index can be traded at that spread, so that we can discard execution-related issues such as slippage, and assume that the trading/rebalancing costs can be computed from the bid/ask shown in Figure 1, as a linear functional of the traded notional \(\mathrm{N}\): \[c(\mathrm{N})=\mathrm{N}\left|\mathrm{Upf}_{t}\left(S_{t}\pm\frac{\mathrm{ba}}{2}\right)-\mathrm{Upf}_{t}(S_{t})\right|, \tag{14}\] where \(ba\) is the bid/ask and \(\mathrm{Upf}_{t}(S)\) the index upfront at the execution time \(t\), as per Equation 1. In Equation 14 the \(+\) (\(-\)) sign should be considered when buying (selling) protection.

### Embedding in a Markov Decision Process

The hedging problem is a sequential decision problem, where the trader needs to decide at each time-step how much of the underlying instrument to trade based on information coming from the market. This sequential decision problem can be described through a Markov Decision Process (MDP). The p&l in one timestep of a trader long a payer option and holding \(h_{t}\) of the hedging instrument is: \[p\&l =\mathrm{Pay}_{t+1}-\mathrm{Pay}_{t} \tag{15}\] \[-h_{t}\cdot(\mathrm{Upf}_{t+1}-\mathrm{Upf}_{t})-c(a_{t}-a_{t-1})\] We can define the delta hedge as: \[N_{h}(t)=\left(\frac{\partial\,\mathrm{Pay}(t,S_{t})}{\partial S}\right)\left(\frac{\partial\,\mathrm{Upf}(t,S_{t})}{\partial S}\right)^{-1}.\] The B&S model ensures that \(p\&l\to 0\) when the rebalancing interval tends to zero, \(h_{t}=N_{h}(t)\), and there are no transaction costs (\(c(\mathrm{N})=0\)). From now on, we consider a finite rebalancing interval; in particular, we take as a reference point 17 rebalances per day and \(c(\mathrm{N})\) as defined in Equation (14). We can now transition to a reinforcement learning scenario, which will be rigorously defined in the next section.
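The one-step accounting of Equations (14) and (15) can be sketched as follows. This is a simplified sketch with hypothetical names: the upfront values at bid, ask and mid are taken as inputs rather than recomputed from Equation (1).

```python
def transaction_cost(notional_change, upfront_bid, upfront_ask, upfront_mid):
    """Eq. (14), simplified: rebalancing |dN| of the index costs half the
    bid/ask around the mid upfront, with the sign set by the trade direction."""
    if notional_change >= 0:  # buying protection: cross to the ask
        return notional_change * (upfront_ask - upfront_mid)
    return -notional_change * (upfront_mid - upfront_bid)

def step_pnl(pay_next, pay_now, upf_next, upf_now, holding, cost):
    """Eq. (15): one-step p&l of a trader long the payer option and short
    `holding` units of the index hedge, net of the rebalancing cost."""
    return (pay_next - pay_now) - holding * (upf_next - upf_now) - cost
```

With a perfect hedge and no costs, the option and hedge legs offset and the step p&l is driven only by the discretization error, consistent with the B&S limit discussed above.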
We shall define this hedging environment as a sequential decision problem, specifically as a Markov Decision Process (MDP):

* the action \(a_{t}=h_{t}\in[0,1]\)
* the state \(s_{t}=(S_{t},\mathrm{Pay}_{t},N_{h}(t),a_{t-1})\)
* the reward is Equation (15)

The above formulation is similar to what is used in (Vittori, Trapletti, & Restelli, 2020), and is called _accounting P&L formulation_ in (Cao et al., 2019).

## 3 Reinforcement Learning

In this section, we give a brief introduction to reinforcement learning, focusing on the algorithm on which we based our analysis. A discrete-time Markov Decision Process (MDP) is defined as a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma,\mu\rangle\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) the (continuous) action space, \(\mathcal{P}(\cdot|s,a)\) is a Markovian transition model that assigns to each state-action pair \((s,a)\) the probability of reaching the next state \(s^{\prime}\), \(\mathcal{R}(s,a)\) is a bounded reward function, \(\gamma\in[0,1)\) is the discount factor, and \(\mu\) is the distribution of the initial state. The policy of an agent is characterized by \(\pi(\cdot|s)\), which assigns to each state \(s\) an action with a certain probability. We consider infinite-horizon problems in which future rewards are exponentially discounted with \(\gamma\).
Following a trajectory \(\tau\coloneqq(s_{0},a_{0},s_{1},a_{1},s_{2},a_{2},...)\), let the return be defined as the discounted cumulative reward: \(G=\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t}).\) For each state \(s\) and action \(a\), the action-value function is defined as: \[Q_{\pi}(s,a)\coloneqq\mathop{\mathbb{E}}_{\begin{subarray}{c}s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\\ a_{t+1}\sim\pi(\cdot|s_{t+1})\end{subarray}}\left[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})|s_{0}=s,a_{0}=a\right]. \tag{16}\] The typical RL objective is to maximize the action-value function, given the initial state distribution. This objective can be maximized in two main ways. The first is by learning the action-value function for each state and action, in general using the Bellman Equation. Once the action-value function is known, the policy is \(\pi(a|s)=\mathrm{argmax}_{a}Q(s,a)\); these algorithms are called value-based (Sutton & Barto, 1998), and are used in (Kolm & Ritter, 2019; Cao et al., 2019). This approach becomes cumbersome in a hedging environment where both states and actions are (almost) continuous. There are approaches which use function approximation to interpolate the action-value function, but it is then necessary to discretize the state-action space, losing precision. The other family is instead policy search methods (Deisenroth, Neumann, Peters, et al., 2013), which optimize the objective by searching directly in the policy space. They can easily handle continuous actions, learn stochastic policies in partially observable, non-Markovian environments and are robust when working with datasets with large amounts of noise (Moody & Saffell, 2001). For all these reasons, we focused on policy search algorithms (Peters & Schaal, 2008).

**Risk-Averse Reinforcement Learning.** The typical objective is to maximize the expected cumulative reward, which in our context means maximizing the expected cumulative p&l.
But maximizing this quantity is not the correct objective for this type of problem; in fact, in an ideal B&S model, this quantity is as close as possible to zero, which translates to optimizing a risk-averse objective. Given the great experimental results achieved in (Vittori et al., 2020), we decided to use the Trust Region Volatility Optimization (TRVO) algorithm defined in (Bisi et al., 2019). The risk-averse objective is \(\eta=J-\lambda\nu^{2}\), where: \[\nu_{\pi}^{2}\coloneqq(1-\gamma)\mathop{\mathbb{E}}\left[\sum_{t=0}^{\infty}\gamma^{t}\left(\mathcal{R}(s_{t},a_{t})-J_{\pi}\right)^{2}\right]. \tag{17}\] One interesting property of this risk metric, which we will refer to as _reward-volatility_, is that it bounds the return variance. We would like to bring the reader's attention to the meaning of this reward-volatility term: it penalizes the variations between one step and the next, in contrast to the return variance, which penalizes the variance at the end of each path. In this paper we aim at training agents with different risk aversions, in order to find target balances between risk (volatility) and reward. In a static environment, this can be achieved by training each agent with a specific value of \(\lambda\), and the algorithm will find a minimum with a specific, \(\lambda\)-dependent risk-reward ratio. Instead, in an evolving environment with variable bid/ask spread, a given specific value for \(\lambda\) may induce different risk-reward targets, due to the fact that the terms in the risk-averse objective will change in value even if the market conditions remain equal (_i.e._ the same action will induce different transaction costs). This is the case in our problem, as we can see from Figure 1, where the bid/ask varies significantly. Intuitively, we can see that the dependence of \(J\) on \(ba\) will be at most linear, and typically sublinear.
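The reward-volatility of Equation (17) can be estimated by Monte Carlo from sampled reward trajectories. A minimal sketch with hypothetical names, where \(J\) is taken as the discount-normalized expected per-step reward of the mean-volatility framework (an assumption of this sketch):

```python
def _discount_weights(n, gamma):
    """Yield gamma^t for t = 0..n-1."""
    g = 1.0
    for _ in range(n):
        yield g
        g *= gamma

def expected_return(trajectories, gamma):
    """Discount-normalized expected per-step reward J, averaged over trajectories."""
    return sum((1.0 - gamma) * sum(w * r for w, r in zip(_discount_weights(len(tr), gamma), tr))
               for tr in trajectories) / len(trajectories)

def reward_volatility(trajectories, gamma):
    """Monte Carlo estimate of Eq. (17): nu^2 = (1-gamma) E[sum_t gamma^t (R_t - J)^2]."""
    j = expected_return(trajectories, gamma)
    return sum((1.0 - gamma) * sum(w * (r - j) ** 2
               for w, r in zip(_discount_weights(len(tr), gamma), tr))
               for tr in trajectories) / len(trajectories)
```

Note that a trajectory with constant per-step rewards has (asymptotically) zero reward-volatility even if its cumulated p&l drifts, which is exactly the step-to-step notion of risk discussed above.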
Thus, distortions could come from a different scaling of the variance term \(\nu^{2}\), but as will be apparent from the experiments, also in this case the distortion is sublinear, so there is no need to implement modifications/rescaling to take the issue into account.

## 4 Experiments

In this section we present the experimental results. After describing the data generation and training parameters, we present the results obtained on a GBM-simulated market, a Heston-simulated market and finally on real market data.

Figure 2: The hedging strategy chosen by agents trained at different values of the risk-aversion parameter \(\lambda\) is compared with the delta-hedging strategy in a zero-cost environment. On the vertical axis, the hedging notional as a percentage of the option notional.

### Data generation and agent training

We trained our agents on generated data, with episodes of 40 working days, with 17 observations per day, beginning at 9:30 and ending at 17:30. We simulated only the traded spread \(S\), using the GBM described in Equation (4) with \(\sigma\), the annualized volatility, equal to 60% and neglecting the drift term. We did not consider the possibility of a default of one of the components, as no default has been observed in recent times for the instrument in consideration. In each simulation, the underlying spread starts from an initial value of 100 bps; we define the stochastic evolution on the actual time span between the time-steps: 30 minutes during the day, 16 hours between the last step of one trading day and the first step of the next trading day in case of two contiguous trading days, a span of \(16+24n\) hours in the case of trading days separated by \(n\) holidays or weekend days. We trained our agents to hedge a position short a payer option (but any other position would have been equivalent) with 2-month maturity, thus maturing at the end of each episode.
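The episode time grid just described (17 observations per day, 30-minute intraday steps, a 16-hour overnight gap, plus 24 hours per intervening holiday or weekend day) can be sketched as follows; the function name and the `holidays_after` mapping are hypothetical:

```python
def episode_increments(n_days, holidays_after=None):
    """Time increments (in years) for one training episode: 17 observations
    per day means 16 half-hour intraday steps, then an overnight gap of
    16 + 24n hours, with n = holidays_after[day] non-trading days."""
    holidays_after = holidays_after or {}
    hours_per_year = 24.0 * 365.0
    dts = []
    for day in range(n_days):
        dts += [0.5 / hours_per_year] * 16  # 9:30 -> 17:30 in 30-min steps
        if day < n_days - 1:
            gap = 16.0 + 24.0 * holidays_after.get(day, 0)
            dts.append(gap / hours_per_year)
    return dts
```

A 40-day episode thus yields 680 observation times, i.e. 679 increments, which can be fed directly to a step-wise GBM simulator.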
The strike \(K\) was 100 bps, equal to the initial value of the underlying at the beginning of the episode. We assumed an option notional of 100 mln Eur, which implies a hedging portfolio containing an underlying notional between 0 and 100 mln Eur. Given the market structure, our results are valid even assuming an option notional 10 times larger. We also assumed continuous underlying trading, which is reasonable given the option size and the fact that in the market small clips (down to 100K Eur or less) can be traded. Assuming a risk-neutral volatility equal to 60%, the option has an initial value of 530K Eur. We built a training set of 40,000 episodes, and trained our agents varying two parameters: the risk-aversion parameter \(\lambda\) and the bid/ask parameter \(ba\). We chose \(\lambda\) following (Vittori et al., 2020), in order to span an efficient frontier in the risk/reward space; we considered \(10^{-6}\lesssim\lambda\lesssim 10^{-3}\); for better interpretability, in the rest of the paper we will rescale \(\lambda\) by \(10^{5}\), so as to have bounds between 0.1 and 100. The choice of \(ba\) as an extra parameter comes from the observation that the bid/ask of the instrument considered here shows a highly dynamic pattern (see Figure 1). We considered \(ba\) ranging from 0.5 to 2 basis points (bp) as per Figure 1. We also considered the case with low values of \(ba\), even \(ba=0\), in order to further test our algorithms and to check that the standard delta-hedging strategy is smoothly recovered in the limit \(ba\to 0\).

### Testing on a GBM-simulated market

We tested our agents on a dataset of 2,000 episodes with the underlying spreads generated by the GBM, with the same parameters as the training dataset. We performed different tests varying the \(ba\) spread in order to monitor the agents' performance compared to the delta-hedging strategy.
In the \(ba=0\) case, the trained agent perfectly replicates the delta-hedge. This can be seen in Figure 2 where, for a specific testing scenario, the delta hedging strategy (in red) is compared with the action chosen by the agents trained with different values of the risk aversion parameter (in green, blue and purple). Given the absence of trading costs, all the agents replicate the same strategy, which is the optimal one9, minimizing risks. Under the \(ba=0\) assumption, the strategy has on average zero cumulated p&l. Footnote 9: Indeed, neglecting hedging costs, the Black&Scholes paradigm is violated only by the assumed time discretization. Introducing hedging costs \(ba>0\), the average cumulated p&l of the delta hedging strategy is shifted to negative values, depending linearly on \(ba\): specifically, considering a \(ba\) of 1 bp, the cumulated p&l is on average -136 kEur. The presence of hedging costs during training induces a smoother strategy for the agent, in terms of underlying allocation changes. Since each action becomes more expensive as \(ba\) increases, the agent cuts costs through the reduction of portfolio rebalances. The downside of this approach is an increase in the variability of the rewards, since the option is not continuously hedged. The desired balance between cost reduction and low reward volatility can be achieved by changing the risk aversion parameter \(\lambda\) of the model. This relationship is plotted in Figure 3, where different degrees of smoothness in the variation of the hedging portfolio can be seen to depend on \(\lambda\).

Figure 3: The hedging strategy chosen by agents trained at different values of the risk aversion parameter \(\lambda\) is compared with the delta hedging strategy in an environment including hedging costs. On the vertical axis, the hedging notional as a percentage of the option notional. In the upper (lower) plot the bid/ask equals 0.5 (2) basis points.
The smoothness degree depends also on the size of the hedging cost: for a given risk aversion, a higher \(ba\) implies a higher smoothness, as is apparent by comparing the upper and lower plots. The performance of the agents w.r.t. the delta hedging strategy in terms of cumulated p&l for different values of \(\lambda\) and the \(ba\) parameter is summarized in Figure 4. In the figure, each dot represents the performance of an agent having the \(\lambda\) indicated by the nearby annotated number and acting in an environment with \(ba\) depending on the color (red for 0.5 basis points, orange for 1 basis point, etc.). The position on the vertical axis indicates the average p&l performance of the agent w.r.t. the delta hedging strategy in an environment having the same \(ba\). The average is taken with respect to the terminal p&l measured on the 2,000 testing scenarios. The position on the horizontal axis, instead, indicates the square root of the variance of the terminal p&l (the p&l volatility) on the same testing sample. The colored dots lying on the horizontal axis indicate the performance of the delta hedging strategy in terms of p&l volatility at different values of the \(ba\) parameter. As we can see, all the agents perform better than the corresponding delta-hedging strategies in terms of p&l, while a certain number of agents (those lying left of the corresponding colored vertical line) perform better than the delta hedging strategy also in terms of p&l volatility. In this sense, all the frontiers dominate the corresponding delta hedge, and it is striking to notice that the level of dominance depends on the \(ba\) parameter: at low costs the dominance is mild (as was also experienced in (Vittori et al., 2020), where the very low hedging costs of listed equity products were considered), while at high costs the delta hedging is barely a reasonable strategy.
As an example, one can consider the \(\lambda=4\) point of the blue frontier (which assumes very large costs and beats delta hedging both in terms of p&l and p&l volatility) and observe from Figure 3 how smooth its action is. Another thing to notice is the \(\lambda\) parametrization of the different frontiers: there is a shift of \(\lambda\) to the left as the \(ba\) parameter increases. This \(\lambda\)-scaling in \(ba\), which is very mild, is in agreement with the considerations made at the end of Section 3. The benefit of adopting our approach instead of the delta hedging strategy is apparent also from Figure 5, where we show the distribution of the p&l of the \(\lambda=4\) agent relative to the p&l of delta hedging in the realistic case of \(ba=1.5\) bp: the agent essentially always performs better.

Figure 4: Each dot represents the performance of an agent on a GBM-simulated market in terms of p&l (w.r.t. delta hedging) and p&l volatility, depending on \(\lambda\) (annotated next to each dot) and the \(ba\) parameter.

### Testing on a Heston-simulated market

In order to make a further step towards realism, we challenge the assumption of the GBM constant volatility, as we know it does not hold in the financial markets. We thus generated a new testing set of 2,000 episodes with spreads derived from the Heston model, which introduces a dynamic for the volatility: \[dS_{t} =\sqrt{\nu_{t}}\,S_{t}\,dW_{t}^{S} \tag{18}\] \[d\nu_{t} =\kappa\left(\theta-\nu_{t}\right)dt+\xi\sqrt{\nu_{t}}dW_{t}^{\nu} \tag{19}\] with \(\nu_{0}=60\%^{2}\), so as to recover the initial volatility used in training, \(\kappa=2\), \(\theta=\nu_{0}\), \(\xi=0.9\), and no correlation between the stochastic terms \(dW_{t}^{S}\) and \(dW_{t}^{\nu}\). With this configuration \(\nu_{t}\) oscillates significantly, reaching values as high as \(\sim 120\%\) and as low as \(\sim 0\%\). When pricing the option we maintained the B&S formulation with \(\sigma=60\%\).
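A minimal sketch of simulating Equations (18)-(19) with a full-truncation Euler scheme; the uniform time grid and the choice of scheme are simplifying assumptions for illustration (the text does not specify the discretization used to generate the test set):

```python
import numpy as np

def simulate_heston(s0=100.0, nu0=0.60**2, kappa=2.0, theta=0.60**2, xi=0.9,
                    n_steps=680, dt=1.0 / (252 * 17), seed=0):
    """Full-truncation Euler scheme for the Heston dynamics of Eqs. (18)-(19),
    with uncorrelated Brownian drivers as in the paper's test set.
    A uniform grid is used here instead of the calendar-time grid."""
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps + 1)
    nu = np.empty(n_steps + 1)
    s[0], nu[0] = s0, nu0
    for t in range(n_steps):
        nu_plus = max(nu[t], 0.0)                 # truncate negative variance
        dw_s, dw_nu = rng.standard_normal(2) * np.sqrt(dt)
        s[t + 1] = s[t] + np.sqrt(nu_plus) * s[t] * dw_s
        nu[t + 1] = nu[t] + kappa * (theta - nu_plus) * dt + xi * np.sqrt(nu_plus) * dw_nu
    return s, nu
```

With \(\xi=0.9\) the simulated variance indeed wanders far from \(\theta\), which is the point of this robustness test: the agents never saw such paths during training.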
Even if the agents were trained on a dataset generated with the GBM, they are able to achieve very good performance on the Heston dataset (see Figure 6). The reason could be that the hedging of an option is a task that implies a deep knowledge of the relationship between the underlying price and the option premium, while the way in which the underlying evolves is probably a secondary aspect.

### Testing on real market data

In order to move a further step towards a realistic setup, we now consider real market data for the iTraxx Europe Senior Financial index. We use the dataset constructed in Section 2.3, thus considering real market prices and real transaction parameters \(ba\) as seen in Figure 1, and simulate the option price with \(\sigma=60\%\). The available data is sufficient for 5 episodes of 40 days, which we used as a test set for agents trained with different values of \(\lambda\) with \(ba=1\) bp. In Figures 7 and 8 we show the actions of the various agents compared with the delta hedge. We also show the market data dynamics (in black), on the right vertical axis. We can see, as in the previous figures, how lower values of \(\lambda\) generate smoother hedging policies.

Figure 5: The distribution of the p&l of the \(\lambda=4\) agent relative to the p&l of delta hedge, assuming the \(ba\) parameter equal to 1.5 bps.

Figure 6: Each dot represents the performance on a Heston-simulated market of an agent in terms of p&l (w.r.t. delta hedging) and p&l volatility, depending on \(\lambda\) (the number close to the dot) and the \(ba\) parameter.

Figure 7: The delta hedging strategy (red line) is compared to the strategy selected by agents trained with \(ba=1\), at various risk aversions (other colored lines), on a real episode (the black line shows the underlying spread \(S\) observed between July and September 2020).
Table 1 summarizes the performance of the various agents (all the figures in kEur): _in all the scenarios, all the considered agents outperform the delta hedging strategy in terms of p&l_. Considering risk, given the low number of scenarios at hand, the cumulated p&l volatility previously considered is a very noisy estimator; thus, we considered the volatility of the p&l along each scenario, a measure similar to the _reward volatility_ defined in Equation (17) and used in Section 3 to define the objective of the TRVO algorithm, as described in (Bisi et al., 2019). Using this measure no agent outperforms delta hedging, but the volatility increase is anyway very small when compared with the cost reduction obtained by adopting our agents.

## 5 Conclusion

In this paper we tackled the credit index option hedging problem with the use of reinforcement learning. As we are in a dealer market scenario, there is no market impact; thus the only way to reduce costs when trading is by optimizing the trading policy. We showed that, through the use of TRVO, a state of the art RL algorithm, it is possible to learn a strategy which beats the practitioner's delta hedge in terms of risk and reward, and generates lower transaction costs. This result was obtained not only on data generated through a GBM, but also when generating the underlying with a Heston process and when using real market data. Interesting future work would be to consider a portfolio of options, or to complicate the financial environment by considering hybrid options.

Figure 8: The delta hedging strategy (red line) is compared to the strategy selected by agents trained with \(ba=1\), at various risk aversions (other colored lines), on a real episode (the black line shows the underlying spread \(S\) observed between September and November 2020).
2310.05469
Learning to Predict Structural Vibrations
In mechanical structures like airplanes, cars and houses, noise is generated and transmitted through vibrations. To take measures to reduce this noise, vibrations need to be simulated with expensive numerical computations. Surrogate deep learning models present a promising alternative to classical numerical simulations as they can be evaluated magnitudes faster, while trading-off accuracy. To quantify such trade-offs systematically and foster the development of methods, we present a benchmark on the task of predicting the vibration of harmonically excited plates. The benchmark features a total of 12000 plate geometries with varying forms of beadings, material and sizes with associated numerical solutions. To address the benchmark task, we propose a new network architecture, named Frequency-Query Operator, which is trained to map plate geometries to their vibration pattern given a specific excitation frequency. Applying principles from operator learning and implicit models for shape encoding, our approach effectively addresses the prediction of highly variable frequency response functions occurring in dynamic systems. To quantify the prediction quality, we introduce a set of evaluation metrics and evaluate the method on our vibrating-plates benchmark. Our method outperforms DeepONets, Fourier Neural Operators and more traditional neural network architectures. Code, dataset and visualizations: https://eckerlab.org/code/delden2023_plate
Jan van Delden, Julius Schultz, Christopher Blech, Sabine C. Langer, Timo Lüddecke
2023-10-09T07:26:35Z
http://arxiv.org/abs/2310.05469v3
# Vibroacoustic Frequency Response Prediction with Query-based Operator Networks ###### Abstract Understanding vibroacoustic wave propagation in mechanical structures like airplanes, cars and houses is crucial to ensure health and comfort of their users. To analyze such systems, designers and engineers primarily consider the dynamic response in the frequency domain, which is computed through expensive numerical simulations like the finite element method. In contrast, data-driven surrogate models offer the promise of speeding up these simulations, thereby facilitating tasks like design optimization, uncertainty quantification, and design space exploration. We present a structured benchmark for a representative vibroacoustic problem: Predicting the frequency response for vibrating plates with varying forms of beadings. The benchmark features a total of 12,000 plate geometries with an associated numerical solution and introduces evaluation metrics to quantify the prediction quality. To address the frequency response prediction task, we propose a novel frequency query operator model, which is trained to map plate geometries to frequency response functions. By integrating principles from operator learning and implicit models for shape encoding, our approach effectively addresses the prediction of resonance peaks of frequency responses. We evaluate the method on our vibrating-plates benchmark and find that it outperforms DeepONets, Fourier Neural Operators and more traditional neural network architectures. The code and dataset are available from [https://eckerlab.org/code/delden2023_plate](https://eckerlab.org/code/delden2023_plate). ## 1 Introduction Structural and airborne sound propagation during the usage of everyday products in mobility, housing and work can induce discomfort and deteriorate health in the long-run (Basner et al., 2014). Therefore, great efforts are made to reduce sound pressure levels during product design. 
This is typically a two-stage process: (1) First, the sound characteristics of a design need to be evaluated. For this, discretization methods such as the finite element method (Zienkiewicz et al., 2005; Bathe, 2007) are applied to mechanical design models and the systems are numerically solved. This method yields a field solution over the design's spatial domain. Field solutions provide a physical quantity such as the vibration velocity at each point of the design geometry and each queried frequency. To obtain a compact description of the design, the field solutions are spatially averaged and converted to dB-scale, resulting in the _frequency response_. (2) Second, the design needs to be changed to reduce the emitted sound. In general, it is desirable to keep the frequency response low, especially in frequency bands humans are sensitive to or where acoustic coupling is expected. Experienced engineers use frequency responses to directly devise sound reduction measures from experience, e.g. stiffening a design at a certain location to shift resonance frequencies.

Figure 1: We introduce a dataset of 12,000 samples for predicting frequency responses based on plate geometries. A harmonic excitation with frequency \(f_{i}\) from 1 to 300 Hz is applied to all plates at a fixed location, causing them to vibrate. The velocity field of the vibrating plates (field solution) is obtained through FEM and then spatially averaged, resulting in a frequency response over \(f_{i}\).

The exploration of noise-reducing designs is limited by prohibitive simulation costs. Data-driven surrogate modelling is a promising technique that could circumvent this constraint by accelerating the evaluation of design candidates by several magnitudes. To assess the quality of such surrogate models and foster the development of well-performing models, structured benchmarks are required.
While benchmarks exist for similar problems, such as directly predicting the solution of time-domain partial differential equations (Takamoto et al., 2022; Otness et al., 2021) and computational fluid dynamics (Takamoto et al., 2022; Bonnet et al., 2022), there is currently no structured benchmark for predicting responses in the frequency domain based on variable geometries. The only dataset dealing with frequency domain data we are aware of pertains to the field of electromagnetic compatibility (Schierholz et al., 2021). To address this gap, we consider vibrating plates excited by a harmonic force as a representative acoustic design problem and introduce a benchmark: Given a variation of a plate geometry and material properties, the goal is to predict the corresponding frequency response function. We vary scalar properties of the plate as well as the geometry by adding _beading patterns_, that change the acoustic properties of the plate (e.g. by shifting resonances) (Rothe, 2022). Plates are common in technical systems as they often function as a building block for more complex designs, e.g. in car bodies, lightweight walls or aircraft fuselages. Also, the characteristics of frequency responses of more complex systems do not systematically differ (Romer et al., 2021; Blech et al., 2021). From a machine learning perspective the problem is intriguing because simple input patterns cause complex, multi-peak frequency responses while adding beadings has a stiffening effect and reduces the number of resonance peaks in the investigated frequency range, resulting in simpler frequency responses. Also, this setup is different to existing benchmarks in that we do not look at the system's evolution over time but the response in the frequency domain. To tackle frequency response prediction, we propose a novel operator model, named Frequency Query Operator (FQ-Operator). This model is trained to map plate geometries into the space of frequency response functions. 
In this context, operator means that the frequency response function can be evaluated at any frequency, making it infinite-dimensional rather than being limited to a fixed-size vector (Lu et al., 2019). This approach is closely related to implicit models for shape representation (e.g. Mescheder et al., 2018; Saito et al., 2019; Yu et al., 2020). Here, geometry is represented by a neural network, which can be queried at arbitrary points and predicts whether these points are inside or outside the surface. A challenge in frequency response prediction is to accurately predict resonance peaks. To address this, we combine approaches from the research areas of operator learning and implicit models. Summarizing, our contributions are as follows:

1. We introduce a novel benchmark dataset addressing frequency response prediction of vibrating thin plates with varied geometries and stiffening patterns. As part of the benchmark, we propose three complementary metrics to quantify frequency response prediction quality.
2. We evaluate existing methods on this dataset and report their scores for reference. These methods involve DeepONet (Lu et al., 2019) and Fourier Neural Operators (Li et al., 2020), among others.
3. We propose a query-based operator learning architecture, FQ-Operator, that outperforms existing methods on our vibrating-plates dataset.

## 2 Related Work

Acoustics. While research on surrogate models for the spatio-temporal evolution of vector fields is fairly common, directly predicting frequency responses through neural networks is an understudied problem. A general CNN architecture is applied in (Lanning et al., 2022) to calibrate the parameters of an analytical model for a composite column on a shake table. The data includes spectrograms representing the structural response in time-frequency domain.
The frequency-domain response of acoustic metamaterials is considered in a material design task using conditional generative adversarial networks or reinforcement learning (Gurbuz et al., 2021; Shah et al., 2021; Lai et al., 2021). The frequency response of a multi-mass oscillator is predicted with transformer-based methods (Schultz et al., 2023). Within the context of aeroacoustics, the propagation of a two-dimensional acoustic wave in the presence of sound-scattering obstacles is predicted in the time domain by a CNN (Alguacil et al., 2021; 2022). A review of machine learning in acoustics is given by Bianco et al. (2019). Concerning benchmarks in acoustics, several acoustic benchmarks for numerical methods are available with (Hornikx et al., 2015). However, these benchmarks do not systematically vary input geometries, making them not directly applicable to data-driven models.

Scientific machine learning. Data-driven machine learning techniques have been successfully applied in many different disciplines within engineering and applied science; for example for alloy discovery (Rao et al., 2022), crystal structure prediction (Ryan et al., 2018), climate modeling (Rasp et al., 2018) and protein folding (Jumper et al., 2021). A popular use case for data-driven methods is to accelerate fluid dynamics, governed by the Navier-Stokes equations (Brunton et al., 2020; Kochkov et al., 2021; Obiols-Sales et al., 2020; Wang et al., 2019; Tompson et al., 2017). The question of how to structure and train neural networks for predicting the solution of partial differential equations (PDEs) has been the topic of intense research. Many methods investigate the inclusion of physics-informed loss terms (Raissi et al., 2019; Haghighat et al., 2021; Krishnapriyan et al., 2021; Wang et al., 2019; Heilenkotter and Freudenberg, 2023). Some methods directly solve PDEs with neural networks as a surrogate model (Yu et al., 2018; Bu and Karpatne, 2021). Graph neural networks are often employed, e.g.
for interaction of rigid and deformable objects (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2020) as well as fluids (Sanchez-Gonzalez et al., 2020).

Operator learning and implicit models. A promising avenue of research for incorporating inductive biases for physical models has been operator learning (Lu et al., 2019; Li et al., 2020; Lu et al., 2022; Seidman et al., 2022; Kovachki et al., 2023). Operator learning structures neural networks such that, instead of directly mapping from discrete input to output space, the neural network produces a function that can be evaluated at real values instead of a discrete grid. DeepONet (Lu et al., 2019) implements operator learning by taking the value at which it is evaluated as an input and processing this value in a separate branch. Fourier Neural Operators (Li et al., 2020) use a point-wise mapping to a latent space, which is processed through a sequence of individual layers in Fourier space before being projected to the output space. Implicit models (or coordinate-based representations) are models where location is utilized as an input to obtain a location-specific prediction, instead of predicting the entire grid at once, and thus fit in the operator learning paradigm. Such models were used to represent shapes (Mescheder et al., 2018; Chen and Zhang, 2018; Park et al., 2019; Saito et al., 2019); later their representations were improved (Sitzmann et al., 2020; Tancik et al., 2020) and adapted for representing neural radiance fields (NeRFs) (Mildenhall et al., 2021; Yu et al., 2020). Our method applies techniques from these implicit models to operator learning.

## 3 Dataset and Benchmark Construction

### Problem Definition

Vibrating plates are common components in housings, walls and outer skins.
In this work, we consider the vibration of a simply supported aluminum plate (displacement \(=0\), free rotation at the plate boundary) excited by a point force as a representative problem from this domain. The mechanical problem is governed by a partial differential equation depicted in Appendix A.1. To obtain the reference solution (ground truth) of this mechanical problem, we use a numerical discretization technique, i.e. the finite element method (FEM). Plate geometries are represented by a discretization with finite domains (elements), which connect the entire domain and approximate the shape and the solution of the physical quantities by polynomial ansatz functions. The elements follow typical geometrical forms such as quadrilaterals, which are applied here for meshing the spatial dimensions of the plate as shown in Figure 2, center. As the last step in Figure 2, the discretized domain \(\Omega\) is integrated in order to assemble the system of equations in the following form: \[\left(\mathbf{K}-\omega^{2}\mathbf{M}\right)\mathbf{y}=\mathbf{b} \tag{1}\] The matrices \(\mathbf{K}\) and \(\mathbf{M}\) are the stiffness and mass matrix, respectively, while \(\mathbf{y}\) contains the degrees of freedom (field values such as displacement) and \(\mathbf{b}\) the excitation forces. The linear system is solved by a parallel direct solver in order to obtain the harmonic field solutions (\(\mathbf{y}\)) at the queried frequency steps. The solution vector contains the translational degree of freedom representing the normal vibration amplitude at the queried frequency at all discretization points within \(\Omega\). The FEM converges to the exact solution by refining the mesh resolution. For details, see e.g. Atalla and Sgard (2015).

Figure 2: Process of the finite element solution in frequency domain in order to yield the field solutions at each frequency query.
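The per-frequency solve of Equation (1) can be illustrated on a toy two-degree-of-freedom system. The matrices below are hypothetical, and a dense solver stands in for the parallel direct sparse solver used for the actual FEM systems:

```python
import numpy as np

def harmonic_response(K, M, b, omegas):
    """Solve (K - omega^2 M) y = b for each angular frequency omega.
    K, M: (n, n) stiffness and mass matrices; b: (n,) load vector.
    Returns one solution vector per queried frequency."""
    return np.array([np.linalg.solve(K - w**2 * M, b) for w in omegas])

# Toy 2-dof mass-spring chain (hypothetical values, just to exercise the solver).
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)
b = np.array([1.0, 0.0])
ys = harmonic_response(K, M, b, omegas=[0.5, 1.5])
```

Note that exactly at an undamped resonance the matrix \(\mathbf{K}-\omega^{2}\mathbf{M}\) becomes singular; as the text notes, it is the damping in the actual system (complex-valued stiffness) that keeps the amplitudes finite at the plate resonances.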
In general, the procedure of considering a differential equation for the mechanical domain, discretizing by FEM and solving the resulting sparse system above is similar for other acoustical systems. A harmonic unit point load is applied at a fixed relative position near one corner of the plate. This way, all typical dynamic characteristics of the plate are excited, which is comparable to a realistic loading, e.g. by engines. Due to the non-central force location, the plate's response is not symmetric. Different load positions slightly change the dynamic response, but the characteristics remain. As we consider damping in the system, the amplitudes are finite within the plate resonances. The unit point load is applied in a frequency range of 1 to 300 Hz. The field solution is computed using a specialized FEM software for acoustics (Sreekumar and Langer, 2023).

### Geometry Variation

The benchmark dataset is constructed to enable surrogate modeling and at the same time cover typical variations an engineer has to consider. To achieve this, we introduce two axes of variation: (1) beading patterns, imposed on the plate geometry, and (2) the geometry and material of the plate itself, concretely the width, length and thickness as well as the damping loss factor of the plate. These parameters along with fixed parameters as well as plots showing the effect of a damping and thickness variation are given in Appendix A.2. Smoothing with \(\sigma=1\) is applied to the beading pattern to ensure transitions at the beading edges are smooth and manufacturable. The proposed method covers a large design space of possible beading patterns.

Dataset settings. We construct two dataset splits: For the V-5000 setting, we fix the scalar geometry and material parameters and only impose a randomly sampled beading pattern on the plates. Specifically, 1 - 3 lines and 0 - 2 ellipses are placed. Also, the width of the beading elements is randomly sampled.
Example plates are shown in Figure 3 along with their frequency responses. For the G-5000 setting, we apply the same beading pattern variation and additionally vary the plate geometry (length, width and thickness) as well as one material parameter (damping loss factor). The number of samples for the separate training and test datasets is given in Table 1.

Dataset analysis. The mean plate design shows a close to uniform distribution, with an area left free from beadings at the plate's edge (see Figure 4(b)). We find that the number of peaks corresponds with the beaded area: the greater the proportion of beaded mesh elements in a given plate, the fewer the peaks (see Figure 4(a)). This is due to additional beadings stiffening the plates, and it represents an interesting trait specific to our problem. The density of peaks is related to the frequency. As the frequency increases, so does the peak density. Starting from 150 Hz the peak density plateaus (see Figure 4(d)). The average number of peaks in the G-5000 setting is slightly smaller than in the V-5000 setting. This is influenced by the on average smaller plates being stiffer and therefore having fewer peaks in the frequency range (see Figure 4(c)).

### Evaluation Metrics

We propose three complementary metrics to measure the quality of the frequency response predictions.

Mean squared error. The _mean squared error (MSE)_ is a well-known regression error measure: For the global deviation we compare the predicted \(\hat{\mathbf{r}}\) and ground truth (obtained through simulation) frequency response \(\mathbf{r}\) by the MSE \(\mathcal{E}_{\text{MSE}}=\sum_{i}(\hat{\mathbf{r}}_{i}-\mathbf{r}_{i})^{2}\).

Earth mover distance. The _earth mover distance_ (Pele & Werman, 2009; Rubner et al., 2000) expresses the work needed to transmute a distribution \(P\) into another distribution \(Q\). As a first step, the optimal flow \(\hat{\gamma}\) is identified.
Based on \(\hat{\gamma}\) the earth mover distance is expressed as follows: \[\mathcal{E}_{\text{EMD}}(P,Q)=\frac{\sum_{i,j}\hat{\gamma}_{ij}\cdot d_{ij}}{\sum_{i,j}\hat{\gamma}_{ij}}\quad\quad\text{with }\hat{\gamma}=\min_{\gamma}\sum_{i,j}\gamma_{ij}\cdot d_{ij} \tag{2}\] where \(d_{ij}\) is the distance between bins \(i\) and \(j\) in \(P\) and \(Q\). Correspondingly, \(\gamma_{ij}\) is the flow between bins \(i\) and \(j\). We calculate the \(\mathcal{E}_{\text{EMD}}\) based on the original amplitudes in \(m/s\) that have not been transformed to the log-scale (dB) and normalize these amplitudes with the sum over all frequencies. As a consequence, and unlike the MSE, \(\mathcal{E}_{\text{EMD}}\) is invariant to the mean amplitude and only considers the shape of the frequency response. In this form, our metric is equivalent to the \(W_{1}\) Wasserstein metric (Vaserstein, 1969; Cuturi, 2013).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & & \multicolumn{3}{c}{Sample Space} & \multicolumn{2}{c}{Sample Number} \\ Setting & Geom. & Lines & Ellipses & Width & Train & Test \\ \hline V-5000 & fix & 1 - 3 & 0 - 2 & 30 - 70 & 5000 & 1000 \\ G-5000 & vary & 1 - 3 & 0 - 2 & 40 - 60 & 5000 & 1000 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset settings. Width is the width of lines and ellipses in mm. Geometry (geom.) involves plate size, thickness and material.

Figure 4: Dataset analysis. (a) shows two plate geometries with their corresponding frequency response; the red crosses mark the detected peaks. (b) shows the mean plate design and frequency response. (c) shows the number of peaks in different dataset settings. (d) shows the distribution of the peaks over the frequencies.

Peak frequency error. To specifically address the prediction of frequency peaks, which are particularly relevant for engineers, we introduce a third metric called _peak frequency error_.
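Before turning to the peak metric: for one-dimensional histograms on a common grid, the optimal flow in Eq. (2) need not be computed explicitly, since with equal total mass the EMD reduces to the area between the cumulative distributions. A minimal sketch under that assumption:

```python
import numpy as np

def emd_1d(p, q, bin_width=1.0):
    """Earth mover distance between two 1-D histograms on a common grid.
    In 1-D the optimal flow of Eq. (2) has a closed form: the area between
    the cumulative distributions (equivalent to the W1 Wasserstein metric)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()   # normalize, as done for the amplitudes
    return bin_width * np.abs(np.cumsum(p - q)).sum()
```

As a sanity check, shifting all mass by one bin costs exactly one bin width, and identical histograms have distance zero.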
The metric answers two questions: (1) Does the predicted frequency response contain the same peaks as the true response? (2) How far are corresponding ground truth and prediction peaks shifted against each other? To this end, we set up an algorithm that starts by detecting a set of peaks \(P_{\text{GT}}\) in the ground truth and a set of peaks \(P_{\text{PRED}}\) in the prediction using the find_peaks function in scipy (Virtanen et al., 2020) (examples in Appendix B). Then, we match these peaks pairwise using the Hungarian algorithm (Kuhn, 1955) based on the frequency distance. This allows us to determine the ratio between predicted and actual peaks \(R_{peaks}=\frac{|P_{\text{PRED}}|}{|P_{\text{GT}}|}\). To provide a notion of the distribution, we report \(D_{[0.25,0.75]}\), the 25 % and the 75 % quantile of \(R_{peaks}\). We further report \(\mathcal{E}_{\text{F}}\), the mean frequency distance of the matched peaks in Hz. These metrics enable straightforward interpretation of the results.

### Frequency Response Ground Truth

To calculate the aggregate frequency response we take the spatial average of the field solutions, specifically the squared absolute velocity in z-direction (orthogonal to the plate). The result is then converted to the dB-scale by rescaling and taking the logarithm. To address numerical issues as well as facilitate an easier interpretation of the evaluation metrics, we normalize the frequency response and the field solution. To do this, we first take the log of the field solutions, to align it with the dB-scale of the frequency response. Then, we subtract the mean per frequency over all samples (depicted in Figure 4(b) for the frequency response) and then divide by the overall standard deviation across all frequencies and samples. Small changes in the beading pattern can cause frequency shifts, potentially pushing peaks out of the considered frequency band.
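The peak detection and matching underlying the peak frequency error can be sketched as follows. This is a pure-NumPy stand-in for scipy's find_peaks (strict local maxima only, no prominence filtering) and for the Hungarian matching; matching sorted peaks in order coincides with the optimal assignment when both peak sets have the same size:

```python
import numpy as np

def detect_peaks(r):
    """Indices of strict local maxima (a simplified stand-in for scipy's find_peaks)."""
    r = np.asarray(r, float)
    return np.where((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))[0] + 1

def peak_frequency_error(r_true, r_pred, freqs):
    """Return R_peaks = |P_PRED| / |P_GT| and the mean matched distance E_F in Hz.
    Simplification: peaks are matched in sorted order, which is the optimal
    (Hungarian) assignment when both sets have equal size."""
    p_gt = np.sort(freqs[detect_peaks(r_true)])
    p_pr = np.sort(freqs[detect_peaks(r_pred)])
    if len(p_gt) == 0 or len(p_pr) == 0:
        return float("nan"), float("nan")
    n = min(len(p_gt), len(p_pr))
    e_f = np.abs(p_gt[:n] - p_pr[:n]).mean()
    return len(p_pr) / len(p_gt), e_f

# Toy responses: two Gaussian bumps each, with the predicted peaks slightly shifted.
freqs = np.arange(300.0)
bump = lambda c: np.exp(-0.5 * ((freqs - c) / 4.0) ** 2)
r_true = bump(50) + bump(120)
r_pred = bump(55) + bump(118)
rp, ef = peak_frequency_error(r_true, r_pred, freqs)  # rp = 1.0, ef = 3.5 Hz
```

Here both responses contain two peaks, so \(R_{peaks}=1.0\), and the matched shifts of 5 Hz and 2 Hz average to \(\mathcal{E}_{\text{F}}=3.5\) Hz.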
To reduce the effect of such edge cases, we predict frequency responses between 1 and 300 Hz but evaluate on the frequency band between 1 and 250 Hz. ## 4 Frequency Response Prediction Model Our goal is to predict a frequency response function \(\mathcal{F}(\mathbf{g},\mathbf{m})\), where \(\mathcal{F}(\cdot)\) denotes an operator, mapping the mesh geometry \(\mathbf{g}\) and a set of scalar parameters for geometry and material \(\mathbf{m}\) to the frequency response function (Figure 5). Therefore, following the operator learning paradigm (Lu et al., 2019), \(\mathcal{F}(\mathbf{g},\mathbf{m})\) is defined for any, even non-integer, frequency \(f\). In contrast, a grid-based vector output would only be defined at certain frequencies. We divide this problem into processing geometry input by an encoder \(\Phi\) and evaluating the output function using a decoder \(\Psi\): \[\mathcal{F}(\mathbf{g},\mathbf{m})(f)=\Psi(\Phi(\mathbf{g},\mathbf{m}),f) \tag{3}\] Figure 5: FQ-Operator method. The geometry encoder takes the mesh geometry and the scalar properties as input. The resulting feature volume along with a frequency query is passed to the query decoder, that either predicts a field solution or directly a frequency response. The field solution is aggregated to arrive at the frequency response at the query frequency f. This formulation is not only common in operator learning but shares similarities with implicit models, for instance by Saito et al. (2019) in the context of 3d shape prediction. For frequency response prediction, we hypothesize that a key challenge lies in a precise prediction of resonance peaks for which an implicit formulation is better suited than using a fixed grid. ### Geometry Encoder \(\Phi\) To parse the plate geometry into a feature vector, we employ three variants: ResNet18 (He et al., 2016, RN18), the vision transformer (Dosovitskiy et al., 2020; Vaswani et al., 2017, ViT) and the encoder part of a UNet (Ronneberger et al., 2015). 
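The factorization in Eq. (3) boils down to a simple evaluation pattern: encode the geometry and scalar parameters once, then query the decoder at each frequency of interest (any real value, not just grid points). A minimal sketch with placeholder callables standing in for the networks \(\Phi\) and \(\Psi\):

```python
def frequency_response(encoder, decoder, g, m, freqs):
    """Operator-style evaluation of Eq. (3): F(g, m)(f) = Psi(Phi(g, m), f).

    The latent feature x is computed once; the decoder is then queried
    independently per frequency, so f may be any (even non-integer) value.
    """
    x = encoder(g, m)                       # latent features, computed once
    return [decoder(x, f) for f in freqs]   # one decoder call per query f
```

With toy stand-ins, e.g. `encoder = lambda g, m: 2 * g + m` and `decoder = lambda x, f: x + f`, querying two frequencies reuses the same encoding.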
Processing the geometry mesh with CNNs for 2d images is possible because the 3d mesh can be represented in 2d as depth over a planar grid structure. For the RN18, we replace batch normalization with layer normalization (Ba et al., 2016), as we found this to work substantially better. Compared to the CNN-based RN18, the ViT architecture supports interactions across different image regions in early layers. For both the RN18 and the ViT encoder, we obtain a feature vector \(\mathbf{x}\) by average pooling the last feature map. Since the UNet generates field solutions, no pooling is applied. FiLM conditioning. For including the scalar geometry information and the loss factor, we introduce a FiLM layer (Perez et al., 2018). The FiLM layer first encodes the scalar parameters with a linear layer. The resulting encoding is then multiplied element-wise with the features of the encoder and a bias is added. This operation is applied before the last layer of the geometry encoder (UNet) or after it (RN18, ViT). ### Frequency Query Operator Decoder \(\Psi\) FQO-RN18 and FQO-ViT - MLP-based decoder. Having obtained an encoding of the plate geometry and properties \(\mathbf{x}\), a decoder now takes this as well as a frequency query as input and maps them to a prediction. For the RN18 and ViT geometry encoders, the decoder is implemented by an MLP \(g\) taking both \(\mathbf{x}\) and a scalar frequency value \(f\) as input to predict the response for that specific query frequency, i.e. \(g(\mathbf{x},f)\in\mathbb{R}\). By querying the decoder with all frequencies individually, we obtain results for the frequency band between 1 and 300 Hz. The MLP has six hidden layers with 512 dimensions each and ReLU activations. FQO-UNet - field solution mediated decoder. To incorporate physics-based constraints, we employ the UNet decoder to predict the field solutions instead of directly predicting the frequency response.
A frequency query is appended to the channels of the feature vector produced by the UNet encoder, and the network is trained to predict the field solution for this specific query. Note that the predicted field solution is converted to the frequency response by spatial averaging of the squared field quantity. The training loss is the (unweighted) average of the mean squared errors of the normalized field solution and frequency response. The downside of this approach is that the decoder has to be evaluated 300 times for 300 frequency queries, slowing down inference and training and leading to higher memory demands. To interpolate between a query-based decoder and predicting all frequency response values at once, we can predict a differently sized neighborhood around the frequency query. Reducing the necessary computations by a factor of five, we opt to map to five frequencies per query. This choice is ablated in Appendix D. The runtime of one forward pass of our methods is reported in Appendix C.6. ### Baselines To compare with the FQ-Operator method, we further report baseline results on the following alternative methods: a \(k\)-Nearest Neighbors regressor that finds the nearest neighbors in the latent space of an autoencoder; two grid-based prediction methods, one with a RN18 and one with a UNet, which predict frequency responses or the field solution for all 300 frequencies at once; DeepONet (Lu et al., 2019), with a RN18 as trunk net and an MLP to encode the query frequencies as a branch net; and two architectures based on Fourier Neural Operators (Li et al., 2020), one employing a FNO as a replacement for the query-based decoder based on RN18 features, and a second that directly takes the input geometry and is trained to map it to the field solutions. See Appendix C for details on all architectures and specifics on the training procedure. ## 5 Experiments We train the FQ-Operator variations and baseline methods on the vibrating-plates dataset (see Table 2).
We find that (a) the FQ-Operator variations outperform grid-based baselines as well as other operator learning methods, and (b) predicting the field solutions and then transforming them to the frequency response leads to better results than directly predicting the frequency response. Regarding (a), we find that the FQ-Operator variations consistently yield better predictions than equivalent grid-based methods, where responses for all frequencies are predicted at once: the \(\mathcal{E}_{\text{MSE}}\) and the \(\mathcal{E}_{\text{EMD}}\) are lower, more peaks are reproduced, and the peak positions are more precise. Regarding (b), FQO-UNet strongly outperforms the FQO-RN18, which directly predicts frequency responses; we attribute this to the richer training data of field solutions. Despite using the same RN18 geometry encoder as FQO-RN18, DeepONet (Lu et al., 2019) performs worse. We assume that this is due to its approach of incorporating frequency information through a single weighted summation, which limits the model's expressivity (Seidman et al., 2022). In contrast, FQ-Operator introduces the queried frequency earlier into the model. We also test two Fourier Neural Operator (Li et al., 2020, FNO) baselines: the first, RN18 + FNO, which substitutes the query-based decoder with an FNO decoder, slightly underperforms compared to FQO-RN18 on both datasets. The second FNO baseline, trained directly to predict field solutions, yields poorer results despite having access to richer training data. Ablations on several architecture choices are provided in Appendix D. The G-5000 setting yields slightly worse results than the V-5000 setting, but the differences are minor. The small difference is surprising because the space of plates in the G-5000 setting is a superset of the V-5000 space.
One reason for this might be the average number of peaks in the frequency response: the plates in G-5000 are on average smaller and because of this stiffer, leading to fewer peaks (on average 2.5 in G-5000 vs. 3.5 in V-5000). This interpretation is supported by the fact that the average error becomes higher with increasing frequency (Figure 6f) and thus increasing peak density (Figure 4d). Looking at a prediction example (Figure 6a-d), we see that the predicted field solution from the FQO-UNet has some differences from the ground truth. The prediction captures the three modes and their position quite well, but the shape of the modes is less regular than in the field solution. Despite that, the resulting frequency response prediction at \(f=149\) Hz is close to the FEM reference (ground truth). In comparison to the grid-based prediction, where peaks tend to be blurry, the frequency response peaks generated by FQO-UNet are more pronounced. Transfer learning. To quantify to which degree features learned on a subset of the design space transfer to a different subset, we split the V-5000 setting into two equally-sized parts based on the number of mesh elements that are part of a beading. The "more beadings" set contains only 2.5 peaks on average because the plates are stiffened by the beadings, compared to 4.4 peaks on average for the "less beadings" set. We find that training on plates with less beadings leads to a smaller drop in prediction quality (see Table 3). This indicates that training on data with more complex frequency responses might be more efficient.
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{**V-5000**} & \multicolumn{4}{c}{**G-5000**} \\ \cline{3-10} & FS & \(\mathcal{E}_{\text{MSE}}\) & \(\mathcal{E}_{\text{EMD}}\) & \(D_{[0.25,0.75]}\) & \(\mathcal{E}_{\text{F}}\) & \(\mathcal{E}_{\text{MSE}}\) & \(\mathcal{E}_{\text{EMD}}\) & \(D_{[0.25,0.75]}\) & \(\mathcal{E}_{\text{F}}\) \\ \hline \(k\)-NN & - & 0.42 & 19.78 & [0.33, 0.67] & 11.5 & 0.69 & 28.87 & [0.00, 0.50] & 20.9 \\ RN18 + FNO & - & 0.18 & 8.70 & [0.50, 1.00] & 4.4 & 0.25 & 13.33 & [0.50, 1.00] & 7.4 \\ DeepONet & - & 0.25 & 14.04 & [0.43, 0.67] & 5.0 & 0.34 & 19.57 & [0.20, 0.50] & 9.6 \\ FNO (field solution) & ✓ & 0.26 & 13.70 & [0.50, 1.00] & 6.9 & 0.24 & 13.25 & [0.50, 1.00] & 7.1 \\ \hline Grid-RN18 & - & 0.23 & 10.60 & [0.50, 1.00] & 4.7 & 0.22 & 11.14 & [0.50, 1.00] & 6.1 \\ Grid-UNet & - & 0.12 & 8.13 & [0.67, 1.00] & 3.1 & 0.15 & 9.22 & [0.50, 1.00] & 5.2 \\ \hline FQO-ViT & - & 0.29 & 13.81 & [0.50, 0.75] & 6.1 & 0.30 & 14.32 & [0.33, 1.00] & 8.3 \\ FQO-RN18 & - & 0.17 & 8.58 & [0.60, 1.00] & 3.9 & 0.19 & 10.04 & [0.53, 1.00] & 5.6 \\ FQO-UNet & ✓ & **0.09** & **5.99** & **[0.74, 1.00]** & **2.7** & **0.13** & **7.14** & **[0.67, 1.00]** & **4.6** \\ \hline \hline \end{tabular} \end{table} Table 2: Test results for frequency response prediction. FS indicates whether field solutions are predicted, from which the frequency response is derived. Sampling efficiency. We train the FQO-UNet and the FQO-RN18 with reduced numbers of samples (see Figure 6(e)). Note that the FQO-UNet with a quarter of the training data has close to the same prediction quality as the FQO-RN18 with full training data. This highlights the benefit of including the field solutions in the training process. Quantitative results are given in Appendix E for both dataset settings. ## 6 Conclusion We introduce the novel problem of data-driven vibroacoustic frequency response prediction from variable geometries.
To this end, we created the vibrating-plates dataset with an associated benchmark to foster the development of new methods, and we provide reference scores for related methods as well as for transfer learning and sample efficiency. We propose the FQ-Operator method to address frequency response prediction and show that our model compares favorably against the DeepONet and FNO baselines. In general, differentiable surrogate models show great potential for speeding up frequency response prediction over the finite element method: our models achieved speed-up factors of around 4 to 6 orders of magnitude. In terms of accuracy, our experiments demonstrate that such tasks can be learned with data-driven methods, but specific architectures are necessary. Key findings of this work are that query-based approaches consistently outperform grid-based approaches, and that predicting the frequency response mediated by field solutions improves quality over direct prediction. We expect our benchmark to foster the discovery of inductive biases for frequency domain prediction, potentially involving components like different positional encodings, attention mechanisms, recurrence and graph neural networks.
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{less beadings \(\mapsto\) more beadings} & \multicolumn{4}{c}{more beadings \(\mapsto\) less beadings} \\ \cline{2-9} & \(\mathcal{E}_{\text{MSE}}\) & \(\mathcal{E}_{\text{EMD}}\) & \(D_{[0.25,0.75]}\) & \(\mathcal{E}_{\text{F}}\) & \(\mathcal{E}_{\text{MSE}}\) & \(\mathcal{E}_{\text{EMD}}\) & \(D_{[0.25,0.75]}\) & \(\mathcal{E}_{\text{F}}\) \\ \hline FQO-UNet (origin) & 0.25 & 11.59 & [0.50, 0.75] & 4.6 & 0.21 & 10.01 & \([0.67,1.00]\) & 5.2 \\ FQO-UNet & 0.32 & 12.68 & [0.67, 1.00] & 8.4 & 0.45 & 19.37 & \([0.33,0.60]\) & 6.2 \\ \hline FQO-UNet (origin) & 0.15 & 8.67 & [0.67, 1.00] & 3.6 & 0.12 & 7.28 & \([0.67,1.00]\) & 3.3 \\ FQO-UNet & 0.17 & 9.33 & [0.67, 1.00] & 5.2 & 0.33 & 15.66 & \([0.40,0.75]\) & 5.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Transfer learning performance: We split V-5000 into two halves based on the amount of beadings and evaluate transfer learning performance across these splits: training subset \(\mapsto\) test subset. The gray rows denote validation results on the original subset that has been used for training. Figure 6: Results. (b) to (d) show the field solution at one frequency and prediction for the plate geometry in (a) from FQO-UNet. (e) shows the test MSE for training two methods with reduced numbers of samples from V-5000. (f) shows the MSE for the FQO-UNet for different frequencies. Limitations. Our dataset and method serve as an initial step in the development of surrogate models for acoustic frequency response prediction. Our dataset focuses on plates, a common geometric primitive used in a great number of applications. However, many structures beyond plates exist, involving curved surfaces, multi-component geometries and complex material parameters. While FQ-Operator remains applicable in such cases, appropriate encoders and decoders would need to be designed.
As more complex geometries will incur higher computational costs for FEM simulations, a key question will be how to enhance sample efficiency even further. Ethics statement. Noise reduction holds significance not just for the health of passengers in vehicles and urban residents, but also for broadening the acceptance of wind turbines and heat pumps. Findings in this work could be applicable to more complex problems and other engineering disciplines. Surrogate models can introduce a new source of error into the simulation process, especially when departing far from the training data. Users need to be aware of both the reliability and the limitations of surrogate models. Reproducibility statement. The dataset and the code repository can be accessed via [https://eckerlab.org/code/delden2023_plate](https://eckerlab.org/code/delden2023_plate). Hyperparameters and architecture details are reported in Appendix C. Complete details for evaluating or training any architecture described in this work are available in the code repository. All models can be trained on a single GPU. Acknowledgements. This research is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project number 501927736, within the DFG Priority Programme 'SPP 2353: Daring More Intelligence - Design Assistants in Mechanics and Dynamics'. This support is highly appreciated. Additionally, we thank Harikrishnan K. Sreekumar for technical support with the FEM software.
2308.07034
An Inherent Trade-Off in Noisy Neural Communication with Rank-Order Coding
Rank-order coding, a form of temporal coding, has emerged as a promising scheme to explain the rapid ability of the mammalian brain. Owing to its speed as well as efficiency, rank-order coding is increasingly gaining interest in diverse research areas beyond neuroscience. However, much uncertainty still exists about the performance of rank-order coding under noise. Herein we show what information rates are fundamentally possible and what trade-offs are at stake. An unexpected finding in this paper is the emergence of a special class of errors that, in a regime, increase with less noise.
Ibrahim Alsolami, Tomoki Fukai
2023-08-14T09:53:27Z
http://arxiv.org/abs/2308.07034v1
# An Inherent Trade-Off in Noisy Neural Communication with Rank-Order Coding ###### Abstract Rank-order coding, a form of temporal coding, has emerged as a promising scheme to explain the rapid ability of the mammalian brain. Owing to its speed as well as efficiency, rank-order coding is increasingly gaining interest in diverse research areas beyond neuroscience. However, much uncertainty still exists about the performance of rank-order coding under noise. Herein we show what information rates are fundamentally possible and what trade-offs are at stake. An unexpected finding in this paper is the emergence of a special class of errors that, in a regime, increase with less noise. ## I Introduction Currently, there is a growing interest in spiking neural networks in physics and engineering [1, 2], mainly because of their potential to improve power efficiency, which, to a great extent, depends on the coding scheme employed. For spiking neural networks, a variety of coding schemes are available. Chief among them is rank-order coding; this coding scheme offers a fundamentally different approach to neural information transmission. In rank-order coding, information is encoded in the order of neural spikes; utilizing this degree of freedom can boost communication speeds and efficiency substantially. Rank-order coding has been proposed as a faster alternative to the traditional rate coding scheme [3]. While rate coding is the most widely accepted coding scheme in neuroscience, it has a subtle problem: some experimental observations are hard to reconcile with it because it is slow. For example, primates can respond selectively to the presentation of 3D objects as quickly as \(100\)-\(150\) ms after the onset of a stimulus [4, 5]. This response is too fast to be explained with rate coding as it needs, for a reasonable degree of accuracy, to accumulate spikes over periods much longer than \(150\) ms.
Similarly, humans who were asked to determine whether a briefly flashed picture (\(20\) ms) of a natural scene contains a certain category, such as an animal, could accurately (\(\sim 89\)-\(98\%\)) detect whether a category is present or not within a few to several hundreds of milliseconds [6]. Rate coding seems to hardly harmonize with such rapid processing speed. One could, of course, argue that rate coding across a reasonably large number of neurons could provide speed (in terms of bits/sec). In such an approach, we would have \(n\) neurons firing in parallel, and one counts the number of spikes generated within a relatively short time window--compared to a long time window if we had a few neurons. For instance, we would need a longer time window to accumulate \(n\) spikes from a single neuron than \(n\) neurons firing in parallel. This population rate coding scheme can certainly provide speed, but one must contend with the fact that such an approach is inefficient in terms of bits/neurons [7]. In fact, the efficiency of this approach is upper-bounded by \(\frac{\log_{2}(n+1)}{n}\) (bits/neuron)--that is, the more neurons, the less the efficiency. Moreover, the rapid speed of visual processing is likely to be accomplished with very few spikes [5], and this would require a neural coding scheme whereby a few neurons can communicate efficiently. Rank-order coding can offer both speed and efficiency. With \(n\) neurons, rank-order coding can encode \(\log_{2}n!\) bits per transmission compared to \(\log_{2}(n+1)\) bits per transmission for rate coding. The encoding ability of rank-order coding is vast. Take, for instance, a setting with \(10\) neurons. With these neurons, rank-order coding can in principle form \(10!=3,628,800\) symbols, i.e., firing orders of neurons (Fig. 1). As \(n\) increases, the encoding ability of rank-order coding rapidly accelerates. 
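These counts are easy to reproduce. A small sketch comparing the encoding capacities of the two schemes, including the \(\frac{\log_{2}(n+1)}{n}\) efficiency bound of population rate coding:

```python
import math

def rank_order_bits(n):
    """Bits per transmission with rank-order coding: log2(n!)."""
    return math.log2(math.factorial(n))

def rate_code_bits(n):
    """Bits per transmission with rate coding over n neurons: log2(n+1)."""
    return math.log2(n + 1)

def population_rate_efficiency(n):
    """Upper bound on population rate-coding efficiency: log2(n+1)/n
    bits per neuron -- the more neurons, the lower the efficiency."""
    return rate_code_bits(n) / n
```

With \(n=10\), rank-order coding offers \(\log_{2}(10!)\approx 21.8\) bits per transmission versus \(\log_{2}(11)\approx 3.5\) bits for rate coding.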
This vast amount of information available in the arrival order of spikes is often forgotten, and studying neural codes that utilize such arrival order could provide clues on how neurons can transmit information rapidly and efficiently across brain regions. Converging evidence suggests that the relative timing, or rank, of neuronal firing plays an important part in encoding information. In retinal ganglion cells of salamanders, the rapid transmission of visual scenes is likely accomplished by encoding information in the relative timing of spikes [8]. It was shown later in a population of retinal ganglion cells of mice that the content of a visual stimulus could be accurately inferred from the wave of the first stimulus-evoked spikes, indicating the importance of the relative timing of spikes to encode sensory information [9]. Analysis of odor-evoked responses of olfactory neurons of Xenopus laevis (African clawed frogs) demonstrated that the rank of spike latencies is a reliable predictor of odor identity [10]. In addition to its biological applicability, rank-order coding is gaining attention in the field of artificial spiking neural networks, which are becoming popular as they hold great potential in energy-efficient computing [11]. It was shown that spiking neural networks with rank-order coding can achieve a high image-classification accuracy with a relatively small number of spikes in multilayer feedforward networks [12] as well as in recurrent networks [13]. Rank-order coding is also finding favor in hardware implementations of spiking neuromorphic processors; it was demonstrated that rank-order coding can enhance power efficiency [14] and provide a favorable trade-off between energy consumption and classification accuracy [15]. Recently, it was shown that rank-order coding successfully reduced the on-chip inference latency in neuromorphic devices [16]. 
Despite increasing interest in the use of rank-order coding, far too little attention has been paid to the effect of noise on its performance. Our goal here is to analytically understand the impact of noise on the performance of rank-order coding, as noise is unavoidable in any physical system. In rank-order coding, noise can cause spikes to be swapped with each other, giving rise to errors. Herein we study how well rank-order coding performs under noise in terms of information rate (bits/sec) and communication efficiency (bits/neuron). Contrary to intuition, reducing noise does not necessarily reduce all types of errors. Moreover, we show that information rate and communication efficiency cannot be simultaneously maximized due to an intrinsic trade-off between them. ## II Methods We consider a noise model in which spike times of presynaptic neurons exhibit random delays characterized by an exponential probability density (Fig. 1, Eq. 8). Such noise may arise when neurons do not respond instantly to a stimulus, at an expected time, but rather with a random delay (Fig. 1). In rank-order coding, neurons are intensity-to-delay converters: the higher the activation, the earlier a neuron fires. Traditional integrate-and-fire models have this intensity-to-delay property: the higher the membrane potential is, the earlier a neuron will fire. Without loss of generality, we hypothesize that a postsynaptic neuron responds selectively and reliably to a particular order of presynaptic spikes. A feed-forward shunting inhibition circuit was suggested as the underlying mechanism of this precise decoding of temporal patterns [7]. However, exploring the detailed decoding mechanisms is beyond the scope of this study. 
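As a minimal illustration of this noise model, consider two neurons whose spikes are nominally separated by \(\alpha\) and independently delayed by \(\mathrm{Exp}(\lambda)\) amounts. By memorylessness of the exponential distribution, the probability that the two spikes arrive in swapped order is \(\frac{1}{2}e^{-\lambda\alpha}\), which a direct simulation reproduces:

```python
import random, math

def simulate_p_swap(lam, alpha, trials=200_000, seed=0):
    """Monte-Carlo estimate of the probability that exponential random
    delays swap two spikes nominally separated by alpha.

    Neuron A nominally fires at 0, neuron B at alpha; each spike time is
    shifted by an independent Exp(lam) delay.
    """
    rng = random.Random(seed)
    swaps = 0
    for _ in range(trials):
        z1 = rng.expovariate(lam)             # neuron A: delay only
        z2 = alpha + rng.expovariate(lam)     # neuron B: offset plus delay
        swaps += z2 < z1                      # received order is B before A
    return swaps / trials
```

For \(\lambda=\alpha=1\), the analytic value is \(\frac{1}{2}e^{-1}\approx 0.184\).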
The channel capacity enables us to compute the maximum amount of information postsynaptic neurons can receive and is defined as [17, 18] \[C=\max_{p(x)}I(X;Y)\qquad\text{(bits/symbol)}, \tag{1}\] where \(I(X;Y)\) is the mutual information between random variable \(X\) (input symbol) and \(Y\) (output symbol), and is given by \[I(X;Y)=H(Y)-H(Y|X)\qquad\text{(bits/symbol)}. \tag{2}\] Here, \(H(Y)\) is the entropy of \(Y\), and \(H(Y|X)\) is the conditional entropy of \(Y\) given \(X\). In Fig. 1, we have \((n!)^{2}\) possible combinations of input and output symbols, where a symbol is defined as a particular order of neural spikes (e.g., the sequence ABC). Here \(n\) is the number of presynaptic neurons. The probability of sending symbol \(x\) and, because of noise, receiving symbol \(y\) is given by the transition probability \(p(y|x)\). The following probability transition matrix describes such communication channel: \[\mathbf{p(y|x)}=\begin{pmatrix}p(0|0)&p(1|0)&...&p(n!-1|0)\\ p(0|1)&p(1|1)&...&p(n!-1|1)\\ \vdots&\vdots&\ddots&\vdots\\ p(0|n!-1)&p(1|n!-1)&...&p(n!-1|n!-1)\\ \end{pmatrix}_{n!\times n!} \tag{3}\] \[=\begin{pmatrix}\begin{array}{cccc}p_{0}&p_{1}&...&p_{n!-1}\\ p_{1}&p_{0}&...&p_{n!-2}\\ \vdots&\vdots&\ddots&\vdots\\ p_{n!-1}&p_{n!-2}&...&p_{0}\end{array}\end{pmatrix}_{n!\times n!}.\] This communication channel is symmetric because rows of the transition matrix are permutations of each other, and so are the columns. The capacity of this channel is achieved by a uniform distribution on the input \(X\)[18]\(\left(p(x)=\dfrac{1}{n!}\right)\), which results in a uniform distribution on the output \(Y\)\(\left(p(y)=\dfrac{1}{n!}\right)\), and is given by \[C= \max_{p(x)}[H(Y)-H(Y|X)] \tag{4}\] \[= \log_{2}n!-H(\mathbf{r})\qquad\text{(bits/symbol)},\] where \(H(\mathbf{r})=-\sum\limits_{j=0}^{n!-1}p_{j}\log_{2}p_{j}\) is the entropy of a row of matrix \(\mathbf{p(y|x)}\). Fig. 1: Rank-order coding with temporal noise (random delay). 
Here \(\alpha\) is the spacing between successive spikes before noise is introduced. In this illustration, the magnitude of a synaptic weight is represented by the size of the depicted circles. Here postsynaptic neurons integrate-and-fire and are progressively desensitized by shunting inhibition circuits (red). With shunting inhibition, the sensitivity of a neuron progressively decreases as \(\beta^{k}\), where \(k\) is the arrival order of a spike, and \(\beta\) is a constant that takes values in the range \(0<\beta<1\). A postsynaptic neuron is maximally activated if spikes arrive in the order of its synaptic weights. By setting the firing threshold to this maximum excitation/activation level, a postsynaptic neuron becomes selective to a particular temporal pattern. Due to noise (random delay), an intended spike sequence can be erroneously received. For instance, in this illustration, noise can cause the sequence ABC (\(x=0\)) to be erroneously received as CBA (\(y=5\)), which impairs both the communication rate (bits/sec) and efficiency (bits/neuron). ## III Results **Transition probabilities.** Here we determine the transition probabilities to find the channel capacity in Eq. (4). These probabilities are the likelihood that a particular neural spike sequence is received under the perturbation of noise (random delay, Fig. 1). For instance, for three neurons, \(p(CBA|ABC)\) is the probability that the sequence CBA (\(y=5\)) is erroneously received due to noise, given that the original noise-free sequence is ABC (\(x=0\)). It suffices to compute the transition probability of any row of \(\boldsymbol{p(y|x)}\) because the channel is symmetric [18]. Calculations of the transition probabilities are straightforward but tedious. Therefore, we only evaluate these probabilities when the number of presynaptic neurons is relatively small (see APPENDIX A for derivation). 
We can obtain: \[\begin{split}& p_{0}=p(AB|AB)=1-\frac{1}{2}e^{-\lambda\alpha}\\ & p_{1}=p(BA|AB)=\frac{1}{2}e^{-\lambda\alpha}\;\;\;,\end{split} \tag{5}\] for two presynaptic neurons and \[\begin{split}& p_{0}=p(ABC|ABC)=1-e^{-\lambda\alpha}+\frac{1}{6}e^{-3\lambda\alpha}\\ & p_{1}=p(BAC|ABC)=\frac{1}{2}e^{-\lambda\alpha}-\frac{1}{2}e^{-2\lambda\alpha}+\frac{1}{6}e^{-3\lambda\alpha}\\ & p_{2}=p(ACB|ABC)=\frac{1}{2}e^{-\lambda\alpha}-\frac{1}{3}e^{-3\lambda\alpha}\\ & p_{3}=p(CAB|ABC)=\frac{1}{6}e^{-3\lambda\alpha}\\ & p_{4}=p(BCA|ABC)=\frac{1}{2}e^{-2\lambda\alpha}-\frac{1}{3}e^{-3\lambda\alpha}\\ & p_{5}=p(CBA|ABC)=\frac{1}{6}e^{-3\lambda\alpha}\;\;\;,\end{split} \tag{6}\] for three presynaptic neurons. In the above expressions, \(\alpha\) is the spacing between successive spikes before noise is introduced, and \(\lambda\) is the rate parameter of the exponential distribution of the noise. Results with four presynaptic neurons are shown in APPENDIX A. As expected, when \(\lambda\alpha\) increases, error probabilities decrease (Fig. 2a). There is a notable exception, however. In the range \(0\leq\lambda\alpha\leq\ln\sqrt{2}\), the error probability \(p(ACB|ABC)\) increases. This can be viewed in two different ways: **1)** For a fixed value of \(\alpha\), as the noise decreases (that is, \(\lambda\) increases), the probability of this type of error increases (Fig. 2b). **2)** For a fixed value of \(\lambda\), as the spacing between spikes (\(\alpha\)) increases, the error probability increases as well. A similar phenomenon is also observed when we have four (\(n=4\)) presynaptic neurons (see Fig. 5). Namely, the following error probabilities: \[\begin{split}& p(ABDC|ABCD),\;p(ACBD|ABCD),\\ & p(ACDB|ABCD),\;p(ADBC|ABCD),\\ & p(ADCB|ABCD),\;p(BACD|ABCD),\\ & p(BADC|ABCD),\;p(BCAD|ABCD),\\ & p(BCDA|ABCD),\;p(BDAC|ABCD),\end{split} \tag{7}\] and \(p(BDCA|ABCD)\).
These error probabilities momentarily increase with less noise, which is counter-intuitive: errors typically decrease with less noise--not the opposite. Throughout this study, we shall refer to this class of probabilities as _atypical probabilities_. This type of error is not limited to exponential noise; it can also be observed, for example, with Gaussian noise (see Figs. 8 and 9). **Why do errors increase when we have less noise?** The emergence of atypical probabilities can be explained as follows. Let the probability \(P(ACB|ABC)\) serve as an example (Fig. 2). Moreover, let random variables \(Z_{i}\) (\(i\!=\!1\), \(2\), \(3\)) represent a spike's latency after the perturbation of noise; here \(i\) denotes the index of the \(i^{\text{th}}\) presynaptic neuron (See APPENDIX IX for notation details). For the event \(Z_{1}<Z_{3}<Z_{2}\), or equivalently the sequence \(ACB\), to occur, the following two conditions should be simultaneously satisfied: (i) Random variable \(Z_{2}\) needs to be the largest value and (ii) random variable \(Z_{1}\) needs to be the smallest value. (i) The probability of \(Z_{2}\) being the largest value (i.e., \(Z_{2}>Z_{3}\)) decreases with \(\lambda\) (the larger the value of \(\lambda\), the less the noise) because the amount of overlap between the distributions of \(Z_{2}\) and \(Z_{3}\) decreases as \(\lambda\) increases. (ii) In contrast, the probability of \(Z_{1}\) being the smallest value increases with \(\lambda\) in the interval \((0,2\alpha)\) for the following reason. The event \(Z_{2}>Z_{3}\) implies that \(Z_{2}>2\alpha\) (Fig. 2c and Eq. 8). Thus, when \(Z_{2}>Z_{3}\), more space (from \(\alpha\) to \(2\alpha\)) for \(Z_{1}\) has been made to take the position of the smallest value in the interval \((0,2\alpha)\), thereby increasing the likelihood of the neural order \(ACB\) (\(Z_{1}<Z_{3}<Z_{2}\)). Factor (i) causes \(P(ACB|ABC)\) to decrease, whereas factor (ii) causes \(P(ACB|ABC)\) to increase. 
The net effect of factors (i) and (ii) is \(P(Z_{1}<2\alpha<Z_{3}<Z_{2})\), which is a concave function. This component brings about a rare regime in which errors increase with \(\lambda\) (or, equivalently, with \(\alpha\)). Mathematically, the probability \(P(Z_{1}<2\alpha<Z_{3}<Z_{2})\) can be obtained by splitting the integration region of \(P(ACB|ABC)\) into two parts: \[\begin{split}P(ACB|ABC)&=P(Z_{1}<Z_{3}<Z_{2}\mid ABC)=\int_{2\alpha}^{\infty}\int_{2\alpha}^{z_{2}}\int_{0}^{z_{3}}f_{1}(z_{1})f_{3}(z_{3})f_{2}(z_{2})\,dz_{1}\,dz_{3}\,dz_{2}\\ &=\underbrace{\int_{2\alpha}^{\infty}\int_{2\alpha}^{z_{2}}\int_{0}^{2\alpha}f_{1}(z_{1})f_{3}(z_{3})f_{2}(z_{2})\,dz_{1}\,dz_{3}\,dz_{2}}_{\text{Concave: }P(Z_{1}<2\alpha<Z_{3}<Z_{2})}+\underbrace{\int_{2\alpha}^{\infty}\int_{2\alpha}^{z_{2}}\int_{2\alpha}^{z_{3}}f_{1}(z_{1})f_{3}(z_{3})f_{2}(z_{2})\,dz_{1}\,dz_{3}\,dz_{2}}_{\text{Convex: }P(2\alpha<Z_{1}<Z_{3}<Z_{2})}\\ &=\underbrace{\frac{1}{2}e^{-\lambda\alpha}}_{\text{Factor (i): }P(2\alpha<Z_{3}<Z_{2})}\times\underbrace{\left(1-e^{-2\lambda\alpha}\right)}_{\text{Factor (ii): }P(Z_{1}<2\alpha)}+\underbrace{\frac{1}{6}e^{-3\lambda\alpha}}_{P(2\alpha<Z_{1}<Z_{3}<Z_{2})}=\frac{1}{2}e^{-\lambda\alpha}-\frac{1}{3}e^{-3\lambda\alpha}\;\;.\end{split}\] Here \(f_{i}(z)\) is the probability density function (_pdf_) of the \(i^{\text{th}}\) presynaptic neuron, \(i\in\{1,2,\ldots,n\}\). This _pdf_ describes the likelihood of observing a randomly delayed spike, by noise, at time \(z\) and is given by \[f_{i}(z)=\begin{cases}\lambda\ e^{-\lambda\left(z-(i-1)\alpha\right)}&,\text{ if }z\geq(i-1)\alpha\\ 0&,\text{ otherwise }\end{cases} \tag{8}\] **Communication efficiency and information rate.** Figure (a) shows the performance of rank-order coding in terms of communication efficiency, which is defined as \[\gamma=\frac{C}{n}\qquad\text{(bits/neuron)}, \tag{9}\] where \(C\) can be calculated by using Eq. 4. The communication efficiency increases monotonically with \(\lambda\alpha\).
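The closed form just derived, \(p(ACB|ABC)=\frac{1}{2}e^{-\lambda\alpha}-\frac{1}{3}e^{-3\lambda\alpha}\), with its maximum at \(\lambda\alpha=\ln\sqrt{2}\), can be checked by simulating the three delayed spike times directly:

```python
import random, math

def p_acb_given_abc(lam, alpha, trials=200_000, seed=1):
    """Monte-Carlo estimate of p(ACB|ABC) under exponential spike delays.

    Neurons A, B, C nominally fire at 0, alpha, 2*alpha; each spike is
    delayed by an independent Exp(lam) random amount (Eq. 8).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z1 = rng.expovariate(lam)               # neuron A
        z2 = alpha + rng.expovariate(lam)       # neuron B
        z3 = 2 * alpha + rng.expovariate(lam)   # neuron C
        hits += z1 < z3 < z2                    # received order is A, C, B
    return hits / trials
```

At the maximizing point \(\lambda\alpha=\ln\sqrt{2}\), the analytic value is \(\frac{1}{2\sqrt{2}}-\frac{1}{3\cdot 2\sqrt{2}}\cdot 2\approx 0.236\), and the simulated estimate agrees to within sampling error.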
This increase in efficiency eventually plateaus and is asymptotically bounded by \(\gamma^{*}=\lim_{\lambda\alpha\to\infty}\frac{C}{n}=\frac{\log_{2}(n!)}{n}\) (bits/neuron). Moreover, the higher the value of \(n\), the more efficient the communication is. We can further evaluate the performance of rank-order coding in terms of information rate, which is defined as \[R=\frac{C}{\bar{T}}\qquad\text{(bits/sec)}. \tag{10}\] In the absence of noise, the average symbol duration \(\bar{T}\) (that is, the time difference between the first and last spikes of a rank-order coding symbol) is \((n-1)\alpha\). However, with noise, the average symbol duration increases and is given by \[\bar{T}=\alpha+\frac{1}{\lambda}e^{-\lambda\alpha}\quad\text{(sec/symbol)} \tag{11}\] for two presynaptic neurons (\(n=2\)), \[\bar{T}=2\alpha+\frac{1}{\lambda}e^{-\lambda\alpha}+\frac{1}{2\lambda}e^{-2\lambda\alpha}\quad\text{(sec/symbol)} \tag{12}\] for three presynaptic neurons (\(n=3\)), and \[\bar{T}=3\alpha+\frac{1}{\lambda}e^{-\lambda\alpha}+\frac{1}{2\lambda}e^{-2\lambda\alpha}+\frac{1}{2\lambda}e^{-3\lambda\alpha}-\frac{1}{6\lambda}e^{-4\lambda\alpha}-\frac{1}{6\lambda}e^{-5\lambda\alpha}+\frac{1}{6\lambda}e^{-6\lambda\alpha}\quad\text{(sec/symbol)} \tag{13}\] for four presynaptic neurons (\(n=4\)) (see APPENDIX C for derivation). In Fig. 3b, we display the (scaled) information rate as a function of \(\lambda\alpha\). The information rate is a non-monotonic function of \(\lambda\alpha\) and increases with \(n\). Moreover, there is an optimal operating point at which the information rate is maximized; beyond this critical point, the information rate rapidly diminishes. In a noisy environment, there exists an inherent trade-off between the communication efficiency of rank-order coding and its information rate. The communication efficiency continuously increases with \(\lambda\alpha\) (Fig. 3a), but this gain in efficiency comes at the cost of a loss in the information rate once \(\lambda\alpha\) is beyond a critical value (Fig. 3b). A range of trade-offs is shown in Fig. 3c, in which the value of \(\lambda\alpha\) is varied and the pair \((\gamma,\frac{R}{\lambda})\) is displayed. The resultant curves represent upper bounds of achievable information rates and communication efficiencies. Parameter \(\alpha\) provides a means to control the trade-off between information rate and efficiency.

Fig. 2: Transition probabilities and spikes of rank-order coding with temporal noise. (a) Transition probabilities for three presynaptic neurons (\(n=3\)). (b) Atypical probability \(P(ACB|ABC)\); this probability reflects the likelihood that noise causes the sequence \(ABC\) to be erroneously received as \(ACB\). In the range \(0<\lambda<\ln\sqrt{2}\), the probability of an error increases with \(\lambda\) (the higher the value of \(\lambda\), the less the noise). Here \(\alpha\) is arbitrarily set to 1 (sec), and the number of samples per point used in the simulation is \(10^{9}\). A similar phenomenon, where errors momentarily increase, can also be observed by fixing \(\lambda\) and varying \(\alpha\); that is, errors increase as the spacing between neural spikes, \(\alpha\), increases. (c) Spikes of rank-order coding with temporal noise (random delay). When \(Z_{2}>Z_{3}\), more space for \(Z_{1}\) has been made to take the position of the smallest value in the interval \((0,2\alpha)\); this causes the probability of the event \(ACB\) (\(Z_{1}<Z_{3}<Z_{2}\)) to increase.

## IV Discussion

This paper set out to study the impact of noise on the performance of rank-order coding. Rank-order coding is advantageous as it utilizes the order of neural spikes, enabling it to boost communication speeds. A disadvantage, however, is that it is susceptible to temporal noise, which can swap presynaptic spikes with each other, causing errors at postsynaptic neurons.
As such, we considered noise in the form of a random delay to gain insights into the performance of rank-order coding in terms of information rate and communication efficiency. In noisy environments, the information rate and communication efficiency depend on at least three factors: the spacing between spikes \(\alpha\), the rate parameter \(\lambda\), and the number of presynaptic neurons \(n\). The higher the value of \(\lambda\alpha\), the more efficient the communication is. However, increasing \(\lambda\alpha\) beyond an optimal operating point has the adverse effect of reducing the information rate. Additionally, we found a class of error probabilities that increase with less noise. This result is counter-intuitive because errors commonly decrease with less noise--not the opposite. The presence of such error probabilities raises a need for special care in designing error correction schemes for neuromorphic devices that employ rank-order coding. We revealed that rank-order coding has an inherent trade-off between information rate and communication efficiency. This result could provide insights to better understand what trade-offs neurons in different brain regions make (under the rank-order coding hypothesis) between the conflicting needs to be fast and, at the same time, efficient. For example, it is likely that neurons at the early stage of the visual-processing pathway (e.g., the retina) prioritize speed over efficiency. However, efficiency may be favored over speed at later stages as information is likely at/near its final (decision-making) destination. The trade-off result also offers a realistic picture of neuromorphic computing with rank-order coding: information rate and communication efficiency cannot be simultaneously maximized--a compromising trade-off between them needs to be made (Fig. 3c). In the present study, we assumed that postsynaptic neurons respond selectively to a particular order of spikes (temporal pattern).
Studies have shown that cortical neurons exhibit such selectivity to temporal input sequences [19]. Various biological mechanisms of temporal pattern detection have been proposed (e.g., [20]). A feed-forward shunting inhibition circuit, which progressively desensitizes a postsynaptic neuron as spikes arrive (see Fig. 1), may accomplish selectivity to a particular temporal pattern [7]. In such a setting, a postsynaptic neuron would be maximally activated (and fire only) if spikes arrive in the order of its synaptic weights. A small portion of extremely strong synapses observed in log-normally distributed synaptic weights [21] may enhance this progressive desensitization. Generalization of our results to an arbitrary number of neurons is a promising research extension. Another avenue of extension would be to assess the performance of rank-order coding when not all spikes arrive at postsynaptic neurons; some presynaptic neurons, for instance, may misfire. The combination of such noise with temporal noise can significantly affect performance. Nonetheless, hybrid noise would only affect quantitative results but would not change the qualitative results of this study. To conclude, rank-order coding can provide speed and efficiency, but noise imposes a trade-off between them. The results of this study offer a novel insight into the performance of rank-order coding.

Fig. 3: Performance of rank-order coding. (a) Communication efficiency. (b) Information rates. Here we plot the scaled version (\(\frac{R}{\lambda}\)) of \(R\) rather than \(R\) as it eliminates the need to display \(R\) for various combinations of \(\lambda\) and \(\alpha\). (c) The trade-off between information rates and efficiency.

## Acknowledgment

The authors would like to thank Balashwethan Chockalingam and Thomas Burns for their suggestions and comments. We are grateful for the help and support provided by the Scientific Computing and Data Analysis section of the Research Support Division at OIST. T.F.
acknowledges support from KAKENHI grants JP19H04994 and JP18H05213.
2306.16495
Event Detection from Social Media Stream: Methods, Datasets and Opportunities
Social media streams contain large and diverse amount of information, ranging from daily-life stories to the latest global and local events and news. Twitter, especially, allows a fast spread of events happening real time, and enables individuals and organizations to stay informed of the events happening now. Event detection from social media data poses different challenges from traditional text and is a research area that has attracted much attention in recent years. In this paper, we survey a wide range of event detection methods for Twitter data stream, helping readers understand the recent development in this area. We present the datasets available to the public. Furthermore, a few research opportunities are discussed as potential future research directions.
Quanzhi Li, Yang Chao, Dong Li, Yao Lu, Chi Zhang
2023-06-28T18:40:03Z
http://arxiv.org/abs/2306.16495v1
# Event Detection from Social Media Stream: Methods, Datasets and Opportunities

###### Abstract

Social media streams contain large and diverse amounts of information, ranging from daily-life stories to the latest global and local events and news. Twitter, especially, allows a fast spread of events happening in real time, and enables individuals and organizations to stay informed of the events happening now. Event detection from social media data poses different challenges from traditional text and is a research area that has attracted much attention in recent years. In this paper, we survey a wide range of event detection methods for the Twitter data stream, helping readers understand the recent development in this area. We present the datasets available to the public. Furthermore, a few research opportunities are discussed as potential future research directions.

event detection, social media, natural language processing

## I Introduction

The rapid development of social media platforms has led to an explosion of user-generated data posted on the Internet. The huge amounts of such data have enabled the study of many research problems, and event detection is one of the important topics. Twitter is a fast communication channel for spreading breaking news and events, and a good resource for detecting real-time events, such as an earthquake, bombing, or strike event. In this paper, we survey the techniques found in the literature for event detection from the Twitter data stream. We also provide the available datasets and discuss several research opportunities. Event definitions vary slightly in previous studies. McMinn et al. [66] define an event as "something significant that happens at specific time and place". Xie et al. [96] define events as "real-world occurrences that unfold over time and space", which can be described with the so-called 4Ws (what, who, when & where). In this study, we use the definition from Allan et al.
[4, 5] and the Topic Detection and Tracking (TDT) project: events are real-world occurrences that unfold over space and time, and the objective of event detection is to discover new or previously unidentified events. This definition is more general and has been used in many studies [4, 5, 90, 96, 98, 41, 42, 43, 44, 45, 87]. Event detection from conventional media has long been addressed in the TDT program. However, event detection from social media poses new challenges that are different from those in traditional media. In contrast with well-written and structured news articles, tweets are restricted in length and the textual information is very limited. The messages include large amounts of informal and abbreviated words, spelling and grammatical errors, irregular sentence structures and mixed languages. Tweets also contain large amounts of meaningless messages, spam, advertisements, and rumors [16, 37, 38, 52, 53, 8, 41, 40, 48, 49], which will negatively affect the detection algorithm's performance. Event detection from Twitter data streams involves techniques from various areas, such as natural language processing, text mining, information retrieval (IR) and social network analysis [51, 70]. In this paper, we do not provide an exhaustive review of existing methods but choose representative techniques to give readers a perspective on the main research directions. The main differences between this paper and previous surveys [59, 8, 76] on event detection from social media data are: 1. Many datasets used in the event detection studies are not available to the public. Previous survey papers do not provide a review on which datasets are available to the public. We checked previous studies to see whether the datasets they used are available to the public and introduce all the datasets that are available. 2. We provide a full review of the evaluation metrics for event detection.
None of the previous surveys has done this; they only mentioned a couple of them in their papers. 3. We also reviewed the recent studies utilizing neural networks. Previous surveys have not reviewed the approaches based on neural networks. 4. Based on the survey, our research experience and work experience with event detection applications, we provide and discuss a list of challenges and opportunities in this field, which could be potential future research topics. Though we mainly surveyed the event detection techniques on Twitter data, we think the structure we use to classify the detection methods, the evaluation metrics and the discussions on future research opportunities are also applicable to the short messages produced on other social media platforms.

## II Event Detection in Twitter Data Stream

We can review the event detection methods from different angles. Based on the event type, these methods can be classified into unspecified vs. specified event detection. In TDT, event detection can be broadly classified into two categories: new event detection (NED) and retrospective event detection (RED). NED is the discovery of new events from data streams in real time [4], and RED focuses on discovering previously undetected events from historical collections [98]. NED is also called first-story detection or novelty detection [79, 80, 81, 41, 42]. Table 1 organizes the representative detection approaches based on the event type, detection task, detection technique and their application. Since most event detection techniques are for unspecified events and NED, and most unspecified event detection methods are for NED, the following subsections focus more on unspecified event detection.

### _Unspecified Event Detection_

For unspecified events, we have no prior information about the events, so the unspecified event detection techniques rely mainly on exploiting the temporal patterns or signals from Twitter data streams.
Unspecified events of interest are usually driven by breaking news, emerging events and general topics attracting the attention of a large number of users. Fig. 1 shows the typical workflow for a NED system for unspecified events. It includes not only the detection part, but also the post-detection components, which are necessary for most event detection applications. The specified event detection workflow has a similar architecture.

1. **Preprocessing stage**. The noise filtering component is to remove spam tweets, and tweets that are basically nonsense, such as profanity, chitchat, and advertisements. The noise filter is usually built as a classifier [53, 41, 42, 87]. The metadata extraction component extracts entities or other metadata (e.g., geo-location, links, and hashtags) that might be used in later stages.
2. **Event detection stage**. Depending on the event detection type (specified, unspecified, RED, NED), the actual detection technique (e.g., clustering based, term based, retrieval based) in the detection stage may be different. The cluster defragmentation and cluster purging components are to merge relevant event clusters together and purge old events from memory; they may not be needed for some detection techniques [41, 42].
3. **Post-detection stage**. Depending on the application, the post-detection stage may need some components. For example, event summarization may be necessary for most use cases, and the newsworthiness ranking and event veracity identification (rumor detection) components will benefit news agency users [41, 53, 40, 47, 48, 49, 50, 71, 52].

We organize the unspecified event detection methods into the following categories: clustering based, term based, and neural network based. The neural network approach has overlap with others; we use a separate type for it to highlight the recent studies exploiting neural networks.
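The three-stage workflow can be illustrated end to end with a minimal sketch (ours, not any cited system's code; the keyword spam filter, the cosine threshold, and the top-terms "summary" are simplifying assumptions):

```python
from collections import Counter
import math

SPAM_WORDS = {"buy", "discount", "click"}  # toy noise-filter lexicon (assumption)

def is_noise(tokens):
    # Stage 1 (preprocessing): a crude stand-in for the noise-filter classifier
    return bool(SPAM_WORDS & set(tokens))

def cosine(a, b):
    # cosine similarity between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_events(stream, threshold=0.3):
    """Stage 2 (detection): single-pass incremental clustering.
    A tweet joins the most similar cluster centroid above `threshold`,
    otherwise it opens a new cluster (a candidate new event)."""
    centroids, clusters = [], []
    for text in stream:
        tokens = text.lower().split()
        if is_noise(tokens):
            continue
        vec = Counter(tokens)
        sims = [cosine(vec, c) for c in centroids]
        best = max(range(len(sims)), key=sims.__getitem__, default=None)
        if best is not None and sims[best] >= threshold:
            centroids[best].update(vec)  # fold the tweet into the centroid
            clusters[best].append(text)
        else:
            centroids.append(Counter(vec))
            clusters.append([text])
    # Stage 3 (post-detection): a trivial "summary" = the most frequent terms
    return [(c.most_common(3), tweets) for c, tweets in zip(centroids, clusters)]
```

In a real system each stage would be replaced by the components discussed in this section (a learned noise classifier, a scalable clustering or burst detector, and proper summarization and ranking).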
#### II-A1 Clustering Based Approaches

Due to the unpredictability and dynamicity in social media data streams, there was a tendency in previous studies to use unsupervised methods, such as clustering and tensor decomposition, for event detection [33]. Many event detection algorithms tackle the problem as a stream clustering task. Becker et al. [11] have used an incremental clustering algorithm to detect events from the Twitter stream. Petrovic et al. [2010] and Wurzer et al. [95] used Locality Sensitive Hashing (LSH) to detect and cluster events from high-volume tweet streams in constant time and space. Aggarwal and Subbian [2] proposed a stream-based clustering algorithm on each incoming post. McCreadie et al. [64] showed that K-means clustering can be successfully used for event detection. Many other approaches have utilized hierarchical or incremental clustering approaches [22, 41, 69, 67]. Corney et al. [22] proposed clustering word n-grams, Li et al. [41] proposed clustering semantic terms, and Morabia et al. [67] proposed clustering segments. Nguyen et al. [69] clustered term frequency-inverse document frequency (tf-idf) vectors after identifying candidate clusters using entity inverted indices. Fedoryszak et al. [26] represent an event as a chain of clusters over time. Their algorithm design is based on the realization that they can decompose burst detection and clustering into separate components that can be scaled independently. Wang and Zhang [92] build a joint model to filter, cluster, and summarize the tweets for new events. Online clustering-based approaches are prone to cluster fragmentation and are usually unable to distinguish between two similar events occurring around the same time [79, 66, 36].

#### II-A2 Term Based Approaches

Clustering based approach is a document-pivot technique because it relies on the tweet, which is a short document. The term-based approach is a feature-pivot technique.
It models an event in text streams as a bursty activity, with certain features rising sharply in frequency as the event emerges. The clustering based and term-based approaches can be used together. TwitInfo [61] uses a streaming algorithm to detect spikes in tweet data, and the peaks generated by high volumes of posts are considered events. TwitterMonitor [63] detects emergent topics by identifying the bursty terms within a small time window. If the system detects that high-frequency terms co-occur in many tweets in the given time window, they are placed in the same group. Similarly, enBlogue [6] computes statistical values for tag pairs within a given time window and monitors unusual shifts in the tag correlations to detect emergent topics. TopicSketch [97] detects bursty topics by relying on the concept of word acceleration. Some studies utilize anomaly detection algorithms, whose techniques are similar to term-based bursty detection. Li and Zhang [45] exploit the semantic types of event-related terms. An event is usually defined by the 4Ws questions: who, what, where and when. An event tweet usually contains terms corresponding to these aspects, and these terms can be classified into different semantic classes/types, such as entity names (who) and location (where). They also use the semantic terms for event summarization.

Fig. 1: The typical workflow for a NED system for unspecified events. * For RED, the data source will not be a live tweet stream.

Term-based approaches can often capture misleading term correlations, and measuring term correlations can be computationally prohibitive in an online setting [63, 88, 78, 45].

#### II-A3 Approaches Using Neural Network

Recent studies have applied neural network and deep learning technologies on event detection from social media [17, 18, 92, 33, 41, 42, 68, 33, 15, 43, 45].
The embeddings, such as word embeddings or tweet embeddings, learned from text via neural network technologies capture the semantic and syntactic regularities in the text. They alleviate the vocabulary mismatch problem that exists in traditional event detection approaches. Deep networks also help us learn the latent information embedded in text. Chen et al. [18] proposed a deep neural network-based approach where tweets were converted into fixed length vectors using pretrained GloVe embeddings [77], which is then used for tweet clustering. Wang and Zhang [92] build a joint model to filter, cluster, and summarize the tweets for new events. Tweet representation built from Long Short-Term Memory is shared among filtering, clustering, and summarization. Hettiarachchi et al. [33] propose a novel method termed Embed2Detect by combining the characteristics in word embeddings and hierarchical clustering. We expect more studies utilizing neural networks will appear. Cao et al. [15] propose a novel Knowledge-Preserving Incremental Heterogeneous Graph Neural Network (KPGNN) for incremental social event detection. To acquire more knowledge, KPGNN models complex social messages into unified social graphs to facilitate data utilization. To continuously adapt to the incoming data, KPGNN adopts contrastive loss terms that cope with a changing number of event classes. To deal with large social streams, KPGNN periodically removes obsolete data to maintain a dynamic embedding space. The drawback with deep neural networks is that they may have speed issues when used for online NED if the network structure is very complex.

### _Specified Event Detection_

Specified event detection includes known or planned social events, or events related to some specific topics. These events could be partially or fully specified using related content or metadata information, such as location, venue, or keywords. The detection task could be NED or RED.
The techniques used can be IR-based or the ones described in the previous section, such as the term-based approach. Because the events are specified and it is easier to build training data than for unspecified events, many specified event detection approaches use supervised detection algorithms [86, 83, 84, 14, 22, 65, 24]. For example, Sakaki et al. [86] formulated event detection as a classification problem and trained an SVM classifier to detect earthquake and typhoon events.

### _NED and RED_

Depending on the task and the type of event, event detection in Twitter can also be classified into RED and NED. Because NED techniques involve continuous monitoring of Twitter data for discovering new events in near real time, they are naturally suited for detecting unknown real-world events and breaking news [79, 80, 11, 95, 41, 33, 45, 54]. NED techniques can also be used for specified event detection, although most studies focus on unspecified events. When the task involves specific events (e.g., disasters, crimes, sports) or specific information about the event (e.g., a specific organization, person, or location), this information could be integrated into the NED methods by using classification or filtering techniques [86, 83, 37, 65, 22, 88]. While most research focused on NED to exploit the timely information provided by social data streams, there is also interest in RED from historical data [62, 65, 14, 11]. For RED, because in most cases we already have the whole data collection, traditional IR-based methods can be exploited. Most methods for NED can also be utilized for RED with just small changes [8]. Twitter provides limited search capabilities that allow retrieving tweets, so some RED tasks are conducted by searching old tweets from Twitter. Vocabulary mismatch is a problem in this case, since Twitter does not provide embedding search.
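To make the feature-pivot idea from Sec. II-A2 concrete before moving on, here is a minimal burst-detection sketch (our illustration; the ratio test and the `min_count`/`burst_ratio` thresholds are assumptions, not any cited system's statistic):

```python
from collections import Counter

def bursty_terms(history_windows, current_window, min_count=3, burst_ratio=3.0):
    """Flag terms whose count in the current time window exceeds
    `burst_ratio` times their mean count over the past windows.
    Each window is a list of tokenized tweets (lists of strings)."""
    hist = Counter()
    for window in history_windows:
        for tokens in window:
            hist.update(tokens)
    n_hist = max(len(history_windows), 1)
    cur = Counter()
    for tokens in current_window:
        cur.update(tokens)
    bursts = []
    for term, count in cur.items():
        baseline = hist[term] / n_hist  # mean count per past window
        # max(baseline, 1.0) damps terms that were never seen before
        if count >= min_count and count > burst_ratio * max(baseline, 1.0):
            bursts.append((term, count, baseline))
    return sorted(bursts, key=lambda b: -b[1])
```

A production system would use a more principled statistic (e.g., a z-score or acceleration, as in TopicSketch) and would group co-occurring bursty terms into events, but the core signal is the same.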
## III Datasets

Evaluation datasets are important for comparing and evaluating the effectiveness of different event detection approaches. One issue with event detection from the Twitter stream is that many datasets used by the studies are not available to the public, and different studies used different datasets. Therefore, it is hard to say which approaches have the state-of-the-art performance. We collected a list of Twitter datasets that are available to the public. Below are brief introductions and links for the datasets that are available to the public.

* Dataset 1: This dataset covers three topics: FA Cup Final, Super Tuesday for US Elections, and US Elections [3]. They have 13, 8 and 26 events, respectively. The tweets were collected from Nov 2012. http://socialsensor.it.gr/results/datasets
* Dataset 2: This one consists of 41 events and 671K tweets posted within the area of Manhattan, NYC during 12/2014 [17]. The events are about general topics. https://dl.acm.org/doi/10.1145/3332185
* Dataset 3: events about the English Premier League match on October 20, 2019 between Manchester United and Liverpool, and BrexitVote - events about Brexit Super Saturday 2019 on October 19, 2019 [33]. https://github.com/hansi/twitter-event-data-2019
* Dataset 4: this one has two sub datasets, one from the earthquake domain and another from the DDoS attack domain [92]. The tweets were collected from June 2013 to April 2016. https://github.com/wangzq870305/joint_event_detection
* Dataset 5: this one consists of 27 topics and 116K tweets from April till September 2011 [80, 81, 95].
https://era.ed.ac.uk/handle/1842/7612
* Dataset 6: Inouye and Kalita [34] collected the top trending topics from Twitter for the year of 2011, and finally got 50 trending topics with a total set of 75K tweets. https://ieeexplore.ieee.org/document/6113128
* Dataset 7: this corpus is provided by McMinn et al. [66]. The tweets were collected from Dec. 2012. It has 506 events on different general topics containing over 150K relevant tweets. The problem with this dataset is that it contains only tweet ids, and the majority of these tweets cannot be downloaded from Twitter since they are not available anymore. http://mir.dcs.gla.ac.uk/resources/

## IV Evaluation Metrics

To evaluate the quality of the detected events, various metrics have been used in previous studies, summarized below.

### _Normalized Topic Weighted Minimum Cost (Cmin)_

This metric is from the TDT program [5] and has been used by several studies [80, 81, 92, 95]. _Cmin_ is a linear combination of miss and false alarm probabilities, which allows comparing different methods based on a single value metric. Computing _Cmin_ needs several equations, and we skip them here due to the space limit. See [5] for more details. The TDT project assumed that the documents come from a noiseless stream, such as newswire, which means that all the documents in the stream are considered newsworthy. As a result, evaluation based on _Cmin_ has ignored precision and focused instead only on miss and false alarm rates. However, the social media stream is very noisy, which means that _Cmin_ is no longer a good metric here. To get a complete picture of the effectiveness of an event detection approach, we should measure both recall and precision, described below.

### _Precision, Recall and F-measure_

These three metrics could be used if a labeled dataset is used to evaluate the performance of an algorithm.
An event recall is the percentage of ground-truth events successfully detected by a method. A ground-truth event is considered successfully detected if there exists a predicted event that matches a certain number of tweets or terms (the threshold varies by study). Precision is defined as the percentage of ground-truth events among the generated events. The F measure is the harmonic mean of precision and recall. Many previous studies [3, 17, 80, 85, 82] have used some or all of these three metrics. The issue with event-cluster-level precision, recall and F is that they cannot measure the cohesiveness within a cluster. To overcome this drawback, we suggest using the following two measures: NMI and B-Cubed.

### _Normalized Mutual Information (NMI)_

NMI [60, 89] and B-Cubed [Amigo et al., 2008] have been used in previous studies on general and social media message clustering [12, 93, 26, 41, 42, 9]. We chose them because both metrics balance the desired clustering properties: to maximize the homogeneity of events within each cluster, and to minimize the number of clusters that tweets of each event spread across. NMI is an information-theoretic metric that was originally proposed as the objective function for cluster ensembles. It measures how much information is shared between the actual ground-truth events (each with an associated tweet set) and the clustering assignment. More details are in [89].

### _B-Cubed_

B-Cubed [9] estimates the precision and recall associated with each tweet in the dataset individually, and then uses the average precision \(P_{b}\) and average recall \(R_{b}\) values for the dataset to compute B-Cubed: \[\textit{B-Cubed}=\frac{2\,P_{b}\,R_{b}}{P_{b}+R_{b}} \tag{1}\] For each tweet, precision is defined as the proportion of tweets in its cluster that correspond to the same event, and recall is the proportion of tweets that correspond to the same event which are also in the tweet's cluster.
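Eq. 1 is straightforward to implement; a minimal sketch (ours), where `gold` maps each tweet id to its ground-truth event and `pred` maps it to its detected cluster:

```python
def b_cubed(gold, pred):
    """B-Cubed F score (Eq. 1). `gold` maps each tweet id to its
    ground-truth event; `pred` maps it to a detected cluster."""
    items = list(gold)
    p_sum = r_sum = 0.0
    for i in items:
        same_cluster = [j for j in items if pred[j] == pred[i]]
        same_event = [j for j in items if gold[j] == gold[i]]
        correct = sum(1 for j in same_cluster if gold[j] == gold[i])
        p_sum += correct / len(same_cluster)  # per-tweet precision
        r_sum += correct / len(same_event)    # per-tweet recall
    p_b, r_b = p_sum / len(items), r_sum / len(items)
    return 2 * p_b * r_b / (p_b + r_b) if p_b + r_b else 0.0
```

Note how the degenerate "everything in one cluster" solution keeps perfect recall but is penalized on per-tweet precision, which is exactly the cohesiveness property that event-cluster-level P/R/F misses.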
## V Challenges and Opportunities

Social media posts are short, noisy, and unstructured, and their volume is huge. These challenges have been well discussed in previous studies. Based on the review of related studies, our research experience and our direct work experience with real-world applications and their stakeholders, such as news agencies, public safety offices and big corporations, we present the following challenges and opportunities in this field, which could be potential future research topics. Due to the space limitation, we just briefly discuss them.

### _Event Evolution Stages_

An event may evolve and develop into multiple stages. For example, a bombing attack event may have a few stages: the bombing incident, pursuit of the suspect, the arrest of the suspect, and sentencing of the suspect. Depending on the application and the preferred granularity level, these stages may also be considered as different but related events. As the event evolves, the terms used to describe the event may also gradually change. Take the tsunami in Japan that occurred in 2011 as an example: initially, the event is dominated by keywords like "earthquake" and "tsunami", but later words such as "nuclear" and "radiation" are introduced. Clearly identifying the development stages of an event will help us analyze, understand, present, and organize the event. There are some explorations on this topic [75, 26], but given the importance and challenge of this problem, how to identify these evolution stages and connect them together has not attracted enough research attention.

### _Multi-task Learning_

Studies already show that joint learning can improve performance on tasks that are related or share some common information [92, 44, 56, 48]. In the event detection workflow, depending on the applications, the following tasks might be involved: event detection, entity extraction, event summarization, topic classification, rumor detection, and novelty detection.
One future research direction is to explore multi-task learning techniques on these tasks. These tasks share some information, and some of them are also interdependent. We expect joint learning will benefit at least some of them, as initially demonstrated by [92]. The recent advances in neural network and deep learning technologies will also help this exploration.

### _Temporal Information Identification_

In social media, users may talk about any event; some events may be days, months, or even years old, e.g., a discussion about an event occurring in World War II. A real-time novel event detection system is only interested in events that are happening now or just happened a short time ago. To filter out the old events, we need to identify the temporal information in a cluster's tweets and use that information to determine whether the event is a new one. When we say an event is an "old" event, it may have different meanings in different use cases or applications. For many events, we need to explicitly extract the temporal information from their tweets to decide whether they are old events or not; the traditional novelty detection techniques may not work for this case. Li et al. (2017) and Li and Zhang (2021) identify temporal information and use it as one semantic type in their clustering algorithms. But they did not explicitly address the issue mentioned above.

### _Event Witness Identification_

Social media has provided citizen journalism with an unprecedented scale, and access to a real-time platform, where once passive witnesses can become active and share their eyewitness testimony with the world, including with journalists who may choose to publicize their report. Identifying witness accounts is important for rumor debunking, crisis management, and basically any task that involves on-the-ground eyes.
Witness identification involves analyzing the tweet text, the location information in the user profile, and the messages posted before and after the message about the event. Fang et al. (2015) use n-grams and traditional classifiers to identify witness accounts of an event; the proposed method has been used to debunk rumors on social media [53, 41, 42]. This area is under-researched, and methods exploiting neural networks may be beneficial.

### _Multimodal Event Detection_

Currently, most event detection studies on Twitter focus on text content, but more and more social media messages contain images, video, voice, or links. With advances in video, voice, and image analysis, multi-modality algorithms have been utilized in other applications and have shown success [53, 47]. One promising research direction is to exploit the multimedia information in event detection on Twitter. References [10] and [100] have explored this direction, but only in a narrow domain. As in the case of multi-task learning, the recent advances in neural network technologies will help this research direction.

### _Event Popularity Prediction_

Many events develop gradually, unnoticed at their early stages, and finally evolve into events with big impact. Detecting an event when it is already spreading or going viral is not hard. One challenging and important task is to predict an event's popularity, so that the related parties can get alerts earlier and get prepared or act before it causes serious damage. One issue with most current detection approaches is that when an event has not evolved for some time, that event may be removed from the radar of the system, usually due to computing resource constraints, but it may later become a big event. Including the popularity prediction ability in the whole event detection workflow will help. Popularity prediction involves not just the textual information, but also network propagation and social media user profile information.
Gupta et al. [29] use regression classification with social and event features to predict event popularity, while Chen et al. [20] use just hashtags. One interesting direction would be to explore both neural network models and a large set of multimodal features.

### _Rumor Detection and Event Detection Integration_

Early rumor detection aims to detect a rumor at its early stage, before it spreads widely on social media, so that one can take appropriate actions earlier. Early detection is especially important for a real-time system, since the more a rumor spreads, the more damage it causes [40, 41, 47, 48, 54, 52]. Currently, rumor detection and event detection are two separate tasks: after an event is detected, we then assess the veracity of that event. One challenging research direction is to detect the event and the rumor jointly, so that we can identify the veracity of the event as early as possible. Much information can be shared by these two tasks, such as the extracted entities, user information, and network propagation information. We think this will be an interesting and challenging research topic, and a good solution will have a very big impact on the rumor detection field.

### _Cross-platform and Cross-language_

Most previous studies on event detection on social media focus on only one specific social media platform. A solution that can detect and link events on different platforms will provide at least two benefits: 1. The same event may have different burst or propagation velocities and characteristics on different platforms. The knowledge about the event gained from one platform may help us detect and analyze it on another platform. 2. For the same event, user responses and opinions may differ across platforms. A cross-platform solution may help related parties gain a deep understanding and full picture of people's responses to an event, such as a public safety event.
Cross-language event detection and analysis has become much more attractive in recent years, since nowadays more events have worldwide effects, such as events about finance, politics, and public health crises. As in the cross-platform case, a cross-language solution will also benefit both the detection and the understanding of an event. Liu et al. [57] unify multilingual sources into the same language, then detect events by merging identical entities and similar phrases, and present multiple similarity measures using a word2vec model. For multilingual event detection, this study translates different languages into one, and does not do much more than that.

## VI Conclusion

In this paper, we survey a wide range of event detection methods for the Twitter data stream and present a list of datasets that are available to the public. A few research opportunities are also discussed, which could be potential future research directions.
2302.04566
Pointwise Kan extensions along 2-fibrations and the 2-category of elements
We study the 2-category of elements from an abstract point of view. We generalize to dimension 2 the well-known result that the category of elements can be captured by a comma object that also exhibits a pointwise left Kan extension. For this, we propose an original definition of pointwise Kan extension along a discrete 2-opfibration in the lax 3-category of 2-categories, 2-functors, lax natural transformations and modifications. Such definition uses cartesian-marked lax limits, which are an alternative to weighted 2-limits. We show that a pointwise Kan extension along a discrete 2-opfibration is always a weak one as well. The proof is based on an original generalization of the parametrized Yoneda lemma which is as lax as it can be.
Luca Mesiti
2023-02-09T11:04:31Z
http://arxiv.org/abs/2302.04566v2
# The 2-Set-enriched Grothendieck construction

###### Abstract.

We study in detail an extended version of the Grothendieck construction, that corresponds to a 2-categorical generalization of the construction of the category of elements. After refining the universal property of the lax comma object to suit the lax 3-categorical ambient in which we need to work, we describe how we can think of this extended Grothendieck construction as the archetypal 3-dimensional classification process, in the sense of a would-be 3-dimensional elementary topos theory. We show a new, more intuitive and elementary proof of the reduction of the weighted 2-limits to essentially conical ones, and use a generalized version of the latter to propose an original definition of pointwise Kan extension in the weakly enriched context that hosts the studied Grothendieck construction. We then present a pointwise Kan extension result, and we conclude by proving that a pointwise Kan extension as defined here is always a weak one as well, after showing a lax but not too lax generalization of the parametrized Yoneda lemma.

Key words and phrases: Grothendieck construction, elementary topos, lax, limits, Kan extension, 2-categories, category of elements, lax comma, classifier, 3-categories, enrichment, fibrations, conicalization, parametrized Yoneda lemma, weighted limits

In dimension 1, the construction of the category of elements, that we shall think of as the \(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction, is well known; we recall its \(2\)-dimensional analogue, defined explicitly by Street in [15]. **Definition 1.1**.: _Let \(F\colon\mathcal{B}\to\mathcal{C}\!\mathcal{A}T\) be a \(2\)-functor. The \(2\)-category of elements \(\int^{\mathrm{op}}\!F\) of \(F\) is given as follows: an object is a pair \((B,X)\) with \(B\in\mathcal{B}\) and \(X\in F(B)\);_
a morphism \((B,X)\to(C,X^{\prime})\) in \(\int^{\mathrm{op}}\!F\) is a pair \((f,\alpha)\) with \(f\colon B\to C\) a morphism in \(\mathcal{B}\) and \(\alpha\colon F(f)(X)\to X^{\prime}\) a morphism in \(F(C)\); _a \(2\)-cell \((f,\alpha)\Rightarrow(g,\beta)\colon(B,X)\to(C,X^{\prime})\) is a \(2\)-cell \(\delta\colon f\Rightarrow g\) in \(\mathcal{B}\) such that \(\alpha=\beta\circ F(\delta)_{X}\)._ In dimension \(1\), it is then known that the construction of the category of elements can be captured in a more abstract way, which is particularly interesting for the (\(2\)-dimensional) elementary topos theory. **Proposition 1.2**.: _Let \(F\colon\mathcal{B}\to\mathcal{S}\!\mathfrak{e}t\) be a copresheaf. The construction of the category of elements of \(F\) is equivalently given by the comma object_ (1) _As a direct consequence, it is also given by the pullback between \(F\) and the lax limit of the arrow \(1\colon\boldsymbol{1}\to\mathcal{S}\!\mathfrak{e}t\), where the latter coincides with the forgetful functor \(\tau\colon\mathcal{S}\!\mathfrak{e}t_{\bullet}\to\mathcal{S}\!\mathfrak{e}t\) from pointed sets to sets._ According to Weber's [16], Proposition 1.2 shows the construction of the category of elements as the archetypal \(2\)-dimensional classification process, exhibiting \(\mathcal{S}\!\mathfrak{e}t\) as the canonical (\(2\)-dimensional) universe for the \(2\)-category \(\mathcal{C}\!\mathcal{A}T\). What gets classified are precisely the discrete opfibrations with small fibres. Weber actually only considered the point of view of the second part of Proposition 1.2, but we prefer the first, that of equation (1), as it shows the singleton as the verum inside the category \(\mathcal{S}\!\mathfrak{e}t\) of generalized truth values. Equation (1) also makes it clear that the passage from the \(1\)-dimensional elementary topos theory to the \(2\)-dimensional one is obtained by upgrading the classification by pullbacks to one by comma objects.
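To make the construction concrete in dimension \(1\), here is a small executable sketch (toy data invented for illustration; not from the paper), enumerating the category of elements of a finite copresheaf \(F\colon\mathcal{B}\to\mathcal{S}\!\mathfrak{e}t\), where the base \(\mathcal{B}\) is the walking arrow and a morphism \((B,X)\to(C,X')\) is a morphism \(g\colon B\to C\) with \(F(g)(X)=X'\):

```python
# Toy copresheaf F: B -> Set on the walking-arrow base B = {b0 --f--> b1}.
# All names (b0, b1, f, x, y, z) are invented illustrative data.

B_objects = ["b0", "b1"]
# Hom-sets of B, keyed by (source, target); only identities and f
B_morphisms = {("b0", "b0"): ["id"], ("b1", "b1"): ["id"], ("b0", "b1"): ["f"]}

# F on objects, and F on the generator f (both x and y are sent to z)
F_obj = {"b0": {"x", "y"}, "b1": {"z"}}
F_f = {"x": "z", "y": "z"}

def F_apply(g, elem):
    """Apply F(g) to an element of the source fibre."""
    return elem if g == "id" else F_f[elem]

# Objects of the category of elements: pairs (B, X) with X in F(B)
elements_objects = [(b, x) for b in B_objects for x in sorted(F_obj[b])]

# Morphisms (B, X) -> (C, X'): morphisms g: B -> C in B with F(g)(X) = X'
elements_morphisms = [((b, x), g, (c, x2))
                      for (b, c), gs in B_morphisms.items()
                      for g in gs
                      for x in sorted(F_obj[b])
                      for x2 in sorted(F_obj[c])
                      if F_apply(g, x) == x2]
```

The projection onto the first component is then the discrete opfibration \(\mathcal{G}(F)\colon\int^{\mathrm{op}}\!F\to\mathcal{B}\); in the \(2\)-categorical Definition 1.1 the equality \(F(g)(X)=X'\) is relaxed to a morphism \(\alpha\colon F(f)(X)\to X'\).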
As a consequence, we end up classifying discrete opfibrations, since they are (defined by representability from) what is classified in the archetypal elementary \(2\)-topos \(\mathcal{C}\!\mathcal{A}T\). We are interested in understanding a general pattern, for example from an enriched point of view. Of course it is natural to consider \(1\colon\boldsymbol{1}\to\mathcal{S}\!\mathfrak{e}t\) as the verum truth value, and this has an immediate generalization to the general enriched setting, but we would like to understand the reason why we should take the comma object to regulate the classification process in dimension \(2\). What we believe is the deep reason behind this is that the comma objects are the archetypal example of _exact square in \(\mathcal{C}\!\mathcal{A}T\)_ (Definition 1.3), with the notable consequence that comma objects manage to express every copresheaf as a pointwise left Kan extension of the constant copresheaf at \(1\) (Theorem 1.4). **Definition 1.3**.: An _exact square in \(\mathcal{C}\!\mathcal{A}T\)_ is a diagram in \(\mathcal{C}\!\mathcal{A}T\) such that, for every category \(\mathcal{M}\), the associated Beck-Chevalley natural transformation given by the pasting is an isomorphism, where \(\operatorname{Lan}_{p}\) and \(\operatorname{Lan}_{q}\) denote the pointwise left Kan extensions respectively along \(p\) and along \(q\), and \(\epsilon^{p}\) and \(\eta^{q}\) are respectively the counit of the adjunction formed by the first one and the unit of the adjunction formed by the second one. **Theorem 1.4**.: _Let \(F\colon\mathcal{B}\to\mathcal{S}\!\mathfrak{e}t\) be a copresheaf. Then_ \[F=\operatorname{Lan}_{\mathcal{G}(F)}\Delta 1\] _where \(\Delta 1\colon\int^{\operatorname{op}}\!F\to\mathcal{S}\!\mathfrak{e}t\) is the functor constant at \(1\)._ Proof.: This proof does not seem to appear in the literature, but might be folklore.
By Proposition 1.2, the construction of the category of elements can be expressed as a comma object, forming thus an exact square. Considering \(\mathcal{M}=\mathcal{S}\!\mathfrak{e}t\) in the definition of exact square, the component of the associated Beck-Chevalley transformation on the copresheaf \(1\colon\boldsymbol{1}\to\mathcal{S}\!\mathfrak{e}t\) exhibits \[\operatorname{Lan}_{\mathcal{G}(F)}\Delta 1\cong\operatorname{Lan}_{1}1 \circ F.\] And since \(1\colon\boldsymbol{1}\to\mathcal{S}\!\mathfrak{e}t\) is dense, we have that \(\operatorname{Lan}_{1}1=\operatorname{Id}_{\mathcal{S}\!\mathfrak{e}t}\) (a reference for this basic result on density is Kelly's [11]). **Remark 1.5**.: Theorem 1.4 (together with Proposition 1.2) implies a huge portion of the theory around the construction of the category of elements, including the canonical extension of the construction to a functor \(\mathcal{G}(-)\colon\left[\mathcal{B},\mathcal{S}\!\mathfrak{e}t\right]\to \mathcal{C}\!\mathfrak{a}t\left/\mathcal{B}\right.\), the full faithfulness of \(\mathcal{G}(-)\), the stability under pullback of the discrete (op)fibrations with small fibres (that form the essential image of \(\mathcal{G}(-)\)) and the conicalization of the weighted \(\mathcal{S}\!\mathfrak{e}t\)-enriched limits. This does not seem to appear in the literature, but might be folklore. For example, the extension of the construction of the category of elements to a fully faithful functor \(\mathcal{G}(-)\colon\left[\mathcal{B},\mathcal{S}\!\mathfrak{e}t\right]\to \mathcal{C}\!\mathfrak{a}t\left/\mathcal{B}\right.\) can be obtained by the chain of isomorphisms \[\left[\mathcal{B},\mathcal{S}\!\mathfrak{e}t\right](F,G)\cong\left[\int^{ \operatorname{op}}\!F,\mathcal{S}\!\mathfrak{e}t\right]\left(\Delta 1,G\circ\mathcal{G}(F)\right)\cong\mathcal{C}\!
\mathfrak{a}t\left/\mathcal{B}\right.\left(\int^{\operatorname{op}}\!F,\int^ {\operatorname{op}}\!G\right)\] (for \(F\) and \(G\) arbitrary), where the first isomorphism is given by \(F=\operatorname{Lan}_{\mathcal{G}(F)}\Delta 1\) and the second one is given by the universal property of the comma object \(\int^{\operatorname{op}}\!G\). The **second** (Theorem 3.11) and **third** (Theorem 4.17) **main results** of this paper can be condensed in the following theorem, that gives a categorification of both Proposition 1.2 and Theorem 1.4. Exactly as its \(1\)-dimensional analogue yields a lot of useful applications (see Remark 1.5), so can this result. **Theorem 1.6**.: _Let \(F\colon\mathcal{B}\to\mathcal{C}\mathcal{A}\mathcal{T}\) be a \(2\)-functor. The \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction is equivalently given by the lax comma object \((\)with a new universal property, described in Definition 3.8, that refines both the ones given by Gray in [6] and by Lambert in [12]\()\)._ _Furthermore, this square \((\)filled with a lax normal natural transformation\()\) exhibits \(F\) as the pointwise left Kan extension of \(\Delta 1\) along \(\mathcal{G}\left(F\right)\) in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\)\((\)it lives in a tridimensional world that admits the lax natural transformations, see Remark 3.3\()\), where this concept of pointwise Kan extension is defined originally in this paper in Definition 4.15, using the concept of \((\)not necessarily conical anymore\()\) oplax normal \(2\)-colimit that we introduce in Section 2._ \[F=\operatorname{Lan}_{\mathcal{G}\left(F\right)}\Delta 1.\] Theorem 1.6 shows in which sense the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-_enriched Grothendieck construction_ can be thought of as the archetypal \(3\)-dimensional classification process, in the sense of a would-be tridimensional elementary topos theory.
What gets classified are the _split \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-opfibrations_ with small fibres (introduced in Lambert's [12] as _discrete \(2\)-fibrations_). Analogously to how the classification by pullbacks (in dimension \(1\)) is upgraded to one by comma objects to obtain the notion of \(2\)-dimensional elementary topos (see above), we believe that it should then be upgraded to one by lax comma objects (defined as in Definition 3.8), in order to reach a definition of _tridimensional elementary topos_. It is interesting to notice that we have to move out of \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}\) (see Remark 3.3) in order to capture the laxness that permeates the Grothendieck construction. So the archetypal elementary \(3\)-topos seems to be \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\), inscribed in a sequence \[\mathcal{S}\!\mathfrak{e}t\quad\leadsto\quad\mathcal{C}\mathcal{A}\mathcal{T}\quad\leadsto\quad 2\text{-}\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\] that we believe is best explained by what we call a \(2\)-\(\mathcal{V}\)_-enrichment_ (Definition 3.19): \[\mathcal{V}\quad\leadsto\quad\mathcal{V}\text{-}\mathcal{C}\mathcal{A} \mathcal{T}\quad\leadsto\quad 2\text{-}\mathcal{V}\text{-}\mathcal{C}\mathcal{A} \mathcal{T}\] (whence comes the name we give to the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)_-enriched Grothendieck construction_). The **fourth main result** of this paper (Proposition 4.22) is that a _pointwise left Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\)_ (Definition 4.15) along a \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)_-opfibration_ (Definition 3.23) is always also a _weak left Kan extension_ (Definition 4.1). This further justifies and explains the concept of pointwise Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\) that we propose here.
This result is based on a new \(\operatorname{oplax}^{\mathrm{n}}\)-lax generalization of the parametrized Yoneda lemma, presented in Theorem 4.20, that shows how the oplax normal naturality is the minimum amount of strictness required to expand the data on the identities to a lax natural transformation. **Outline of the paper.** In Section 2, we produce, simultaneously (Construction 2.5), both the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)_-enriched Grothendieck construction_ and the _lax normal conical \(2\)-limits_. We show with new proofs that the lax normal conical \(2\)-limits and the weighted \(2\)-limits give equivalent theories (Theorem 2.18, Theorem 2.14) and consider a colimit version as well. In Section 3, we refine the universal property of the lax comma object to suit a lax \(3\)-categorical ambient (Definition 3.8) and we conceive the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction as the archetypal \(3\)-dimensional classifier (Theorem 3.11). We then inscribe this in a weak enrichment idea (Remark 3.14). In Section 4, we present and apply a pointwise Kan extension result for the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction (Theorem 4.17), after giving an original definition of _pointwise Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\)_ (Definition 4.15). We conclude by showing that a pointwise Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\) is always a weak one as well (Proposition 4.22), using an \(\operatorname{oplax}^{\mathrm{n}}\)-lax generalization of the parametrized Yoneda lemma (Theorem 4.20). ## 2.
(Op)lax normal conical \(2\)-(co)limits

In this section, we show how the wish of giving an essential solution to the problem of conicalization of the weighted \(2\)-limits produces, simultaneously, both the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-_enriched Grothendieck construction_, thus justifying the explicit definition of Definition 1.1 that Street gave in [15], and the _lax normal conical \(2\)-limits_, that are a particular case of a \(2\)-dimensional limit introduced by Gray in [6]. This leads to the first main result of this paper (Theorem 2.18), which is a new, more intuitive and elementary proof of the fact (first proved by Street in [15]) that every weighted \(2\)-limit can be reduced to a _lax normal conical_ one - that is, one of conical form but with coherent \(2\)-cells inside - using the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-_enriched Grothendieck construction_. These two kinds of \(2\)-limit give equivalent theories, as it is also true that a _lax normal conical \(2\)-limit_ is a weighted one - proved here with a new, more explicit proof, or originally by Street in [15] - but _lax normal conical_ ones are in many situations simpler to use. Indeed one can sometimes handle \(2\)-cells inside the cones quite easily, but really needs to have one selected universal cone rather than a bunch of cones forming a cylinder. The following idea may be kept in mind to understand the differences between these two kinds of \(2\)-limit. When one moves to dimension \(2\), the conical limits do not suffice anymore, as the functors from the terminal \(\boldsymbol{1}\) to a category \(\mathcal{C}\) cannot capture the whole of \(\mathcal{C}\), but just its objects.
A solution to this problem is to consider at least also the functors from \(\boldsymbol{2}\) (or all the functors from anything to \(\mathcal{C}\)), in order to capture the morphisms of \(\mathcal{C}\) as well; this leads to the concept of weighted \(2\)-limit. But there is another solution, that is considering the functors from the terminal \(\boldsymbol{1}\) to \(\mathcal{C}\) and now also the natural transformations between them; this leads towards the concept of _lax normal conical \(2\)-limit_. We then notice that, in order to essentially conicalize the weighted \(2\)-colimits, it is more natural to use the _oplax normal conical \(2\)-colimits_. It is the (not necessarily conical anymore) _oplax normal \(2\)-colimits_ that we will use in Section 4 to propose a notion of colimit in a \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-_category_ and with that one of _pointwise left Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}\mathcal{T}_{\mathrm{lax}}\)_. **Recall 2.1**.: Let \(\mathcal{V}\) be a complete and cocomplete symmetric closed monoidal category. Let \(F\colon\mathcal{A}\to\mathcal{C}\) be a \(\mathcal{V}\)-functor with \(\mathcal{A}\) a small \(\mathcal{V}\)-category, and let \(W\colon\mathcal{A}\to\mathcal{V}\) be another \(\mathcal{V}\)-functor (called the _weight_, in place of the classical constant at \(1\) functor \(\Delta 1\), which no longer suffices).
The \(\mathcal{V}\)_**-limit of \(F\) weighted by \(W\)**_, denoted as \(\lim^{W}\!F\), is (if it exists) an object \(L\in\mathcal{C}\) together with an isomorphism in \(\mathcal{V}\) \[\mathcal{C}\left(U,L\right)\cong\left[\mathcal{A},\mathcal{V}\right]\left(W,\mathcal{C}\left(U,F(-)\right)\right) \tag{2}\] \(\mathcal{V}\)-natural in \(U\in\mathcal{C}^{\mathrm{op}}\), where \(\left[\mathcal{A},\mathcal{V}\right]\) is the \(\mathcal{V}\)-category of \(\mathcal{V}\)-copresheaves on \(\mathcal{A}\) valued in \(\mathcal{V}\) enriched over itself. (For example, if \(\mathcal{V}=\mathcal{C}\!\mathcal{A}T\), then \(\left[\mathcal{A},\mathcal{C}\!\mathcal{A}T\right]\) is the \(2\)-category of
2-functors, 2-natural transformations and modifications from \(\mathcal{A}\) to \(\mathcal{C}\mathcal{A}\mathcal{T}\).) Notice that, when \(\lim^{W}F\) exists, the isomorphism of equation (2) with \(U=L\) gives us in particular a \(\mathcal{V}\)-natural transformation \[\lambda\colon W\Rightarrow\mathcal{C}\left(L,F(-)\right)\] by considering the identity on \(L\); this \(\lambda\) is called the _universal cylinder_. Given instead \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{V}\) two \(\mathcal{V}\)-functors with \(\mathcal{A}\) small, the \(\mathcal{V}\)_-colimit of \(F\) weighted by \(W\)_, denoted as \(\operatorname{colim}^{W}F\), is defined by the universal property \[\mathcal{C}\left(\operatorname{colim}^{W}F,U\right)\cong\left[\mathcal{A}^{ \mathrm{op}},\mathcal{V}\right]\left(W,\,\mathcal{C}\left(F(-),U\right)\right)\] \(\mathcal{V}\)-natural in \(U\in\mathcal{C}\). And in place of \(\lambda\) we find the _universal cocylinder_ \[\mu\colon W\Rightarrow\mathcal{C}\left(F(-),\operatorname{colim}^{W}\!F\right).\] **Recall 2.2**.: Although the classical constant at 1 weight \(\Delta 1\), called the _conical weight_, no longer suffices in the general enriched setting, we pay attention to when a weighted limit can be _reduced to a conical one_ (also said _conicalized_), i.e. reduced to one weighted by \(\Delta 1\). It is well known (see Kelly's [11]) that in the \(\mathcal{S}\!\mathfrak{e}t\)-enriched setting every weighted limit can be conicalized, using the construction of the category of elements (that we shall think of as the \(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction).
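As a reminder of the \(\mathcal{S}\!\mathfrak{e}t\)-enriched case (a standard computation, recalled here for orientation rather than taken from this paper): a cylinder \(W\Rightarrow\mathcal{C}(U,F(-))\) picks, for every \(A\in\mathcal{A}\) and every element \(x\in W(A)\), a morphism \(U\to F(A)\), naturally in both, which is precisely a cone on \(F\) restricted along the projection from the category of elements, so that

```latex
\lim{}^{W}F \;\cong\; \lim_{(A,x)\in\int^{\mathrm{op}}W} F(A)
        \;=\; \lim\big(F\circ\mathcal{G}(W)\big),
```

with \(\mathcal{G}(W)\colon\int^{\mathrm{op}}\!W\to\mathcal{A}\) the projection onto the first component.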
A known strategy to show this (used by Kelly in [11]) is to show that the particular weighted \(\mathcal{V}\)-colimits \[W\cong\operatorname{colim}^{W}\mathrm{y},\] for \(W\colon\mathcal{A}\to\mathcal{V}\) an enriched presheaf with \(\mathcal{A}\) small and \(\mathrm{y}\colon\mathcal{A}^{\mathrm{op}}\to\left[\mathcal{A},\mathcal{V}\right]\) the Yoneda embedding (that are immediately seen to hold by the Yoneda lemma), are conicalizable in a nice way for \(\mathcal{V}=\mathcal{S}\!\mathfrak{e}t\), and deduce that then every weighted \(\mathcal{S}\!\mathfrak{e}t\)-limit is conicalizable using the lemma of continuity of a limit in its weight, that we recall here below. The idea is that if we manage to write any weighted limit in the left hand side of equation (3) with \(W=\Delta 1\), we obtain in the right hand side its conicalization, provided that \(H\) is nice enough (that is, similar enough to the Yoneda embedding) to make the limit \(\lim^{H(-)}F\) simplify. **Lemma 2.3** (Continuity of a limit in its weight).: _Let \(W\colon\mathcal{B}^{\mathrm{op}}\to\mathcal{V}\), \(H\colon\mathcal{B}\to\left[\mathcal{A},\mathcal{V}\right]\) and \(F\colon\mathcal{A}\to\mathcal{C}\) be \(\mathcal{V}\)-functors, and suppose that \(\operatorname{colim}^{W}H\) and each \(\lim^{H(B)}F\) with \(B\in\mathcal{B}\) exist. Then_ \[\lim^{\operatorname{colim}^{W}H}F\cong\lim^{W}\Bigl{(}\lim^{H(-)}F\Bigr{)} \tag{3}\] _either side existing if the other does._ _The colimit version, with \(W\) and \(F\) as above but now with \(H\colon\mathcal{B}\to\left[\mathcal{A}^{\mathrm{op}},\mathcal{V}\right]\), is_ \[\operatorname{colim}^{\operatorname{colim}^{W}H}F\cong\operatorname{colim}^{W }\Bigl{(}\operatorname{colim}^{H(-)}F\Bigr{)}.\] Proof.: See Kelly's [11]. **Remark 2.4**.: We are interested in the problem of conicalization of the weighted 2-limits.
This is, strictly speaking, not possible, but we search for an essential solution to it nevertheless, motivated by the advantages that having a universal essential cone selected from the universal cylinder could bring. We will exploit the strategy described in Recall 2.2 to give a new, more intuitive and elementary proof of the fact that every weighted 2-limit can be reduced to a _lax normal conical_ one. So we focus on conicalizing first the \(2\)-colimits \[W\cong\operatorname{colim}^{W}\mathrm{y},\] for \(W\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\) an enriched presheaf with \(\mathcal{A}\) small and \(\mathrm{y}\colon\mathcal{A}^{\mathrm{op}}\to[\mathcal{A},\mathcal{C}\!\mathcal{ A}T]\) the \(2\)-Yoneda embedding. In particular, we want to encode the universal cocylinder \[\mu\colon W\Rightarrow\left[\mathcal{A},\mathcal{V}\right](\mathrm{y}(-),W)\] (given by the Yoneda lemma) in terms of a universal cocone \[\widetilde{\mu}\colon\Delta 1\Rightarrow\left[\mathcal{A},\mathcal{V} \right](H(-),W)\] with \(H\) some \(\mathcal{V}\)-functor \(\mathcal{B}\to\left[\mathcal{A},\mathcal{V}\right]\). The difficulty we encounter is that, in dimension \(2\), the components of \(\mu\) are functors rather than mere functions, and a cocone is too limited to encode the extra data given by the assignments on morphisms. The idea is to then admit \(2\)-cells inside the cocone \(\widetilde{\mu}\) in order to encode the images under the components \(\mu_{A}\) of \(\mu\) (with \(A\in\mathcal{A}\)) of the morphisms in \(W(A)\). After all, such images are morphisms in \([\mathcal{A},\mathcal{C}\!\mathcal{A}T]\left(\mathrm{y}(A),W\right)\), and thus \(2\)-cells between \(\mathrm{y}(A)\) and \(W\). 
This is explored in the following construction, that originally shows how both the _\(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction_ and the _lax normal conical \(2\)-limits_ arise simultaneously from the wish to essentially solve the problem of conicalizing the weighted \(2\)-limits. **Construction 2.5** (The \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)-enriched Grothendieck construction).: Following Remark 2.4, we search for a more relaxed notion of \(2\)-natural transformation and for a \(2\)-functor \(H\colon\mathcal{B}\to[\mathcal{A},\mathcal{C}\!\mathcal{A}T]\) such that any cocylinder \[\varphi\colon W\Rightarrow[\mathcal{A},\mathcal{C}\!\mathcal{A}T]\left( \mathrm{y}(-),U\right)\] with \(U\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\) a \(2\)-functor can be encoded in terms of a relaxed \(2\)-natural transformation \[\widetilde{\varphi}\colon\Delta 1\xrightarrow[\text{relaxed}]{}[\mathcal{A}, \mathcal{C}\!\mathcal{A}T]\left(H(-),U\right)\colon\mathcal{B}^{\mathrm{op}} \to\mathcal{C}\!\mathcal{A}T,\] that is a relaxed version of a cocone. As we will want to apply Lemma 2.3 (continuity of a limit in its weight) to deduce that every weighted \(2\)-limit can then be analogously conicalized, we search for \(H\) of the form \[\left(\int^{\mathrm{op}}\!W\right)^{\mathrm{op}}\xrightarrow[\mathcal{G}(W)^{ \mathrm{op}}]{}\mathcal{A}^{\mathrm{op}}\xrightarrow{\;\mathrm{y}\;}\left[\mathcal{A},\mathcal{C}\!\mathcal{A}T\right],\] where \(\int^{\mathrm{op}}\!W\) and \(\mathcal{G}(W)\) are, up to now, just symbols, but will be found to be the \(2\)-\(\mathcal{S}\!\mathfrak{e}t\)_-enriched Grothendieck construction_ (as defined explicitly by Street in [15], see Definition 1.1). Indeed, we will need the limit \(\lim^{H(-)}F\) of the right hand side of equation (3) to simplify, and we can achieve this using the Yoneda lemma if \(H\) factorizes through the Yoneda embedding.
For every \(A\in\mathcal{A}\) and \(X\in W(A)\), we have a morphism \[\varphi_{A}(X)\colon\mathrm{y}(A)\to U,\] and we want to form the cocone \(\widetilde{\varphi}\) exactly with these morphisms. So, for every \(A\in\mathcal{A}\) and \(X\in W(A)\) we need an object in \(\int^{\mathrm{op}}\!W\) whose image with respect to \(\mathcal{G}(W)\) is \(A\). We call such an object \((A,X)\). Since these are the only objects we need for our conicalization process, we take the objects of \(\int^{\mathrm{op}}\!W\) to be precisely all the pairs \((A,X)\) with \(A\in\mathcal{A}\) and \(X\in W(A)\) and define \(\mathcal{G}(W)\) on objects to be the projection on the first component. We then take \[\widetilde{\varphi}_{(A,X)}\coloneqq\varphi_{A}(X).\] But we also need to encode inside \(\widetilde{\varphi}\) the assignment of every \(\varphi_{A}\) with \(A\in\mathcal{A}\) on morphisms \(\alpha\colon X\to X^{\prime}\) in \(W(A)\). And, after Remark 2.4, the idea is to use a relaxed version of a cocone. We would like \(\widetilde{\varphi}\) to be at least a lax natural transformation, so that it is not ill-behaved. So, for every morphism \(\xi\colon(A,X)\to(A^{\prime},X^{\prime})\) in \(\int^{\mathrm{op}}\!W\), we can have a structure \(2\)-cell \(\widetilde{\varphi}_{\xi}\). For every \(A\in\mathcal{A}\) and every morphism \(\alpha\colon X\to X^{\prime}\) in \(W(A)\), we need a morphism \((A,X)\to(A,X^{\prime})\) in \(\int^{\mathrm{op}}\!W\) whose image with respect to \(\mathcal{G}\left(W\right)\) is \(\mathrm{id}_{A}\), so that we can take the component of \(\widetilde{\varphi}\) on it to be \(\varphi_{A}(\alpha)\). Wishing to write the action of \(\mathcal{G}\left(W\right)\) again as a projection on the first component, we denote such a morphism \((A,X)\to(A,X^{\prime})\) by \((\mathrm{id}_{A},\alpha)\). Now, we want to encode the \(2\)-naturality of \(\varphi\) into the relaxed naturality of \(\widetilde{\varphi}\).
For every morphism \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and every \(X\in W(A)\), the naturality of \(\varphi\) expresses the equality \[\varphi_{A^{\prime}}(W(f)(X))=\varphi_{A}(X)\circ\mathrm{y}\left(f\right).\] So, for every \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and \(X\in W(A)\), we need a morphism \[\underline{f}^{X}\colon(A,X)\to(A^{\prime},W(f)(X))\] in \(\int^{\mathrm{op}}\!W\) such that \(\mathcal{G}\left(W\right)\left(\underline{f}^{X}\right)=f\) and \(\widetilde{\varphi}_{\underline{f}^{X}}=\mathrm{id}\). It is now natural to take \(\underline{\mathrm{id}_{A}}^{X}=(\mathrm{id}_{A},\mathrm{id}_{X})\) for every \(A\in\mathcal{A}\) and \(X\in W(A)\), and to ask this common morphism to be the identity on \((A,X)\). We will see below that the two kinds of morphisms \((\mathrm{id}_{A},\alpha)\) and \(\underline{f}^{X}\) are enough for our needs, but we need to close the union of these morphisms under composition. Of course, it is clear how to compose morphisms of the same kind with each other in a natural way. For the composition of different kinds of morphisms, we notice that, given \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and \(\alpha\colon X\to X^{\prime}\) in \(W(A)\), the two morphisms \(\underline{f}^{X^{\prime}}\circ(\mathrm{id}_{A},\alpha)\) and \((\mathrm{id}_{A^{\prime}},W(f)(\alpha))\circ\underline{f}^{X}\) in \(\int^{\mathrm{op}}\!W\) will have the same associated structure \(2\)-cell of \(\widetilde{\varphi}\), by lax naturality of \(\widetilde{\varphi}\), since, by naturality of \(\varphi\), \[\varphi_{A}(\alpha)\circ\mathrm{y}\left(f\right)=\varphi_{A^{\prime}}(W(f)(\alpha)).\] We then take such two morphisms in \(\int^{\mathrm{op}}\!W\) to be equal, so that we will be able to recover the naturality of \(\varphi\) (on morphisms) starting from \(\widetilde{\varphi}\).
At this point, every finite composition of morphisms in \(\int^{\mathrm{op}}\!W\) can be reduced to a composite \[(A,X)\xrightarrow{\underline{f}^{X}}(A^{\prime},W(f)(X))\xrightarrow{(\mathrm{id}_{A^{\prime}},\alpha)}(A^{\prime},X^{\prime})\] for some \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and \(\alpha\colon W(f)(X)\to X^{\prime}\) in \(W(A^{\prime})\). So we define the morphisms in \(\int^{\mathrm{op}}\!W\) to be precisely all the formal composites \((\mathrm{id}_{A^{\prime}},\alpha)\circ\underline{f}^{X}\), that we call \((f,\alpha)\), with \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and \(\alpha\colon W(f)(X)\to X^{\prime}\) in \(W(A^{\prime})\). And we take the identities and the composition as described above. In particular, we see that \(\underline{f}^{X}=(f,\mathrm{id}_{W(f)(X)})\), whence we shall use the right hand side notation for such morphisms from now on. And we can also get an explicit formula for the composition of an arbitrary diagram \((A,X)\xrightarrow{(f,\alpha)}(A^{\prime},X^{\prime})\xrightarrow{(f^{\prime},\alpha^{\prime})}(A^{\prime\prime},X^{\prime\prime})\) in \(\int^{\mathrm{op}}\!W\), that is \[(f^{\prime},\alpha^{\prime})\circ(f,\alpha)=(\operatorname{id}_{A^{\prime\prime}},\alpha^{\prime})\circ(f^{\prime},\operatorname{id})\circ(\operatorname{id}_{A^{\prime}},\alpha)\circ(f,\operatorname{id})=(\operatorname{id}_{A^{\prime\prime}},\alpha^{\prime})\circ(\operatorname{id}_{A^{\prime\prime}},W(f^{\prime})(\alpha))\circ(f^{\prime},\operatorname{id})\circ(f,\operatorname{id})=(f^{\prime}\circ f,\alpha^{\prime}\circ W(f^{\prime})(\alpha)).\] It is readily seen that we have just given the data for a category \(\int^{\operatorname{op}}\!W\).
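The composition formula \((f^{\prime},\alpha^{\prime})\circ(f,\alpha)=(f^{\prime}\circ f,\alpha^{\prime}\circ W(f^{\prime})(\alpha))\) can be sanity-checked computationally. The sketch below uses toy data that are entirely our own assumptions, not taken from the paper: the base is the poset \(0\to 1\to 2\) viewed as a locally discrete \(2\)-category and the fibres are total orders, so that a morphism is determined by its endpoints. It verifies that every composite of composable morphisms of \(\int^{\mathrm{op}}\!W\) is again a well-formed morphism.

```python
from itertools import product

# Toy data (assumed purely for illustration): the base is the poset
# 0 -> 1 -> 2, each fibre W(i) is the total order {0, 1, 2}, and W sends
# the unique arrow i -> j to the monotone map x |-> min(x + (j - i), 2).
# In a poset fibre a morphism alpha is determined by its endpoints, so a
# morphism (f, alpha): (i, x) -> (j, x') of the Grothendieck construction
# is encoded by the pair of its endpoint objects.

OBJ_A = [0, 1, 2]
FIBRE = [0, 1, 2]

def W_arrow(i, j):
    """Action of W on the unique arrow i -> j (assumes i <= j)."""
    return lambda x: min(x + (j - i), 2)

# Objects: pairs (i, x) with x in the fibre W(i).
objs = [(i, x) for i in OBJ_A for x in FIBRE]

# A morphism (i, x) -> (j, x') exists iff i <= j and W(f)(x) <= x'.
mors = [(s, t) for s, t in product(objs, objs)
        if s[0] <= t[0] and W_arrow(s[0], t[0])(s[1]) <= t[1]]

def compose(m2, m1):
    """(f', a') o (f, a) = (f' o f, a' o W(f')(a)); endpoints determine it."""
    assert m1[1] == m2[0], "morphisms are not composable"
    return (m1[0], m2[1])

# Sanity check: every composite of composable morphisms is again a legal
# morphism, i.e. W(f' o f)(x) <= x''.  This uses the functoriality of W
# together with the monotonicity of the fibre maps.
for m1, m2 in product(mors, mors):
    if m1[1] == m2[0]:
        assert compose(m2, m1) in mors
```

The endpoint encoding works only because the fibres here are posets; in general the morphism \((f,\alpha)\) carries \(\alpha\) as genuine extra data.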
Since we want \(\mathcal{G}(W)\) to be a functor, for an arbitrary morphism \((f,\alpha)\) in \(\int^{\operatorname{op}}\!W\) we need to have \[\mathcal{G}(W)\left(f,\alpha\right)=\mathcal{G}(W)\left(\operatorname{id}_{A ^{\prime}},\alpha\right)\circ\mathcal{G}(W)\left(f,\operatorname{id}\right)=f\] So, with the notation we have chosen, \(\mathcal{G}(W)\) is still acting as a projection on the first component. As we want \(\widetilde{\varphi}\) to be at least a lax natural transformation, the structure \(2\)-cell of \(\widetilde{\varphi}\) associated to an arbitrary morphism \((f,\alpha)\) in \(\int^{\operatorname{op}}\!W\) needs to be \[\widetilde{\varphi}_{(f,\alpha)}=\widetilde{\varphi}_{(\operatorname{id}_{A^ {\prime}},\alpha)\circ(f,\operatorname{id})}=\varphi_{A^{\prime}}(\alpha) \circ\operatorname{id}=\varphi_{A^{\prime}}(\alpha).\] We now want to encode the \(2\)-dimensional part of the \(2\)-naturality of \(\varphi\) into the \(2\)-dimensional part of the relaxed naturality of \(\widetilde{\varphi}\). 
As we want \(\widetilde{\varphi}\) to be at least lax natural, we will have, for every \(2\)-cell \(\Xi\colon(f,\alpha)\Rightarrow(g,\beta)\colon(A,X)\to(A^{\prime},X^{\prime})\) in \(\int^{\operatorname{op}}\!W\), writing \(T\coloneqq\left[\mathcal{A},\mathcal{C}\!\mathcal{A}T\right]\left((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U\right)\) for the codomain of \(\widetilde{\varphi}\), the axiom \[\widetilde{\varphi}_{(g,\beta)}\circ T(\Xi)_{\widetilde{\varphi}_{(A,X)}}=\widetilde{\varphi}_{(f,\alpha)}. \tag{4}\] The \(2\)-naturality of \(\varphi\) expresses the following equality, for every \(2\)-cell \(\delta\colon f\Rightarrow g\colon A\to A^{\prime}\) in \(\mathcal{A}\) and for every \(X\in W(A)\): \[\varphi_{A^{\prime}}(W(\delta)_{X})=\varphi_{A}(X)\operatorname{y}(\delta)\colon\varphi_{A^{\prime}}(W(f)(X))\Rightarrow\varphi_{A^{\prime}}(W(g)(X)).\] So, for every \(2\)-cell \(\delta\colon f\Rightarrow g\colon A\to A^{\prime}\) in \(\mathcal{A}\) and every \(X\in W(A)\), we need a \(2\)-cell in \(\int^{\operatorname{op}}\!W\), that we call \(\underline{\delta}^{X}\) or just \(\delta\), such that \[\underline{\delta}^{X}\colon(f,W(\delta)_{X})\Rightarrow(g,\operatorname{id})\colon(A,X)\to(A^{\prime},W(g)(X))\] and \(\mathcal{G}\left(W\right)(\underline{\delta}^{X})=\delta\). These are the only \(2\)-cells in \(\int^{\operatorname{op}}\!W\) that we need in order to encode the \(2\)-naturality of \(\varphi\), and they are closed under both the vertical composition and the horizontal composition inherited from \(\mathcal{A}\), but we have to close them under whiskering with each of the two kinds of \(1\)-cells in \(\int^{\operatorname{op}}\!W\). The horizontal composition of those \(2\)-cells inherited from \(\mathcal{A}\) tells us in particular how to whisker with morphisms of type \((h,\operatorname{id})\).
Precisely, given a \(2\)-cell \(\delta\colon f\Rightarrow g\colon A\to A^{\prime}\) and morphisms \(h\colon B\to A\) and \(k\colon A^{\prime}\to C\) in \(\mathcal{A}\), and given \(X\in W(A)\) and \(Y\in W(B)\), \[(k,\operatorname{id})\underline{\delta}^{X}=\underline{k\delta}^{X}\quad\text{and}\quad\underline{\delta}^{W(h)(Y)}(h,\operatorname{id})=\underline{\delta h}^{Y}.\] Now, for every \(\delta\colon f\Rightarrow g\colon A\to A^{\prime}\) in \(\mathcal{A}\) and every \(\alpha\colon X\to X^{\prime}\) in \(W(A)\), the axiom of equation (4) of \(\widetilde{\varphi}\) on the two whiskerings \(\underline{\delta}^{X^{\prime}}(\operatorname{id}_{A},\alpha)\) and \((\operatorname{id}_{A^{\prime}},W(g)(\alpha))\underline{\delta}^{X}\) in \(\int^{\operatorname{op}}\!W\) is exactly the same, by the analogous swapping property that we imposed on the composition of \(1\)-cells in \(\int^{\operatorname{op}}\!W\); we therefore ask these two whiskerings in \(\int^{\operatorname{op}}\!W\) to be equal. So, at this point, every whiskering, and hence every horizontal composition of \(2\)-cells in \(\int^{\operatorname{op}}\!W\), can be reduced to a whiskering of the form \((\operatorname{id},\beta)\underline{\delta}^{X}\) for some \(2\)-cell \(\delta\colon f\Rightarrow g\colon A\to A^{\prime}\) in \(\mathcal{A}\), \(X\in W(A)\) and \(\beta\colon W(g)(X)\to X^{\prime}\) in \(W(A^{\prime})\). So we define the \(2\)-cells in \(\int^{\operatorname{op}}\!W\) to be precisely these formal whiskerings.
Equivalently, a \(2\)-cell \((f,\alpha)\Rightarrow(g,\beta)\colon(A,X)\to(A^{\prime},X^{\prime})\) in \(\int^{\mathrm{op}}\!W\) is a \(2\)-cell \(\delta\colon f\Rightarrow g\) in \(\mathcal{A}\) such that \[\alpha=\beta\circ W(\delta)_{X}.\] Accordingly, we will denote such a \(2\)-cell just by \(\delta\colon(f,\alpha)\Rightarrow(g,\beta)\). The identities are given by taking \(\delta\) to be the appropriate identity \(2\)-cell. The horizontal composition is the one obtained by the description above; namely, given \(\delta\colon(f,\alpha)\Rightarrow(g,\beta)\colon(A,X)\to(A^{\prime},X^{\prime})\) and \(\varepsilon\colon(f^{\prime},\alpha^{\prime})\Rightarrow(g^{\prime},\beta^{\prime})\colon(A^{\prime},X^{\prime})\to(A^{\prime\prime},X^{\prime\prime})\) in \(\int^{\mathrm{op}}\!W\), \[\varepsilon\ast\delta=\left((\mathrm{id},\beta^{\prime})\,\underline{\varepsilon}^{X^{\prime}}\right)\ast\left((\mathrm{id},\beta)\,\underline{\delta}^{X}\right)=\left((\mathrm{id},\beta^{\prime})\circ(\mathrm{id},W(g^{\prime})(\beta))\right)\left(\underline{\varepsilon}^{W(g)(X)}\ast\underline{\delta}^{X}\right)=(\mathrm{id},\beta^{\prime}\circ W(g^{\prime})(\beta))\left(\underline{\varepsilon\ast\delta}\right)^{X}\] and so it corresponds to the \(2\)-cell \(\varepsilon\ast\delta\) in \(\mathcal{A}\). It is natural to define the vertical composition \((f,\alpha)\stackrel{{\delta}}{{\Rightarrow}}(g,\beta)\stackrel{{\delta^{\prime}}}{{\Rightarrow}}(h,\gamma)\) in \(\int^{\mathrm{op}}\!W\) to be the \(2\)-cell in \(\int^{\mathrm{op}}\!W\) that corresponds to the \(2\)-cell \(\delta^{\prime}\circ\delta\) in \(\mathcal{A}\). And since we want \(\mathcal{G}(W)\) to be a \(2\)-functor, we need \(\mathcal{G}(W)\) to send a \(2\)-cell \((\mathrm{id},\beta)\,\underline{\delta}^{X}\) to the \(2\)-cell \(\delta\) in \(\mathcal{A}\).
It is then straightforward to show that we have given \(\int^{\mathrm{op}}\!W\) the structure of a \(2\)-category and that \(\mathcal{G}(W)\colon\int^{\mathrm{op}}\!W\to\mathcal{A}\) is a \(2\)-functor. Moreover, we see that \(\int^{\mathrm{op}}\!W\) is small if \(\mathcal{A}\) is small. We call \(\mathcal{G}(W)\) (or sometimes also just \(\int^{\mathrm{op}}\!W\)) the _\(2\)-Set-enriched Grothendieck construction_ of \(W\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\). Notice that we have also described the right notion of relaxed \(2\)-natural transformation that we need \(\widetilde{\varphi}\) to satisfy in order to encode the \(2\)-naturality of \(\varphi\). It is a form of marked lax natural transformation, that is, a lax natural transformation such that certain structure \(2\)-cells are asked to be identities. And such slight strictness is necessary to encode the strict axioms of \(2\)-naturality of \(\varphi\). This leads to Definition 2.8 and to the new proof of the result that every weighted \(2\)-limit can be reduced to a _lax normal conical_ one (for which we have given the ideas in this construction). In short, we read from Construction 2.5 the following explicit definition of the \(2\)-Set-enriched Grothendieck construction, which coincides with Street's [15]. **Definition 2.6**.: Let \(F\colon\mathcal{B}\to\mathcal{C}\!\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{B}\) a \(2\)-category.
The _\(2\)-Set-enriched Grothendieck construction of \(F\)_ is the \(2\)-functor \(\mathcal{G}(F)\colon\int^{\mathrm{op}}\!F\to\mathcal{B}\), given by the projection on the first component, with \(\int^{\mathrm{op}}\!F\) such that: _an object of \(\int^{\mathrm{op}}\!F\)_ is a pair \((B,X)\) with \(B\in\mathcal{B}\) and \(X\in F(B)\); _a morphism \((B,X)\to(C,X^{\prime})\) in \(\int^{\mathrm{op}}\!F\)_ is a pair \((f,\alpha)\) with \(f\colon B\to C\) a morphism in \(\mathcal{B}\) and \(\alpha\colon F(f)(X)\to X^{\prime}\) a morphism in \(F(C)\); _a \(2\)-cell \((f,\alpha)\Rightarrow(g,\beta)\colon(B,X)\to(C,X^{\prime})\) in \(\int^{\mathrm{op}}\!F\)_ is a \(2\)-cell \(\delta\colon f\Rightarrow g\) in \(\mathcal{B}\) such that \(\alpha=\beta\circ F(\delta)_{X}\); _the compositions and identities_ are as described in Construction 2.5. **Remark 2.7**.: The \(2\)-Set-enriched Grothendieck construction is a natural extension of the usual Grothendieck construction. We believe that the latter should actually be conceived as the restriction of the former to \(2\)-presheaves into \(\mathcal{C}\!\mathcal{A}T\) on a locally discrete \(2\)-category. After all, in dimension \(1\) (see Definition 1.1), the idea of reorganizing a family of sets as the fibres of a function with domain the disjoint union of the family was fully realized by the construction of the category of elements, admitting families of sets indexed by a category rather than by just a set. So we conceive the \(2\)-Set-enriched Grothendieck construction as the complete Grothendieck construction that lives in dimension \(2\), rather than a particular case of a construction that lives in dimension \(3\), and we will do the same with the notion of fibration that it produces (see Section 3).
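Remark 2.7's point of view can be made concrete in dimension \(1\) with a small computational sketch of the usual category of elements (Definition 1.1). All data below are toy assumptions of ours, not taken from the paper: \(\mathcal{B}\) is the category generated by a single arrow \(f\colon a\to b\), and \(F\) is a Set-valued functor on it.

```python
# Toy data (assumed for illustration): B is generated by one arrow f: a -> b,
# and F: B -> Set is given on objects and on f as follows.
F_obj = {"a": {0, 1}, "b": {"x", "y", "z"}}
F_f = {0: "x", 1: "z"}  # the function F(f): F(a) -> F(b)

# Objects of the category of elements: pairs (object of B, element of its set).
el_objs = [(b, s) for b in ("a", "b") for s in F_obj[b]]

# Non-identity morphisms: over f there is exactly one morphism
# (a, s) -> (b, F(f)(s)) for each s in F(a); identities are left implicit.
el_mors = [(("a", s), ("b", F_f[s])) for s in F_obj["a"]]

# The Grothendieck projection is the first component; its fibre over a
# B-object is (the discrete category on) the corresponding set, recovering
# the idea of "reorganizing the family as the fibres of a projection".
def projection(obj):
    return obj[0]

fibre_over_b = [o for o in el_objs if projection(o) == "b"]
```

In dimension \(2\) the fibres are categories rather than sets, which is exactly what the extra morphisms \((\mathrm{id},\alpha)\) and the \(2\)-cells of Definition 2.6 account for.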
It is interesting to notice that the \(2\)-Set-enriched Grothendieck construction and then also the usual Grothendieck construction are bound up with some form of laxness, being produced together with the relaxed notion of \(2\)-natural transformation that is necessary to encode the weighted \(\mathcal{C}\!\mathcal{A}T\)-(co)cylinders (Construction 2.5). We will explore this in more detail in Section 3. **Definition 2.8**.: Let \(W\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{A}\) small, and consider \(2\)-functors \(M,N\colon\int^{\mathrm{op}}\!\!W\to\mathcal{D}\). A _lax normal natural transformation \(\alpha\) from \(M\) to \(N\)_, denoted \(\alpha\colon M\underset{\mathrm{lax}^{n}}{\Longrightarrow}N\), is a lax natural transformation \(\alpha\) from \(M\) to \(N\) such that the structure \(2\)-cell on every morphism \[\big(f,\mathrm{id}_{W(f)(X)}\big)\colon(A,X)\to(B,W(f)(X))\] in \(\int^{\mathrm{op}}\!\!W\) is the identity. **Remark 2.9**.: The lax normal natural transformations are a particular case of a more general notion of marked lax natural transformation introduced by Gray in [6]. Street showed in [15] that this less general notion is sufficient to build all the general limits considered by Gray (showing that it is sufficient to build all the weighted limits and that all the limits considered by Gray are particular weighted ones). Notice that, in some sense, the laxness of the lax normal natural transformations only belongs to the vertical part of \(\int^{\mathrm{op}}\!\!W\). This idea will be understood even better after Theorem 2.14 (the lax normal conical \(2\)-limits are weighted ones). **Definition 2.10**.: Let \(W\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{A}\) small, and let \(F\colon\int^{\mathrm{op}}\!\!W\to\mathcal{C}\) be a \(2\)-functor.
Notice that \(\int^{\mathrm{op}}\!\!W\) is small, since \(\mathcal{A}\) is small. The _lax normal conical \(2\)-limit of \(F\)_, denoted as \(\mathrm{lax}^{n}\operatorname{-lim}^{\Delta 1}F\), is (if it exists) an object \(L\in\mathcal{C}\) together with an isomorphism of categories \[\mathcal{C}\left(U,L\right)\cong\left[\int^{\mathrm{op}}\!\!W,\mathcal{C} \mathcal{A}\mathcal{T}\right]_{\mathrm{lax}^{n}}\left(\Delta 1,\,\mathcal{C}\left(U,F(-) \right)\right)\] \(2\)-natural in \(U\in\mathcal{C}^{\mathrm{op}}\), where \(\left[\int^{\mathrm{op}}\!\!W,\mathcal{C}\mathcal{A}\mathcal{T}\right]_{ \mathrm{lax}^{n}}\) is the \(2\)-category of \(2\)-functors, lax normal natural transformations and modifications from \(\int^{\mathrm{op}}\!\!W\) to \(\mathcal{C}\)\(\mathcal{A}\)\(\mathcal{T}\). (Notice that indeed lax normal natural transformations compose well vertically.) When \(\mathrm{lax}^{n}\operatorname{-lim}^{\Delta 1}F\) exists, taking \(U=L\) and considering the identity on \(L\) gives us in particular a lax normal natural transformation \[\lambda\colon\Delta 1\underset{\mathrm{lax}^{n}}{\Longrightarrow}\mathcal{C} \left(L,F(-)\right),\] called the _universal lax normal cone_. For the definition of _lax normal conical \(2\)-colimit_ in \(\mathcal{C}\), we just apply the definition of lax normal conical \(2\)-limit to a \(2\)-functor \(F\colon\int^{\mathrm{op}}\!\!W\to\mathcal{C}^{\mathrm{op}}\). As usual, we prefer to consider instead \(F^{\mathrm{op}}\colon\left(\int^{\mathrm{op}}\!\!W\right)^{\mathrm{op}}\to \mathcal{C}\), but this time we cannot rename \(\left(\int^{\mathrm{op}}\!\!W\right)^{\mathrm{op}}\) as some \(\int^{\mathrm{op}}\!\!Z\). So the definition we propose is the following. Let \(W\colon\mathcal{A}\to\mathcal{C}\)\(\mathcal{A}\)\(\mathcal{T}\) be a \(2\)-functor with \(\mathcal{A}\) small, and let \(F\colon\left(\int^{\mathrm{op}}\!\!W\right)^{\mathrm{op}}\to\mathcal{C}\) be a \(2\)-functor. 
The _lax normal conical \(2\)-colimit of \(F\)_, denoted as \(\mathrm{lax}^{n}\operatorname{-colim}^{\Delta 1}F\), is (if it exists) an object \(C\in\mathcal{C}\) together with an isomorphism of categories \[\mathcal{C}\left(C,U\right)\cong\left[\int^{\mathrm{op}}\!\!W,\mathcal{C} \mathcal{A}\mathcal{T}\right]_{\mathrm{lax}^{n}}\left(\Delta 1,\,\mathcal{C}\left(F(-),U \right)\right)\] \(2\)-natural in \(U\in\mathcal{C}\). When \(\mathrm{lax}^{n}\operatorname{-colim}^{\Delta 1}F\) exists we have, in place of \(\lambda\), the _universal lax normal cocone_. **Remark 2.11**.: Notice that considering just \(2\)-functors \(F\) of the form \(F\colon\,\int^{\operatorname{op}}\!W\to\mathcal{C}\) in Definition 2.10 is not restrictive at all. Indeed any \(2\)-category \(\mathcal{B}\) can be seen as the \(2\)-\(\operatorname{\mathcal{S}\!\mathit{et}}\)-enriched Grothendieck construction of the \(2\)-functor \(\Delta\!1\colon\,\mathcal{B}\to\mathcal{C}\!\mathcal{A}\!T\) constant at \(1\). We have in fact that \(\int^{\operatorname{op}}\!\Delta 1\cong\mathcal{B}\) and that \(\mathcal{G}(\Delta 1)\) is the identity \(2\)-functor up to this isomorphism. It is important, though, to have a specified expression of the domain of the diagram \(F\) in terms of a \(2\)-\(\operatorname{\mathcal{S}\!\mathit{et}}\)-enriched Grothendieck construction of some \(2\)-functor \(W\colon\,\mathcal{A}\to\mathcal{C}\!\mathcal{A}\!T\) with \(\mathcal{A}\) small, as we need it in order to be able to say "normal". The idea is that we want the laxness of the cones to only belong to the categories that form the family \(W\) themselves rather than also to the connections between them (see also the proof of Theorem 2.14, that shows how the lax normal conical \(2\)-limits are weighted). **Example 2.12**.: Every conical \(2\)-limit is a lax normal conical one. Indeed, let \(F\colon\,\mathcal{A}\to\mathcal{C}\) be a \(2\)-functor. 
By Remark 2.11, we can view \(\mathcal{A}\) as the \(2\)-\(\operatorname{\mathcal{S}\!\mathit{et}}\)-enriched Grothendieck construction of the \(2\)-functor \(\Delta\!1\colon\,\mathcal{A}\to\mathcal{C}\!\mathcal{A}\!T\). And the lax normal natural transformations between \(\mathcal{A}=\int^{\operatorname{op}}\!\Delta 1\) and \(\mathcal{C}\!\mathcal{A}\!T\) are just the strict \(2\)-natural ones, as every morphism of \(\int^{\operatorname{op}}\!\Delta 1\) is of the form \((f,\operatorname{id})\). **Remark 2.13**.: We would now like to see that the lax normal conical \(2\)-limits are particular weighted \(2\)-limits. The proof we present (Theorem 2.14) does not seem to appear in the literature. There is a solution to this problem (without proof) in Street's paper [15], but that solution is actually about all the general \(2\)-limits introduced by Gray and is more complicated than the one presented here (the weight is given as the coidentifier of a certain \(2\)-cell with horizontal codomain the weight of lax conical limits). And since Street proved in [15] that all the general \(2\)-limits introduced by Gray are built from the lax normal conical ones, we think a more explicit description of the weight for lax normal conical \(2\)-limits could be valuable. It is also interesting to notice that the weight for lax normal conical \(2\)-limits (given in the proof of Theorem 2.14) is much simpler than the weight for lax conical \(2\)-limits (which can be found in Street's [15]). Indeed, the latter involves quotients of lax \(2\)-dimensional slices, while ordinary \(1\)-dimensional slices are enough for the former. **Theorem 2.14**.: _Lax normal conical \(2\)-limits are particular weighted \(2\)-limits._ Proof.: Let \(Z\colon\,\mathcal{A}\to\mathcal{C}\!\mathcal{A}\!T\) be a \(2\)-functor with \(\mathcal{A}\) a small \(2\)-category, and let \(F\colon\,\int^{\operatorname{op}}\!Z\to\mathcal{C}\) be a \(2\)-functor.
We want to write the universal property of the lax normal conical \(2\)-limit of \(F\) as the universal property of the \(2\)-limit of \(F\) weighted by some \(2\)-functor \(\operatorname{W}^{\operatorname{lax}^{n}}\colon\int^{\operatorname{op}}\!Z\to\mathcal{C}\!\mathcal{A}\!T\). It suffices to show that for every \(2\)-functor \(N\colon\,\mathcal{C}^{\operatorname{op}}\to\left[\int^{\operatorname{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]\), calling \(N^{U}\coloneqq N(U)\), there is an isomorphism of categories \[\left[\int^{\operatorname{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]_{\operatorname{lax}^{n}}\left(\Delta 1,N^{U}\right)\cong\left[\int^{\operatorname{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]\left(\operatorname{W}^{\operatorname{lax}^{n}},N^{U}\right) \tag{5}\] \(2\)-natural in \(U\in\mathcal{C}^{\operatorname{op}}\). Let then \(\varphi\colon\Delta 1\underset{\operatorname{lax}^{n}}{\Longrightarrow}N^{U}\) be a lax normal natural transformation; we want to convert it into a \(2\)-natural transformation \([\varphi]\colon\,\operatorname{W}^{\operatorname{lax}^{n}}\Rightarrow N^{U}\). Given \((B,X^{\prime})\in\int^{\operatorname{op}}\!Z\), we have a map \(\varphi_{(B,X^{\prime})}\colon 1\to N^{U}(B,X^{\prime})\), i.e. an object \(\varphi_{(B,X^{\prime})}\in N^{U}(B,X^{\prime})\). But we also have, for every morphism \((f,\alpha)\colon(A,X)\to(B,X^{\prime})\) in \(\int^{\operatorname{op}}\!Z\), a structure \(2\)-cell, i.e. a morphism \(\varphi_{(f,\alpha)}\) in \(N^{U}(B,X^{\prime})\). So we want \(\operatorname{W}^{\operatorname{lax}^{n}}(B,X^{\prime})\) to be some sort of slice that parametrizes all these data. But the idea behind lax normal natural transformations is that the laxness is all concentrated in the vertical part of \(\int^{\operatorname{op}}\!W\), that is, on the morphisms of the form \((\operatorname{id}_{A},\alpha)\) in \(\int^{\operatorname{op}}\!W\) for some \(A\in\mathcal{A}\) and some morphism \(\alpha\colon X\to X^{\prime}\) in \(W(A)\).
We can then parametrize just the vertical part of the data, defining \[\begin{array}{rcl}\operatorname{W}^{\operatorname{lax}^{n}}\colon\ \int^{\operatorname{op}}\!Z&\longrightarrow&\mathcal{C}\!\mathcal{A}\!T\\ (B,X^{\prime})&\longmapsto&Z(B)\left/{X^{\prime}}\right.\\ \left((g,\beta)\colon(B,X^{\prime})\to(C,X^{\prime\prime})\right)&\longmapsto&\left(\beta\circ Z(g)(-)\colon Z(B)\left/{X^{\prime}}\right.\to Z(C)\left/{X^{\prime\prime}}\right.\right)\end{array}\] where the action of \(\beta\circ Z(g)(-)\) on morphisms is given by \(Z(g)(\operatorname{dom}(-))\). (This can be compared with the weight for lax conical \(2\)-limits, which considers quotients of the lax slices of the entire \(\int^{\operatorname{op}}\!Z\) on \((B,X^{\prime})\).) Given a \(2\)-cell \(\delta\colon(g,\beta)\Rightarrow(h,\gamma)\colon(B,X^{\prime})\to(C,X^{\prime\prime})\), we define \(\operatorname{W}^{\operatorname{lax}^{n}}(\delta)\) to be the natural transformation \(\beta\circ Z(g)(-)\Rightarrow\gamma\circ Z(h)(-)\) whose component on \((X\xrightarrow{\alpha}X^{\prime})\in Z(B)\left/{X^{\prime}}\right.\) is given by \(Z(\delta)_{X}\). Indeed, given \((X\xrightarrow{\alpha}X^{\prime})\in Z(B)\left/{X^{\prime}}\right.\), we do have that \(Z(\delta)_{X}\) is a morphism in \(Z(C)\left/{X^{\prime\prime}}\right.\) from \(\beta\circ Z(g)(\alpha)\) to \(\gamma\circ Z(h)(\alpha)\), as \(Z(\delta)\) is a natural transformation and \(\delta\) is a \(2\)-cell in \(\int^{\operatorname{op}}\!Z\). And the naturality of \(Z(\delta)_{\operatorname{dom}(-)}\) holds by naturality of \(Z(\delta)\). It is then easy to show that \(\operatorname{W}^{\operatorname{lax}^{n}}\) is a \(2\)-functor. Now, we construct \([\varphi]\colon\operatorname{W}^{\operatorname{lax}^{n}}\Rightarrow N^{U}\) in the following way.
Given \((B,X^{\prime})\in\int^{\operatorname{op}}\!Z\), we set \[[\varphi]_{(B,X^{\prime})}(\operatorname{id}_{X^{\prime}})\coloneqq\varphi_{(B,X^{\prime})},\] whence we have to choose by naturality that \[[\varphi]_{(B,X^{\prime})}(X\xrightarrow{\alpha}X^{\prime})=N^{U}(\operatorname{id}_{B},\alpha)\left(\varphi_{(B,X)}\right).\] And we can then set \[[\varphi]_{(B,X^{\prime})}\left(\alpha\colon\left(X\xrightarrow{\alpha}X^{\prime}\right)\to\left(X^{\prime}\xrightarrow{\operatorname{id}_{X^{\prime}}}X^{\prime}\right)\right)\coloneqq\varphi_{(\operatorname{id}_{B},\alpha)},\] whence we have to choose (looking again at the naturality of \([\varphi]\) that we want to obtain) \[[\varphi]_{(B,X^{\prime})}\left(\alpha\colon\left(S\xrightarrow{\theta\circ\alpha}X^{\prime}\right)\to\left(T\xrightarrow{\theta}X^{\prime}\right)\right)=N^{U}(\operatorname{id}_{B},\theta)\left(\varphi_{(\operatorname{id}_{B},\alpha)}\right).\] The component \([\varphi]_{(B,X^{\prime})}\) is then a functor, thanks to the lax naturality of \(\varphi\). And it is easy to show that \([\varphi]\) is a natural transformation, using that \(\varphi\) is a lax normal natural transformation and how composition is defined in \(\int^{\mathrm{op}}\!Z\). We show that \([\varphi]\) is \(2\)-natural.
So take a \(2\)-cell \(\delta\colon(g,\beta)\Rightarrow(h,\gamma)\colon(B,X^{\prime})\to(C,X^{\prime\prime})\) in \(\int^{\mathrm{op}}\!Z\) and \((X\xrightarrow{\alpha}X^{\prime})\in Z(B)\big{/}_{X^{\prime}}\); we prove that \[N^{U}(\mathrm{id}_{C},\gamma\circ Z(h)(\alpha))\left(\varphi_{(\mathrm{id}_{C},Z(\delta)_{X})}\right)=N^{U}(\delta)_{N^{U}(\mathrm{id}_{B},\alpha)\left(\varphi_{(B,X)}\right)}.\] But this is obtained combining the following two equations: \[\varphi_{(\mathrm{id}_{C},Z(\delta)_{X})}=\varphi_{(g,Z(\delta)_{X})}=N^{U}\left(\underline{\delta}^{X}\right)_{\varphi_{(B,X)}}\] \[\delta(\mathrm{id}_{B},\alpha)=(\mathrm{id}_{C},\gamma\circ Z(h)(\alpha))\underline{\delta}^{X}.\] Consider now a modification \(\Theta\colon\varphi\Rrightarrow\psi\) between two lax normal natural transformations \(\varphi,\psi\colon\Delta 1\underset{\mathrm{lax}^{n}}{\Longrightarrow}N^{U}\). We want to convert it into a modification \([\Theta]\colon[\varphi]\Rrightarrow[\psi]\colon\operatorname{W}^{\operatorname{lax}^{n}}\Rightarrow N^{U}\). Given \((B,X^{\prime})\in\int^{\mathrm{op}}\!Z\), we define \([\Theta]_{(B,X^{\prime})}\) to be the natural transformation with general component on \((\alpha\colon X\to X^{\prime})\in Z(B)\big{/}_{X^{\prime}}\) the morphism in \(N^{U}(B,X^{\prime})\) \[[\Theta]_{(B,X^{\prime}),\alpha}\coloneqq N^{U}(\mathrm{id}_{B},\alpha)\left(\Theta_{(B,X)}\right)\colon N^{U}(\mathrm{id}_{B},\alpha)\left(\varphi_{(B,X)}\right)\to N^{U}(\mathrm{id}_{B},\alpha)\left(\psi_{(B,X)}\right).\] Then \([\Theta]_{(B,X^{\prime})}\) is natural since \(\Theta\) is a modification, and \([\Theta]\) is a modification since \(\Theta\) is a modification between lax normal natural transformations. Moreover, this construction is certainly functorial.
Conversely, starting from a \(2\)-natural transformation \(\sigma\colon\operatorname{W}^{\operatorname{lax}^{n}}\Rightarrow N^{U}\), we convert it into a lax normal natural transformation \(\overline{\sigma}\colon\Delta 1\underset{\operatorname{lax}^{n}}{\Longrightarrow}N^{U}\). Given \((B,X^{\prime})\in\int^{\mathrm{op}}\!Z\), we define \[\overline{\sigma}_{(B,X^{\prime})}\coloneqq\sigma_{(B,X^{\prime})}(\mathrm{id}_{X^{\prime}}).\] And given a morphism \((f,\alpha)\colon(A,X)\to(B,X^{\prime})\) in \(\int^{\mathrm{op}}\!Z\), we define \[\overline{\sigma}_{(f,\alpha)}\coloneqq\sigma_{(B,X^{\prime})}\left(\alpha\colon\left(Z(f)(X)\xrightarrow{\alpha}X^{\prime}\right)\to\left(X^{\prime}\xrightarrow{\mathrm{id}_{X^{\prime}}}X^{\prime}\right)\right),\] noticing that \[\sigma_{(B,X^{\prime})}(Z(f)(X)\xrightarrow{\alpha}X^{\prime})=N^{U}(f,\alpha)\left(\sigma_{(A,X)}(\mathrm{id}_{X})\right)=N^{U}(f,\alpha)\left(\overline{\sigma}_{(A,X)}\right)\] by naturality of \(\sigma\). We then have that \(\overline{\sigma}\) satisfies the \(1\)-dimensional part of being a lax normal natural transformation, by functoriality of the components of \(\sigma\) and naturality of \(\sigma\) (on morphisms). It remains to prove the \(2\)-dimensional part of \(\overline{\sigma}\) being lax natural. So take a \(2\)-cell \(\delta\colon(g,\beta)\Rightarrow(h,\gamma)\colon(B,X^{\prime})\to(C,X^{\prime\prime})\) in \(\int^{\mathrm{op}}\!Z\); we want to show that \[\overline{\sigma}_{(h,\gamma)}\circ N^{U}(\delta)_{\overline{\sigma}_{(B,X^{\prime})}}=\overline{\sigma}_{(g,\beta)}.\] By the \(2\)-naturality of \(\sigma\), we have that \[N^{U}(\delta)_{\overline{\sigma}_{(B,X^{\prime})}}=N^{U}(\delta)_{\sigma_{(B,X^{\prime})}(\mathrm{id}_{X^{\prime}})}=\sigma_{(C,X^{\prime\prime})}\left(Z(\delta)_{X^{\prime}}\colon\left(Z(g)(X^{\prime})\xrightarrow{\beta}X^{\prime\prime}\right)\to\left(Z(h)(X^{\prime})\xrightarrow{\gamma}X^{\prime\prime}\right)\right)\] and we conclude by functoriality of \(\sigma_{(C,X^{\prime\prime})}\).
Considering now a modification \(\Xi\colon\sigma\Rrightarrow\rho\colon\operatorname{W}^{\operatorname{lax}^{n}}\Rightarrow N^{U}\), we convert it into a modification \(\overline{\Xi}\colon\overline{\sigma}\Rrightarrow\overline{\rho}\colon\Delta 1\underset{\operatorname{lax}^{n}}{\Longrightarrow}N^{U}\). Given \((B,X^{\prime})\in\int^{\mathrm{op}}\!Z\), we define \[\overline{\Xi}_{(B,X^{\prime})}\coloneqq\Xi_{(B,X^{\prime}),\mathrm{id}_{X^{\prime}}}.\] Then \(\overline{\Xi}\) is a modification since \(\Xi\) is a modification, and this construction is certainly functorial. It is now easy to see that the two constructions we have produced are inverses of each other, giving us an isomorphism of categories as in equation (5) for every \(U\in\mathcal{C}^{\mathrm{op}}\). And the \(2\)-naturality of such isomorphism holds trivially just by the fact that \(N\) lands in \(\left[\int^{\mathrm{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]\). **Remark 2.15**.: The proof of Theorem 2.14 works the same also for the lax normal conical \(2\)-colimit of a \(2\)-functor \(F\colon\left(\int^{\mathrm{op}}\!Z\right)^{\mathrm{op}}\to\mathcal{C}\). Indeed the corresponding natural isomorphism of categories that we then want to prove is exactly the same we had in equation (5). The only thing that changes is that \(N\) becomes \(N\colon\mathcal{C}\to\left[\int^{\mathrm{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]\), but the naturality in \(U\) of the isomorphism of categories in equation (5) continues to hold trivially just by the fact that \(N\) lands in \(\left[\int^{\mathrm{op}}\!Z,\mathcal{C}\!\mathcal{A}\!T\right]\). We thus obtain the following corollary (of the proof of Theorem 2.14).
**Corollary 2.16**.: _Lax normal conical \(2\)-colimits are particular weighted \(2\)-colimits, and the weight that expresses them is \(\mathrm{W}^{\mathrm{lax}^{\mathrm{n}}}\)._ **Remark 2.17**.: We now present our new proof of the fact (first proved by Street in [15]) that every weighted \(2\)-limit can be reduced to a lax normal conical one (and thus essentially conicalized). It will be based on the intuitive and elementary Construction 2.5. **Theorem 2.18**.: _All the weighted \(2\)-limits can be reduced to lax normal conical ones. More precisely, given \(2\)-functors \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}\to\mathcal{CAT}\) with \(\mathcal{A}\) small, we have that_ \[\lim^{W}\!F\cong\mathrm{lax}^{\mathrm{n}}\operatorname{-lim}^{\Delta 1}(F\circ\mathcal{G}(W))\] _either side existing if the other does, where \(\mathcal{G}(W)\) is the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction of \(W\) (see Definition 2.6)._ Proof.: Looking at Remark 2.4, we first focus on essentially conicalizing the weighted \(2\)-colimits \[W\cong\mathrm{colim}^{W}\mathrm{y}\] with \(W\colon\mathcal{A}\to\mathcal{CAT}\) a \(2\)-presheaf with \(\mathcal{A}\) small and \(\mathrm{y}\colon\mathcal{A}^{\mathrm{op}}\to[\mathcal{A},\mathcal{CAT}]\) the \(2\)-Yoneda embedding, and then apply the lemma of continuity of a limit in its weight (Lemma 2.3) to analogously essentially conicalize any weighted \(2\)-limit. Looking at Construction 2.5, given \(W\) as above, we prove that there is an isomorphism of categories \[\left[\mathcal{A},\mathcal{CAT}\right]\!\left(W,\left[\mathcal{A},\mathcal{CAT}\right]\!\left(\mathrm{y}\left(-\right),U\right)\right)\cong\left[\int^{\mathrm{op}}\!W,\mathcal{CAT}\right]_{\mathrm{lax}^{\mathrm{n}}}\left(\Delta 1,\left[\mathcal{A},\mathcal{CAT}\right]\!\left((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U\right)\right) \tag{6}\] \(2\)-natural in \(U\in[\mathcal{A},\mathcal{CAT}]\), so that we can conclude that \[W\cong\mathrm{colim}^{W}\mathrm{y}\cong\mathrm{lax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}}).\] We have already described in Construction 2.5 how to obtain, from a \(2\)-natural transformation \[\varphi\colon W\Rightarrow\left[\mathcal{A},\mathcal{CAT}\right]\!\left(\mathrm{y}\left(-\right),U\right)\] with \(U\colon\mathcal{A}\to\mathcal{CAT}\) a \(2\)-presheaf, a lax normal natural transformation \[\widetilde{\varphi}\colon\Delta 1\underset{\mathrm{lax}^{\mathrm{n}}}{\Longrightarrow}\left[\mathcal{A},\mathcal{CAT}\right]\!\left((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U\right)\colon\int^{\mathrm{op}}\!W\to\mathcal{CAT}.\] Indeed, defining \[\widetilde{\varphi}_{(A,X)}\coloneqq\varphi_{A}(X)\qquad\text{and}\qquad\widetilde{\varphi}_{(f,\alpha)}\coloneqq\varphi_{B}(\alpha)\] for every morphism \((f,\alpha)\colon(A,X)\to(B,X^{\prime})\) in \(\int^{\operatorname{op}}W\), we have that \(\widetilde{\varphi}\) satisfies the \(1\)-dimensional part of being a lax normal natural transformation, since \(\varphi\) is natural and the components of \(\varphi\) are functors, and it also satisfies the \(2\)-dimensional part of being lax natural by the definition of the \(2\)-cells in \(\int^{\operatorname{op}}W\) and the \(2\)-naturality of \(\varphi\). 
Take now a modification \[\Theta\colon\varphi\xRightarrow{}\psi\colon W\Rightarrow[\mathcal{A},\mathcal{CAT}]\,(\mathrm{y}\,(-),U)\,;\] we want to convert it into a modification between lax normal natural transformations \[\widetilde{\Theta}\colon\widetilde{\varphi}\xRightarrow{}\widetilde{\psi}\colon\Delta 1\underset{\mathrm{lax}^{\mathrm{n}}}{\Longrightarrow}[\mathcal{A},\mathcal{CAT}]\,((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U)\,.\] Given \((A,X)\in\int^{\mathrm{op}}\!W\), we define \[\widetilde{\Theta}_{(A,X)}\coloneqq\Theta_{A,X}.\] Then \(\widetilde{\Theta}\) is a modification since \(\Theta\) is one, and this assignment is certainly functorial. We now construct its inverse. Take a lax normal natural transformation \[\sigma\colon\Delta 1\underset{\mathrm{lax}^{\mathrm{n}}}{\Longrightarrow}[\mathcal{A},\mathcal{CAT}]\,((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U)\colon\int^{\mathrm{op}}\!W\to\mathcal{CAT};\] we want to convert it into a \(2\)-natural transformation \[\widehat{\sigma}\colon W\Rightarrow[\mathcal{A},\mathcal{CAT}]\,(\mathrm{y}\,(-),U)\,.\] We define the component of \(\widehat{\sigma}\) on \(A\in\mathcal{A}\) as \[\begin{array}{cccc}\widehat{\sigma}_{A}:&W(A)&\longrightarrow&[\mathcal{A},\mathcal{CAT}]\,(\mathrm{y}(A),U)\\ &X&&\sigma_{(A,X)}\\ &\downarrow{\scriptstyle\alpha}&\mapsto&\downarrow{\scriptstyle\sigma_{(\mathrm{id}_{A},\alpha)}}\\ &X^{\prime}&&\sigma_{(A,X^{\prime})}\end{array}\] Then \(\widehat{\sigma}_{A}\) is a functor, since \(\sigma\) is lax natural. We prove that \(\widehat{\sigma}\) is natural. So take a morphism \(f\colon A\to B\) in \(\mathcal{A}\). 
We need to show that the naturality square of \(\widehat{\sigma}\) on \(f\) is commutative, that is, \[\widehat{\sigma}_{B}\circ W(f)=[\mathcal{A},\mathcal{CAT}]\,(\mathrm{y}(f),U)\circ\widehat{\sigma}_{A}.\] This holds by the fact that \(\sigma\) is lax normal and that, given \(\alpha\colon X\to X^{\prime}\) in \(W(A)\), the following two morphisms are equal in \(\int^{\mathrm{op}}\!W\): \[(A,X)\xrightarrow{(f,\mathrm{id}_{W(f)(X)})}(B,W(f)(X))\xrightarrow{(\mathrm{id}_{B},W(f)(\alpha))}(B,W(f)(X^{\prime}))\] \[(A,X)\xrightarrow{(\mathrm{id}_{A},\alpha)}(A,X^{\prime})\xrightarrow{(f,\mathrm{id}_{W(f)(X^{\prime})})}(B,W(f)(X^{\prime})).\] We now prove that \(\widehat{\sigma}\) is \(2\)-natural. Given a \(2\)-cell \(\delta\colon f\Rightarrow g\colon A\to B\), we need to show that for every \(X\in W(A)\) we have \[\sigma_{(\mathrm{id}_{B},W(\delta)_{X})}=\sigma_{(A,X)}\,\mathrm{y}\,(\delta)\,.\] But \(\delta\) gives a \(2\)-cell \(\underline{\delta}^{X}\colon(f,W(\delta)_{X})\Rightarrow(g,\mathrm{id})\colon(A,X)\to(B,W(g)(X))\), whence, by \(\sigma\) being lax normal, \[\sigma_{(A,X)}\,\mathrm{y}\,(\delta)=\sigma_{(f,W(\delta)_{X})}=\sigma_{(\mathrm{id}_{B},W(\delta)_{X})}.\] Take now a modification between lax normal natural transformations \[\Xi\colon\sigma\xRightarrow{}\rho\colon\Delta 1\underset{\mathrm{lax}^{\mathrm{n}}}{\Longrightarrow}[\mathcal{A},\mathcal{CAT}]\,((\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-),U)\,;\] we want to convert it into a modification \[\widehat{\Xi}\colon\widehat{\sigma}\xRightarrow{}\widehat{\rho}\colon W\Rightarrow[\mathcal{A},\mathcal{CAT}]\,(\mathrm{y}\,(-),U)\,.\] We define the component of \(\widehat{\Xi}\) on \(A\in\mathcal{A}\) to be the natural transformation with component on \(X\in W(A)\) \[\widehat{\Xi}_{A,X}\coloneqq\Xi_{(A,X)}.\] We have that \(\widehat{\Xi}\) is a modification since \(\Xi\) is a modification between lax normal natural transformations. And this construction is certainly functorial. 
It is now easy to see that the two constructions we have produced are inverse to each other, giving us an isomorphism of categories as in equation (6). And the \(2\)-naturality of such isomorphism of categories holds trivially. We have thus proved that every \(2\)-presheaf \(W\colon\mathcal{A}\to\mathcal{CAT}\) with \(\mathcal{A}\) small can be expressed as \[W\cong\mathrm{lax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}}).\] Consider now \(2\)-functors \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}\to\mathcal{CAT}\) with \(\mathcal{A}\) small. Then by the argument above and Corollary 2.16 we have \[W\cong\mathrm{lax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})\cong\mathrm{colim}^{\mathrm{W}^{\mathrm{lax}^{\mathrm{n}}}}(\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}}).\] By Lemma 2.3 and Theorem 2.14, we obtain \[\lim^{W}F\cong\lim^{\mathrm{W}^{\mathrm{lax}^{\mathrm{n}}}}\Bigl(\lim^{(\mathrm{y}\circ\mathcal{G}(W)^{\mathrm{op}})(-)}F\Bigr)\cong\lim^{\mathrm{W}^{\mathrm{lax}^{\mathrm{n}}}}(F\circ\mathcal{G}(W))\cong\mathrm{lax}^{\mathrm{n}}\operatorname{-lim}^{\Delta 1}(F\circ\mathcal{G}(W))\] either side existing if the other does, where the isomorphism in the middle is easy to prove. **Remark 2.19**.: The proof of Theorem 2.18, together with Lemma 2.3 and the proofs of Theorem 2.14 and Corollary 2.16, also shows how to obtain the correspondence between the universal cylinder of a weighted \(2\)-limit and the universal lax normal cone of the associated lax normal conical \(2\)-limit. 
Calling the two, respectively, \[\lambda\colon W\xRightarrow{}\mathcal{C}\,(L,F(-))\] \[\widehat{\lambda}\colon\Delta 1\underset{\mathrm{lax}^{n}}{\Longrightarrow} \mathcal{C}\,(L,(F\circ\mathcal{G}(W))\,(=))\] for \(F\colon\mathcal{A}\rightarrow\mathcal{C}\) and \(W\colon\mathcal{A}\rightarrow\mathcal{C}\mathcal{A}\mathcal{T}\) two \(2\)-functors with \(\mathcal{A}\) small, the correspondence is given by the following equations, for every \((f,\alpha)\colon(A,X)\rightarrow(B,X^{\prime})\) in \(\int^{\mathrm{op}}\!W\): \[\widehat{\lambda}_{(A,X)}=\lambda_{A}(X)\quad\text{ and }\quad\widehat{ \lambda}_{(f,\alpha)}=\lambda_{B}(\alpha). \tag{7}\] **Proposition 2.20**.: _A weighted \(2\)-limit is preserved or reflected precisely when its associated lax normal conical \(2\)-limit is so._ Proof.: Clear after Remark 2.19. **Remark 2.21**.: As weighted \(2\)-colimits in \(\mathcal{C}\) are just weighted \(2\)-limits in \(\mathcal{C}^{\mathrm{op}}\), we automatically obtain from Theorem 2.18 the reduction of weighted \(2\)-colimits in \(\mathcal{C}\) to lax normal conical ones. More precisely, given \(2\)-functors \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\) with \(\mathcal{A}\) small, we obtain that \[\operatorname{colim}^{W}F\cong\operatorname{lax}^{\mathrm{n}}\text{-} \operatorname{colim}^{\Delta 1}(F\circ\mathcal{G}\left(W\right)^{\mathrm{op}}),\] where \(\mathcal{G}\left(W\right)\colon\,\int^{\mathrm{op}}\!W\to\mathcal{A}^{\mathrm{ op}}\). But notice that, when \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\), there is a more natural Grothendieck construction we can do on \(W\), i.e. 
the one which produces the projection on the first component \(\mathcal{G}\left(W\right)\colon\,\int\!W\to\mathcal{A}\), with \(\int\!W\) defined as follows: _an object of \(\int\!W\)_ is a pair \((A,X)\) with \(A\in\mathcal{A}\) and \(X\in W(A)\); _a morphism \((A,X)\to(B,X^{\prime})\) in \(\int\!W\)_ is a pair \((f,\alpha)\) with \(f\colon A\to B\) a morphism in \(\mathcal{A}\) and \(\alpha\colon X\to W(f)(X^{\prime})\) a morphism in \(W(A)\); _a \(2\)-cell \((f,\alpha)\Rightarrow(g,\beta)\colon(A,X)\to(B,X^{\prime})\) in \(\int\!W\)_ is a \(2\)-cell \(\delta\colon f\Rightarrow g\) in \(\mathcal{A}\) such that \(W(\delta)_{X^{\prime}}\circ\alpha=\beta\); _the compositions and identities_ are analogous to the ones described in Construction 2.5. In dimension \(1\), given \(Z\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{S}et\), we have that \(\left(\int^{\mathrm{op}}\!Z\right)^{\mathrm{op}}\) and \(\int\!Z\) coincide, but this is not true in dimension \(2\). We can obtain a formula reducing the weighted \(2\)-colimits to a suitable kind of conical ones that uses this more natural Grothendieck construction, changing "lax" into "oplax". Our idea, which does not seem to appear in the literature, is that it is more natural to reduce weighted \(2\)-limits to lax normal conical ones, but weighted \(2\)-colimits to _oplax normal conical_ ones. This leads to Theorem 2.24 and Theorem 2.25, which are original. **Definition 2.22**.: Let \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{CAT}\) be a \(2\)-functor with \(\mathcal{A}\) small, and consider \(2\)-functors \(M,N\colon\,\left(\int\!W\right)^{\mathrm{op}}\to\mathcal{D}\). 
An _oplax normal natural transformation \(\alpha\) from \(M\) to \(N\)_, denoted \(\alpha\colon M\xrightarrow[\operatorname{\mathrm{oplax}}^{\mathrm{n}}]{}N\), is an oplax natural transformation \(\alpha\) from \(M\) to \(N\) such that the structure \(2\)-cell on every morphism \[\left(f,\operatorname{id}_{W(f)(X)}\right)\colon(A,W(f)(X))\leftarrow(B,X)\] in \(\left(\int\!W\right)^{\mathrm{op}}\) is the identity. **Definition 2.23**.: Let \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}\mathcal{T}\) be a \(2\)-functor with \(\mathcal{A}\) small, and let \(F\colon\,\int\!W\to\mathcal{C}\) be a \(2\)-functor. Notice that \(\int\!W\) is small, since \(\mathcal{A}\) is small. The _oplax normal conical \(2\)-colimit of \(F\)_, denoted as \(\operatorname{oplax}^{\mathrm{n}}\text{-}\operatorname{colim}^{\Delta 1}F\), is (if it exists) an object \(C\in\mathcal{C}\) together with an isomorphism of categories \[\mathcal{C}\left(C,U\right)\cong\left[\left(\int\!W\right)^{\mathrm{op}}, \mathcal{C}\mathcal{A}\mathcal{T}\right]_{\operatorname{oplax}^{\mathrm{n}}} \left(\Delta 1,\,\mathcal{C}\left(F(-),U\right)\right)\] \(2\)-natural in \(U\in\mathcal{C}\), where \(\left[\left(\int\!W\right)^{\mathrm{op}},\mathcal{C}\mathcal{A}\mathcal{T} \right]_{\operatorname{oplax}^{\mathrm{n}}}\) is the \(2\)-category of \(2\)-functors, oplax normal natural transformations and modifications from \(\left(\int\!W\right)^{\mathrm{op}}\) to \(\mathcal{C}\mathcal{A}\mathcal{T}\). (Notice that indeed oplax normal natural transformations compose well vertically.) When \(\operatorname{oplax}^{\mathrm{n}}\text{-}\operatorname{colim}^{\Delta 1}F\) exists, taking \(U=C\) and considering the identity on \(C\) gives us in particular an oplax normal natural transformation \[\mu\colon\Delta 1\xrightarrow[\operatorname{\mathrm{oplax}}^{\mathrm{n}}]{} \mathcal{C}\left(F(-),C\right),\] called the _universal oplax normal cocone_. 
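The category \(\int\!W\) used in these definitions degenerates in dimension \(1\): for a presheaf valued in sets, the components \(\alpha\) of the morphisms are forced to be identities, and \(\int\!W\) becomes the classical category of elements. A minimal computational sketch of this dimension-\(1\) case, with all data hypothetical illustrations rather than anything from the text:

```python
# Category of elements of a presheaf W: A^op -> Set (dimension-1 case of the
# construction above).  The category A and the presheaf are given by finite
# dictionaries; all names here are hypothetical illustration data.

# A: two objects a, b and one non-identity arrow u: a -> b.
objects = ["a", "b"]
arrows = {  # name: (source, target)
    "id_a": ("a", "a"),
    "id_b": ("b", "b"),
    "u": ("a", "b"),
}

# Presheaf W: A^op -> Set, so W(u): W(b) -> W(a).
W_obj = {"a": {"x"}, "b": {"0", "1"}}
W_arr = {
    "id_a": {"x": "x"},
    "id_b": {"0": "0", "1": "1"},
    "u": {"0": "x", "1": "x"},  # W(u) sends both elements of W(b) to x.
}

# Objects: pairs (A, X) with X in W(A).
el_objects = [(A, X) for A in objects for X in sorted(W_obj[A])]

# Morphisms (A, X) -> (B, X2): arrows f: A -> B with W(f)(X2) = X
# (in Set the component alpha of the general definition must be an identity).
el_morphisms = [
    ((A, X), f, (B, X2))
    for f, (A, B) in arrows.items()
    for X in sorted(W_obj[A])
    for X2 in sorted(W_obj[B])
    if W_arr[f][X2] == X
]
```

Running this on the toy data enumerates three objects and five morphisms, including the two non-identity morphisms over `u`.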
**Theorem 2.24**.: _Oplax normal conical \(2\)-colimits are particular weighted \(2\)-colimits. More precisely, given \(2\)-functors \(Z\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{CAT}\) and \(F\colon\int\!Z\to\mathcal{C}\) with \(\mathcal{A}\) small, the weight that realizes \(\mathrm{oplax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}F\) is_ \[\begin{array}{ccc}\mathrm{W}^{\mathrm{oplax}^{\mathrm{n}}}:&\left(\int\!Z\right)^{\mathrm{op}}&\longrightarrow&\mathcal{CAT}\\ &(B,X^{\prime})&\longmapsto&X^{\prime}\big{/}Z(B)\end{array}\] _sending a morphism \((g,\beta)\colon(B,X^{\prime})\to(C,X^{\prime\prime})\) of \(\int\!Z\) to the functor \(Z(g)(-)\circ\beta\colon X^{\prime\prime}\big{/}Z(C)\to X^{\prime}\big{/}Z(B)\), whose action on morphisms is given by applying \(Z(g)\)._ Proof.: The proof is analogous to the one of Theorem 2.14. **Theorem 2.25**.: _All the weighted \(2\)-colimits can be reduced to oplax normal conical ones. 
More precisely, given \(2\)-functors \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{CAT}\) with \(\mathcal{A}\) small, we have that_ \[\mathrm{colim}^{W}F\cong\mathrm{oplax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(F\circ\mathcal{G}(W))\] _either side existing if the other does, where \(\mathcal{G}(W)\colon\int\!W\to\mathcal{A}\) is the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction of \(W\)._ Proof.: Exactly as in the proof of Theorem 2.18, we can prove that every \(2\)-presheaf \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{CAT}\) with \(\mathcal{A}\) small can be expressed as \[W\cong\mathrm{colim}^{W}\mathrm{y}\cong\mathrm{oplax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(W))\cong\mathrm{colim}^{\mathrm{W}^{\mathrm{oplax}^{\mathrm{n}}}}(\mathrm{y}\circ\mathcal{G}(W)),\] where \(\mathrm{y}\colon\mathcal{A}\to[\mathcal{A}^{\mathrm{op}},\,\mathcal{CAT}]\) is the \(2\)-Yoneda embedding, and the last isomorphism is given by Theorem 2.24. Using Lemma 2.3, we obtain \[\mathrm{colim}^{W}F\cong\mathrm{colim}^{\mathrm{W}^{\mathrm{oplax}^{\mathrm{n}}}}\left(\mathrm{colim}^{(\mathrm{y}\circ\mathcal{G}(W))(-)}F\right)\cong\mathrm{colim}^{\mathrm{W}^{\mathrm{oplax}^{\mathrm{n}}}}(F\circ\mathcal{G}(W)),\] either side existing if the other does, and we conclude by Theorem 2.24. **Remark 2.26**.: Exactly as in Remark 2.19, in the notation of Theorem 2.25, we can calculate the correspondence between the universal cocylinder of \(\mathrm{colim}^{W}F\) and the universal oplax normal cocone of \(\mathrm{oplax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(F\circ\mathcal{G}(W))\). We find that it is the same as the one in equation (7). 
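In dimension \(1\), the reduction of Theorem 2.25 recovers the classical formula \(\operatorname{colim}^{W}F\cong\bigl(\coprod_{A}W(A)\times F(A)\bigr)/\!\sim\), the quotient by the relations \((W(f)(y),x)\sim(y,F(f)(x))\), computed over the category of elements. A minimal sketch with hypothetical finite data (a small union-find forms the quotient):

```python
# Dimension-1 shadow of the reduction: for W: A^op -> Set and F: A -> Set,
# colim^W F is the quotient of the disjoint union of W(A) x F(A) by the
# relations (W(f)(y), x) ~ (y, F(f)(x)).  All data below are hypothetical.

def find(parent, v):
    # Union-find with path halving.
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union(parent, u, v):
    parent[find(parent, u)] = find(parent, v)

# A is the arrow category a --u--> b.
W_obj = {"a": {"p"}, "b": {"q", "r"}}        # W: A^op -> Set
W_u = {"q": "p", "r": "p"}                   # W(u): W(b) -> W(a)
F_obj = {"a": {0, 1}, "b": {"s"}}            # F: A -> Set
F_u = {0: "s", 1: "s"}                       # F(u): F(a) -> F(b)

# Disjoint union of W(A) x F(A), tagged by the object of A.
elements = [("a", y, x) for y in W_obj["a"] for x in F_obj["a"]] + \
           [("b", y, x) for y in W_obj["b"] for x in F_obj["b"]]
parent = {e: e for e in elements}

# Glue along u: (W(u)(y), x) ~ (y, F(u)(x)) for y in W(b), x in F(a).
for y in W_obj["b"]:
    for x in F_obj["a"]:
        union(parent, ("a", W_u[y], x), ("b", y, F_u[x]))

colim = {find(parent, e) for e in elements}  # one representative per class
```

On this toy input every element ends up glued to every other, so the weighted colimit is a single point.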
**Proposition 2.27**.: _A weighted \(2\)-colimit is preserved or reflected precisely when its associated oplax normal conical \(2\)-colimit is so._ Proof.: Clear after Remark 2.26. **Example 2.28**.: By the proof of Theorem 2.25, every \(2\)-presheaf \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{CAT}\) with \(\mathcal{A}\) small can be expressed as \[W\cong\mathrm{oplax}^{\mathrm{n}}\operatorname{-colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(W)).\] The universal oplax normal cocone is obtained from the universal cocylinder via the formulas in equation (7). In particular, taking \(\mathcal{A}=\mathit{1}\), we have that a \(2\)-presheaf on \(\mathcal{A}\) is just a small category \(\mathcal{D}\), and its \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction just gives the functor to the terminal \(\mathcal{D}\to\mathit{1}\). We then obtain that every small category \(\mathcal{D}\) is the oplax normal conical \(2\)-colimit of \(\Delta 1\colon\mathcal{D}\to\mathcal{CAT}\), with oplax normal cocone whose component on an object \(X\in\mathcal{D}\) is the functor \(\mathit{1}\to\mathcal{D}\) that picks out \(X\). **Remark 2.29**.: Example 2.28 shows that \(\mathit{1}\) is "oplax normal conical dense" in \(\mathcal{CAT}\). And this may also serve as another motivation for the (op)lax normal conical \(2\)-(co)limits. Indeed, this is sufficient to express all the (co)powers as (op)lax normal conical \(2\)-(co)limits. And when \(\mathcal{C}\) is a (co)complete \(2\)-category, every \(2\)-(co)limit in \(\mathcal{C}\) can be written in terms of (co)powers and conical \(2\)-(co)limits. Remember also that by Example 2.12 every conical \(2\)-(co)limit is an (op)lax normal conical one. 
We can now understand how the (op)lax normal conical (co)limits result from the idea (anticipated at the beginning of this section) of considering the \(2\)-cells between \(\mathit{1}\) and a category \(\mathcal{D}\) in order to capture the whole of \(\mathcal{D}\), rather than considering the functors from \(\mathit{2}\) (which instead leads to the weighted \(2\)-limits). ## 3. The \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction In this section, we explore in detail the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction (Definition 2.6), which first appeared in Street's [15], from a more abstract point of view that is particularly relevant to (though not only to) higher-dimensional elementary topos theory. This leads to the second main result of this paper (Theorem 3.11), which expresses the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction as a _lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)_, as defined here in Definition 3.8, with a universal property that refines both the existing ones of Gray's [6] (unraveled by Kelly in [10]) and Lambert's [12]. We explain how such laxness is needed to capture the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction (and the usual Grothendieck construction as well), and inscribe it in an original idea of \(2\)-\(\mathcal{V}\)-enrichment, whence the name we gave to the construction under study. The lax \(3\)-category \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) of \(2\)-categories, \(2\)-functors, lax natural transformations and modifications is thus conceived as the archetypal \(3\)-dimensional elementary topos, suggesting the use of _lax comma objects in a lax \(3\)-category_ (Definition 3.8) to regulate the classification process in a would-be _elementary \(3\)-topos_. 
Compare this with Proposition 1.2, which, following Weber's [16], presents \(\mathcal{CAT}\) as the archetypal elementary \(2\)-topos, with the construction of the category of elements as classification process, and suggests regulating the classification by comma objects in dimension \(2\). What gets classified in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) are the _discrete \(2\)-fibrations_ with small fibres (a generalized version of the usual Grothendieck fibrations) introduced by Lambert in [12], which are a locally discrete version of Hermida's \(2\)-fibrations (defined in [7]). Insisting on a \(2\)-\(\mathcal{S}et\)-enrichment point of view, we will call them _\(2\)-\(\mathcal{S}et\)-opfibrations_. We think that the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction should be taken into consideration towards the development of a general enriched Grothendieck construction (one that considers general enriched fibrations). This will be explored in future work. **Remark 3.1**.: The guiding idea of this section is to categorify Proposition 1.2, presenting the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction as the archetypal \(3\)-dimensional classification process (in the sense of a would-be tridimensional elementary topos theory). Analogously to the passage from pullbacks, which regulate the classification in dimension \(1\), to comma objects, in dimension \(2\), we expect to find now a further "comma version" of the comma object that is suitable to a tridimensional setting. The first instance of this process is given by Proposition 3.2, which exhibits the strictest filled square, analogous to the one of Proposition 1.2, that we can have in the setting of the \(2\)-\(\mathcal{S}et\)-enriched (but also just the usual) Grothendieck construction. 
**Proposition 3.2**.: _Let \(F\colon\mathcal{B}\to\mathcal{CAT}\) be a \(2\)-functor and consider its \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction. There is a lax normal natural transformation \(\lambda\) of the form_ Proof.: Given \((A,X)\in\int^{\mathrm{op}}\!F\), we define the component of \(\lambda\) on \((A,X)\) to be the functor \(\mathit{1}\to F(A)\) corresponding to \(X\in F(A)\). Given now a morphism \((f,\alpha)\colon(A,X)\to(B,X^{\prime})\) in \(\int^{\mathrm{op}}\!F\), we define the structure \(2\)-cell of \(\lambda\) on \((f,\alpha)\) to be the natural transformation corresponding to \(\alpha\). It is straightforward to see that this is indeed a lax normal natural transformation, by construction of \(\int^{\mathrm{op}}\!F\). **Remark 3.3**.: Proposition 3.2 forces us to move out of \(2\)-\(\mathcal{CAT}\) in order to capture the \(2\)-\(\mathcal{S}et\)-enriched (but also just the usual) Grothendieck construction from an abstract point of view. Indeed, we need to at least admit the lax natural transformations as \(2\)-cells. And if we wish to recover the Grothendieck construction of pseudofunctors or of general lax functors into \(\mathcal{CAT}\), we also need to admit the lax functors as \(1\)-cells of our ambient \(3\)-category. We will just consider strict \(2\)-functors for simplicity, to avoid at least the problems with whiskering, but we actually expect everything to hold for lax functors as well (although we have not yet investigated this in depth). We call _\(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)_ the lax \(3\)-category of \(2\)-categories, \(2\)-functors, lax natural transformations and modifications. In his paper [12], Lambert has indeed proved that this forms a lax \(3\)-category, that is, a category enriched over the \(1\)-category of \(2\)-categories and normal lax functors. 
Be careful that _\(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)_ has no underlying \(2\)-category, since the interchange rule now only holds in a lax version, in the sense that we now have a modification between the two possible lax natural transformations. Indeed, consider two lax natural transformations \(\alpha\colon F\underset{\mathrm{lax}}{\Longrightarrow}G\colon\mathcal{A}\to\mathcal{B}\) and \(\beta\colon H\underset{\mathrm{lax}}{\Longrightarrow}K\colon\mathcal{B}\to\mathcal{C}\). Then for every \(A\in\mathcal{A}\), the component \(\alpha_{A}\) is a morphism \(F(A)\to G(A)\) in \(\mathcal{B}\) and we can consider the structure \(2\)-cell of \(\beta\) on such morphism. We obtain that \(\beta_{\alpha_{A}}\) is a \(2\)-cell in \(\mathcal{C}\) filling the naturality square of \(\beta\) at \(\alpha_{A}\). And the \(\beta_{\alpha_{A}}\)'s collect into a modification \(\beta_{\alpha}\), since the axiom we should check on a morphism \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) is given by the \(2\)-dimensional property of \(\beta\) being a lax natural transformation, applied to the \(2\)-cell \(\alpha_{f}\) in \(\mathcal{B}\). **Remark 3.4**.: Remark 3.1 and Proposition 3.2 lead us to use the concept of _lax comma object_. This concept appears and is heavily used in Gray's book [6], but without a complete universal property suited to the lax \(3\)-categorical setting of \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\). Kelly just unraveled in [10] the partial universal property presented by Gray, while Lambert attempted in [12] to give a better one, but missed the uniqueness result in the \(2\)-dimensional part and gave only a partial \(3\)-dimensional part. We give in Definition 3.8 (see also Proposition 3.9) the complete universal property of the lax comma object, which refines both the ones of Gray (and Kelly) and of Lambert. In order to distinguish the explicit definition (given in Gray's [6]) from the complete universal property of the lax comma object, we will call the former "_lax comma_" and the latter "_lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)". However, we will use the same symbol for both, justified by Proposition 3.9. 
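Before the formal definition below, it may help to recall that in dimension \(1\) the comma category of two functors can be enumerated directly for finite data. A minimal sketch for monotone maps between finite posets viewed as categories (all data hypothetical; in a poset the \(2\)-cell compatibility conditions are automatic):

```python
# Comma category F / G in dimension 1, for monotone maps between finite
# posets viewed as categories (hypothetical illustration data).

A = {0, 1}              # poset 0 <= 1
B = {0, 1}              # poset 0 <= 1
# Target poset C = {0, 1, 2} with the usual order.
F = {0: 0, 1: 2}        # monotone F: A -> C
G = {0: 1, 1: 2}        # monotone G: B -> C

# Objects of F / G: pairs (a, b) such that a morphism F(a) -> G(b) exists
# in C, i.e. F(a) <= G(b).  (In a poset the morphism h is unique.)
comma_objects = [(a, b) for a in sorted(A) for b in sorted(B) if F[a] <= G[b]]

# A morphism (a, b) -> (a2, b2) is a pair of relations a <= a2 and b <= b2;
# the square they form commutes automatically in a poset.
comma_morphisms = [
    ((a, b), (a2, b2))
    for (a, b) in comma_objects
    for (a2, b2) in comma_objects
    if a <= a2 and b <= b2
]
```

With this toy data the comma category has three objects and six morphisms (three of them identities).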
**Definition 3.5**.: Let \(F\colon\mathcal{A}\to\mathcal{C}\) and \(G\colon\mathcal{B}\to\mathcal{C}\) be \(2\)-functors. The _lax comma from \(F\) to \(G\)_ is the \(2\)-category \(F\mathbin{/\!\!/}G\) given by the following data: _an object of \(F\mathbin{/\!\!/}G\)_ is a triple \((A,B,h)\) with \(A\in\mathcal{A}\), \(B\in\mathcal{B}\) and \(h\colon F(A)\to G(B)\) a morphism in \(\mathcal{C}\); _a \(1\)-cell \((A,B,h)\to(A^{\prime},B^{\prime},h^{\prime})\) in \(F\mathbin{/\!\!/}G\)_ is a triple \((f,g,\varphi)\) with \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\), \(g\colon B\to B^{\prime}\) in \(\mathcal{B}\) and \(\varphi\) a \(2\)-cell in \(\mathcal{C}\) filling the square formed by \(h\), \(h^{\prime}\), \(F(f)\) and \(G(g)\); _a \(2\)-cell \((f,g,\varphi)\Rightarrow(f^{\prime},g^{\prime},\varphi^{\prime})\colon(A,B,h)\to(A^{\prime},B^{\prime},h^{\prime})\)_ is a pair \((\alpha,\beta)\) with \(\alpha\colon f\Rightarrow f^{\prime}\) in \(\mathcal{A}\) and \(\beta\colon g\Rightarrow g^{\prime}\) in \(\mathcal{B}\) such that the two evident pastings of \(\varphi\) with \(G(\beta)\) and of \(\varphi^{\prime}\) with \(F(\alpha)\) coincide; _the composition_ of \(1\)-cells is given by pasting and that of \(2\)-cells is inherited from the ones in \(\mathcal{A}\) and \(\mathcal{B}\). The _oplax comma from \(F\) to \(G\)_ is the co-dual of the lax comma from \(F^{\mathrm{co}}\) to \(G^{\mathrm{co}}\). The following proposition shows the partial universal property of the lax comma object presented by Gray in [6]. **Proposition 3.6**.: _Let \(F\colon\mathcal{A}\to\mathcal{C}\) and \(G\colon\mathcal{B}\to\mathcal{C}\) be \(2\)-functors. The lax comma from \(F\) to \(G\) is equivalently given by the enriched conical limit in \(2\)-CAT of the diagram_ \[\mathcal{A}\xrightarrow{\;F\;}\mathcal{C}\longleftarrow\mathcal{C}^{\mathbf{2}}_{\mathrm{oplax}}\longrightarrow\mathcal{C}\xleftarrow{\;G\;}\mathcal{B}\] _where \(\mathcal{C}^{\mathbf{2}}_{\mathrm{oplax}}\) is the lax comma from \(\operatorname{Id}_{\mathcal{C}}\) to \(\operatorname{Id}_{\mathcal{C}}\) (in some sense, the fundamental one), mapping to the two copies of \(\mathcal{C}\) via its projections._ **Remark 3.7**.: But there is a better universal property that the lax comma satisfies. Indeed, it is a _lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)_, as defined here below. 
**Definition 3.8**.: Let \(\mathcal{Q}\) be a lax \(3\)-category and consider \(1\)-cells \(F\colon\mathcal{A}\to\mathcal{C}\) and \(G\colon\mathcal{B}\to\mathcal{C}\) in \(\mathcal{Q}\). The _lax comma object in \(\mathcal{Q}\) from \(F\) to \(G\)_ is, if it exists, an object \(F\mathbin{/\!\!/}G\) together with projections \(\partial_{0}\colon F\mathbin{/\!\!/}G\to\mathcal{A}\) and \(\partial_{1}\colon F\mathbin{/\!\!/}G\to\mathcal{B}\) and a universal \(2\)-cell \(\lambda\), whose universal property has a \(1\)-dimensional part \((i)\), a \(2\)-dimensional part \((ii)\) and a \(3\)-dimensional part \((iii)\); in the \(3\)-dimensional part, notice that we are precisely asking that \(\Xi\) corresponds to the \(3\)-cell given by the lax interchange rule in \(\mathcal{Q}\) of \(\nu\) and \(\lambda\). Proposition 3.9 then states that the lax comma \(F\mathbin{/\!\!/}G\) of Definition 3.5 is a lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\); we sketch the \(2\)- and \(3\)-dimensional parts of its proof. For \((ii)\), let \(m\) be a morphism in \(\mathcal{M}\). The component \(\nu_{m}\), being a \(2\)-cell in \(F\mathbin{/\!\!/}G\), is determined by its projections through \(\partial_{0}\) and \(\partial_{1}\). So we are forced to define \(\nu_{m}\) to be the \(2\)-cell \((\Gamma_{m},\Delta_{m})\) in \(F\mathbin{/\!\!/}G\). This is indeed a \(2\)-cell since \(\Xi\) is a modification. It is straightforward to check that \(\nu\) is a lax natural transformation, since \(\Gamma\) and \(\Delta\) are so. And we immediately see that if both \(\Gamma\) and \(\Delta\) are strict \(2\)-natural (resp. pseudo-natural) then \(\nu\) is so as well. The observation that \(\nu\) is then the unique lax natural transformation \(V\underset{\mathrm{lax}}{\Longrightarrow}W\) such that the modification corresponding to the lax interchange rule in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) of \(\nu\) and \(\lambda\) coincides with \(\Xi\) follows from Remark 3.3. For \((iii)\), let \(M\in\mathcal{M}\). Since the component \(\Theta_{M}\) will be a \(2\)-cell in \(F\mathbin{/\!\!/}G\), it is determined by its projections through \(\partial_{0}\) and \(\partial_{1}\). 
So we need to define \[\Theta_{M}\coloneqq(\Phi_{M},\Psi_{M}).\] That this is indeed a \(2\)-cell \(\nu_{M}\Rightarrow\omega_{M}\) in \(F\mathbin{/\!\!/}G\) is guaranteed by equation (8), taking components on \(M\). The condition that the \(\Theta_{M}\)'s need to satisfy in order for them to collect into a modification \(\Theta\) is then an equality between \(2\)-cells in \(F\mathbin{/\!\!/}G\), and thus it suffices to check its projections through \(\partial_{0}\) and \(\partial_{1}\). But those two resulting conditions are given by the fact that both \(\Phi\) and \(\Psi\) are modifications. **Remark 3.10**.: Notice from Definition 3.8 that the lax comma object in a lax \(3\)-category really is an upgrade of the comma object to a lax \(3\)-dimensional setting. Indeed, a lax comma object in a \(2\)-category is precisely a comma object, since any \(\Xi\) of Definition 3.8 is then forced to be the identity, and the tridimensional part becomes trivial. Interestingly, the uniqueness in the \(2\)-dimensional part of the universal property of the lax comma object in a lax \(3\)-category is obtained by considering the lax interchange rule. The universal property of Proposition 3.6 is obtained precisely by restricting ourselves to considering as \(\Gamma\) and \(\Delta\) only strict \(2\)-natural transformations. **Theorem 3.11**.: _Let \(F\colon\mathcal{B}\to\mathcal{CAT}\) be a \(2\)-functor. 
The \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction, defined explicitly in Construction 2.5, is equivalently given by the lax comma object_ (9) _in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\), and such lax comma object is presented by the lax normal natural transformation \(\lambda\) of Proposition 3.2._ _As a consequence, it is then also given by the strict \(3\)-pullback in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) between \(F\) and the replacement \(\tau\) of \(1\colon\mathit{1}\to\mathcal{CAT}\) obtained by taking the lax comma object of \(1\colon\mathit{1}\to\mathcal{CAT}\) along the identity of \(\mathcal{CAT}\) (which is a lax \(3\)-dimensional version of the lax limit of the arrow \(1\colon\mathit{1}\to\mathcal{CAT}\)):_ _The domain of \(\tau\) is a lax pointed version of \(\mathcal{CAT}\), whence the notation \(\mathcal{CAT}_{\bullet,\mathrm{lax}}\)._ Proof.: The proof is a straightforward calculation. The fact that \(\mathcal{G}\left(F\right)\) is then also the strict \(3\)-pullback of \(\tau\) is readily checked by showing that such strict \(3\)-pullback satisfies the universal property of the lax comma object \(\mathit{1}\mathbin{/\!\!/}F\) in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) that we have presented in Definition 3.8, using the universal properties of \(\mathit{1}\mathbin{/\!\!/}\operatorname{Id}_{\mathcal{CAT}}\) and of the strict \(3\)-pullback, together with some basics of the calculus of pasting. **Remark 3.12**.: Before commenting on Theorem 3.11 (we will in Remark 3.14), we use it to canonically extend the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction by functoriality. 
**Proposition 3.13**.: _By Theorem 3.11, the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction canonically extends to a \(2\)-functor_ \[\mathcal{G}\left(-\right)\colon\left[\mathcal{B},\mathcal{CAT}\right]\to 2\text{-}\mathcal{CAT}/\mathcal{B}\] _and to one with domain \(\left[\mathcal{B},\mathcal{CAT}\right]_{\mathrm{lax}}\) as well, for every \(2\)-category \(\mathcal{B}\)._ Proof.: Given a \(2\)-natural transformation \(\varphi\colon F\Rightarrow G\colon\mathcal{B}\to\mathcal{CAT}\) (a lax natural transformation would equally work), we define \(\mathcal{G}\left(\varphi\right)\) as the unique morphism \(\mathcal{G}\left(\varphi\right)\colon\int^{\mathrm{op}}\!F\to\int^{\mathrm{op}}\!G\) induced by the universal property of the lax comma object \(\int^{\mathrm{op}}\!G\) in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) applied to the lax natural transformation where \(\lambda^{F}\) is the lax natural transformation that presents \(\int^{\mathrm{op}}\!F\) as a lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\).
Explicitly, for every \(2\)-cell \(\delta\colon(f,\alpha)\Rightarrow(g,\beta)\colon(B,X)\to(C,X^{\prime})\) in \(\int^{\mathrm{op}}\!F\) \[\mathcal{G}\left(\varphi\right)\left(B,X\right)=(B,\varphi_{B}(X))\quad\text{and}\quad\mathcal{G}\left(\varphi\right)\left(f,\alpha\right)=(f,\varphi_{C}(\alpha))\quad\text{and}\quad\mathcal{G}\left(\varphi\right)\left(\delta\right)=\delta.\] Given a modification \(\Theta\colon\varphi\Rrightarrow\psi\colon F\Rightarrow G\colon\mathcal{B}\to\mathcal{CAT}\), we define \(\mathcal{G}\left(\Theta\right)\) as the unique \(2\)-natural transformation induced by the universal property of the lax comma object \(\int^{\mathrm{op}}\!G\) in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) applied, in the notation of Definition 3.8, to \(V=\mathcal{G}\left(\varphi\right)\), \(W=\mathcal{G}\left(\psi\right)\), \(\Gamma=\mathrm{id}\), \(\Delta=\mathrm{id}\) and \(\Xi\) given by the \(2\)-cell induced by \(\Theta\). Explicitly, the component of \(\mathcal{G}\left(\Theta\right)\) on an object \((B,X)\in\int^{\mathrm{op}}\!F\) is \[\mathcal{G}\left(\Theta\right)_{(B,X)}=\left(\mathrm{id}_{B},\Theta_{B,X}\right).\] It is straightforward to show that \(\mathcal{G}\left(-\right)\) is indeed a \(2\)-functor. **Remark 3.14**.: Theorem 3.11 shows in which sense the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction can be thought of as the archetypal \(3\)-dimensional classifier, in the sense of a would-be elementary \(3\)-topos theory. Weber proposed in [16] to convert, in the passage from dimension \(1\) to dimension \(2\), all the monomorphisms into discrete opfibrations. As we said in the introduction, keeping the classifier with the terminal as domain (so that it is the inclusion of a verum into generalized truth values) while changing the classification process to be regulated by comma objects is equally good, and actually preferable.
Now, in order to reach dimension \(3\), we propose to either upgrade the classification process into one regulated by lax comma objects (as defined here in Definition 3.8) or to consider pullbacks of the notion of fibration that the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction produces. We will describe such a notion of fibration in detail below, see Definition 3.23. It is interesting to notice that we had to move out of \(2\)-\(\mathcal{CAT}\) (see also Remark 3.3) in order to capture the laxness that permeates the Grothendieck construction (seen already from Construction 2.5, where we simultaneously constructed the lax normal conical \(2\)-limits and the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction). So the archetypal elementary \(3\)-topos seems to be \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\). This idea is also reinforced by the fact that Buckley, in his paper [3], continuing the work of Baković [1], found it necessary to consider trihomomorphisms \(F\colon\mathcal{B}^{\mathrm{coop}}\to 2\text{-}\mathcal{CAT}_{\mathrm{lax}}\) in order to capture non-split Hermida's \(2\)-fibrations (presented in Hermida's [7]) via a suitable Grothendieck construction. So what seems to work is the sequence \[\mathcal{SET}\quad\leadsto\quad\mathcal{CAT}\quad\leadsto\quad 2\text{-}\mathcal{CAT}_{\mathrm{lax}}\] We believe that such a sequence is best explained by what we call a \(2\)-\(\mathcal{V}\)_-enrichment_, given \(\mathcal{V}\) a nice enough monoidal category, so that we view the sequence as \[\mathcal{V}\quad\leadsto\quad\mathcal{V}\text{-}\mathcal{CAT}\quad\leadsto\quad 2\text{-}\mathcal{V}\text{-}\mathcal{CAT}\] with \(\mathcal{V}=\mathcal{SET}\).
Our idea is that enriching again over \(\mathcal{V}\text{-}\mathcal{CAT}\) should take into account the fact that \(\mathcal{V}\text{-}\mathcal{CAT}\) is a \(2\)-category, and should then be a _weak enrichment_ rather than an ordinary enrichment (which would only consider the underlying category of \(\mathcal{V}\text{-}\mathcal{CAT}\)). We would like to thank Francesco Dagnino for the interesting discussions that led to this idea of \(2\)-\(\mathcal{V}\)-enrichment. As described in more detail below, weakly enriching over \(\mathcal{SET}\text{-}\mathcal{CAT}=\mathcal{CAT}\) actually produces bicategories, lax functors, lax natural transformations and modifications, but we will restrict to consider \(2\)-categories and \(2\)-functors, for simplicity. Nevertheless, we will be able to capture the lax natural transformations in this way, and the \(2\)-\(\mathcal{V}\)-enrichment idea will guide us to propose a notion of colimit and of pointwise Kan extension in the still poorly studied context of \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\), in Section 4. We also notice that the further step in the sequence above could lead towards a version of Gray-categories, but we have not investigated this yet. We recall the following remark on the ordinary enrichment.
**Recall 3.15**.: If \((\mathcal{V},\otimes,I)\) is a monoidal category with coproducts such that \(-\otimes-\) preserves coproducts in each variable, we can define a \(\mathcal{V}\)-enriched category as a pair \((S,\mathcal{A})\) with \(S\) a set, that will be the set of objects, and \(\mathcal{A}\) a monoid in the monoidal category \(\left[S\times S,\mathcal{V}\right]\) of functors (actually given by mere functions) from \(S\times S\) to \(\mathcal{V}\) and natural transformations (actually given by the mere components), which we think of as the monoidal category of square matrices indexed by \(S\) with entries in \(\mathcal{V}\), with the tensor product given by matrix multiplication and the tensor unit given by the identity matrix (with \(I\) all over the main diagonal and the initial object elsewhere). The multiplication of the monoid \(\mathcal{A}\) indeed gives the composition of the enriched category, the unit gives the identities, and the monoid axioms precisely ask the composition to be associative and unital. We can then also define \(\mathcal{V}\)-enriched functors along these lines, using the following construction. Given a \(\mathcal{V}\)-enriched category \((T,\mathcal{B})\) and a function \(F\colon S\to T\), we can define a monoid \(F^{*}\mathcal{B}\) in \(\left[S\times S,\mathcal{V}\right]\), taking \[F^{*}\mathcal{B}=\Big(S\times S\xrightarrow{F\times F}T\times T\xrightarrow{\mathcal{B}}\mathcal{V}\Big)\] and defining the multiplication and the unit by whiskering those of \(\mathcal{B}\) with \(F\times F\) on the left. Indeed \(F^{*}\mathcal{B}\) is a monoid, by pasting calculations, since all the required axioms just involve cells of strictly higher levels than \(F\times F\).
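To make the matrix picture concrete, here is how the monoidal structure on \(\left[S\times S,\mathcal{V}\right]\) and the monoid structure on \(\mathcal{A}\) unpack; this is a spelled-out version of the description above, with an indexing convention (outer factor on the left) that is ours:

```latex
% Matrix product of M, N : S x S -> V (coproduct over the middle index):
(M \cdot N)(A,C) \;=\; \coprod_{B \in S} M(B,C) \otimes N(A,B).
% Tensor unit: the identity matrix
\mathbb{I}(A,B) \;=\;
  \begin{cases} I & \text{if } A = B,\\ 0 & \text{otherwise (the initial object).} \end{cases}
% A monoid structure m : A . A -> A and u : I_matrix -> A therefore has components
m_{A,C} \colon \coprod_{B \in S} \mathcal{A}(B,C) \otimes \mathcal{A}(A,B)
        \longrightarrow \mathcal{A}(A,C),
\qquad
u_{A} \colon I \longrightarrow \mathcal{A}(A,A),
% i.e. precisely the composition morphisms and the identities of a V-category.
```

Associativity and unitality of the monoid, checked componentwise, are exactly the enriched associativity and unit axioms.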
Given \((S,\mathcal{A})\) and \((T,\mathcal{B})\) two \(\mathcal{V}\)-enriched categories, a \(\mathcal{V}\)-enriched functor \((S,\mathcal{A})\to(T,\mathcal{B})\) can be defined as a pair \((F,\overline{F})\) with \(F\colon S\to T\) a function and \(\overline{F}\colon\mathcal{A}\to F^{*}\mathcal{B}\) a morphism between monoids in \(\left[S\times S,\mathcal{V}\right]\). **Remark 3.16**.: We now give a weak \(2\)-categorical generalization of Recall 3.15. The concept of _weak enrichment_ is explored in Garner and Shulman's [5], but in the terms presented below (along the lines of Recall 3.15) it does not seem to appear in the literature. We will use the concepts of pseudomonoid in a \(2\)-category and of lax morphism between pseudomonoids. A definition of these can be found in Vasilakopoulou's [14]. **Construction 3.17** (Weak enrichment).: Let \((\mathcal{K},\otimes,I,\alpha,\lambda,\rho)\) be a monoidal \(2\)-category, i.e. a \(2\)-category \(\mathcal{K}\) that is monoidal in the \(1\)-dimensional sense but such that the tensor product is a \(2\)-functor \(\mathcal{K}\times\mathcal{K}\to\mathcal{K}\). Assume moreover that \(\mathcal{K}\) has coproducts and that \(-\otimes-\) preserves them in each variable. Then, for every set \(S\), the \(2\)-category \([S\times S,\mathcal{K}]\) is \(2\)-monoidal as well, with tensor product given by matrix multiplication and tensor unit given by the identity matrix. Indeed, the matrix multiplication can be extended to a \(2\)-functor using the \(2\)-dimensional property of the (now enriched) coproducts, with the \(2\)-functoriality given by the fact that everything can be checked on components and that \(-\otimes-\colon\mathcal{K}\times\mathcal{K}\to\mathcal{K}\) is a \(2\)-functor.
We define a \(\mathcal{K}\)_-weakly enriched category_ as a pair \((S,\mathcal{A})\) with \(S\) a set, thought of as the set of objects, and \(\mathcal{A}\) a pseudomonoid in the monoidal \(2\)-category \([S\times S,\mathcal{K}]\) of square \(S\)-indexed matrices with entries in \(\mathcal{K}\) (whose \(1\)-cells are the \(2\)-natural transformations and whose \(2\)-cells are the modifications). Notice that a strict \(2\)-monoid in \([S\times S,\mathcal{K}]\) is the same thing as a monoid in the monoidal category \([S\times S,\mathcal{K}_{0}]\), and thus precisely an enriched category over \(\mathcal{K}_{0}\) with object set \(S\). Now, notice that if \((T,\mathcal{B})\) is a \(\mathcal{K}\)-weakly enriched category and \(F\colon S\to T\) is a function, then \[F^{*}\mathcal{B}=\Big(S\times S\xrightarrow{F\times F}T\times T\xrightarrow{\mathcal{B}}\mathcal{K}\Big)\] is a pseudomonoid in \([S\times S,\mathcal{K}]\), defining all the needed structure cells as those of \(\mathcal{B}\) whiskered with \(F\times F\) on the left. That \(F^{*}\mathcal{B}\) is a pseudomonoid holds by pasting calculations, since all the required axioms just involve cells of strictly higher levels than \(F\times F\). Given two \(\mathcal{K}\)-weakly enriched categories \((S,\mathcal{A})\) and \((T,\mathcal{B})\), we define a **\(\mathcal{K}\)-weakly enriched functor** \((S,\mathcal{A})\to(T,\mathcal{B})\) as a pair \((F,\overline{F})\) with \(F\colon S\to T\) a function and \(\overline{F}\colon\mathcal{A}\to F^{*}\mathcal{B}\) a lax morphism between pseudomonoids in \([S\times S,\mathcal{K}]\).
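As a sanity check of the definition (this worked instance is ours, but it matches the claim made above that weakly enriching over \(\mathcal{CAT}\) produces bicategories): take \(\mathcal{K}=\mathcal{CAT}\) with the cartesian product. A pseudomonoid \(\mathcal{A}\) in \([S\times S,\mathcal{CAT}]\) then packages exactly the data of a bicategory with object set \(S\):

```latex
% hom-categories, composition functors and identity objects:
\mathcal{A}(A,B) \in \mathcal{CAT},
\qquad
\mathrm{comp}_{A,B,C} \colon \mathcal{A}(B,C) \times \mathcal{A}(A,B)
  \longrightarrow \mathcal{A}(A,C),
\qquad
\mathrm{id}_{A} \colon 1 \longrightarrow \mathcal{A}(A,A).
% The invertible structure 2-cells of the pseudomonoid give associator and unitors:
a \colon \mathrm{comp}\circ(\mathrm{comp}\times 1)
  \;\cong\; \mathrm{comp}\circ(1\times\mathrm{comp}),
\qquad l,\, r \colon \text{unit constraints},
% and the pseudomonoid axioms are the pentagon and triangle identities.
```

A strict pseudomonoid (structure cells identities) recovers a \(2\)-category with object set \(S\), consistently with the remark on strict \(2\)-monoids above.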
Given now \((F,\overline{F}),(G,\overline{G})\colon(S,\mathcal{A})\to(T,\mathcal{B})\) two \(\mathcal{K}\)-weakly enriched functors, we define a **\(\mathcal{K}\)-weakly enriched natural transformation** \(\varphi\colon(F,\overline{F})\underset{\mathrm{lax}}{\Longrightarrow}(G,\overline{G})\) as a collection of \(1\)-cells \[\varphi_{A}\colon I\to\mathcal{B}\left(F(A),G(A)\right)\] in \(\mathcal{K}\) for every \(A\in S\) and \(2\)-cells in \(\mathcal{K}\) for every pair \((A,B)\in S\times S\) such that, for every \(A,B,C\in S\), with notations like \(\mathcal{A}_{A,B}\coloneqq\mathcal{A}\left(A,B\right)\) and \(\mathcal{B}^{F,G}_{A,B}\coloneqq\mathcal{B}\left(F(A),G(B)\right)\) and omitting the tensor product of objects, the pasting in \(\mathcal{K}\) is equal to the pasting and the pasting in \(\mathcal{K}\) is equal to the pasting \[\mathcal{A}_{BC}\left(\mathcal{A}_{AB}I\right)\xrightarrow{1\otimes\overline{G}\otimes\varphi_{A}}\mathcal{A}_{BC}\left(\mathcal{B}_{AB}^{GG}\mathcal{B}_{AA}^{FG}\right)\xrightarrow{1\otimes\mathrm{comp}}\mathcal{A}_{BC}\mathcal{B}_{AB}^{FG}\xrightarrow{\overline{G}\otimes 1}\mathcal{B}_{BC}^{GG}\mathcal{B}_{AB}^{FG}\xrightarrow{\mathrm{comp}}\mathcal{B}_{AC}^{FG}\] **Remark 3.22**.: We now proceed to describe the \(2\)-\(\mathcal{SET}\)_-opfibrations_ over a \(2\)-category \(\mathcal{B}\), which are the notion of opfibration produced by the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction, in the sense that they will precisely be (up to restricting to small fibres) what forms the essential image of the \(2\)-functor \(\mathcal{G}\left(-\right)\colon\left[\mathcal{B},\mathcal{CAT}\right]\to 2\text{-}\mathcal{CAT}/\mathcal{B}\) described in Proposition 3.13. They were first introduced by Lambert in [12] with the name "discrete \(2\)-fibrations" (up to co-op changes), justified by the fact that they are equivalently the locally discrete Hermida \(2\)-fibrations (for Hermida's \(2\)-fibrations, see [7]).
We believe, however, that the \(2\)-\(\mathcal{SET}\)_-fibrations_ are what fully realizes the concept of fibration in dimension \(2\), while Hermida's \(2\)-fibrations belong more to dimension \(3\). We shall look at Construction 2.5 (where we intuitively produced the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction) to investigate which lifting properties hold for \(\mathcal{G}\left(F\right)\) for some \(2\)-functor \(F\colon\mathcal{B}\to\mathcal{CAT}\), and which could characterize the functors that are isomorphic to some \(\mathcal{G}\left(F\right)\). Since, calling \(H_{0}\) the underlying functor of a \(2\)-functor \(H\), we have that \[\mathcal{G}\left(F\right)_{0}=\mathcal{G}\left(F_{0}\right),\] we obtain that \(\mathcal{G}\left(F\right)_{0}\) is a usual Grothendieck opfibration. In the notation of Construction 2.5, the cartesian liftings are given by the \(\underline{f}^{X}\). But we now also have liftings of the \(2\)-cells in \(\mathcal{B}\). Indeed, for every \(2\)-cell \(\delta\colon f\Rightarrow g\colon B\to B^{\prime}\) in \(\mathcal{B}\) and every \(X\in F(B)\), we have a \(2\)-cell in \(\int^{\operatorname{op}}\!F\) \[\underline{\delta}^{X}\colon(f,F(\delta)_{X})\Rightarrow(g,\operatorname{id})\colon(B,X)\to(B^{\prime},F(g)(X))\] such that \(\mathcal{G}\left(F\right)(\underline{\delta}^{X})=\delta\). As a consequence, for every morphism \((g,\beta)\) in \(\int^{\operatorname{op}}\!F\), the \(2\)-cell \[(\operatorname{id}_{B^{\prime}},\beta)\,\underline{\delta}^{X}\] in \(\int^{\operatorname{op}}\!F\) lifts \(\delta\) to the morphism \((g,\beta)\) that lives above the codomain \(g\) of \(\delta\). Recall from Construction 2.5 that all the \(2\)-cells in \(\int^{\operatorname{op}}\!F\) arise in this way. As we noticed there, having a \(2\)-cell \(\delta\) in \(\int^{\operatorname{op}}\!F\) as above is actually a property for \(\delta\).
Thus, for fixed \(\delta\colon f\Rightarrow g\) in \(\mathcal{B}\) and \((g,\beta)\) in \(\int^{\operatorname{op}}\!F\) living above the codomain \(g\) of \(\delta\), the lifting of \(\delta\) to \((g,\beta)\) is unique. And we conclude that \(\mathcal{G}\left(F\right)\), further than being an ordinary Grothendieck opfibration, is also locally a discrete fibration (see Definition 3.23). It is interesting to notice the change of direction from opfibration on objects to fibration locally. This is given by the fact that we are classifying from \(\mathit{1}\colon\mathit{1}\to\mathcal{CAT}\) using lax comma objects. We will see in Remark 3.26 that we have other classifiers as well, with all the co-op flavours, but we believe that this is the most natural one for what we want to call a \(2\)-\(\mathcal{SET}\)_-opfibration_. **Definition 3.23**.: Let \(\mathcal{B}\) be a \(2\)-category. A _\(2\)-\(\mathcal{SET}\)-opfibration over \(\mathcal{B}\)_ (or _discrete \(2\)-fibration_ in the language of Lambert's [12]) is a \(2\)-functor \(P\colon\mathcal{E}\to\mathcal{B}\) such that
1. the underlying functor \(P_{0}\) of \(P\) is an ordinary Grothendieck opfibration;
2. for every pair \(X,Y\in\mathcal{E}\), the functor \[P_{X,Y}\colon\mathcal{E}\left(X,Y\right)\to\mathcal{B}\left(P(X),P(Y)\right)\] is a discrete fibration.
We say that \(P\) is _split_ if \(P_{0}\) is so. **Remark 3.24**.: The following theorem is proved in Lambert's [12]. One can find there some examples of \(2\)-\(\mathcal{SET}\)-opfibrations as well. We will extend such a theorem to a complete \(2\)-equivalence between \([\mathcal{B},\mathcal{CAT}]\) with various laxness flavours on morphisms and corresponding \(2\)-categories of \(2\)-\(\mathcal{SET}\)-opfibrations in Section 4, after showing that the lax comma object square of Theorem 3.11 exhibits a _weak Kan extension_ in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\).
**Theorem 3.25**.: _Let \(\mathcal{B}\) be a \(2\)-category. The essential image of the \(2\)-functor_ \[\mathcal{G}(-)\colon\left[\mathcal{B},\mathcal{CAT}\right]\to 2\text{-}\mathcal{CAT}/\mathcal{B}\] _is given by the split \(2\)-\(\mathcal{SET}\)-opfibrations with small fibres._ **Remark 3.26**.: The following table shows the four co-op versions of the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction, with the corresponding notions of fibration. We have named the various notions of fibration from an enriched point of view, where the op always makes sense while the co does not.

## 4. A pointwise Kan extension result

In this section, we investigate a pointwise Kan extension result for the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction. The difficulty we encounter is that no notion of (pointwise) Kan extension in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) seems to appear in the literature. The pointwise version in particular is hard to establish, as it requires a concept of colimit "internal to \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)", i.e. in a \(2\)-\(\mathcal{SET}\)-category, that does not seem to have been introduced yet. We first give a definition of weak Kan extension in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) (actually, in any lax \(3\)-category), and prove the weak version of the pointwise Kan extension result we aim at establishing. This weak result will be enough to prove \(2\)-fully faithfulness results for the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction (in three laxness flavours) and complete Theorem 3.25 to \(2\)-equivalences between \(2\)-copresheaves (on a \(2\)-category) and \(2\)-categories of \(2\)-\(\mathcal{SET}\)-opfibrations. Then, we propose a notion of "internal colimit in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)", i.e.
of colimit in a \(2\)-\(\mathcal{SET}\)-enriched category, and originally define the concept of _pointwise left Kan extension in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) along a \(2\)-\(\mathcal{SET}\)-opfibration_ using it. We prove that the lax comma object square produced by the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction (Theorem 3.11) exhibits a _pointwise left Kan extension in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\)_. This is the third main result of this paper, presented in Theorem 4.17. In the last part of this section, we show the fourth main result of this paper, which is that a pointwise left Kan extension in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) (as defined here) is always a weak one as well. This result (presented in Proposition 4.22) will be based on an oplax\({}^{\mathrm{n}}\)-lax generalization of the parametrized Yoneda lemma that does not seem to appear in the literature. Such a lemma, proved in Theorem 4.20, also sheds more light on the concept of lax normal natural transformation (Definition 2.8). Indeed it shows how oplax normal naturality is the minimum amount of strictness that is required to expand the data on the identities to a lax natural transformation. **Definition 4.1**.: A \(2\)-cell \(\lambda\colon F\Rightarrow L\circ K\), with \(K\colon\mathcal{B}\to\mathcal{A}\), \(F\colon\mathcal{B}\to\mathcal{C}\) and \(L\colon\mathcal{A}\to\mathcal{C}\), in a lax \(3\)-category \(\mathcal{Q}\) (that is, a category enriched over the \(1\)-category of \(2\)-categories and lax functors) exhibits \(L\) as the _weak left Kan extension of \(F\) along \(K\)_, written \(L=\operatorname{lan}_{K}F\), if pasting with \(\lambda\) gives an isomorphism of categories \[\mathcal{Q}\left(\mathcal{A},\mathcal{C}\right)\left(L,U\right)\cong\mathcal{Q}\left(\mathcal{B},\mathcal{C}\right)\left(F,U\circ K\right) \tag{10}\] for every \(U\in\mathcal{Q}\left(\mathcal{A},\mathcal{C}\right)\) (the \(2\)-naturality in \(U\) is automatic).
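Before using Definition 4.1, it may help to recall the situation one dimension down; this is standard material, stated in our own words and not part of the paper's numbering. In \(\mathcal{CAT}\):

```latex
% A left Kan extension lan_K F is pointwise when it is computed by the
% colimit formula, for every object A of the target of K : B -> A,
(\operatorname{lan}_{K} F)(A)
  \;\cong\;
  \operatorname{colim}\Big( K \downarrow A
      \xrightarrow{\ \mathrm{pr}\ } \mathcal{B}
      \xrightarrow{\ F\ } \mathcal{C} \Big),
% and every pointwise left Kan extension is in particular a weak one,
% i.e. it satisfies the hom-set isomorphism
\mathcal{C}^{\mathcal{A}}(\operatorname{lan}_{K}F,\,U)
  \;\cong\; \mathcal{C}^{\mathcal{B}}(F,\,U \circ K).
```

The two halves of this section upgrade exactly this picture to \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\): an internal-colimit definition of pointwise extension, and the implication pointwise \(\Rightarrow\) weak.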
**Remark 4.2**.: Lambert showed in [12] that \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) is a lax \(3\)-category with hom-\(2\)-categories \[2\text{-}\mathcal{CAT}_{\mathrm{lax}}\left(\mathcal{A},\mathcal{C}\right)\coloneqq\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}\] where \(\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}\) is the \(2\)-category of \(2\)-functors from \(\mathcal{A}\) to \(\mathcal{C}\), lax natural transformations and modifications. So, in the notation of Definition 4.1, the isomorphism of categories of equation (10) becomes, for every \(U\in\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}\), \[\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}\left(L,U\right)\cong\left[\mathcal{B},\mathcal{C}\right]_{\mathrm{lax}}\left(F,U\circ K\right).\] **Proposition 4.3**.: _Let \(F\colon\mathcal{A}\to\mathcal{CAT}\) be a \(2\)-functor and consider its \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction_ _Call \(\lambda^{F}\) the lax normal natural transformation that presents such a lax comma object in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\). Then, for every \(U\colon\mathcal{A}\to\mathcal{CAT}\), pasting with \(\lambda^{F}\) gives an isomorphism of categories_ \[\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{lax}}\left(F,U\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{lax}}\left(\Delta 1,U\circ\mathcal{G}\left(F\right)\right),\] _where \(\Delta 1\colon\int^{\mathrm{op}}\!F\to\mathcal{CAT}\) is the \(2\)-functor constant at \(1\).
Moreover, this restricts to isomorphisms_ \[\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{ps}}\left(F,U\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{sigma}}\left(\Delta 1,U\circ\mathcal{G}\left(F\right)\right)\] \[\left[\mathcal{A},\mathcal{CAT}\right]\left(F,U\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{lax}^{\mathrm{n}}}\left(\Delta 1,U\circ\mathcal{G}\left(F\right)\right),\] _where \(\mathrm{ps}\) means to restrict to pseudonatural transformations and \(\mathrm{sigma}\) means to restrict to sigma natural transformations, which were defined in Descotte, Dubuc and Szyld's paper [4] and are a pseudo version of the lax normal natural transformations (asking the special structure \(2\)-cells to be isomorphisms rather than identities)._ Proof.: The first isomorphism is proved in Bird's PhD thesis [2]. Looking at the explicit construction, it is straightforward to see that it restricts to the other two. Here, what works very well is that \(\lambda^{F}\) is a lax normal natural transformation, and thus always contributes with an identity in the special structure \(2\)-cells. **Remark 4.4**.: The third isomorphism of Proposition 4.3 offers a shorter but less intuitive and elementary proof of Theorem 2.18 (reduction of the weighted \(2\)-limits to lax normal conical ones). Indeed this is what Street did in [15]. **Theorem 4.5**.: _Let \(F\colon\mathcal{B}\to\mathcal{CAT}\) be a \(2\)-functor. Then the lax normal natural transformation \(\lambda^{F}\) of Proposition 4.3 exhibits_ \[F=\operatorname{lan}_{\mathcal{G}(F)}\Delta 1.\] Proof.: The proof is a combination of Remark 4.2 and Proposition 4.3.
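For intuition, the dimension-\(1\) shadow of Theorem 4.5 is the standard fact that a copresheaf is the left Kan extension of the constant terminal functor along its own category-of-elements projection; the computation below is ours and uses only the classical pointwise colimit formula:

```latex
% For F : B -> SET with category-of-elements projection pi_F : el(F) -> B,
F \;\cong\; \operatorname{lan}_{\pi_F} \Delta 1 ,
% since, for every B in B, the pointwise formula gives
(\operatorname{lan}_{\pi_F} \Delta 1)(B)
  \;\cong\; \operatorname{colim}_{\,\pi_F \downarrow B} \Delta 1
  \;\cong\; \pi_0\!\left(\pi_F \downarrow B\right)
  \;\cong\; F(B),
% the last isomorphism sending the component of ((B',X),\, u : B' \to B)
% to the element F(u)(X) of F(B).
```

Theorem 4.5 is the \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) version of this, with \(\lambda^{F}\) playing the role of the canonical cocone.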
**Remark 4.6**.: Exactly as can be done in dimension \(1\) (see Remark 1.5), we can use Theorem 4.5 (or actually Proposition 4.3) to prove the \(2\)-fully faithfulness of the \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction (in three laxness flavours) and complete Theorem 3.25 to \(2\)-equivalences between \(2\)-copresheaves and \(2\)-\(\mathcal{SET}\)-opfibrations. The fact that the first \(2\)-functor \(\mathcal{G}\left(-\right)\) of Theorem 4.7 is \(2\)-fully faithful is proved also in Bird's [2], but without mentioning that it comes from a weak Kan extension result. None of the three \(2\)-equivalence results of Theorem 4.7 seems to appear in the literature.

**Theorem 4.7**.: _Let \(\mathcal{A}\) be a \(2\)-category. The \(2\)-\(\mathcal{SET}\)-enriched Grothendieck construction \((\)extended to consider lax natural transformations as in the proof of Proposition 3.13\()\) produces a \(2\)-equivalence_ \[\mathcal{G}\left(-\right)\colon\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{lax}}\xrightarrow{\;\sim\;}2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}_{\mathrm{lax}}\left(\mathcal{A}\right)\] _onto the \(2\)-category of split \(2\)-\(\mathcal{SET}\)-opfibrations with small fibres over \(\mathcal{A}\), and restricts to analogous \(2\)-equivalences_ \[\mathcal{G}\left(-\right)\colon\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{ps}}\xrightarrow{\;\sim\;}2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}_{\mathrm{ps}}\left(\mathcal{A}\right)\qquad\text{and}\qquad\mathcal{G}\left(-\right)\colon\left[\mathcal{A},\mathcal{CAT}\right]\xrightarrow{\;\sim\;}2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}\left(\mathcal{A}\right)\] _on pseudonatural and on \(2\)-natural transformations, respectively._

Proof.: By Theorem 3.25, \(\mathcal{G}\left(-\right)\) is essentially surjective in each case. For the \(2\)-fully faithfulness, given \(F,G\colon\mathcal{A}\to\mathcal{CAT}\), combining Proposition 4.3 (applied with \(U=G\)) with the universal property of the lax comma object \(\int^{\mathrm{op}}\!G\) in \(2\)-\(\mathcal{CAT}_{\mathrm{lax}}\) gives \[\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{lax}}\left(F,G\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{lax}}\left(\Delta 1,G\circ\mathcal{G}\left(F\right)\right)\cong 2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}_{\mathrm{lax}}\left(\mathcal{A}\right)\left(\mathcal{G}\left(F\right),\mathcal{G}\left(G\right)\right).\] The composite isomorphism above then restricts to the following two: \[\left[\mathcal{A},\mathcal{CAT}\right]_{\mathrm{ps}}\left(F,G\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{sigma}}\left(\Delta 1,G\circ\mathcal{G}\left(F\right)\right)\cong 2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}_{\mathrm{ps}}\left(\mathcal{A}\right)\left(\mathcal{G}\left(F\right),\mathcal{G}\left(G\right)\right)\] \[\left[\mathcal{A},\mathcal{CAT}\right]\left(F,G\right)\cong\left[\int^{\mathrm{op}}\!F,\mathcal{CAT}\right]_{\mathrm{lax}^{\mathrm{n}}}\left(\Delta 1,G\circ\mathcal{G}\left(F\right)\right)\cong 2\text{-}\mathcal{SET}\text{-}\mathcal{OPFIB}\left(\mathcal{A}\right)\left(\mathcal{G}\left(F\right),\mathcal{G}\left(G\right)\right),\] which yields the pseudo and the strict \(2\)-equivalences.
\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E}\mathcal{E} \mathcal{E}\mathcal{E}\mathcal{E}\mathcal Recall that in Section 2 we showed that at least the oplax normal conical \(2\)-colimits are as expressive as the weighted \(2\)-colimits. But there is a change of perspective here. 
We prefer specifying a diagram, a weight and a marking rather than condensing the three to some modified diagram and weight. **Definition 4.10**.: Let \(M\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}T\) (the marking), \(F\colon\int\!M\to\mathcal{C}\) (the diagram) and \(W\colon\left(\int\!M\right)^{\mathrm{op}}\to\mathcal{C}\mathcal{A}T\) (the weight) be \(2\)-functors with \(\mathcal{A}\) small. The _oplax normal \(2\)-colimit of \(F\) marked by \(M\) and weighted by \(W\)_, denoted as \(\operatorname{oplax}^{\mathrm{n}}_{M}\)-\(\operatorname{colim}^{W}F\), is (if it exists) an object \(C\in\mathcal{C}\) together with an isomorphism of categories \[\mathcal{C}\left(C,U\right)\cong\left[\left(\int\!M\right)^{\mathrm{op}}, \mathcal{C}\mathcal{A}T\right]_{\operatorname{oplax}^{\mathrm{n}}}\left(W, \mathcal{C}\left(F(-),U\right)\right)\] \(2\)-natural in \(U\in\mathcal{C}\). When \(\operatorname{oplax}^{\mathrm{n}}_{M}\)-\(\operatorname{colim}^{W}F\) exists, taking \(U=C\) and considering the identity on \(C\) gives us in particular an oplax normal natural transformation \[\mu\colon W\xrightarrow[\operatorname{oplax}^{\mathrm{n}}]{}\mathcal{C} \left(F(-),C\right),\] called the _universal oplax normal cocylinder_. We will need to consider also the case in which the domain of \(F\) is expressed as \(\int^{\mathrm{op}}\!M\) for some \(2\)-functor \(M\colon\mathcal{A}\to\mathcal{C}\mathcal{A}T\), and \(W\colon\left(\int^{\mathrm{op}}\!M\right)^{\mathrm{op}}\to\mathcal{C}\mathcal{A}T\). 
The _oplax normal \(2\)-colimit of \(F\) opmarked by \(M\) and weighted by \(W\)_, denoted as \(\operatorname{oplax}^{\mathrm{op\text{-}n}}_{M}\text{-}\operatorname{colim}^{W}F\), is (if it exists) an object \(C\in\mathcal{C}\) together with an isomorphism of categories \[\mathcal{C}\left(C,U\right)\cong\left[\left(\int^{\mathrm{op}}\!M\right)^{\mathrm{op}},\mathcal{C}\mathcal{A}T\right]_{\operatorname{oplax}^{\mathrm{n}}}\left(W,\mathcal{C}\left(F(-),U\right)\right)\] \(2\)-natural in \(U\in\mathcal{C}\). **Definition 4.11**.: Recall from Remark 2.11 what we can now call _the trivial marking_ and denote as triv. That is, given \(\mathcal{A}\) a \(2\)-category, we can view \(\mathcal{A}\) as the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction of \(\Delta 1\colon\mathcal{A}\to\mathcal{C}\mathcal{A}T\) (that produces the identity on \(\mathcal{A}\) as \(2\)-\(\mathcal{S}et\)-opfibration). And \(\operatorname{oplax}^{\mathrm{n}}\) with respect to the trivial marking coincides with the strict \(2\)-naturality (see also Example 2.12). **Remark 4.12**.: We can now rephrase Theorem 2.25 as follows: _In \(2\)-\(\mathcal{C}\mathcal{A}T_{\mathrm{lax}}\) every trivially-marked weighted \(2\)-colimit can be equivalently expressed as a marked trivially-weighted \(2\)-colimit. More precisely, given \(2\)-functors \(F\colon\mathcal{A}\to\mathcal{C}\) and \(W\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}T\) with \(\mathcal{A}\) small,_ \[\operatorname{oplax}^{\mathrm{n}}_{\mathrm{triv}}\text{-}\operatorname{colim}^{W}F\cong\operatorname{oplax}^{\mathrm{n}}_{W}\text{-}\operatorname{colim}^{\Delta 1}(F\circ\mathcal{G}(W)).\] So we could say that in \(2\)-\(\mathcal{C}\mathcal{A}T_{\mathrm{lax}}\) all the (trivially-marked) weighted \(2\)-colimits are (now strictly speaking) conicalizable, up to changing the marking. **Example 4.13**.: Let \(F\colon\mathcal{A}^{\mathrm{op}}\to\mathcal{C}\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{A}\) small.
Then by Example 2.28 \[F\cong\operatorname{oplax}^{\mathrm{n}}_{F}\text{-}\operatorname{colim}^{\Delta 1}(\mathrm{y}\circ\mathcal{G}(F)).\] In particular, taking \(\mathcal{A}=1\), we obtain that for every small category \(\mathcal{D}\) \[\mathcal{D}\cong\operatorname{oplax}^{\mathrm{n}}_{\mathcal{D}}\text{-}\operatorname{colim}^{\Delta 1}\Delta 1.\] Notice that the marking given by \(\mathcal{D}\) is "chaotic", in the sense that \(\operatorname{oplax}^{\mathrm{n}}\) with respect to it simply becomes oplax. So in \(2\)-\(\mathcal{C}\!\mathcal{A}T_{\mathrm{lax}}\) the \(2\)-functor \(\boldsymbol{1}\colon\boldsymbol{1}\to\mathcal{C}\!\mathcal{A}T\), that we can think of as the \(3\)-dimensional classifier of \(2\)-\(\mathcal{C}\!\mathcal{A}T_{\mathrm{lax}}\), is conically dense (with respect to Definition 4.10), considering the chaotic marking. This completes the idea of Remark 2.29. Recall that the \(1\)-dimensional analogue of this, which is that \(1\colon\boldsymbol{1}\to\mathcal{S}et\) (that is the \(2\)-dimensional classifier of \(\mathcal{C}\!\mathcal{A}T\)) is (conically) dense, was very useful in proving Theorem 1.4 (the pointwise Kan extension result for the \(\mathcal{S}et\)-enriched Grothendieck construction). **Remark 4.14**.: We are now ready to propose an original notion of pointwise left Kan extension in \(2\)-\(\mathcal{C}\!\mathcal{A}T_{\mathrm{lax}}\) along a \(2\)-\(\mathcal{S}et\)-opfibration, using our definition of colimit in such setting (Definition 4.10). Our idea is to keep the corresponding diagram and weight considered in the (ordinary) enriched setting (see Kelly's [11] for the classical definition), but adding the marking that we naturally have when we extend along a \(2\)-\(\mathcal{S}et\)-opfibration.
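The \(1\)-dimensional analogue invoked above — the ordinary category of elements of a \(\mathcal{S}et\)-valued functor, whose first projection is the discrete opfibration classifying it — can be made very concrete. The following Python sketch is our own illustration (the function name `category_of_elements` and the example data are not from the text) for a finite category presented by its objects and named morphisms:

```python
def category_of_elements(objects, morphisms, F_obj, F_mor):
    """Category of elements (1-dimensional Grothendieck construction) of F: A -> Set.

    objects   : objects of the small category A
    morphisms : dict  name -> (source, target), identities included
    F_obj     : dict  object a -> the set F(a)
    F_mor     : dict  name f -> dict encoding the function F(f): F(src) -> F(tgt)

    Objects are pairs (a, x) with x in F(a); a morphism (f, x) goes
    (src, x) -> (tgt, F(f)(x)) and lies over f, so the first projection
    is a discrete opfibration over A classified by F.
    """
    el_objects = [(a, x) for a in objects for x in sorted(F_obj[a])]
    el_morphisms = {
        (f, x): ((src, x), (tgt, F_mor[f][x]))
        for f, (src, tgt) in morphisms.items()
        for x in F_obj[src]
    }
    return el_objects, el_morphisms

# Toy example: A is the arrow category a --f--> b, with F(a) = {0, 1},
# F(b) = {0}, and F(f) the unique (constant) function.
A_objects = ["a", "b"]
A_morphisms = {"id_a": ("a", "a"), "id_b": ("b", "b"), "f": ("a", "b")}
F_obj = {"a": {0, 1}, "b": {0}}
F_mor = {"id_a": {0: 0, 1: 1}, "id_b": {0: 0}, "f": {0: 0, 1: 0}}

el_obj, el_mor = category_of_elements(A_objects, A_morphisms, F_obj, F_mor)
```

Here the category of elements has the three objects \((a,0),(a,1),(b,0)\) and one morphism over \(f\) for each element of \(F(a)\) — the fibered picture that the \(2\)-dimensional constructions of this section categorify.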
**Definition 4.15**.: Consider a diagram \(F\colon\mathcal{B}\to\mathcal{C}\), \(K\colon\mathcal{B}\to\mathcal{A}\), \(L\colon\mathcal{A}\to\mathcal{C}\) and \(\lambda\colon F\Rightarrow L\circ K\) in \(2\)-\(\mathcal{C}\!\mathcal{A}T_{\mathrm{lax}}\) with \(\mathcal{B}\) small and \(K\) a \(2\)-\(\mathcal{S}et\)-opfibration. Then by Theorem 3.25, \(K\) is isomorphic in the slice \(2\)-\(\mathcal{C}\!\mathcal{A}T/\mathcal{A}\) to \(\mathcal{G}(M)\) for some \(2\)-functor \(M\colon\mathcal{A}\to\mathcal{C}\!\mathcal{A}T\). We can assume \(K\) is in the form \(\mathcal{G}(M)\), up to whiskering the diagram with the isomorphism in the slice. Assume then that \(\lambda\) is a lax normal natural transformation with respect to \(M\). We say that \(\lambda\) exhibits \(L\) as the _pointwise left Kan extension of \(F\) along \(K\)_, written \(L=\operatorname{Lan}_{K}F\), if for every \(A\in\mathcal{A}\) \[L(A)\cong\operatorname{oplax}^{\mathrm{op\text{-}n}}_{M}\text{-}\operatorname{colim}^{\mathcal{A}(K(-),A)}F\] with universal oplax normal cocylinder \[\mathcal{A}\left(K(-),A\right)\xRightarrow{L}\mathcal{C}\left((L\circ K)(-),L(A)\right)\xRightarrow{\mathcal{C}(\lambda_{-},\operatorname{id})}\mathcal{C}\left(F(-),L(A)\right); \tag{11}\] or equivalently if for every \(A\in\mathcal{A}\) and every \(C\in\mathcal{C}\) the functor \[\mathcal{C}\left(L(A),C\right)\longrightarrow\left[\mathcal{B}^{\mathrm{op}},\mathcal{C}\!\mathcal{A}T\right]_{\operatorname{oplax}^{\mathrm{n}}}\left(\mathcal{A}\left(K(-),A\right),\mathcal{C}\left(F(-),C\right)\right)\] given by the oplax normal natural transformation of equation (11) is an isomorphism of categories (notice that the \(2\)-naturality in \(C\) and \(A\) is granted, where the latter is using that \(L\) is a \(2\)-functor).
**Theorem 4.17**.: _Let \(F\colon\mathcal{A}\to\mathcal{C}\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{A}\) a small \(2\)-category. Then the \(2\)-\(\mathcal{S}et\)-enriched Grothendieck construction lax comma object square exhibits_ \[F=\operatorname{Lan}_{\mathcal{G}(F)}\Delta 1.\] Proof.: By Theorem 3.11 (and Proposition 3.2) we know that the lax natural transformation \(\lambda\) that presents the lax comma object is lax normal. Given \(A\in\mathcal{A}\) and \(C\in\mathcal{CAT}\), we prove that the oplax normal natural transformation \[\mathcal{A}\left(\mathcal{G}\left(F\right)(-),A\right)\xrightarrow{F}\mathcal{CAT}\left((F\circ\mathcal{G}(F))(-),F(A)\right)\xrightarrow{\mathcal{CAT}(\lambda_{-},\operatorname{id})}\mathcal{CAT}\left(\Delta 1(-),F(A)\right),\] that we call \(\mu\), is \(2\)-universal. Explicitly, \(\mu\) has components \[\mu_{(B,X)}\colon\mathcal{A}\left(B,A\right)\longrightarrow\mathcal{CAT}\left(1,F(A)\right)\] for every \((B,X)\in\int^{\operatorname{op}}\!F\) and structure \(2\)-cells \[\left(\mu_{(g,\gamma)}\right)_{u}=F(u)(\gamma)\colon F(u\circ g)(X^{\prime})\to F(u)(X)\] on every \((g,\gamma)\colon(B,X)\leftarrow(B^{\prime},X^{\prime})\) in \(\int^{\operatorname{op}}\!F\), for every \(u\colon B\to A\) in \(\mathcal{A}\). Given an oplax normal natural transformation \(\sigma\colon\mathcal{A}\left(\mathcal{G}\left(F\right)(-),A\right)\Rightarrow\mathcal{CAT}\left(\Delta 1(-),C\right)\), one obtains a corresponding functor \(F(A)\to C\) by evaluating at identities; and given a modification \(\Xi\) between two such \(\sigma\)'s, we define a natural transformation \(\xi\) between the corresponding functors by \(\xi_{X}\coloneqq\Xi_{(A,X),\operatorname{id}_{A}}\) for every \(X\in F(A)\). And this \(\xi\) works since it is readily shown to be natural (using that \(\Xi\) is a modification) and for every \((B,X)\in\int^{\mathrm{op}}F\) and \(u\colon B\to A\) in \(\mathcal{A}\) \[\Xi_{(B,X),u}=\Xi_{(A,F(u)(X)),\mathrm{id}_{A}}=\xi_{F(u)(X)}=\xi_{\mu_{(B,X)}(u)}.\] We have thus shown that \(\mu\) is \(2\)-universal and this concludes the proof. **Remark 4.18**.: The rest of this section is dedicated to the proof that every pointwise left Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}T_{\mathrm{lax}}\) along a \(2\)-\(\mathcal{S}et\)-opfibration (as defined in Definition 4.15) is a weak left Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}T_{\mathrm{lax}}\) as well.
For this, we need an oplax\({}^{\mathrm{n}}\) - lax generalization of the parametrized Yoneda lemma (Theorem 4.20) that does not seem to appear in the literature. This lemma will also shed further light on the concept of oplax normal natural transformation. Indeed the idea is the following: a fully lax parametrized Yoneda lemma is not possible, since it is thanks to the strict naturality that we can classically expand the datum on the identity to a complete natural transformation, but our version shows the minimal strictness that we need in order to do so. Interestingly, such expansion in the fully strict \(2\)-natural case classically depends on the naturality of what will be our parameter \(A\). Instead, we will need to expand through the slight strictness of the oplax normal naturality in \(B\). And an expansion through \(B\) is harder to achieve than one through \(A\). **Definition 4.19**.: Let \(G,H\colon\mathcal{B}^{\mathrm{op}}\times\mathcal{C}\to\mathcal{E}\) be \(2\)-functors. An _oplax\({}^{\mathrm{n}}\)-lax natural transformation_ \(\alpha\) from \(G\) to \(H\) is a collection of morphisms \[\alpha_{B,C}\colon G(B,C)\to H(B,C)\] in \(\mathcal{E}\) for every \((B,C)\in\mathcal{B}^{\mathrm{op}}\times\mathcal{C}\) and, for every \(f\colon B^{\prime}\to B\) in \(\mathcal{B}^{\mathrm{op}}\) and \(g\colon C\to C^{\prime}\) in \(\mathcal{C}\), structure \(2\)-cells such that \(\alpha_{-,C}\) is oplax normal natural in \(B\in\mathcal{B}^{\mathrm{op}}\), \(\alpha_{B,-}\) is lax natural in \(C\in\mathcal{C}\), and the two families of structure \(2\)-cells are compatible with each other. We call the last axiom the compatibility axiom.
A _modification_ \(\Theta\colon\alpha\Rrightarrow\beta\colon G\Rightarrow H\) between oplax\({}^{\mathrm{n}}\) - lax natural transformations is a collection of \(2\)-cells \(\Theta_{B,C}\colon\alpha_{B,C}\Rightarrow\beta_{B,C}\) in \(\mathcal{E}\) that is a modification with respect to both the oplax normal structure in \(B\) and the lax structure in \(C\). **Theorem 4.20** (oplax\({}^{\mathrm{n}}\) - lax parametrized Yoneda lemma).: _Let \(M\colon\mathcal{A}\to\mathcal{C}\mathcal{A}T\) be a \(2\)-functor with \(\mathcal{A}\) small, let \(K=\mathcal{G}(M)\) and let \(F\colon\left(\int^{\mathrm{op}}\!M\right)^{\mathrm{op}}\times\mathcal{A}\to\mathcal{C}\mathcal{A}T\) be a \(2\)-functor. Then the category of oplax\({}^{\mathrm{n}}\) - lax natural transformations \(\alpha_{(B,Y),A}\colon\mathcal{A}\left(K(B,Y),A\right)\to F((B,Y),A)\) and modifications between them is isomorphic to the category of extraordinary lax natural transformations \(\eta_{(B,Y)}\colon 1\to F((B,Y),K(B,Y))\) and modifications between them._ Proof.: First, observe that composing an extraordinary natural family with an oplax\({}^{\mathrm{n}}\) - lax natural transformation yields an extraordinary lax natural family: the composites surely respect the identities, and they also respect the composition by the compatibility axiom of oplax\({}^{\mathrm{n}}\) - lax (and oplax and lax naturality). The two dimensional axiom is satisfied as well by moving the external \(2\)-cell through the diagram using the three \(2\)-dimensional properties that we have. We can then apply this result to \(\eta_{(B,Y)}\coloneqq\alpha_{(B,Y),K(B,Y)}\left(\mathrm{id}_{K(B,Y)}\right)\), since \(\mathrm{id}_{K(B,Y)}\) is extraordinary natural in \((B,Y)\) and \(\alpha_{(B,Y),K(B^{\prime},Y^{\prime})}\) is oplax\({}^{\mathrm{n}}\) - lax natural in \(((B,Y),(B^{\prime},Y^{\prime}))\) (as \(\alpha_{(B,Y),A}\) is oplax\({}^{\mathrm{n}}\) - lax in \(((B,Y),A)\)). Now, given \(\eta_{(B,Y)}\colon 1\to F((B,Y),B)\) extraordinary lax natural in \((B,Y)\in\int^{\mathrm{op}}\!M\), we expand it to functors \[\alpha_{(B,Y),A}\colon\mathcal{A}\left(K(B,Y),A\right)\to F((B,Y),A)\] oplax\({}^{\mathrm{n}}\) - lax natural in \(((B,Y),A)\in\left(\int^{\mathrm{op}}\!M\right)^{\mathrm{op}}\times\mathcal{A}\) as follows, using the oplax normal naturality in \((B,Y)\) (that is the only strictness we have).
Given \(u\colon B\to A\) in \(\mathcal{A}\), considering \(\underline{u}^{Y}=(u,\mathrm{id})\), the structure \(2\)-cell \(\alpha_{(u,\mathrm{id}),A}=\mathrm{id}\) will give us a commutative square. So, looking at how we constructed \(\eta\) from \(\alpha\), in order to reach the bijection we want, we define \[\alpha_{(B,Y),A}(u)\coloneqq F((u,\mathrm{id}),A)\left(\eta_{(A,M(u)(Y))}\right).\] Given \(\theta\colon u\Rightarrow v\colon B\to A\) in \(\mathcal{A}\), considering \[\underline{\theta}^{Y}\colon(u,M(\theta)_{Y})\Rightarrow(v,\mathrm{id})\colon(B,Y)\to(A,M(v)(Y))\] and using that \(\alpha_{(v,\mathrm{id}),A}=\mathrm{id}\), we will have by the \(2\)-dimensional axiom of oplax normal naturality that \[\alpha_{(B,Y),A}(\theta)=F(\underline{\theta}^{Y},A)_{\alpha_{(A,M(v)(Y)),A}(\mathrm{id}_{A})}\circ\left(\alpha_{(u,M(\theta)_{Y}),A}\right)_{\mathrm{id}_{A}}.\] So we firstly define the components of the structure \(2\)-cells that express the oplax normal naturality of \(\alpha_{(B,Y),A}\) in \((B,Y)\) and then we will read how to define the action of \(\alpha_{(B,Y),A}\) on morphisms \(\theta\).
Looking at the diagram of equation (12) applied to \((\mathrm{id}_{B},\gamma)\colon(B,Y)\to(B,Y^{\prime})\) in \(\int^{\mathrm{op}}\!M\), we see that, in order to have a bijection between the \(\alpha\)'s and the \(\eta\)'s, we need to define \[\left(\alpha_{(\mathrm{id}_{B},\gamma),B}\right)_{\mathrm{id}_{B}}\coloneqq\eta_{(\mathrm{id}_{B},\gamma)}.\] Whence, given arbitrary \((g,\gamma)\colon(B,Y)\to(B^{\prime},Y^{\prime})\) in \(\int^{\mathrm{op}}\!M\) and \(u\colon B\to A\) in \(\mathcal{A}\), since \[\left(u,\mathrm{id}\right)\circ(g,\gamma)=(\mathrm{id}_{A},M(u)(\gamma))\circ(u\circ g,\mathrm{id}),\] we need to define \[\left(\alpha_{(g,\gamma),A}\right)_{u}\coloneqq F((u\circ g,\mathrm{id}),A)\left(\eta_{(\mathrm{id}_{A},M(u)(\gamma))}\right).\] And at this point we define, by the argument above, \[\alpha_{(B,Y),A}(\theta)\coloneqq F(\underline{\theta}^{Y},A)_{\eta_{(A,M(v)(Y))}}\circ F((u,\mathrm{id}),A)\left(\eta_{(\mathrm{id}_{A},M(\theta)_{Y})}\right)\] for every \(\theta\colon u\Rightarrow v\colon B\to A\) in \(\mathcal{A}\). Looking at the diagram of equation (12) applied to \((g,\mathrm{id})\colon(B,Y)\to(B^{\prime},M(g)(Y))\) in \(\int^{\mathrm{op}}\!M\), we see that, in order to have a bijection between the \(\alpha\)'s and the \(\eta\)'s, we need to define \[\left(\alpha_{(B,Y),g}\right)_{\mathrm{id}_{B}}\coloneqq\eta_{(g,\mathrm{id})}.\] Whence, given an arbitrary \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) and \(u\colon K(B,Y)\to A\) in \(\mathcal{A}\), by the compatibility axiom of oplax\({}^{\mathrm{n}}\) - lax applied to \((u,\operatorname{id})\colon(B,Y)\to(A,M(u)(Y))\) in \(\int^{\operatorname{op}}\!M\) and \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\), we need to define \[\big{(}\alpha_{(B,Y),f}\big{)}_{u}\coloneqq F((u,\operatorname{id}),A^{\prime})\,\big{(}\eta_{(f,\operatorname{id})}\big{)}\,.\] Now, we verify that such assignments work.
To show that \(\alpha_{(B,Y),A}\) is a functor, consider \[u\xrightarrow{\theta}v\xrightarrow{\rho}w\colon B\to A\] in \(\mathcal{A}\). Then \[\alpha_{(B,Y),A}(\rho\circ\theta)\coloneqq F(\underline{(\rho\circ\theta)}^{Y},A)_{\eta_{(A,M(w)(Y))}}\circ F((u,\operatorname{id}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\rho\circ\theta)_{Y})}\big{)}\] while \(\alpha_{(B,Y),A}(\rho)\circ\alpha_{(B,Y),A}(\theta)\) is equal to \[F(\underline{\rho}^{Y},A)_{\eta_{(A,M(w)(Y))}}\circ F((v,\operatorname{id}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\rho)_{Y})}\big{)}\circ F(\underline{\theta}^{Y},A)_{\eta_{(A,M(v)(Y))}}\circ F((u,\operatorname{id}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\theta)_{Y})}\big{)}.\] By the extraordinary naturality of \(\eta\), \[\eta_{(\operatorname{id}_{A},M(\rho\circ\theta)_{Y})}=F((\operatorname{id}_{A},M(\theta)_{Y}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\rho)_{Y})}\big{)}\circ\eta_{(\operatorname{id}_{A},M(\theta)_{Y})}.\] And by the uniqueness of the liftings of \(2\)-cells through \(\mathcal{G}(M)\), we have that \[\underline{(\rho\circ\theta)}^{Y}=\underline{\rho}^{Y}\circ(\operatorname{id}_{A},M(\rho)_{Y})\underline{\theta}^{Y}.\] So, by \(2\)-functoriality of \(F\), it suffices to prove that \[F(\underline{\theta}^{Y},A)_{F((\operatorname{id}_{A},M(\rho)_{Y}),A)\,(\eta_{(A,M(w)(Y))})}\circ F((u,M(\theta)_{Y}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\rho)_{Y})}\big{)}\] is equal to \[F((v,\operatorname{id}),A)\,\big{(}\eta_{(\operatorname{id}_{A},M(\rho)_{Y})}\big{)}\circ F(\underline{\theta}^{Y},A)_{\eta_{(A,M(v)(Y))}}.\] But this is true by naturality of \(F(\underline{\theta}^{Y},A)\) applied to the morphism \[\eta_{(\operatorname{id}_{A},M(\rho)_{Y})}\colon\eta_{(A,M(v)(Y))}\to F((\operatorname{id}_{A},M(\rho)_{Y}),A)\,\big{(}\eta_{(A,M(w)(Y))}\big{)}\,.\] The fact that \(\big{(}\alpha_{(B,Y),f}\big{)}_{u}\) is a natural transformation is checked with techniques similar to the above ones, noticing that
\[\underline{(f\theta)}^{Y}=(f,\operatorname{id})\underline{\theta}^{Y}.\] Whereas showing that \(\big{(}\alpha_{(g,\gamma),A}\big{)}_{u}\) is a natural transformation uses that for \((g,\gamma)\colon(B,Y)\to(B^{\prime},Y^{\prime})\) \[\underline{(\theta g)}^{Y}=\underline{\theta}^{M(g)(Y)}(g,\operatorname{id})\quad\text{and}\quad\underline{\theta}^{Y^{\prime}}(\operatorname{id},\gamma)=(\operatorname{id},M(v)(\gamma))\underline{\theta}^{M(g)(Y)}.\] At this point, it is straightforward to check that \(\alpha_{(B,Y),A}\) is oplax\({}^{\text{n}}\) - lax in \(((B,Y),A)\). And it is immediately seen that we obtain a bijection between the \(\alpha\)'s and the \(\eta\)'s by construction. Finally, we extend such a bijection to an isomorphism of categories. Given a modification \[\Theta_{(B,Y),A}\colon\alpha_{(B,Y),A}\Rightarrow\beta_{(B,Y),A}\colon\mathcal{A}\,(K(B,Y),A)\to F((B,Y),A)\] between oplax\({}^{\text{n}}\) - lax natural transformations in \(((B,Y),A)\), we send it to the modification between extraordinary lax natural transformations (that the latter is such is easily checked). And such assignment is surely functorial. Given a modification \[\Gamma_{(B,Y)}\colon\eta_{(B,Y)}\Rightarrow\eta^{\prime}_{(B,Y)}\colon 1\to F((B,Y),B)\] between extraordinary lax natural transformations, we construct a corresponding modification \(\Theta\).
We see that, if we want to reach an isomorphism of categories, we need to define \[\left(\Theta_{(B,Y),B}\right)_{\operatorname{id}_{B}}\coloneqq\Gamma_{(B,Y)}.\] Whence, given an arbitrary \(u\colon B\to A\) in \(\mathcal{A}\), since we want \(\Theta_{-,A}\) to be a modification, considering \((u,\operatorname{id})\colon(B,Y)\to(A,M(u)(Y))\) in \(\int^{\operatorname{op}}M\), we need to define \[\left(\Theta_{(B,Y),A}\right)_{u}\coloneqq F((u,\operatorname{id}),A)\left(\Gamma_{(A,M(u)(Y))}\right).\] It is straightforward to check that \(\Theta\) is then a modification between \(\operatorname{oplax}^{\mathrm{n}}\) - lax natural transformations. And such assignment is surely functorial. At this point, it is immediate to see that the two functors are, by construction, inverses of each other, giving the desired isomorphism of categories. **Remark 4.21**.: We are now ready to show that a pointwise left Kan extension in \(2\)-\(\mathcal{C}\mathcal{A}T_{\operatorname{lax}}\) along a \(2\)-\(\mathcal{S}et\)-opfibration is always a weak left Kan extension. **Proposition 4.22**.: _Consider a diagram \(F\colon\mathcal{B}\to\mathcal{C}\), \(K\colon\mathcal{B}\to\mathcal{A}\), \(L\colon\mathcal{A}\to\mathcal{C}\) and \(\lambda\colon F\Rightarrow L\circ K\) in \(2\)-\(\mathcal{C}\mathcal{A}T_{\operatorname{lax}}\) with \(\mathcal{B}\) small and \(K\) a \(2\)-\(\mathcal{S}et\)-opfibration. Assume that \(\lambda\) exhibits \(L=\operatorname{Lan}_{K}F\) (in the sense of Definition 4.15)._
Then \(\lambda\) also exhibits \(L=\operatorname{lan}_{K}F\)._ Proof.: Since \(L=\operatorname{Lan}_{K}F\), for every \(C\), the universal oplax normal cocylinder is \(2\)-universal, giving an isomorphism of categories \[\mathcal{C}\left(L(A),C\right)\xrightarrow{}[\mathcal{B}^{\operatorname{op}},\mathcal{C}\mathcal{A}T]_{\operatorname{oplax}^{\mathrm{n}}}\left(\mathcal{A}\left(K(-),A\right),\mathcal{C}\left(F(-),C\right)\right) \tag{13}\] We need to prove that, for every \(U\in[\mathcal{A},\mathcal{C}]_{\operatorname{lax}}\), pasting with \(\lambda\) gives an isomorphism of categories \[[\mathcal{A},\mathcal{C}]_{\operatorname{lax}}\left(L,U\right)\cong[\mathcal{B},\mathcal{C}]_{\operatorname{lax}}\left(F,U\circ K\right).\] So consider a lax natural transformation \(\varphi\colon L\underset{\operatorname{lax}}{\Longrightarrow}U\). For every \(A\in\mathcal{A}\), the component \(\varphi_{A}\colon L(A)\to U(A)\) corresponds to an oplax normal natural transformation \[\alpha_{-,A}\colon\mathcal{A}\left(K(-),A\right)\underset{\operatorname{oplax}^{\mathrm{n}}}{\Longrightarrow}\mathcal{C}\left(F(-),U(A)\right)\] via the isomorphism of equation (13). And \(\varphi_{A}\) being lax natural in \(A\in\mathcal{A}\) precisely corresponds to the oplax normal natural transformations \(\alpha_{-,A}\) being lax natural in \(A\), with structure \(2\)-cell on \(f\colon A\to A^{\prime}\) in \(\mathcal{A}\) given by the image of \(\varphi_{f}\) through the isomorphism of equation (13). This means that the lax natural transformations \(\varphi\) precisely correspond to functors \(\alpha_{B,A}\) oplax\({}^{\mathrm{n}}\) - lax natural in \((B,A)\in\mathcal{B}^{\operatorname{op}}\times\mathcal{A}\). Consider then a modification \(\Sigma\colon\varphi\Rrightarrow\psi\colon L\Rightarrow U\).
The components \(\Sigma_{A}\) with \(A\in\mathcal{A}\) correspond to modifications \(\Theta_{-,A}\) between oplax normal natural transformations \(\alpha_{-,A}\) and \(\beta_{-,A}\). And the modification axiom for \(\Sigma\) corresponds to the modification axiom for \(\Theta_{B,-}\) for every fixed \(B\in\mathcal{B}\). So the modifications \(\Sigma\) precisely correspond to modifications \(\Theta\) between oplax\({}^{\mathrm{n}}\) - lax natural transformations \(\alpha\) and \(\beta\). By the functoriality of the isomorphism of equation (13), we obtain an isomorphism of categories between \(\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}(L,U)\) and the category of oplax\({}^{\mathrm{n}}\) - lax natural transformations \[\alpha_{B,A}\colon\mathcal{A}\left(K(B),A\right)\to\mathcal{C}\left(F(B),U(A)\right)\] in \((B,A)\in\mathcal{B}^{\mathrm{op}}\times\mathcal{A}\) and modifications between them. By Theorem 4.20 (the oplax\({}^{\mathrm{n}}\) - lax parametrized Yoneda lemma), the latter category is then isomorphic to the category of extraordinary lax natural transformations \[1\rightarrow\mathcal{C}\left(F(B),U(K(B))\right)\] in \(B\in\mathcal{B}\) and modifications between them, which is isomorphic (for example, by Hirata's paper [8]) to \(\left[\mathcal{B},\mathcal{C}\right]_{\mathrm{lax}}(F,U\circ K)\). Therefore we have produced an isomorphism of categories \[\left[\mathcal{A},\mathcal{C}\right]_{\mathrm{lax}}(L,U)\cong\left[\mathcal{B},\mathcal{C}\right]_{\mathrm{lax}}(F,U\circ K)\,,\] and we can read that this is given by pasting with \(\lambda\).
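In summary, the proof above strings together three isomorphisms of categories — the first via equation (13) applied componentwise in \(A\), the second via Theorem 4.20, the third via the identification of extraordinary lax naturals with lax natural transformations (e.g. by [8]); schematically:

```latex
\begin{aligned}
[\mathcal{A},\mathcal{C}]_{\mathrm{lax}}(L,U)
  &\cong \left\{\alpha_{B,A}\colon \mathcal{A}(K(B),A)\to\mathcal{C}(F(B),U(A))
     \ \text{oplax}^{\mathrm{n}}\text{-lax natural in }(B,A)\right\}\\
  &\cong \left\{\eta_{B}\colon 1\to\mathcal{C}\big(F(B),U(K(B))\big)
     \ \text{extraordinary lax natural in }B\right\}\\
  &\cong [\mathcal{B},\mathcal{C}]_{\mathrm{lax}}(F,U\circ K).
\end{aligned}
```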
2301.05328
Tetra-penta-deca-hexagonal-graphene (TPDH-graphene) hydrogenation patterns: dynamics and electronic structure
The advent of graphene has renewed the interest in other 2D carbon-based materials. Bhattacharya and Jana have proposed a new carbon allotrope, composed of different polygonal carbon rings containing 4, 5, 6, and 10 atoms, named Tetra-Penta-Deca-Hexagonal-graphene (TPDH-graphene). This unusual topology created material with interesting mechanical, electronic, and optical properties and several potential applications, including UV protection. Like other 2D carbon structures, chemical functionalizations can be used to tune their TPDH-graphene properties. In this work, we investigated the hydrogenation dynamics of TPDH-graphene and its effects on its electronic structure, combining DFT and fully atomistic reactive molecular dynamics simulations. Our results show that H atoms are mainly incorporated on tetragonal ring sites (up to 80% at 300 K), leading to the appearance of well-delimited pentagonal carbon stripes. The electronic structure of the hydrogenated structures shows the formation of narrow bandgaps with the presence of Dirac cone-like structures, indicative of anisotropic transport properties.
Caique C. Oliveira, Matheus Medina, Douglas S. Galvao, Pedro A. S. Autreto
2023-01-12T23:27:01Z
http://arxiv.org/abs/2301.05328v1
# Tetra-penta-deca-hexagonal-graphene (TPDH-graphene) hydrogenation patterns: dynamics and electronic structure ###### Abstract The advent of graphene has renewed the interest in other 2D carbon-based materials. Bhattacharya and Jana have proposed a new carbon allotrope, composed of different polygonal carbon rings containing 4, 5, 6, and 10 atoms, named Tetra-Penta-Deca-Hexagonal-graphene (TPDH-graphene). This unusual topology created a material with interesting mechanical, electronic, and optical properties and several potential applications, including UV protection. Like other 2D carbon structures, chemical functionalization can be used to tune TPDH-graphene properties. In this work, we investigated the hydrogenation dynamics of TPDH-graphene and its effects on its electronic structure, combining DFT and fully atomistic reactive molecular dynamics simulations. Our results show that H atoms are mainly incorporated on tetragonal ring sites (up to \(80\%\) at 300 K), leading to the appearance of well-delimited pentagonal carbon stripes. The electronic structure of the hydrogenated structures shows the formation of narrow bandgaps with the presence of Dirac cone-like structures, indicative of anisotropic transport properties. ## 1 Introduction The versatility in chemical bonding (different hybridizations) of carbon atoms allows the existence of a wide variety of different structures (allotropes) [1], such as fullerenes [2], nanotubes [3], and graphene [4]. Graphene is a 2D allotrope of \(sp^{2}\) carbon atoms tightly packed into a hexagonal honeycomb lattice. It presents high carrier mobility (\(5000~cm^{2}/V\cdot s\)) [4, 5], high thermal conductivity (\(5000~W\,m^{-1}K^{-1}\)) [6], and a Young's modulus of \(1\) TPa [7], one of the highest values ever measured. Graphene has unveiled new and unique physical phenomena, including the quantum Hall effect [8], the ambipolar electric field effect [4], and massless Dirac-fermion charge carriers [9]. 
These remarkable properties have made graphene the subject of a large number of theoretical and experimental studies in different areas, such as catalysis [10], electronics [11], spintronics [12], twistronics [13], and gas sensors [14], to name just a few. However, despite its extraordinary electronic properties, graphene is a zero-gap material, which limits its use in some applications [4]. Chemical functionalizations, such as hydrogenation, are one viable mechanism for altering the properties of graphene-like structures (including opening a bandgap [15, 16, 17] or shifting the Fermi level [18]). Structural and electronic changes are introduced when the chemical species form covalent bonds. The partial hydrogenation of graphene introduces unsaturated \(sp^{3}\) carbon atoms that can be used to attach additional functional groups. Despite this limitation, the advent of graphene created a revolution in materials science and renewed the interest in 2D carbon allotropes. Among these structures, it is worth mentioning graphynes and biphenylene carbon networks [19]. Graphynes is the generic name for families of 2D porous carbon structures containing hexagonal rings connected by acetylenic groups, with \(sp\)- and \(sp^{2}\)-hybridized carbon atoms in the same lattice [19]. Graphdiynes refer to the structural families where two acetylenic groups connect the hexagons [20]. They can exhibit metallic and semiconducting behaviors [21] and have been exploited in different technological applications [22]. Biphenylene carbon networks (including biphenylene carbon and graphenylenes) are families of porous structures composed of mixed carbon rings (pentagons, hexagons, heptagons, octagons, etc.) [19; 23; 24]. Similarly to graphynes, they can be metallic or semiconducting and have potential applications in catalysis [23], gas sensors [25], batteries [26], and energy storage applications [27]. 
Recently, new synthetic routes for graphynes [28; 29] and biphenylene carbon networks [30] have been reported, increasing the interest in these materials. Bhattacharya and Jana [31] have proposed a new structure composed of two pentagons and a tetragonal ring, called tetra-penta-octagonal graphene (TPO-graphene). It is metallic with a Dirac cone at \(3.7\) eV above the Fermi level. More recently, they proposed another structure belonging to the tetra-pentagonal graphene family, composed of \(sp^{2}\) carbon rings with 4, 5, 10, and 6 atoms (Fig. 1), named tetra-penta-deca-hexagonal graphene (TPDH-graphene). It possesses thermal and dynamical stability and exhibits elastic anisotropy, with a Young's modulus value larger than that of graphene along a specific direction. Depending on the morphology, TPDH-graphene nanoribbons can exhibit metallic or semiconducting behavior [32]. In this work, we have investigated the effects of hydrogenation on the structural and electronic properties of TPDH-graphene (TPDH-gr). The hydrogenation of TPDH-gr sheets was investigated through reactive molecular dynamics simulations. Structural optimization, energy, and electronic properties were further analyzed using ab initio (DFT) calculations. Chemical functionalization is one viable mechanism to introduce specific modifications into graphene-like structures. Structural and electronic changes are introduced when the introduced chemical species form covalent bonds. For example, graphite oxides can form oxygen groups in graphene sheets dispersed in water and organic solvents [33]. Stankovich et al. prepared graphite oxides functionalized with isocyanates that were later exfoliated into graphene oxides stably dispersed in an aprotic polar solvent [34]. 
Partial hydrogenation of graphene sheets introduces unsaturated \(sp^{3}\) carbon atoms whose unpaired electrons can be used to attach additional functional groups. Chemical functionalization also allows one to change the electronic properties of the structure by opening a bandgap [15; 16; 17] or changing the Fermi level [18]. Figure 1: (a) Schematics of the unit cell of tetra-penta-deca-hexagonal-graphene (TPDH) and the corresponding carbon-carbon bond-length values. The different colors indicate non-equivalent carbon atoms. (b) A \(2\times 2\) supercell illustrating the TPDH-gr rings and the pores of the structure. The corresponding unit cell vector values are indicated in the highlighted red rectangle. (c) The structural setup used in the simulations. A TPDH membrane (indicated in blue) is deposited on a graphene frame (gray), and the TPDH-gr/graphene structure is immersed in a hydrogen atmosphere (yellow). See text for discussions. ## 2 Computational Methods First-principles calculations were carried out within the Density Functional Theory (DFT) framework as implemented in the Quantum Espresso code [35]. Electron-ion interactions were treated with projector augmented wave (PAW) and ultrasoft pseudopotentials for C and H atoms, respectively, obtained from the Standard Solid State Pseudopotentials (SSSP) library [36; 37]. The exchange-correlation potential was treated within the Generalized Gradient Approximation (GGA) with the parameterization of Perdew, Burke, and Ernzerhof (GGA-PBE functional) [38]. Valence electrons were treated with a plane-wave basis set with a kinetic energy cutoff of 680 eV. The diagonalization of the density matrix was performed with the Davidson iterative method with matrix overlap, using a self-consistency threshold of \(10^{-6}\) eV. In the ionic relaxation calculations, the convergence thresholds were set to \(10^{-3}\) eV and \(10^{-2}\) eV/Å for energies and forces, respectively. 
Brillouin zone (BZ) sampling was performed using a \(12\times 12\times 1\) (\(16\times 16\times 1\)) k-point grid for SCF (NSCF) calculations, following the scheme proposed by Monkhorst and Pack [39]. For electronic structure calculations, the k-points were chosen along the following path in the BZ: \(\Gamma(0,0,0)\) - \(M(0.5,0.5,0)\) - \(X(0.5,0,0)\) - \(\Gamma(0,0,0)\) - \(Y(0,0.5,0)\) - \(M(0.5,0.5,0)\) - \(\Gamma(0,0,0)\). We have also carried out fully atomistic molecular dynamics (MD) simulations using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code [40]. Atomic interactions were treated with the reactive force field (ReaxFF) [41], with C-C interaction parameters developed by Chenoweth _et al._[42]. All MD simulations were carried out in the canonical (_NVT_) ensemble, with a time step of \(0.25\) fs and a Nosé-Hoover thermostat [43]. The hydrogenation simulations considered a TPDH-gr membrane deposited on a graphene frame, as shown in Fig. 1.c. The TPDH-graphene membrane is a \(24\times 15\) supercell, in which only the central part (\(16\times 11\)) is exposed to the hydrogen atmosphere, resulting in a total of 2112 available adsorption/reaction sites. The hydrogen atmosphere was composed of 500 atoms in a volume of 60,000 Å\({}^{3}\) on each side of the membrane, constrained to the exposed region of the membrane. This methodology has been successfully applied to other systems, such as Me-graphane [16] and graphone [44]. Figure 2: Adsorption energies for TPDH-gr a) at the non-equivalent sites, b) with an H atom adsorbed at the C1 site, c) two H atoms adsorbed at the C1 and C7' sites, and d) three H atoms adsorbed at the C1, C7', and C5 sites. e) Non-equivalent sites in TPDH-gr. f) Remaining sites in the tetragonal ring with the C1 site occupied. The top sites are indicated by solid lines, while the bottom sites are indicated by dashed lines and primed labels. 
g) Top and side views of TPDH-gr with the C1 and C7' sites occupied. The side view also shows the buckling height. h) Top and side views of TPDH-gr with a fully hydrogenated tetragonal ring. ## 3 Results and Discussion ### Ab initio Binding Energy and Hydrogenation Dynamics TPDH-gr has \(Pmmm\) (space group \(\#47\)) symmetry; the \(12\) carbon atoms in its unit cell are arranged in an orthorhombic lattice. The optimized lattice parameters were \(a=4.94\) Å and \(b=6.97\) Å, with \(\gamma=90^{\circ}\). There are three different bond lengths (\(1.41\), \(1.50\), and \(1.44\) Å) involving the C atoms, as shown in Fig. 1.a. Except for the C atoms bonded along the \(\vec{a}\) direction in the tetragonal ring, the bond lengths are close to the \(sp^{2}\) bond length in graphene (\(1.41\) Å) [45]. These results agree well with those reported by Bhattacharya and Jana [32]. The most favorable sites for H adsorption/reaction were investigated by evaluating the _binding energy_ per adsorbed atom, calculated as the energy difference between the hydrogenated structure and its parts: \[E_{b}=-\left[\frac{E_{TPDH+nH}-(E_{TPDH}+nE_{H})}{n}\right]\] where \(E_{TPDH+nH}\) is the energy of TPDH-gr with \(n\) adsorbed H atoms, \(E_{TPDH}\) is the energy of a TPDH-gr unit cell, and \(E_{H}\) is the energy of an isolated H atom. With this sign convention, higher binding energies indicate more favorable adsorption sites within the same structure. First, an H atom is adsorbed at each of the non-equivalent sites (Fig. 1.a). The site corresponding to the highest energy is taken as the most favorable. Then, a second H is adsorbed at each of the remaining sites, and the most favorable one is evaluated according to \(E_{b}\). This process is repeated until the tetragonal ring of TPDH-gr is fully hydrogenated. We present the binding energies and the obtained structures in Fig. 2. The adsorption of the first H atom is most favorable at the \(C1\) site, as seen in Fig. 2.a, with an \(E_{b}\) of \(3.35\) eV/atom. 
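For concreteness, the binding-energy bookkeeping defined above can be sketched as a small helper. The total-energy values below are illustrative placeholders chosen to reproduce the reported 3.35 eV/atom for the C1 site; they are not the paper's actual DFT totals:

```python
def binding_energy(e_hydrogenated, e_tpdh, e_h, n):
    """Binding energy per adsorbed H atom (eV/atom):
    E_b = -[E_{TPDH+nH} - (E_TPDH + n*E_H)] / n.
    Higher (more positive) E_b marks a more favorable adsorption site."""
    return -(e_hydrogenated - (e_tpdh + n * e_h)) / n

# Placeholder totals chosen so that one adsorbed H gives E_b = 3.35 eV/atom,
# the value reported for the C1 site; these are NOT the paper's DFT energies.
e_b_c1 = binding_energy(e_hydrogenated=-103.35, e_tpdh=-100.0, e_h=0.0, n=1)
print(f"E_b(C1) = {e_b_c1:.2f} eV/atom")
```

The same helper compares candidate sites at each step of the sequential-adsorption procedure: the site with the largest \(E_{b}\) is kept before the next H atom is added.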
After C1-Cx adsorption (with x = 2, 5, and 7), the bond-length values increased to \(1.51\), \(1.55\), and \(1.53\) Å, respectively, indicating a transition to \(sp^{3}\)-like bonding at the C1 atom. It is worth mentioning that for sites located in the tetragonal ring, both the top and bottom configurations (Fig. 1.d) were considered. Adsorption of a single H atom at each of these sites resulted in roughly the same \(E_{b}\), as can be seen in Table 1S in the Supplementary Material. The adsorption of the second H atom (resulting in 16% hydrogen coverage) is most favorable at the \(C7^{\prime}\) site (Fig. 2.c), with an \(E_{b}\) of \(+3.75\) eV/atom. The resulting lattice distortions in the direction perpendicular to the structure plane lead to a significant buckling of \(h=0.87\) Å, as seen in Fig. 2.f. The distortions of the structure and the fact that two neighboring C atoms adsorb the pair of H atoms (but on opposite sides of the sheet) are in accordance with the results reported by Boukhvalov and Katsnelson for the hydrogenation of graphene sheets [46]. Interestingly, the adsorption of a third H atom gives the same \(E_{b}\) for both the C5 and C6' sites, as seen in Fig. 2.e. In this case, the configuration in which the C1, C7', and C5 sites are occupied was imposed, which will be justified later. The resulting structure presents an overall increase in the Cx-C bond lengths (with x = 1, 7, and 5). The vertical distance separating the C1 and C7 atoms is 1.02 Å, versus 0.84 Å for the corresponding value between the C5 and C6 atoms. The adsorption of a fourth H atom (33% hydrogen coverage) is most favorable at the C6' site, with \(E_{b}=4.0\) eV/atom and a buckling of \(h=1.185\) Å (Fig. 2.g, h). Choosing the C5 or C6' site for the adsorption of the third H atom leads to essentially the same configuration (C1-C7'-C5-C6'); the two choices are therefore equivalent. 
These calculations reveal a pattern for the hydrogenation of the tetragonal ring, which consists of two lines of H atoms on opposite sides of the basal plane, leading to the formation of well-delimited pentagonal ring strips along the direction of the lattice vector \(\vec{a}\). DFT calculations confirm that this configuration is indeed more favorable, and the molecular dynamics simulations discussed below produced similar results. Reactive molecular dynamics simulations were carried out to study the dynamics and temperature effects of hydrogen adsorption on larger TPDH-graphene membranes (Fig. 1), which would be cost-prohibitive with DFT methods. Representative MD snapshots of both sides of the TPDH-graphene membrane during the hydrogenation process (at 300 K) are presented in Fig. 3 (a) - (c). Throughout the MD simulations, the H atoms are predominantly incorporated at the \(C_{1}\) sites. Analyzing the hydrogenation process from Fig. 3 (a) to (c), we can see that the hydrogen-adsorbed \(C_{1}\) sites act as seeds for the hydrogenation of their \(C_{1}\) neighbors, forming lines across the structure surface, an expected result based on the DFT binding-energy ordering. In Fig. 4, we present the number of adsorbed/bonded hydrogen atoms at each site of the TPDH-gr unit cell as a function of the simulation time, for the different temperature values considered here. The hydrogenation occurs mainly at the \(C_{1}\) sites for all temperatures. High rates of H incorporation indicate high reactivity for hydrogenation. At low temperatures (150 K), the \(C_{2}\) and \(C_{4}\) sites have approximately the same low adsorption rates, while the \(C_{3}\) sites exhibit insignificant or no hydrogen incorporation. With increasing temperature, the \(C_{4}\), \(C_{2}\), and \(C_{3}\) sites become more reactive, while above 300 K the \(C_{1}\) site shows a slight decrease in reactivity. ### Electronic Structure In Fig. 
5.a, we present the pristine (non-hydrogenated) TPDH-gr electronic band structure and the corresponding projected density of states (pDOS), obtained from DFT-GGA-PBE calculations. We can see that pristine TPDH-gr exhibits a semimetallic behavior. The highest (lowest) valence (conduction) band is partially filled. These results are consistent with previous works published in the literature [32]. Figure 4: The number of adsorbed/bonded hydrogen atoms at each site of the TPDH-gr unit cell as a function of the simulation time (at %) for 150, 300, 500, and 800 K. The color of the curves indicates the corresponding sites in the unit cell (left, upper). Figure 3: Representative MD snapshots at different simulation times: a) 4 ps, b) 7.5 ps, and c) 200 ps of the hydrogenation of the TPDH-gr membrane. Results from simulations at 300 K. The effects of H adsorption in the tetragonal ring were investigated for the cases with a pair of H atoms adsorbed on neighboring atoms on opposite sides of the sheet, and with all four sites of the ring occupied (Fig. 2.g and h, respectively). The adsorption of two hydrogen atoms at the C1 and C7' sites results in the opening of the direct gaps by approximately \(1\) eV at the k-points \(M\) and \(\Gamma\), as shown in Fig. 5.b. Surprisingly, the valence and conduction bands overlap at the Fermi level, giving rise to a Dirac cone-like structure between the k-points \(Y\) and \(M\). Near this point, the electronic dispersion is nearly linear, and the charge carriers behave like massless fermions obeying the relativistic Dirac equation. Unusual transport properties are expected to arise from this pattern in the band structure, as predicted and experimentally observed for graphene [9]. The electronic band structure and the corresponding pDOS of TPDH-gr with full hydrogenation of the tetragonal ring are shown in Fig. 5.c. We can see the appearance of narrow gaps (\(0.5\) eV) between the k-points \(\Gamma\) and \(M\), and a very narrow direct gap at the point \(Y\). 
The Dirac cone-like structure is shifted toward the \(\Gamma\) point with respect to the half-hydrogenated structure. Figure 5: Electronic band structures and the corresponding projected density of states (pDOS) for a) non-hydrogenated TPDH-gr, b) TPDH-gr with the tetragonal ring partially hydrogenated (C1 and C7' sites occupied), and c) with the tetragonal ring fully hydrogenated. The total density of states is shown in black, while the blue and green curves represent the DOS projected onto the \(s\) and \(p\) orbitals, respectively. ## 4 Conclusions This work investigated the effects of hydrogenation on the structural and electronic properties of tetra-penta-deca-hexagonal-graphene (TPDH-gr) sheets. Molecular dynamics (MD) simulations revealed that H atoms are mainly incorporated in the tetragonal ring (\(C_{1}\) sites), with up to \(80\%\) adsorption at \(300\) K (Fig. 4). The number of H atoms incorporated at the C2 and C4 sites varies with the temperature. Hydrogenation produces a pattern in which H lines are formed on both sides of the sheet (Figs. 2.h and 3.c), generating well-delimited pentagonal ring strips along the \(\vec{a}\) direction. DFT calculations further corroborate that the complete hydrogenation of the tetragonal ring is energetically favorable. Electronic structure calculations for the partially hydrogenated structure show the formation of gaps and the emergence of a Dirac cone-like structure between the points \(\Gamma\) and \(M\). For the fully hydrogenated ring, narrow band gaps followed by wide gaps are identified, and the Dirac cone-like structure is shifted toward the \(\Gamma\) point. This electronic profile strongly indicates anisotropic transport properties, although these remain to be explored in future works. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements The authors thank PRH.49 for funding, CCM-UFABC for the computational resources provided, and CNPq (#310045/2019-3).
2305.11872
Predictive Wand: a mathematical interface design for operations with delays
Action-feedback delay during operation reduces both task performance and sense of agency (SoA). In this study, using information-theoretic free energy, we formalized a novel mathematical model for explaining the influence of delay on both task performance and SoA in continuous operations. Based on the mathematical model, we propose a novel interface design called Predictive Wand for predicting future outcomes to prevent task performance and SoA degradation resulting from response delays. Model-based simulations and operational experiments with participants confirmed that operational delay considerably reduces both task performance and SoA. Furthermore, the proposed Predictive Wand mitigates these problems. Our findings support the model-based interface design for continuous operations with delay to prevent task performance and SoA degradation.
Masaki Isono, Hideyoshi Yanagisawa
2023-05-03T02:23:27Z
http://arxiv.org/abs/2305.11872v1
## Predictive Wand: a mathematical interface design for operations with delays ## Abstract Action-feedback delay during operation reduces both task performance and sense of agency (SoA). In this study, using information-theoretic free energy, we formalized a novel mathematical model for explaining the influence of delay on both task performance and SoA in continuous operations. Based on the mathematical model, we propose a novel interface design called Predictive Wand for predicting future outcomes to prevent task performance and SoA degradation resulting from response delays. Model-based simulations and operational experiments with participants confirmed that operational delay considerably reduces both task performance and SoA. Furthermore, the proposed Predictive Wand mitigates these problems. Our findings support the model-based interface design for continuous operations with delay to prevent task performance and SoA degradation. _Keywords_: interface; free energy; delay; agency; performance ## 1 Introduction Advances in information technology have enabled remote operations in several applications. However, action-feedback delays typically occur in remote operations depending on the status of the network. For example, if a delay occurs in the remote control of a mobile robot, the responses to operational inputs such as start/stop and direction changes are delayed, which may result in an accident. A response delay diminishes the sense of agency (SoA), which refers to the perception of control over actions and their consequences (Haggard and Tsakiris, 2009). Blakemore et al. (1999) studied the effect of delay on the perception of self-produced stimuli using a self-touch paradigm and revealed that people experienced tickling when a delay occurred between voluntary action and tactile stimuli. Farrer et al. (2008) determined that delayed visual feedback caused people to perceive that they were viewing temporally displaced movements. 
Yang and Yanagisawa (2021) verified that delay diminishes the SoA in the same manner for both discrete and continuous operations. Studies have investigated the relationship between delays and SoA (e.g., Oishi et al., 2018; Rossetti et al., 2022; Shimada et al., 2009; Wen et al., 2019) and proven that the absence of SoA results in a person feeling less responsible for the operation (Haggard and Tsakiris, 2009; Moore, 2016; Moretto et al., 2011). Therefore, designing interfaces that prevent delays from reducing the SoA as well as task performance is critical. Automation mitigates the problem of SoA loss (Wen et al., 2015); however, excessive automated operations can diminish SoA (Ueda et al., 2021; Zanatto et al., 2021). In this study, we proposed a novel visual assistance interface named Predictive Wand to prevent the degradation of SoA and task performance resulting from delays in continuous operations. Because the quantitative relationship between delay and SoA depends on task settings (Wen et al., 2019), we devised an interface based on a mathematical model for considering the specifications of operation systems. First, we present the mathematical modeling of task performance and SoA in delayed continuous operations (Chapter 2), followed by a derivation of Predictive Wand (Chapter 3). We conducted model simulations on the effects of delay expectation, delay variance, and Predictive Wand on task performance and SoA and developed hypotheses for experiments based on simulation results (Chapter 4). We experimentally validated the effects of delay expectation, delay variance, and Predictive Wand on task performance and SoA (Chapter 5). Next, we detail the results (Chapter 6). Finally, we describe the conclusions (Chapters 7 and 8). ## 2 Modeling ### Delayed continuous operation model In this section, we formulate delayed continuous operations. We applied the comparator model proposed by Frith et al. (2000) to our delayed operation model (Fig. 1). 
Initially, this model was used to represent the motor control system to explain schizophrenia symptoms (Blakemore et al., 2002; Frith et al., 2000). Subsequently, Synofzik et al. (2008) applied this model to explain SoA (see Section 2.4). Frith et al. (2000) postulated that an agent has three states (e.g., arm joint angle), namely the desired, predicted, and estimated actual states. The desired state is the future target state, and to achieve this state, inverse models generate motor commands, which are used by the forward model to generate the predicted state. Actual state transition is caused by motor commands, and the agent observes their execution as sensory feedback. Finally, the agent estimates the actual state based on the observations. The three states are then compared; they do not differ when control is normal (Blakemore et al., 2002; Frith et al., 2000). We used this model to formulate delayed operations. We added the actual delay to the state transition (represented by (ii) in Fig. 1) and the estimated delay to the internal models (represented by (i) and (iv) in Fig. 1). For example, the state represents the position of the robot in our model. We denote the desired, predicted, actual, and estimated actual states by \(\hat{x}_{[t]}\), \(\tilde{x}_{[t]}\sim\mathcal{N}(\tilde{\mu}_{t},\tilde{\sigma}_{t}^{2})\), \(\mathbf{x}_{[t]}\), and \(x_{[t]}\sim\mathcal{N}(\mu_{t},\sigma_{t}^{2})\), respectively. The prediction \(\tilde{x}_{[t]}\) and estimation \(x_{[t]}\) are random variables according to Bayesian estimation. In this paper, real-world variables are written in bold and internal model variables are written in italics. The variables are formulated based on the state-space model and Bayesian estimation. Consider a state equation in which the operation input is reflected with a \(D\)-step delay as follows: Figure 1: Delayed operation model. 
First, the desired state is generated from the goal of the operation. (i) To achieve the desired state, the operator's controllers (inverse models that convert perception to movement) work to generate operation inputs. (ii) Inputs are delayed in reaching the operation object, and state transition occurs. (iii) The operator observes the state transition. (iv) The efference copy of the delayed operation inputs reaches the operator's predictors (forward models that convert movement to perception), and the predicted state is generated. (v) Finally, the operator estimates the actual state from the observation and the predicted state. Processes (ii) and (iii) correspond to the state-space model. Processes (iv) and (v) correspond to the Kalman filter (Kalman, 1960). The model has four states and four comparators. The comparator between the desired and estimated actual states (Comparator 1) represents the estimated operation error. The comparator between the desired and predicted states (Comparator 2) represents unpredictability (or inexperienced operation). The comparator between the predicted and estimated actual states (Comparator 3) represents the prediction error. The comparator between the desired and actual states (Comparator 4) represents the actual operation error. This figure is adapted from Frith et al. (2000) and Synofzik et al. (2008). \[X_{[t+D+1]}=X_{[t+D]}+b_{u}u_{[t]}+\epsilon_{x}, \tag{1}\] where \(X\) is a stochastic state value (actual or internal), \(D\) is the delay (actual or internal), \(b_{u}\) is the constant parameter for the operation input, \(u_{[t]}\) is the operation input, which takes a continuous value, and \(\epsilon_{x}\sim\mathcal{N}(0,\sigma_{x}^{2})\) is state transition noise (actual or internal). The operator selects an operation input to achieve the desired future state \(\hat{x}_{[t+d+1]}\) based on Eq. (1). 
\[b_{u}u_{[t]}=\hat{x}_{[t+d+1]}-\tilde{x}_{[t+d]} \tag{2}\] Because we focus on the effect of the delay distribution, we assume the operator selects an optimal operation input \(u_{[t]}\) from the expected values of the desired state \(\hat{x}_{[t+d+1]}\) and the predicted state \(\tilde{x}_{[t+d]}\). The operation input is reflected in the actual state with a \(\mathbf{d}\)-step delay that the operator observes. Here, the bold \(\mathbf{d}\) denotes the actual delay: \[\mathbf{x}_{[t+\mathbf{d}+1]}=\mathbf{x}_{[t+\mathbf{d}]}+b_{u}u_{[t]}+\boldsymbol{\epsilon}_{x} \tag{3}\] \[\mathbf{y}_{[t+\mathbf{d}+1]}=\mathbf{x}_{[t+\mathbf{d}+1]}+\boldsymbol{\epsilon}_{y}, \tag{4}\] where \(\mathbf{y}_{[t+\mathbf{d}+1]}\) is the observation value, and \(\boldsymbol{\epsilon}_{y}\sim\mathcal{N}\big{(}0,\sigma_{y}^{2}\big{)}\) is observation noise. Eqs. (3) and (4) are based on the state-space model. The operator predicts the future state when selecting an operation input based on Eq. (1). The italic \(d\) denotes the perceived delay. \[\tilde{x}_{[t+d+1]}=\tilde{x}_{[t+d]}+b_{u}u_{[t]}+\epsilon_{x} \tag{5}\] Finally, the Bayesian operator estimates the actual state from the observation (Eq. (4)) and the prediction (Eq. (5)). \[\begin{cases}\mu_{t+d+1}=\tilde{\mu}_{t+d+1}+\dfrac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\big{(}\mathbf{y}_{[t+d+1]}-\tilde{\mu}_{t+d+1}\big{)}\\ \sigma_{t+d+1}^{2}=\left(1-\dfrac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\right)\tilde{\sigma}_{t+d+1}^{2}\end{cases} \tag{6}\] Eqs. (5) and (6) are based on the Kalman filter (Kalman, 1960). Eqs. (2)-(6) represent the delayed continuous operation process. ### Discrepancy between the internal model and real-world process This study models the effect of delay distributions on performance and SoA; therefore, we formulate how delay causes a discrepancy between the internal and real-world states. See Appendix A for a detailed description. 
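The prediction/estimation cycle of Eqs. (5)-(6) amounts to a scalar Kalman filter; a minimal sketch, with all numerical values being illustrative assumptions rather than parameters from the experiments:

```python
def predict(mu, var, b_u, u, var_x):
    """Prediction step (Eq. (5)): propagate the estimate through the
    operation input; state-transition noise inflates the variance."""
    return mu + b_u * u, var + var_x

def estimate(mu_pred, var_pred, y, var_y):
    """Estimation step (Eq. (6)): fuse the prediction N(mu_pred, var_pred)
    with a noisy observation y whose noise variance is var_y."""
    gain = var_pred / (var_pred + var_y)  # Kalman gain
    return mu_pred + gain * (y - mu_pred), (1.0 - gain) * var_pred

# One illustrative cycle: prior N(0, 0.25), input u = 1 with b_u = 1 and
# transition noise 0.25, then an observation y = 1.2 with noise variance 0.5.
mu_p, var_p = predict(0.0, 0.25, b_u=1.0, u=1.0, var_x=0.25)
mu, var = estimate(mu_p, var_p, y=1.2, var_y=0.5)
print(mu, var)
```

The posterior variance is always smaller than the predicted variance, reflecting that each observation reduces the operator's uncertainty about the actual state.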
The term \(\tilde{x}_{[t+d]}\) represents the prediction of the future state \(d\) steps ahead. The agent predicts \(\tilde{x}_{[t+d]}\) based on the perceived delay \(d\), so \(\tilde{x}_{[t+d]}\) is directly affected by the delay. In this section, we formulate the expected value \(\tilde{\mu}_{t+d}\) and variance \(\tilde{\sigma}_{t+d}^{2}\) of \(\tilde{x}_{[t+d]}\). The agent predicts \(\tilde{x}_{[t+d]}\) based on the memory of recent operation inputs. The greater the delay \(d\), the more uncertain the memory is. We assume that the agent recalls past operation inputs step by step, with recollection noise \(\epsilon_{u}\sim\mathcal{N}(0,\sigma_{u}^{2})\) added at each step. Because we consider short delays of less than 1000 ms, we linearly approximate recent operation inputs. Under these assumptions, we obtain the expected value \(\tilde{\mu}_{t+d}\) and variance \(\tilde{\sigma}_{t+d}^{2}\) of \(\tilde{x}_{[t+d]}\) as follows: \[\begin{cases}\tilde{\mu}_{t+d}=\mu_{t}+b_{u}du_{[t]}+b_{u}\frac{d(d+1)}{2}\Delta u\\ \tilde{\sigma}_{t+d}^{2}=\sigma_{t}^{2}+b_{u}^{2}\frac{d(d+1)}{2}\sigma_{u}^{2}+d\sigma_{x}^{2},\end{cases} \tag{7}\] where \(\Delta u\) represents the expected change in the operation input \(u\) per time step. For example, during the operation of a mobile robot, the change in the operation input per unit time corresponds to the frequency of turning and acceleration/deceleration. The upper row of Eq. (7) indicates that the expected value \(\tilde{\mu}_{t+d}\) of the prediction \(\tilde{x}_{[t+d]}\) is the estimated current actual position plus the sum of the linearly approximated recent operation inputs. The lower row indicates that the variance \(\tilde{\sigma}_{t+d}^{2}\) of the prediction \(\tilde{x}_{[t+d]}\) is the sum of the uncertainty of the estimated current state \(\sigma_{t}^{2}\), the uncertainty of remembering recent operation inputs \(\sigma_{u}^{2}\), and the state transition noise \(\sigma_{x}^{2}\). 
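Eq. (7) can be sketched directly; the parameter values below are illustrative assumptions, chosen only to show how the prediction spreads with the perceived delay:

```python
def predict_future(mu_t, var_t, b_u, u_t, delta_u, d, var_u, var_x):
    """Eq. (7): mean and variance of the d-step-ahead prediction.
    Recollection noise var_u and transition noise var_x both grow with d."""
    tri = d * (d + 1) / 2
    mu = mu_t + b_u * d * u_t + b_u * tri * delta_u
    var = var_t + b_u ** 2 * tri * var_u + d * var_x
    return mu, var

# The same operating context predicted 2 vs. 6 steps ahead: the mean
# extrapolates the recent inputs, while the variance grows with the delay.
for d in (2, 6):
    print(d, predict_future(0.0, 0.1, b_u=1.0, u_t=0.5, delta_u=0.05,
                            d=d, var_u=0.01, var_x=0.02))
```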
The larger the perceived delay \(d\), the larger is the value of \(\tilde{\sigma}_{t+d}^{2}\). Therefore, Eq. (7) indicates that the perceived delay influences the probability distribution of the prediction \(\tilde{x}_{[t+d]}\).

### Modeling task performance

In this section, we model task performance in delayed continuous operations. See Appendix A for a detailed description. We define task performance in relation to the operation error, which is the difference between the desired state \(\hat{x}\) and the actual state \(\mathbf{x}\) (see Comparator 4 in Fig. 1). The operation error is as follows: \[\mathbf{x}_{[t+\mathbf{d}+1]}-\hat{x}_{[t+d+1]}\] \[=b_{u}\left((\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1}{2}\Delta u\right)+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon_{u}}_{[t-k]}-\sum_{k=1}^{d}\sum_{l=1}^{k}\mathbf{\epsilon_{u}}_{[t-k+l]}\right)+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon_{x}}_{[t-k]}-\sum_{k=1}^{d}\mathbf{\epsilon_{x}}_{[t-k]}+2\mathbf{\epsilon_{y}}_{[t]}. \tag{8}\] Equation (8) suggests that the discrepancy between the desired and actual states increases as the actual delay \(\mathbf{d}\), the perceived delay \(d\), and the discrepancy \(\mathbf{d}-d\) between them increase. The operation error is calculated by sampling \(\mathbf{d}\sim N\big{(}\bar{\mathbf{d}},\sigma_{\mathbf{d}}^{2}\big{)}\), \(d\sim N\big{(}\bar{d},\sigma_{d}^{2}\big{)}\), \(\boldsymbol{\epsilon}_{\mathbf{u}}\sim N(0,\sigma_{\mathbf{u}}^{2})\), \(\epsilon_{u}\sim N(0,\sigma_{u}^{2})\), \(\boldsymbol{\epsilon}_{\mathbf{x}}\sim N(0,\sigma_{\mathbf{x}}^{2})\), and \(\boldsymbol{\epsilon}_{y}\sim N\big{(}0,\sigma_{y}^{2}\big{)}\). Task performance is defined as the percentage of acceptable operation errors.
\[(task\ performance)\equiv\frac{\text{count}\big{(}\big{|}\mathbf{x}_{[t+\mathbf{d}+1]}-\hat{x}_{[t+d+1]}\big{|}\leq E_{max}\big{)}}{\text{count}(all)}\times 100\ [\%], \tag{9}\] where "\(count(*)\)" denotes "the count of samples that meet the condition \(*\)," and \(E_{max}\) represents the upper limit of the allowable operation error (e.g., road width). Eqs. (8) and (9) indicate that the greater the expected value of the delays, the greater the operation error \(\big{|}\mathbf{x}_{[t+\mathbf{d}+1]}-\hat{x}_{[t+d+1]}\big{|}\) and the worse the task performance. In Section 4, we numerically simulate task performance.

### Modeling SoA using free energy

In this section, we model SoA for delayed continuous operations. The classical SoA model is known as the comparator model (Blakemore et al., 1998, 2000; Miall and Wolpert, 1996; Ohata et al., 2020; Wolpert et al., 1995). In the comparator model, the lack of SoA originates from prediction error, that is, the error between the predicted state and the estimated actual state (see Comparator 3 in Fig. 1). This is a simple model in which incongruence between the two states diminishes SoA. However, SoA varies continuously (Wen, 2019). Therefore, studies have proposed statistical (Wen et al., 2015) and mathematical models of SoA (Legaspi and Toyoizumi, 2019; Taniyama et al., 2021). In this study, a free-energy model was adopted (Taniyama et al., 2021). Free energy is an information quantity that represents prediction errors in the brain (Friston et al., 2006). In the model, this quantity is used to formulate prediction errors in the comparator model. The Bayesian estimation discussed in Section 2.1.1 is approximated by variationally minimizing the free energy (Buckley et al., 2017; Friston et al., 2006). Free energy (Buckley et al., 2017; Friston et al., 2006) is a quantity from information theory (Shannon, 1948). Free energy (\(F\)) is defined as the summation of internal energy and entropy.
\[F[y,Q]\equiv\mathbb{E}_{q(x)}[-\ln P(y,x)]-\mathbb{E}_{q(x)}[-\ln q(x)], \tag{10}\] where \(x\) and \(y\) are the state and observation, respectively, as expressed in Eq. (1). Furthermore, \(q(x)\) is the _recognition density_ representing the agent's internal belief about \(x\). \(P(y,x)\), which is a joint probability distribution of \(y\) and \(x\), is a _generative model_ representing the statistical model of the relationship between an observation and its causes. Equation (10) indicates that the free energy is the average deviation of a generative model prediction from the belief (or recognition). Free energy is a dimensionless quantity and can be used regardless of the units of \(x\). Using Bayes' theorem, Eq. (10) is rearranged as follows: \[F[y,Q]=D_{KL}[q(x)\parallel p(x|y)]-\ln p(y) \tag{11}\] \[F[y,Q]\geq-\ln p(y). \tag{12}\] The first term on the right side of Eq. (11) is the KL divergence between the recognition density and the posterior distribution of \(x\). The KL divergence approaches zero when \(q(x)\) approximates the posterior through variational minimization of the free energy. The minimized free energy is the Shannon surprise, \(-\ln p(y)\), representing the unpredictability of the observation \(y\). The comparator model suggests that the agent loses SoA when the observation of the action outcome is unpredicted. Therefore, Taniyama et al. (2021) proposed that SoA is inversely proportional to the minimized free energy. \[(SoA)\propto-F=\log p(y) \tag{13}\] The minimized free energy is expressed by the following equation when a Gaussian generative model is assumed (Taniyama et al., 2021; Yanagisawa, 2016, 2021): \[F=\frac{1}{2}\left(\frac{1}{s_{p}+s_{l}}\delta^{2}+\ln 2\pi\big{(}s_{p}+s_{l}\big{)}\right), \tag{14}\] where \(\delta,\ s_{p},\) and \(s_{l}\) represent the prediction error, prediction uncertainty, and system noise, respectively, and the free energy can be expressed as a function of these three parameters. Taniyama et al.
(2021) validated the free energy model of SoA through button-press task experiments, where the prediction error and uncertainty were controlled using operational delay and sensory modalities, respectively. We combined the free-energy model (Eqs. (13) and (14)) and the delayed continuous operation model (Eqs. (1)-(6)). Taniyama et al. (2021) considered prediction error \(\delta\) as the delay itself. However, in a continuous operation, the operator is assumed to know that a delay occurs. Therefore, the free energy associated with this state was considered. Prediction error \(\delta\) is the difference between the expected values of the predicted state \(\tilde{\mu}_{t+d+1}\) and the estimated actual state \(\mu_{t+d+1}\) (Comparator 3 in Fig. 1), prediction uncertainty \(s_{p}\) is the standard deviation of the predicted state \(\tilde{\sigma}_{t+d+1}\), and system noise \(s_{l}\) is the standard deviation of the observation noise \(\sigma_{y}\). As described in Section 2.3, these variables were formulated as the functions of the expected value and variance of delay. See Appendix A for detailed formulation and assumptions. \[\mu_{t+d+1}-\tilde{\mu}_{t+d+1}\] \[=\frac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+ \sigma_{y}^{2}}\Bigg{(}b_{u}(\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1} {2}\Delta u\right)+b_{u}\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{ u}[t-k]}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}[t-k]}+ \boldsymbol{\epsilon}_{y[t+\mathbf{d}+1]}+\boldsymbol{\epsilon}_{y[t]}\Bigg{)} \tag{15}\] \[\tilde{\sigma}_{t+d+1}^{2}=\sigma_{y}^{2}+b_{u}^{2}\frac{d(d+1)}{2}\sigma_{u}^ {2}+(d+1)\sigma_{x}^{2} \tag{16}\] Eq. (15) suggests that the prediction error increases with the actual delay. Eq. (16) suggests that the prediction uncertainty increases as the perceived delay increases. 
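To make the mapping concrete, the free energy of Eq. (14) and the prediction variance of Eq. (16) can be sketched as follows. The function names are our own, and the parameter values are the Table 1 simulation settings used for illustration:

```python
import math

def free_energy(delta, s_p, s_l):
    """Eq. (14): free energy from prediction error delta,
    prediction uncertainty s_p, and system noise s_l."""
    return 0.5 * (delta ** 2 / (s_p + s_l) + math.log(2.0 * math.pi * (s_p + s_l)))

def prediction_variance(d, b_u, var_u, var_x, var_y):
    """Eq. (16): variance of the (d+1)-step-ahead prediction."""
    return var_y + b_u ** 2 * d * (d + 1) / 2.0 * var_u + (d + 1) * var_x

# A larger prediction error raises F, i.e., lowers SoA (Eq. (13)):
f_low = free_energy(0.0, 1.0, 1.0)
f_high = free_energy(5.0, 1.0, 1.0)
```

Evaluating `prediction_variance` for growing `d` reproduces the statement below that prediction uncertainty increases with the perceived delay.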
The prediction error is calculated by sampling \(\mathbf{d}\)\(\sim\)\(\mathcal{N}\big{(}\bar{\mathbf{d}},\mathbf{\sigma}_{\mathbf{d}}^{2}\big{)}\), \(d\)\(\sim\)\(\mathcal{N}\big{(}\bar{\mathbf{d}},\sigma_{\mathbf{d}}^{2}\big{)}\), \(\boldsymbol{\epsilon}_{\mathbf{u}}\)\(\sim\)\(\mathcal{N}\big{(}0,\sigma_{\mathbf{u}}^{2}\big{)}\), \(\boldsymbol{\epsilon}_{\mathbf{x}}\)\(\sim\)\(\mathcal{N}\big{(}0,\sigma_{\mathbf{x}}^{2}\big{)}\), and \(\boldsymbol{\epsilon}_{y}\)\(\sim\)\(\mathcal{N}\big{(}0,\sigma_{y}^{2}\big{)}\). We define SoA as a value that increases with decreasing free energy. We mapped SoA values between 0 and 100 for comparison with experimental results. \[(SoA) \equiv\frac{1}{\text{count}(all)}\times\sum_{sample}\frac{F_{max}-F_ {sample}}{F_{max}}\times 100\ [\%] \tag{17}\] \[F_{sample} =\frac{1}{2}\left(\frac{|\mu_{t+d+1}-\tilde{\mu}_{t+d+1}|^{2}}{ \tilde{\sigma}_{t+d+1}+\sigma_{y}}+\ln 2\pi\big{(}\tilde{\sigma}_{t+d+1}+ \sigma_{y}\big{)}\right), \tag{18}\] where \(F_{max}\) is a suitable constant for mapping SoA from 0 to 100. In Section 4, we numerically simulate SoA. ## 3 Predictive Wand In this section, we detail the proposed visual interface named "Predictive Wand." As described in Section 2.2, the model suggests that the discrepancy between the predicted (\(\tilde{\boldsymbol{x}}_{[t+d]}\)) and actual (\(\mathbf{x}_{[t+d]}\)) states at time \(t+d\) causes degradation in both task performance and SoA. We hypothesize that accurate prediction \(\tilde{\boldsymbol{x}}_{[t+d]}\) reduces both operational and prediction errors. Based on this hypothesis, Predictive Wand increases the accuracy by visualizing the state prediction at time \(t+d\), as displayed in Fig. 2. We constructed a Predictive Wand using the actual current state \(\mathbf{x}_{[t]}\), current operation input \(u_{[t]}\), and expected value of the actual delay \(\mathbf{\bar{d}}\). 
The position \(\mathbf{w}_{[t]}\) pointed by the wand is represented by the following equation: \[\mathbf{w}_{[t]}=\mathbf{x}_{[t]}+b_{u}\mathbf{\bar{d}}u_{[t]}. \tag{19}\] Predictive Wand is drawn as an arrow extending from the current state. The length of the wand is \(b_{u}u_{[t]}\mathbf{\bar{d}}\), so the value \(\mathbf{w}_{[t]}\) in Eq. (19) represents the position at which the wand points. The observational equation is as follows: \[z_{[t]}=\mathbf{w}_{[t]}+\mathbf{\epsilon}_{z}, \tag{20}\] where \(\mathbf{\epsilon}_{z}\sim N(0,\sigma_{z}^{2})\) is the observation noise. The agent uses a Kalman filter to predict and estimate the actual position of the wand. In this study, we assumed that the tip of the wand is a small circle and that the observation noise \(\sigma_{z}^{2}\) is negligibly small compared with the prediction uncertainty. With this assumption, the agent predicts \(\mathbf{\tilde{x}}_{[t+d]}\sim N(\mathbf{\tilde{\mu}}_{t+d},\mathbf{\tilde{\sigma}}_{t+d}^{2})\) as follows (Appendix B provides a detailed description): \[\begin{cases}\mathbf{\tilde{\mu}}_{t+d}=\mathbf{x}_{[t]}+b_{u}\mathbf{\bar{d}}u_{[t]}\\ \mathbf{\tilde{\sigma}}_{t+d}^{2}=\sigma_{z}^{2}+\sigma_{p}^{2},\end{cases} \tag{21}\] where \(\sigma_{p}^{2}\) represents the uncertainty in predicting \(\mathbf{\tilde{x}}_{[t+d]}\) from the position indicated by Predictive Wand. This result implies that the agent does not consider the pointed position as \(\mathbf{\tilde{x}}_{[t+d]}\) directly. Because \(\sigma_{p}^{2}\) is not a value that can be measured directly, similar to observation noise \(\sigma_{y}^{2}\), we set an appropriate value through simulations. Finally, we formulated the prediction and operational errors. See Appendix B for a detailed derivation.
\(\mu_{t+d+1}-\mathbf{\tilde{\mu}}_{t+d+1}\) \[=\frac{\sigma_{x}^{2}+2\sigma_{x}^{2}+\sigma_{p}^{2}}{\sigma_{x}^{2}+2\sigma_{x}^{2}+\sigma_{p}^{2}+\sigma_{y}^{2}}\Bigg{(}b_{u}\Bigg{\{}(\mathbf{d}-\mathbf{\bar{d}})u_{[t]}+\frac{\mathbf{d}(\mathbf{d}+1)}{2}\Delta\mathbf{u}+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon}_{\mathbf{u}[t-k]}\Bigg{\}}+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon}_{\mathbf{x}[t-k]}+\mathbf{\epsilon}_{y[t+\mathbf{d}+1]}+\mathbf{\epsilon}_{y[t]}\Bigg{)} \tag{22}\] Figure 2: Predictive Wand drawn as an arrow extended from the actual current state \(x_{[t]}\). The length of the wand is \(b_{u}u_{[t]}\mathbf{\bar{d}}\). The pointed position is the observation of predicted \(\mathbf{\tilde{x}}_{[t+d]}\), and Kalman filtering is used to update prediction. \[\mathbf{x}_{[t+\mathbf{d}+1]}-\hat{x}_{[t+d+1]}=b_{u}\Bigg{\{}(\mathbf{d}-\mathbf{\bar{d}})u_{[t]}+\frac{\mathbf{d}(\mathbf{d}+1)}{2}\Delta\mathbf{u}+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon}_{\mathbf{u}[t-k]}\Bigg{\}}+\sum_{k=0}^{\mathbf{d}}\mathbf{\epsilon}_{\mathbf{x}[t-k]}-2\mathbf{\epsilon}_{\mathbf{z}}-\mathbf{\epsilon}_{\mathbf{p}} \tag{23}\] The task performance and SoA can be simulated using the definitions given in Eqs. (9) and (17), respectively. Eqs. (22) and (23) suggest that both operation and prediction errors increase when the actual delay \(\mathbf{d}\), the expected value of the actual delay \(\mathbf{\bar{d}}\), and the discrepancy \(\mathbf{d}-\mathbf{\bar{d}}\) between them increase. Compared with Eqs. (8) and (15), the perceived delay \(d\) does not affect the operation and prediction errors with Predictive Wand. Recollect noise \(\epsilon_{u}\) also does not affect the operation error. Thus, we hypothesize that showing Predictive Wand reduces operation and prediction errors and increases task performance and SoA. In Section 4, we numerically simulate the effects of Predictive Wand.

## 4 Model-based simulations

### Method

We conducted a simulation assuming a specific operation. Fig.
3 presents an overview of the task. Figure 3: Overview of the task for our simulation. A red object and a white course are displayed. The agent controls the object horizontally to keep it inside the course, which is scrolling vertically at a fixed speed. The task setting is identical to the experimental setting. The contents of the task were identical to those of the experiment described in Section 5. A red square object (30 pixels on each side) and a white course (200 pixels wide) are displayed. The agent controlled the object horizontally to maintain it inside the course, which scrolled vertically at a fixed speed (200 pixel/s). A joystick was used for this operation. The deeper the tilt, the faster the movement of the object. The operation inputs were between \(-1\) (maximum to the left) and 1 (maximum to the right). The maximum speed was 220 pixel/s. The vertical position of the object was fixed, but the scrolling speed of the course functioned as the apparent speed. The waving shape of the course was the sum of two sine waves. Predictive Wand is displayed depending on the condition. The wand consists of a red line segment and a red dot (five-pixel radius). The line segment length is the (apparent) velocity of the object multiplied by the mean delay. The operation delay is Gaussian distributed. The range of the mean is 200 to 1000 ms. The variance is 10 ms\({}^{2}\) (the "low variance" condition) or 1000 ms\({}^{2}\) (the "high variance" condition). Table 1 lists the parameters used in the simulations. The unit of time is not seconds but frames, and the rate is 100 fps.
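The Monte Carlo procedure behind Eqs. (8) and (9) can be sketched as follows. This is a deliberately simplified illustration with our own names: the accumulated memory-noise sums of Eq. (8) are collapsed into a single Gaussian term, so it is not a faithful reproduction of the paper's simulation:

```python
import random

def simulate_performance(d_mean, d_var, n=5000, b_u=2.2, u=0.0, delta_u=0.005,
                         var_x=1.0, var_y=1.0, e_max=200.0):
    """Simplified Monte Carlo for Eqs. (8)-(9): percentage of samples whose
    operation error stays within the allowable limit e_max (course width).
    Delays are in frames (100 fps, so 20 frames corresponds to 200 ms)."""
    ok = 0
    for _ in range(n):
        d_act = max(0.0, random.gauss(d_mean, d_var ** 0.5))  # actual delay
        d_per = max(0.0, random.gauss(d_mean, d_var ** 0.5))  # perceived delay
        # systematic part of the error in Eq. (8):
        err = b_u * (d_act - d_per) * (u + (d_act + d_per + 1) / 2.0 * delta_u)
        # crude stand-in for the accumulated noise terms of Eq. (8):
        err += random.gauss(0.0, ((d_act + 1) * var_x + 4.0 * var_y) ** 0.5)
        if abs(err) <= e_max:
            ok += 1
    return 100.0 * ok / n

random.seed(0)
performance = simulate_performance(20, 0.1)  # 200 ms mean, low-variance condition
```

Sweeping `d_mean` over the frame equivalents of 200-1000 ms and the two variance conditions mirrors the parameter grid of Table 1.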
\begin{table}
\begin{tabular}{l l l l}
\hline
Model variable & Definition & Value for simulation & Corresponding task condition \\
\hline
\(\bar{\mathbf{d}},\bar{d}\) & Delay expectation & 20, 40, 60, 80, 100 & Delay mean \\
\(\sigma_{\mathbf{d}}^{2},\sigma_{d}^{2}\) & Delay variance & 0.1, 10 & Delay variance \\
\(\sigma_{\mathbf{u}}^{2},\sigma_{u}^{2}\) & Approximation error, recollect noise & 0.0001 & (A suitable value) \\
\(\sigma_{\mathbf{x}}^{2},\sigma_{x}^{2}\) & State transition noise & 1.0 & (A suitable value) \\
\(\sigma_{y}^{2}\) & Observation noise & 1.0 & (A suitable value) \\
\(\sigma_{z}^{2}\) & Observation noise for the Wand & 1.0 & (A suitable value) \\
\(\sigma_{p}^{2}\) & Uncertainty of prediction with the Wand & 400 & (A suitable value) \\
\(b_{u}\) & Parameter for operation inputs & 2.2 & Velocity of the object \\
\(u_{[t]}\) & Current input & 0.0 & A suitable value between \(-1\) and 1 \\
\(\Delta\mathbf{u},\Delta u\) & Change of \(u\) per time step & 0.005 & Course shape and scroll speed \\
\(E_{max}\) & Allowable operation error & 200 & Course width \\
\hline
\end{tabular}
\end{table} Table 1: Parameters for simulation.

To investigate changes in task performance and SoA due to the delay distribution, we used the same values in the real world and the brain for the parameters in Table 1. We calculated the task performance and SoA by substituting the parameters and variables in Table 1 into Eqs. (8), (9), (15)-(18), (22), and (23). The simulation results are presented in Section 4.2. Python was used for the simulations. The simulation was performed 25 times, and the average values were plotted. A total of 5000 samples were collected for each simulation. A three-way ANOVA was conducted on both task performance and SoA with delay expectation, delay variance, and the presence of Predictive Wand as factors.

### Results

Fig. 4 displays the result of the simulation for task performance. Fig.
4 reveals that task performance declines as delay expectations increase. The statistics of the three-way ANOVA reveal that the effect of delay expectation is significant (\(F=9.8\times 10^{4}\), \(p<0.001\)). Therefore, we hypothesize that task performance degrades with an increase in delay expectation. Fig. 4 does not suggest that the delay variance affects task performance. Figure 4: Simulation results for task performance as a function of delay expectations for the various conditions of delay variances, and the presence of Predictive Wand. The light gray bars represent the results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas dark gray bars represent the results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. The hatched bars represent the results under the Predictive Wand conditions. Error bars represent standard errors. The main effect of the delay variance is not significant (\(F=0.069\), \(p=0.793\)). We hypothesized that the delay variance does not affect task performance. Fig. 4 suggests that Predictive Wand maintained task performance under longer delay conditions (800 and 1000 ms). The effect of Predictive Wand is significant (\(F=1.8\times 10^{5}\), \(p<0.001\)). The interaction effect between Predictive Wand and delay expectation is also significant (\(F=4.7\times 10^{4}\), \(p<0.001\)). Therefore, we hypothesized that Predictive Wand reduces the rate of decline in task performance owing to an increase in delay. Fig. 5 illustrates the simulation result for SoA. Fig. 5 reveals that SoA degrades with an increase in delay expectation. The statistics of the three-way ANOVA revealed that the main effect of delay expectation is significant (\(F=9.7\times 10^{4}\), \(p<0.001\)). Therefore, we hypothesized that SoA declines with the increase in delay expectation. Fig. 5 does not suggest that the delay variance affects SoA. The main effect of delay variance is not significant (\(F=0.870\), \(p=0.351\)).
We hypothesized that delay variance does not affect SoA. Fig. 5 suggests that Predictive Wand prevents SoA from decreasing with an increase in delay. The main effect of Predictive Wand is significant (\(F=4.0\times 10^{4}\), \(p<0.001\)). The interaction effect between Predictive Wand and delay expectation is significant (\(F=2.6\times 10^{3}\), \(p<0.001\)). Therefore, we hypothesized that Predictive Wand reduces the rate of decline in SoA owing to a delay increase. Figure 5: Simulation result for SoA as a function of delay expectations for various conditions of delay variances, and presence of Predictive Wand. Light gray bars represent the results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas dark gray bars represent results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. Hatched bars represent the results under wand-present conditions. Error bars represent standard errors.

## 5 Experiments

### Hypotheses based on simulation results

From the results of the simulation in Section 4.2, we proposed the following three hypotheses:

- Hypothesis 1: Both task performance and SoA decrease as delay expectation increases.
- Hypothesis 2: The effects of delay variance on task performance and SoA are insignificant.
- Hypothesis 3: Predictive Wand reduces the rate of decline in both task performance and SoA due to an increase in delay.

The hypotheses were tested experimentally. In the experiments, participants performed operational tasks with a response delay. Delay expectation, delay variance, and the presence or absence of Predictive Wand were the parameters. We measured task performance and subjective reports of SoA, and we statistically analyzed the main effect of delay expectation (for Hypothesis 1), the main effect of delay variance (for Hypothesis 2), and the main effect of Predictive Wand and the interaction effect between Predictive Wand and delay expectation (for Hypothesis 3).
### Participants

Twenty-four university students (15 men, 6 women; mean age: \(22.1\pm 0.68\) years) participated in the experiment. The participants had normal finger motion and sight functions. This experiment was approved by the Research Ethics Committee of The University of Tokyo Graduate School of Engineering (approval number: KE21-97). All participants consented to participate in the study.

### Procedure

We verified the simulation results presented in Section 4.2 as hypotheses by conducting experiments with human participants. Fig. 6 displays an overview of the experiment. Figure 6: Overview of the experiment consisting of 42 sets. The first 28 sets are called Block A and were conducted to examine the effects of delay expectation and delay variance on task performance and SoA (Hypotheses 1 and 2). The remaining 14 sets are called Block B and were conducted to examine the effects of Predictive Wand on task performance and SoA under various delay conditions (Hypothesis 3). Each block has several practice sets at the beginning. Each set consisted of three sessions, namely resetting, performance, and rating sessions. The resetting sessions were used to reset the delay perception of participants. In the performance session, participants performed delayed operation tasks, and answered a subjective evaluation of SoA in the following rating session. The experiment comprised 42 sets. The first 28 sets were called Block A. In Block A, we verified Hypotheses 1 and 2 described in Section 5.1. We examined changes in task performance and SoA with changes in delay expectation and variance. The remaining 14 sets were called Block B. In Block B, we verified Hypothesis 3 described in Section 5.1. We examined changes in task performance and SoA with and without Predictive Wand. The contents of one set of Blocks A and B were the same, except for the presence of Predictive Wand as an experimental condition.
The first seven sets of Block A and two sets of Block B were used for practice, but their data were not used for analysis. Each set comprised three sessions, namely resetting, performance, and rating. In each resetting session, a red square object and a yellow frame were displayed. Participants used a joystick to control the object horizontally. The deeper the tilt of the joystick, the faster the object moved (maximum speed: 220 pixel/s). The vertical position of the object was fixed (350 pixels from the lower edge of the monitor). Participants were instructed to move the object into the frame. When the object entered the frame, the frame moved to the other side. No delay was imposed on the operation during resetting sessions. The participants repeated this task three times, which ended the session. The resetting sessions served to reset the participants' perception of delay. In each performance session, a red square object and a white course were displayed. Participants controlled the object as they did during resetting sessions. Participants were instructed to control the object to maintain it inside the course, which scrolled vertically at a fixed speed (200 pixel/s). The waving shape of the course was the sum of two sine waves, and three types of courses were prepared. The course width was fixed (200 pixels) from the beginning of the course to the goal. The task contents were identical to those of the simulations conducted in Section 4. In Block A, seven delay conditions between tilting the joystick and movement of the object were prepared: six combinations of three types of delay expectation (200, 400, or 800 ms) and two types of delay variance (10 or 1000 ms\({}^{2}\)), plus a nondelayed condition. In Block B, 12 conditions were prepared: combinations of three types of delay expectation (200, 400, or 800 ms), two types of delay variance (10 or 1000 ms\({}^{2}\)), and two types of the presence of Predictive Wand (present or absent).
The vertical size of Predictive Wand was fixed (scrolling speed of the course \(\times\) delay expectation), whereas the horizontal size was variable (current speed of the object \(\times\) delay expectation). Each performance session lasted for 45 s. Participants answered two questions in each rating session. The first question asked, “To what extent did you feel that the object was 'under your control'?” (0%-100%), and the second question asked, “To what extent did you feel that you could operate 'as you desired'?” (0%-100%). The first question evaluated SoA caused by the prediction error (Comparator 3 in Fig. 1), which corresponded to the simulated SoA. The second question evaluated the desirability of the operation (Comparator 1 in Fig. 1) and was used to distinguish it from the SoA that we aimed to verify. Before the experiments, the participants freely controlled the object for 30 s and performed a practice session. Experiments were conducted using the following devices: JC-U4013SBK (ELECOM) for the controller and XB323QKNVbmiiphuzx (Acer) for the monitor. All experimental conditions were counterbalanced among all participants.

### Data analysis

The scores of task performance were calculated by the following equation: \[(task\;performance)=\frac{t_{in}}{T_{total}}\times 100\;[\%],\] where \(T_{total}\) is the total time of one task (45 s), and \(t_{in}\) is the total time that the object was inside the course in each task. Subjective reports of SoA were measured using the first question in each rating session. We conducted a two-way ANOVA on both the task performance score and the subjective reports of SoA with delay expectation and delay variance for Block A. For Block B, we conducted a three-way ANOVA with delay expectation, delay variance, and the presence of Predictive Wand. Scores from the nondelayed condition were excluded from analysis.

## 6 Experimental results

### Effect of the delay expectation and the delay variance (Block A)

Fig.
7 displays the effects of delay expectation and delay variance on task performance. The results indicated that task performance declined as delay expectations increased. The main effect of delay expectations is significant (\(F=32.9\), \(p<0.001\)). These results suggest that task performance declines significantly as delay expectations increase, thus supporting Hypothesis 1. The main effect of the delay variance is not significant (\(F=0.101\), \(p=0.904\)). These results support Hypothesis 2, indicating that delay variance does not affect task performance. Fig. 8 details the results of the effects of delay expectation and delay variance on SoA. This result indicates that SoA declines with the increase in delay expectation. The main effect of delay expectations is significant (\(F=55.2\), \(p<0.001\)). This result suggests that SoA declines significantly with the increase in delay expectation, thereby supporting Hypothesis 1. The main effect of delay variance is not significant (\(F=0.377\), \(p=0.686\)). This result supports Hypothesis 2: Delay variance does not affect SoA. Figure 7: Scores of task performance for various combinations of delay expectation and delay variance. Light gray bars represent the results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas dark gray bars represent results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. The leftmost bar indicates the score under nondelayed condition. Error bars represent standard errors. Red cross marks represent simulation results. Here, “*” denotes “significant (\(p<0.05\))”, “***” denotes “significant (\(p<0.001\))”. ### Effect of Predictive Wand (Block B) Fig. 9 reveals the effects of Predictive Wand and delay distributions on task performance. The result indicates that Predictive Wand maintains task performance in the 800 ms delay condition. The main effect of Predictive Wand is significant (\(F=8.039,p=0.005\)). 
The interaction effect between Predictive Wand and delay expectations is significant (\(F=4.590,p=0.011\)). These results suggest that Predictive Wand significantly reduces the rate of decline caused by delays in task performance, thus supporting Hypothesis 3. Figure 8: Subjective reports of SoA for the various combinations of delay expectation and delay variance. Light gray bars represent results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas dark gray bars represent results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. The leftmost bar represents the score under the nondelayed condition. Error bars represent standard errors. Red cross marks represent simulation results. “***” denotes “significant (\(p<0.001\))”. Fig. 10 displays the results of effects of Predictive Wand and delay distributions on SoA. This result indicates that Predictive Wand maintains SoA in the 400 and 800 ms delay conditions. The main effect of Predictive Wand was significant (\(F=7.03\), \(p=0.0086\)). The interaction effect between Predictive Wand and delay expectation was not significant (\(F=2.56\), \(p=0.0798\)); however, the difference tended to increase as delay expectation increased. These results suggested that Predictive Wand significantly reduced the rate of decline owing to delays, which supported Hypothesis 3. Figure 9: Scores of task performance for various combinations of delay expectation, delay variance, and presence of Predictive Wand. Light gray bars represent results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas the dark gray bars represent the results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. Hatched bars represent the results under Predictive Wand-present conditions. Error bars represent standard errors. Red cross marks represent simulation results. 
“***” denotes “significant (\(p<0.001\)).”

## 7 Discussions

### Effect of delay distribution on task performance and SoA

Based on the results in Section 6.1, we verify Hypotheses 1 and 2. We confirmed that task performance and SoA declined as delay expectation increased and that delay variance did not affect task performance and SoA. Figs. 7 and 8 indicate that the simulation results are consistent with the experimental results. For the operational tasks in this study, we consider the task performance and SoA models in Section 2 to be appropriate. The rates of decline in both task performance and SoA were greater in the simulations than in the experiments. We assume that the cause is the limitation of the linear approximation of recent operation inputs \(u\) (see Section 2.2). The linear approximation of \(u\) becomes less valid as delay expectation increases. To model cases in which the delay expectation is greater than approximately 800 ms, a method without linear approximation of \(u\) would be required. Our simulation and experimental results on the effects of delay on task performance and SoA are consistent with those presented in other studies (e.g., Oishi et al., 2018; Rossetti et al., 2022; Shimada et al., 2009; Wen et al., 2019). This study makes it possible to measure task performance and SoA not only by experiment but also by simulation. Figure 10: Subjective reports of SoA for various combinations of delay expectation, delay variance, and presence of Predictive Wand. Light gray bars represent results under low variance (i.e., 10 ms\({}^{2}\)) conditions, whereas dark gray bars represent results under high variance (i.e., 1000 ms\({}^{2}\)) conditions. Hatched bars represent results under Predictive Wand-present conditions. Error bars represent standard errors. Red cross marks represent simulation results. “*” denotes “significant (\(p<0.05\))”. Furthermore, “**” indicates “significant (\(p<0.01\))”.
“***” denotes “significant (\(p<0.001\)).” ### Effect of Predictive Wand on task performance and SoA Based on the results in Section 6.2, we verify Hypothesis 3. We confirmed that Predictive Wand reduces the rate of decline in task performance and SoA caused by delay. Figs. 9 and 10 indicate that the simulation results are consistent with the experimental results. For the operational tasks in this study, we consider the task performance and SoA models in Chapter 3 to be appropriate. Predictive Wand offers an alternative to automation for preventing the degradation of task performance and SoA (Wen et al., 2015; Ueda et al., 2021; Zanatto et al., 2021). Similar to the discussion in Section 7.1, a method without a linear approximation of \(u\) is required to model cases in which the delay expectation is greater than approximately 800 ms. Predictive Wand is an interface calculated from the current operation input and delay expectation (see Eq. (19)). Future operation inputs are not reflected in the position indicated by Predictive Wand. Therefore, when the change in the operation inputs per unit time is large, the error between the position predicted by Predictive Wand and the actual position increases. The change in the operation inputs per unit time is represented by \(\Delta u\) in our model. Fig. 11 displays a simulation of SoA when \(\Delta u\) is changed from 0.005 to 0.015. Compared with Fig. 5, Fig. 11 suggests that SoA decreases with Predictive Wand under delay expectations of 800 ms or larger. From this result, we consider that task performance and SoA rather decline with Predictive Wand when \(\Delta u\) increases, that is, when the course gets more tortuous. The relationship between \(\Delta u\) and the effect of Predictive Wand should be investigated in the future. Figure 11: Simulation result for SoA as a function of delay expectation for various conditions of delay variance and presence of Predictive Wand. The difference from the results displayed in Fig. 5 is that \(\Delta u\), the change of the operation inputs per unit time, is changed from 0.005 to 0.015. Under delay expectations of 800 ms or larger, SoA decreases with Predictive Wand. ## 8 Conclusion This study presented two main findings. First, a mathematical model was proposed to explain the effects of delay expectation and delay variance on task performance and SoA during continuous operation. The delayed operation model was constructed based on the state-space model and Bayesian estimation, and a prediction of the future state was formulated based on perceived delay. We derived the operation and prediction errors from the formulated prediction and modeled task performance and SoA. In both model simulations and human experiments, we confirmed the following relationships: task performance and SoA decline as delay expectation increases, and delay variance does not affect task performance and SoA. Second, we proposed Predictive Wand, a visual interface to prevent decreases in task performance and SoA with the increase in delay. Predictive Wand is derived from Bayesian model predictions. Predictive Wand presents the prediction of future states calculated using delay expectation and the current operation input. The agent predicts the future state based on Predictive Wand. This prediction diminishes the operation and prediction errors. Both model simulations and experimental results confirmed that Predictive Wand reduced the rate of decline in task performance and SoA. In conclusion, we mathematically modeled the mechanism of task performance and SoA degradation due to action-feedback delay. We verified that delay expectation, rather than delay variance, was the primary cause of task performance and SoA degradation. As delay expectation increases, task performance and SoA decrease owing to the uncertainty in predicting future states. We argue that the larger the change in the operation inputs per unit time, the steeper the decrease in task performance and SoA. 
Therefore, the change in the operation inputs per unit time must be considered for each task when estimating task performance and SoA in the design of an operation system with delay. Predictive Wand, our novel visual interface for operation tasks, mitigates task performance and SoA degradation due to delay by visualizing the prediction of future states. Predictive Wand is derived from our model of the mechanism of task performance and SoA degradation due to delay. We argue that task performance and SoA degradation due to delay can be mitigated by developing interfaces that reduce the uncertainty in future state prediction based on mathematical models. This study has several limitations. First, we used a linear approximation of the recent past operation inputs in the mathematical model. The approximation becomes less valid with the increase in delay expectation; therefore, a method without a linear approximation of the recent past operation inputs is required to model cases in which delay expectation is greater than approximately 800 ms. Second, we did not consider the effect of the change in the operation inputs per unit time. For example, the change in operation inputs over time represents the tortuosity of the course. The simulation conducted in Section 7.2 reveals that task performance and SoA decline with Predictive Wand when the change in operation inputs per unit time is high. The relationship between the change in operation inputs over time and the effect of Predictive Wand should be studied in the future. ## Declaration of Competing Interest The authors declare that they have no competing financial interests or personal relationships that may have influenced this study. ## Author contributions **Isono Masaki:** Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft, Visualization. 
**Yanagisawa Hideyoshi:** Conceptualization, Methodology, Validation, Resources, Writing - Review & Editing, Supervision, Project administration, Funding acquisition. ## Acknowledgments We thank Prof. Tamotsu Murakami and the members of the Design Engineering Lab., Department of Mechanical Engineering, The University of Tokyo for supporting this study. We also thank Ryuichi Suzuki, Shin Shiroma, and other Sony Group researchers for their helpful suggestions through meetings. ## Funding This work was supported by JSPS KAKENHI grant No. 21H03528 and Sony Group Corporation. ## Appendix A Detailed formulation of task performance and SoA We describe the formulation of task performance and SoA in detail. The main equations are those of the (1) state transition, (2) observation, (3) prediction, and (4) update phases (Fig. 1). The notation is the same as in Section 2. \[\mathbf{x}_{[t+d+1]}=\mathbf{x}_{[t+d]}+b_{u}\mathbf{u}_{[t]}+\boldsymbol{\epsilon}_{\mathbf{x}[t]} \tag{A1}\] \[y_{[t+d+1]}=\mathbf{x}_{[t+d+1]}+\epsilon_{y_{[t+d+1]}} \tag{A2}\] \[\tilde{x}_{[t+d+1]}=\tilde{x}_{[t+d]}+b_{u}u_{[t]}+\epsilon_{x_{[t]}} \tag{A3}\] \[\begin{cases}\mu_{t+d+1}=\tilde{\mu}_{t+d+1}+\frac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\big(y_{[t+d+1]}-\tilde{\mu}_{t+d+1}\big)\\ \sigma_{t+d+1}^{2}=\left(1-\frac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\right)\tilde{\sigma}_{t+d+1}^{2}\end{cases} \tag{A4}\] When the agent selects the correct operation input from the desired and predicted states, we obtain the following action-selection equation in relation to Eq. (A3). \[b_{u}u_{[t]}=\tilde{x}_{[t+d+1]}-\tilde{x}_{[t+d]} \tag{A5}\] Here, \(\tilde{x}_{[t+d]}\) is predicted by repeating Eq. (A3). \[\tilde{x}_{[t+d]}=x_{[t]}+\sum_{k=1}^{d}\big(b_{u}u_{[t-k]}+\epsilon_{x_{[t-k]}}\big) \tag{A6}\] The current estimated actual state \(x_{[t]}\) is equal to the current predicted state \(\tilde{x}_{[t]}\). Let us consider \(u_{[t-k]}\) in Eq. 
(A6). This state represents the memory of the past input operations. Here, we assume that the agent recalls \(u_{[t-k]}\) from \(u_{[t-k+1]}\) as follows: \[u_{[t-k]}=u_{[t-k+1]}+\Delta u_{[t-k+1]}+\epsilon_{u_{[t-k+1]}},\] (A7) where \(\Delta u_{[t-k+1]}\) represents the expected difference between \(u_{[t-k]}\) and \(u_{[t-k+1]}.\) Here, \(\epsilon_{u}\)\(\sim\)\(\mathcal{N}(0,\sigma_{u}^{2})\) represents noise due to ambiguity of recall. Repeating Eq. (A7), we obtain the following: \[u_{[t-k]}=u_{[t]}+\sum_{l=1}^{k}\Big{(}\Delta u_{[t-k+l]}+\epsilon_{u_{[t-k+l] }}\Big{)}.\quad(1\leq k\leq d)\] (A8) Substituting Eq. (A8) into Eq. (A6) gives the following expression: \[\tilde{x}_{[t+d]}=x_{[t]}+b_{u}du_{[t]}+b_{u}\sum_{k=1}^{d}\left(\sum_{l=1}^{ k}\Delta u_{[t-k+l]}+\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}\right)+\sum_{k=1}^{d} \epsilon_{x_{[t-k]}}.\] (A9) Because we considered short delays of less than 1000 ms in this study, we assumed that \(\Delta u_{[t-k]}\) is constant during \(1\leq k\leq d.\) This result implies that we linearly approximate \(u_{[t-k]}.\) \[\Delta u_{[t-k]}\approx\Delta u=Const.\quad(1\leq k\leq d)\] (A10) Then, Eq. 
(A9) yields the following result: \[\tilde{x}_{[t+d]}\approx x_{[t]}+b_{u}du_{[t]}+b_{u}\sum_{k=1}^{d}\left\{k\Delta u+\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}\right\}+\sum_{k=1}^{d}\epsilon_{x_{[t-k]}}=x_{[t]}+b_{u}du_{[t]}+b_{u}\frac{d(d+1)}{2}\Delta u+b_{u}\sum_{k=1}^{d}\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}+\sum_{k=1}^{d}\epsilon_{x_{[t-k]}}. \tag{A11}\] Because \(\epsilon_{u}\) and \(\epsilon_{x}\) are independent, we obtain the following equation from the linearity of the normal distribution: \[\sum_{k=1}^{d}\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}\sim\mathcal{N}\left(0,\frac{d(d+1)}{2}\sigma_{u}^{2}\right),\qquad\sum_{k=1}^{d}\epsilon_{x_{[t-k]}}\sim\mathcal{N}(0,d\sigma_{x}^{2}). \tag{A12}\] Therefore, \(\tilde{x}_{[t+d]}\sim\mathcal{N}(\tilde{\mu}_{t+d},\tilde{\sigma}_{t+d}^{2})\) can be expressed as follows: \[\begin{cases}\tilde{\mu}_{t+d}=\mu_{t}+b_{u}du_{[t]}+b_{u}\frac{d(d+1)}{2}\Delta u\\ \tilde{\sigma}_{t+d}^{2}=\sigma_{t}^{2}+b_{u}^{2}\frac{d(d+1)}{2}\sigma_{u}^{2}+d\sigma_{x}^{2}.\end{cases} \tag{A13}\] Substituting Eq. (A11) into Eq. (A3), we obtain the following: \[\tilde{x}_{[t+d+1]}=x_{[t]}+b_{u}(d+1)u_{[t]}+b_{u}\frac{d(d+1)}{2}\Delta u+b_{u}\sum_{k=1}^{d}\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}+\sum_{k=0}^{d}\epsilon_{x_{[t-k]}}. \tag{A14}\] Therefore, \(\tilde{x}_{[t+d+1]}\sim\mathcal{N}(\tilde{\mu}_{t+d+1},\tilde{\sigma}_{t+d+1}^{2})\) can be expressed as follows: \[\begin{cases}\tilde{\mu}_{t+d+1}=\mu_{t}+b_{u}(d+1)u_{[t]}+b_{u}\frac{d(d+1)}{2}\Delta u\\ \tilde{\sigma}_{t+d+1}^{2}=\sigma_{t}^{2}+b_{u}^{2}\frac{d(d+1)}{2}\sigma_{u}^{2}+(d+1)\sigma_{x}^{2}.\end{cases} \tag{A15}\] The actual state is derived by repeating Eq. (A1) as follows: \[\mathbf{x}_{[t+\mathbf{d}+1]}=\mathbf{x}_{[t]}+\sum_{k=0}^{\mathbf{d}}\big(b_{u}\mathbf{u}_{[t-k]}+\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}\big). \tag{A16}\] Let us consider \(\mathbf{u}_{[t-k]}\) in Eq. (A16). This term represents the past operation inputs. 
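As an aside, the closed-form prediction variances derived above (the \(b_{u}^{2}\frac{d(d+1)}{2}\sigma_{u}^{2}+d\sigma_{x}^{2}\) contribution) can be spot-checked with a short Monte Carlo sketch, treating every noise draw as independent; the parameter values below are illustrative assumptions, not the values used in this study:

```python
import random
import statistics

# Spot-check of the closed-form prediction variance:
# Var = b_u^2 * d(d+1)/2 * sigma_u^2 + d * sigma_x^2,
# with all eps_u and eps_x draws treated as independent.
random.seed(0)
b_u, d = 0.5, 6                # control gain and delay (time steps); illustrative
sigma_u, sigma_x = 0.02, 0.01  # recall-noise and process-noise s.d.; illustrative
n = 50_000

def one_sample() -> float:
    # b_u * sum_{k=1..d} sum_{l=1..k} eps_u  +  sum_{k=1..d} eps_x
    eps_u = sum(random.gauss(0.0, sigma_u) for _ in range(d * (d + 1) // 2))
    eps_x = sum(random.gauss(0.0, sigma_x) for _ in range(d))
    return b_u * eps_u + eps_x

var_mc = statistics.pvariance([one_sample() for _ in range(n)])
var_closed = b_u**2 * d * (d + 1) / 2 * sigma_u**2 + d * sigma_x**2
print(var_mc, var_closed)  # the two agree up to Monte Carlo error
```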
We assume that the agent recalls \(\mathbf{u}_{[t-k]}\) from \(\mathbf{u}_{[t-k+1]}\) as the following equation. We linearly approximate \(\mathbf{u}_{[t-k]}\) in \(1\leq k\leq\mathbf{d}\) and denote the approximation error by \(\boldsymbol{\epsilon}_{\mathbf{u}}\). \[\mathbf{u}_{[t-k]}\approx\mathbf{u}_{[t]}+k\Delta\mathbf{u}+\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}},\quad(1\leq k\leq\mathbf{d}) \tag{A17}\] where \(\Delta\mathbf{u}\) is a gradient constant. Substituting Eq. (A17) into Eq. (A16) yields the following expression: \[\mathbf{x}_{[t+\mathbf{d}+1]}=\mathbf{x}_{[t]}+b_{u}(\mathbf{d}+1)\mathbf{u}_{[t]}+b_{u}\frac{\mathbf{d}(\mathbf{d}+1)}{2}\Delta\mathbf{u}+b_{u}\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}. \tag{A18}\] Here, we consider the differences between each state. From Eq. (A2) and the upper row of Eq. (A4), we have \[\mu_{t+d+1}-\tilde{\mu}_{t+d+1}=\frac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\big(\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{\mu}_{t+d+1}+\epsilon_{y_{[t+\mathbf{d}+1]}}\big). \tag{A19}\] The left side of Eq. (A19) represents the prediction error. From the upper row of Eq. (A15) and Eq. (A18), we obtain Eq. (A20) when we approximate as in Eq. (A21). \[\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{\mu}_{t+d+1}=\mathbf{x}_{[t]}-\mu_{t}+b_{u}(\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1}{2}\Delta u\right)+b_{u}\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}} \tag{A20}\] \[\begin{cases}\mathbf{u}_{[t]}\approx u_{[t]}\\ \Delta\mathbf{u}\approx\Delta u\end{cases} \tag{A21}\] Eq. (A21) indicates that the agent accurately perceives the current operation input and the gradient of the recent operation inputs. From Eqs. 
(A14) and (A18), \[\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{\mathbf{x}}_{[t+d+1]}=\mathbf{x}_{[t]}-\mu_{t}+b_{u}\left((\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1}{2}\Delta u\right)+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}-\sum_{k=1}^{d}\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}\right)+\epsilon_{t}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}-\sum_{k=1}^{d}\epsilon_{x_{[t-k]}}, \tag{A22}\] where \(\epsilon_{t}\sim\mathcal{N}(0,\sigma_{t}^{2})\) is the noise due to the uncertainty of the estimated actual state. The left-hand side of Eq. (A22) represents the operation error and is related to task performance. From Eq. (A2) and the upper row of Eq. (A4), we obtain the following expression: \[\mathbf{x}_{[t]}-\mu_{t}=\frac{\sigma_{y}^{2}}{\tilde{\sigma}_{t}^{2}+\sigma_{y}^{2}}\big(\mathbf{x}_{[t]}-\tilde{\mu}_{t}\big)-\frac{\tilde{\sigma}_{t}^{2}}{\tilde{\sigma}_{t}^{2}+\sigma_{y}^{2}}\epsilon_{y_{[t]}}. \tag{A23}\] This study focused on the increase in prediction uncertainty due to delays. In addition, we assumed that the agent gazes at an object on the screen. Therefore, we approximate that the observation noise is negligible compared with the prediction uncertainty, that is, we have the following: \[\sigma_{y}^{2}\ll\tilde{\sigma}_{t}^{2}. \tag{A24}\] Then, Eq. (A23) becomes \[\mathbf{x}_{[t]}-\mu_{t}\approx\epsilon_{y_{[t]}}. \tag{A25}\] From Eqs. (A4) and (A24), we obtain the following: \[\sigma_{t}^{2}=\left(1-\frac{\tilde{\sigma}_{t}^{2}}{\tilde{\sigma}_{t}^{2}+\sigma_{y}^{2}}\right)\tilde{\sigma}_{t}^{2}\approx\sigma_{y}^{2}. \tag{A26}\] From Eqs. 
(A19), (A20), (A22), (A25), and (A26), we obtain the following equations: \[\mu_{t+d+1}-\tilde{\mu}_{t+d+1}=\frac{\tilde{\sigma}_{t+d+1}^{2}}{\tilde{\sigma}_{t+d+1}^{2}+\sigma_{y}^{2}}\left(b_{u}\left\{(\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1}{2}\Delta u\right)+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}\right\}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}+2\epsilon_{y_{[t]}}\right) \tag{A27}\] \[\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{x}_{[t+d+1]}=b_{u}\left((\mathbf{d}-d)\left(u_{[t]}+\frac{\mathbf{d}+d+1}{2}\Delta u\right)+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}-\sum_{k=1}^{d}\sum_{l=1}^{k}\epsilon_{u_{[t-k+l]}}\right)+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}-\sum_{k=1}^{d}\epsilon_{x_{[t-k]}}+2\epsilon_{y_{[t]}} \tag{A28}\] Now we calculate the prediction error (Eq. (A27)) and operation error (Eq. (A28)) by sampling \(\mathbf{d}\sim\mathcal{N}\big(\bar{\mathbf{d}},\sigma_{\mathbf{d}}^{2}\big)\), \(d\sim\mathcal{N}(\bar{d},\sigma_{d}^{2})\), \(\boldsymbol{\epsilon}_{\mathbf{u}}\sim\mathcal{N}(0,\sigma_{\mathbf{u}}^{2})\), \(\epsilon_{u}\sim\mathcal{N}(0,\sigma_{u}^{2})\), \(\boldsymbol{\epsilon}_{\mathbf{x}}\sim\mathcal{N}(0,\sigma_{\mathbf{x}}^{2})\), \(\epsilon_{x}\sim\mathcal{N}(0,\sigma_{x}^{2})\), and \(\epsilon_{y}\sim\mathcal{N}(0,\sigma_{y}^{2})\). Finally, we simulated task performance and SoA. In this study, we define task performance as the percentage of operation errors that are within an acceptable range. \[(task\ performance)\equiv\frac{\text{count}\big(\big|\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{x}_{[t+d+1]}\big|\leq E_{max}\big)}{\text{count}(all)}\times 100\ [\%], \tag{A29}\] where \(\text{count}(*)\) denotes the number of samples that satisfy the condition \(*\). Furthermore, \(E_{max}\) represents the upper limit of the allowable operation error (e.g., road width). We calculated the SoA based on the free energy model (Taniyama et al., 2021; Yanagisawa, 2016, 2021; see Chapter 2 for details). 
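A minimal sketch of this task-performance metric, using placeholder Gaussian error samples rather than the full operation-error expression of the model:

```python
import random

# Task performance as defined above: the percentage of sampled operation
# errors whose magnitude is within the allowable range E_max. The error
# samples here are a Gaussian stand-in, not the paper's full expression.
random.seed(1)

def task_performance(errors, e_max):
    """Percentage of samples with |error| <= e_max."""
    hits = sum(1 for e in errors if abs(e) <= e_max)
    return 100.0 * hits / len(errors)

errors = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # placeholder errors
print(task_performance(errors, 1.96))  # close to 95 for N(0, 1) samples
```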
\[(SoA)\equiv\frac{1}{\text{count}(all)}\times\sum_{sample}\frac{F_{max}-F_{sample}}{F_{max}}\times 100\ [\%] \tag{A30}\] \[F_{sample}=\frac{1}{2}\left(\frac{|\mu_{t+d+1}-\tilde{\mu}_{t+d+1}|^{2}}{\tilde{\sigma}_{t+d+1}+\sigma_{y}}+\ln 2\pi\big(\tilde{\sigma}_{t+d+1}+\sigma_{y}\big)\right) \tag{A31}\] Here, \(F_{max}\) is a suitable constant chosen to map SoA onto the range from 0 to 100. ## Appendix B Detailed formulation of Predictive Wand We describe the detailed formulation of the effect of Predictive Wand in Chapter 3. The position indicated by the Wand, \(\mathbf{w}_{[t]}\), is expressed as follows: \[\mathbf{w}_{[t]}=\mathbf{x}_{[t]}+b_{u}\bar{\mathbf{d}}u_{[t]} \tag{B1}\] The following equation is its observation equation: \[z_{[t]}=\mathbf{w}_{[t]}+\epsilon_{z}, \tag{B2}\] where \(z_{[t]}\) is the observation and \(\epsilon_{z}\sim\mathcal{N}(0,\sigma_{x}^{2})\) is the observation noise. The agent predicts (\(\widetilde{w}_{[t]}\sim\mathcal{N}\big(\tilde{\mu}_{w_{[t]}},\tilde{\sigma}_{w_{[t]}}^{2}\big)\)) and estimates (\(w_{[t]}\sim\mathcal{N}\big(\mu_{w_{[t]}},\sigma_{w_{[t]}}^{2}\big)\)) the actual indicated position using the Kalman filter. \[\widetilde{w}_{[t]}=x_{[t]}+b_{u}du_{[t]} \tag{B3}\] \[\begin{cases}\mu_{w_{[t]}}=\tilde{\mu}_{w_{[t]}}+\frac{\tilde{\sigma}_{w_{[t]}}^{2}}{\tilde{\sigma}_{w_{[t]}}^{2}+\sigma_{x}^{2}}\big(z_{[t]}-\tilde{\mu}_{w_{[t]}}\big)\\ \sigma_{w_{[t]}}^{2}=\left(1-\frac{\tilde{\sigma}_{w_{[t]}}^{2}}{\tilde{\sigma}_{w_{[t]}}^{2}+\sigma_{x}^{2}}\right)\tilde{\sigma}_{w_{[t]}}^{2}\end{cases} \tag{B4}\] As in Eq. (A24), we approximated the observation noise to be negligibly small compared with the prediction uncertainty. \[\sigma_{x}^{2}\ll\tilde{\sigma}_{w_{[t]}}^{2} \tag{B5}\] Then, Eq. 
(B4) transforms to the following: \[\begin{cases}\mu_{w_{[t]}}\approx z_{[t]}\\ \sigma_{w_{[t]}}^{2}\approx\sigma_{x}^{2}.\end{cases} \tag{B6}\] The agent predicts the state at time \(t+d\), \(\tilde{x}_{[t+d]}\), based on the Wand as follows: \[\begin{cases}\tilde{\mu}_{t+d}=\mu_{w_{[t]}}\\ \tilde{\sigma}_{t+d}^{2}=\sigma_{w_{[t]}}^{2}+\sigma_{p}^{2},\end{cases} \tag{B7}\] where \(\sigma_{p}^{2}\) represents the uncertainty in predicting \(\tilde{x}_{[t+d]}\) from the estimated position indicated by the Wand, \(w_{[t]}\). We considered this a suitable constant for the simulation. From Eqs. (A3), (B1), (B6), and (B7), we obtain the following expression: \[\begin{cases}\tilde{\mu}_{t+d+1}=\mathbf{x}_{[t]}+b_{u}\big(\bar{\mathbf{d}}+1\big)u_{[t]}\\ \tilde{\sigma}_{t+d+1}^{2}=\sigma_{x}^{2}+2\sigma_{x}^{2}+\sigma_{p}^{2}.\end{cases} \tag{B8}\] We now formulate the prediction and operation errors. From the upper row of Eq. (A4), Eq. (A18), and Eq. (B8), we obtain the following expression: \[\mu_{t+d+1}-\tilde{\mu}_{t+d+1}=\frac{\sigma_{x}^{2}+2\sigma_{x}^{2}+\sigma_{p}^{2}}{\sigma_{x}^{2}+2\sigma_{x}^{2}+\sigma_{p}^{2}+\sigma_{y}^{2}}\left(b_{u}\left\{\big(\mathbf{d}-\bar{\mathbf{d}}\big)u_{[t]}+\frac{\mathbf{d}(\mathbf{d}+1)}{2}\Delta\mathbf{u}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}\right\}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}+\epsilon_{y_{[t+\mathbf{d}+1]}}\right). \tag{B9}\] From Eqs. 
(A5), (A18), (B6), (B7), and (B8), we obtain the following: \[\mathbf{x}_{[t+\mathbf{d}+1]}-\tilde{\mathbf{x}}_{[t+d+1]}=b_{u}\left\{\big(\mathbf{d}-\bar{\mathbf{d}}\big)u_{[t]}+\frac{\mathbf{d}(\mathbf{d}+1)}{2}\Delta\mathbf{u}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{u}_{[t-k]}}\right\}+\sum_{k=0}^{\mathbf{d}}\boldsymbol{\epsilon}_{\mathbf{x}_{[t-k]}}-2\epsilon_{x}-\epsilon_{p}, \tag{B10}\] where \(\epsilon_{p}\sim\mathcal{N}\big(0,\sigma_{p}^{2}\big)\) is noise due to the prediction uncertainty. Finally, we simulate task performance and SoA using the definitions given in Eqs. (A29), (A30), and (A31).
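The SoA score defined in Appendix A can be sketched as follows; the free-energy expression mirrors the form given there, while all numerical values are illustrative assumptions:

```python
import math
import random

# Sketch of the SoA score: a free energy per sample, rescaled by a
# constant F_max so the score lies on a 0-100 scale. Parameter values
# (noise scales, F_max) are illustrative, not the paper's.
random.seed(2)

def free_energy(pred_err, sigma_tilde, sigma_y):
    # F = (1/2) * (|prediction error|^2 / (sigma~ + sigma_y)
    #              + ln 2*pi*(sigma~ + sigma_y))
    s = sigma_tilde + sigma_y
    return 0.5 * (pred_err**2 / s + math.log(2.0 * math.pi * s))

def soa(pred_errs, sigma_tilde, sigma_y, f_max):
    fs = [free_energy(e, sigma_tilde, sigma_y) for e in pred_errs]
    return 100.0 * sum((f_max - f) / f_max for f in fs) / len(fs)

small = [random.gauss(0.0, 0.05) for _ in range(10_000)]  # small prediction errors
large = [random.gauss(0.0, 0.50) for _ in range(10_000)]  # large prediction errors
# Larger prediction errors -> larger free energy -> lower SoA score.
print(soa(small, 0.1, 0.01, 5.0) > soa(large, 0.1, 0.01, 5.0))
```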
2301.03981
The Lüscher scattering formalism on the $t$-channel cut
The L\"uscher scattering formalism, the standard approach for relating the discrete finite-volume energy spectrum to two-to-two scattering amplitudes, fails when analytically continued so far below the infinite-volume two-particle threshold that one encounters the $t$-channel cut. This is relevant, especially in baryon-baryon scattering applications, as finite-volume energies can be observed in this below-threshold regime, and it is not clear how to make use of them. In this talk, we present a generalization of the scattering formalism that resolves this issue, allowing one to also constrain scattering amplitudes on the $t$-channel cut.
André Baião Raposo, Maxwell T. Hansen
2023-01-10T14:22:50Z
http://arxiv.org/abs/2301.03981v2
# The Luscher scattering formalism on the \(t\)-channel cut ###### Abstract: The Luscher scattering formalism, the standard approach for relating the discrete finite-volume energy spectrum to two-to-two scattering amplitudes, fails when analytically continued so far below the infinite-volume two-particle threshold that one encounters the \(t\)-channel cut. This is relevant, especially in baryon-baryon scattering applications, as finite-volume energies can be observed in this below-threshold regime, and it is not clear how to make use of them. In this talk, we present a generalization of the scattering formalism that resolves this issue, allowing one to also constrain scattering amplitudes on the \(t\)-channel cut. Introduction & motivation In recent years, there has been considerable progress in the determination of two-nucleon and other two-baryon scattering amplitudes using numerical lattice QCD [1, 2, 3, 4, 5, 6, 7, 8, 9]. One of the leading methods in these calculations is to first extract the finite-volume energy spectrum and subsequently the scattering amplitudes via the Luscher formalism and its extensions [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. In such calculations, each finite-volume energy level constrains or predicts the scattering matrix for all multi-hadron channels that can physically propagate at that energy. One limitation in all finite-volume formalisms to date is that they neglect volume effects associated with \(t\)-channel (or left-hand) cuts.1 This is most obviously a problem when the lattice calculation predicts energies that are on top of the cut, as recently seen in ref. [7]. The finite-volume formalism is manifestly not applicable here, leading to predictions of a real-valued K-matrix (equivalently, a real-valued scattering phase shift) in a region where the latter is known to be complex. Footnote 1: For ref. 
[21], the issue of the cut may be circumvented by working in the plane-wave basis, but this is not specifically discussed in those proceedings. In this proceedings, we present an extension of the original formalism that can be applied on the left-hand cut. We begin with a brief review of infinite-volume scattering and the standard Luscher formalism, in sections 2 and 3, respectively. In section 4, we illustrate how the \(t\)-channel cut becomes an issue and, in section 5, we briefly describe our approach to a solution, the full details of which will be presented in a publication (to appear). Conclusions and an outlook are given in section 6. ## 2 Two-to-two scattering in infinite-volume We review a few properties of scattering amplitudes in the infinite-volume context, making no reference yet to the finite-volume formalism. Considering two-to-two elastic scattering of non-identical mass-degenerate spin-zero particles with physical mass \(M\), we write the total four-momentum in a given frame as \(P=(E,\mathbf{P})\) and introduce the standard Mandelstam invariants \(s\) and \(t\). Mandelstam \(s\) satisfies \(s=P^{2}=E^{2}-\mathbf{P}^{2}=(E^{\star})^{2}\), where \(E^{\star}\) denotes the centre-of-mass frame energy. Note we will use \(\star\) to denote quantities boosted to the centre-of-mass frame. The scattering amplitude, which we write \(\mathcal{M}(s,t)\), can be formally expressed as the sum of all connected and amputated two-to-two Feynman diagrams, with legs amputated and set on the mass shell (i.e. with external momenta \(p\) having \(p^{2}=(p^{0})^{2}-\mathbf{p}^{2}=M^{2}\)). This all-orders sum can be organized by introducing the Bethe-Salpeter kernel, defined as the sum of all connected and amputated two-to-two diagrams that are two-particle irreducible in the \(s\)-channel2. The amplitude is then expressible in terms of the Bethe-Salpeter kernels and pairs of dressed propagators of the scattering scalars, as shown in figure 1. 
Note that all propagators considered are taken with the standard \(i\epsilon\) prescription, and all loop momenta are integrated over all components. Footnote 2: In other words, the Bethe-Salpeter kernel is built from diagrams that cannot be separated into two pieces by cutting through two propagators whose momenta sum to the total four-momentum \(P=(E,\mathbf{P})\). It is also instructive to define partial-wave amplitudes according to \[\mathcal{M}(s,t)=\sum_{\ell=0}^{\infty}P_{\ell}(\cos\theta^{\star})\,\mathcal{ M}_{\ell}(s)\,, \tag{1}\] where \(P_{\ell}\) is a Legendre polynomial and \(\theta^{\star}\) is the scattering angle in the centre-of-mass frame, satisfying \(\sin^{2}(\theta^{\star}/2)=-t/(s-4M^{2})\). Using unitarity of the scattering matrix, one can show that the imaginary part of \({\cal M}_{\ell}(s)^{-1}\) is independent of the details of particle interactions. The real part is then typically parameterized using the scattering phase shift \(\delta_{\ell}(s)\). We may write \[{\rm Im}\,{\cal M}_{\ell}(s)^{-1}=-\rho(s)\,\Theta(E^{\star}-2M)\,,\qquad{\rm Re }\,{\cal M}_{\ell}(s)^{-1}=\rho(s)\cot\delta_{\ell}(s)\equiv{\cal K}_{\ell}(s )^{-1}\,, \tag{2}\] where \(\rho(s)\equiv\frac{p^{\star}}{8\pi E^{\star}}\) is the phase-space factor for non-identical particles, with \(p^{\star}\equiv\frac{1}{2}\sqrt{s-4M^{2}}\) denoting each particle's centre-of-mass spatial momentum magnitude, and we have defined the K-matrix \({\cal K}_{\ell}(s)\). This leads to the standard form of the partial-wave amplitude \[{\cal M}_{\ell}(s)=\frac{1}{{\cal K}_{\ell}(s)^{-1}-i\rho(s)}=\frac{8\pi E^{ \star}}{p^{\star}\cot\delta_{\ell}-ip^{\star}}\,. \tag{3}\] One can also reach these results via the Bethe-Salpeter series of figure 1 if one defines the K-matrix \({\cal K}\) by the same series as the amplitude \({\cal M}\), but in which all two-particle loops are evaluated with a principal-value prescription instead of the \(i\epsilon\) prescription. 
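The parameterization in eq. (3) and the unitarity relation in eq. (2) can be illustrated with a short numerical sketch; units with \(M=1\) are assumed, and the phase-shift values are arbitrary inputs rather than any physical model:

```python
import math

# M_l(s) = 8*pi*E_star / (p_star*cot(delta_l) - i*p_star), elastic region
# s > 4M^2, in units with M = 1. The unitarity relation fixes
# Im[1/M_l(s)] = -rho(s) = -p_star/(8*pi*E_star), independent of delta_l.
def amplitude(s, delta, m=1.0):
    e_star = math.sqrt(s)
    p_star = 0.5 * math.sqrt(s - 4.0 * m * m)
    return 8.0 * math.pi * e_star / (p_star / math.tan(delta) - 1j * p_star)

s = 4.41  # an arbitrary point above threshold (E_star = 2.1)
rho = 0.5 * math.sqrt(s - 4.0) / (8.0 * math.pi * math.sqrt(s))
for delta in (0.3, 1.0, 2.0):
    # True for every phase-shift value: Im[1/M] is delta-independent.
    print(abs((1.0 / amplitude(s, delta)).imag + rho) < 1e-12)
```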
The relations above hold only for \((2M)^{2}<s<(E^{\star}_{\rm{inel.}})^{2}\), where \(E^{\star}_{\rm{inel.}}\) is the lowest-lying inelastic threshold coupling to the channel of interest. This fact has received significant attention for energies above \(E^{\star}_{\rm{inel.}}\) (in the form of three-particle finite-volume formalisms [22, 23, 24, 25, 26, 27]), but here we are concerned with the range \(s<(2M)^{2}\). For these sub-threshold energies, one can analytically continue the amplitude by taking \(-i\rho(s)\to|\rho(s)|\) in order to remain on the physical Riemann sheet. Such an analytic continuation leads to a real-valued scattering amplitude, provided that the K-matrix is real. When, however, we have a lighter particle coupling to the scattering channel of interest, the K-matrix partial waves become complex-valued due to a sub-threshold branch cut, the so-called \(t\)-channel or left-hand cut. Before turning to the consequences of the cut, we review the standard finite-volume formalism of Luscher and show that a real-valued K-matrix is implicitly assumed in the sub-threshold analytic continuation, and thus that the formalism is not applicable on the cut. Figure 1: (a) Diagrammatic representation of the two-to-two scattering amplitude using Bethe-Salpeter kernels and dressed propagators. (b) Definition of the Bethe-Salpeter kernel as the sum of all connected and amputated two-to-two diagrams which are two-particle irreducible in the \(s\)-channel. Dashed lines denote other particles that might couple to the scattering channels of interest. (c) Definition of the dressed propagator in terms of bare propagators and self-energy kernels. (d) Definition of the self-energy as the sum of all one-particle irreducible diagrams. ## 3 Review of the Luscher formalism In this section, we review the derivation of the Luscher quantization condition [10], subsequently extended in refs. 
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20] to include all types of coupled two-particle channels. We focus here on the case of a single channel with two mass-degenerate but non-identical spin-zero particles. Consider a quantum field theory defined in a finite cubic spatial volume of side-length \(L\), with periodic boundary conditions. This system has a discrete \(L\)-dependent energy spectrum, and the energies lying below the lowest-lying three- or four-particle threshold can be used to extract the elastic two-to-two scattering amplitude. We follow closely the derivation of Kim, Sachrajda, and Sharpe [13], used also in refs. [28, 29, 30]. We begin by defining a two-point correlation function \[C_{L}(E,\mathbf{P})\equiv\int\,dx^{0}\int_{L}d^{3}\mathbf{x}\;e^{-iE\mathbf{x}^{0}}e^{i\bm {P}\cdot\mathbf{x}}\langle 0|\mathrm{T}\mathcal{A}(x)\mathcal{A}^{\dagger}(0)|0 \rangle_{L}\,, \tag{4}\] where the subscript \(L\) in \(\int_{L}d^{3}\mathbf{x}\) indicates that the integral runs over the finite volume, \(E\) denotes the total energy, \(\mathbf{P}\) is the total spatial momentum, and \(\mathcal{A}(x)\) and \(\mathcal{A}^{\dagger}(x)\) are annihilation and creation operators carrying the quantum numbers of the scattering channel of interest. One can construct a diagrammatic representation for this correlator using the ingredients already introduced in the previous section, the Bethe-Salpeter kernel and dressed propagator pairs. This is known as the skeleton expansion for the correlator and is shown in figure 2. In finite volume, spatial loop momenta are discretized as \(\mathbf{k}=\frac{2\pi}{L}\mathbf{n}\), with \(\mathbf{n}\in\mathbb{Z}^{3}\), and we have spatial loop momentum sums instead of integrals, i.e. we replace \(\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\to\frac{1}{L^{3}}\sum_{\mathbf{k}\in(2\pi/L) \mathbb{Z}^{3}}\) for all loops. 
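The momentum discretization just described can be made concrete with a small sketch that enumerates the allowed momenta \(\mathbf{k}=\frac{2\pi}{L}\mathbf{n}\) and the resulting non-interacting two-particle energies in the rest frame; the values of \(L\) and the mode cutoff are illustrative assumptions:

```python
import itertools
import math

# Non-interacting two-particle energies E = 2*omega(k) at rest (P = 0),
# with k = (2*pi/L) n, n in Z^3, in units with M = 1. L and the cutoff
# nmax are illustrative choices, not values from the text.
M, L, nmax = 1.0, 6.0, 2

def omega(k2):
    return math.sqrt(k2 + M * M)

energies = sorted({
    round(2.0 * omega((2.0 * math.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)), 10)
    for nx, ny, nz in itertools.product(range(-nmax, nmax + 1), repeat=3)
})
print(energies[:4])  # lowest free levels, starting at E = 2M
```

Interactions shift these free levels, and it is precisely those shifts that the quantization condition converts into scattering information.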
The key observation here is that not all loops have the same volume dependence: loops with intermediate states that cannot go on shell in the energy range considered have exponentially suppressed volume effects \(\mathcal{O}(e^{-mL})\) with respect to their infinite-volume analogues, with \(m\) being the mass of the lightest particle coupled to the system, while loops with intermediate states that can go on shell have power-like effects \(\mathcal{O}(L^{-n})\), for some non-negative integer \(n\). In the elastic regime, on-shell states come precisely from the loops left explicit in the skeleton expansion shown in figure 2. Every other loop, implicitly included in the Bethe-Salpeter kernels and the dressed propagators, may be replaced by its infinite-volume counterpart, as the difference, which we neglect, is exponentially suppressed in \(L\). Thus, we effectively replace the finite-volume Bethe-Salpeter kernels and dressed propagators with their infinite-volume counterparts. The contribution from a generic two-particle loop shown in figure 2 can be written as \[C_{L}^{\text{loop}}(P)\equiv\int\,\frac{dk^{0}}{2\pi}\frac{1}{L^{3}}\sum_{ \mathbf{k}}\mathcal{L}(P,k)\;\Delta(k)\;\Delta(P-k)\,\mathcal{R}^{*}(P,k)\,, \tag{5}\] Figure 2: Skeleton-expansion representation of the finite-volume correlator \(C_{L}(E,\mathbf{P})\), in terms of Bethe-Salpeter kernels and dressed propagators, as defined in figure 1. The end-cap “blobs” stand for functions in momentum-space originating from the Fourier transforms of the creation and annihilation operators. As discussed, one can take the kernels and propagators to be the infinite-volume objects, as the difference between these and their finite-volume counterparts is exponentially suppressed, and thus only the loops explicitly shown need to be treated as finite-volume loops. with \(P\equiv(E,\mathbf{P})\) and loop momentum \(k\equiv(k^{0},\mathbf{k})\). 
The functions \(\mathcal{L}\) and \(\mathcal{R}^{*}\) stand for the objects before and after the given loop, and \(\Delta\) is a fully dressed scalar propagator. Performing the \(k^{0}\)-integral and decomposing \(\mathcal{L}\) and \(\mathcal{R}\) in spherical harmonics with \(k\) put on shell, i.e. setting \(k=(\omega(\mathbf{k}),\mathbf{k})\) with \(\omega(\mathbf{k})\equiv\sqrt{\mathbf{k}^{2}+M^{2}}\), we obtain \[C_{L}^{\text{loop}}(P)=\frac{1}{L^{3}}\sum_{\mathbf{k}}\mathcal{L}_{\ell m}(P,|\mathbf{k}^{\star}|)\,i\mathcal{S}_{\ell m;\ell^{\prime}m^{\prime}}(P,\mathbf{k};L)\,\mathcal{R}^{*}_{\ell^{\prime}m^{\prime}}(P,|\mathbf{k}^{\star}|)+r(P)\,, \tag{6}\] where sums over the repeated indices \(\ell,m\) and \(\ell^{\prime},m^{\prime}\) are implied, and we have introduced \[\mathcal{S}_{\ell m;\ell^{\prime}m^{\prime}}(P,\mathbf{k};L)\equiv\frac{4\pi\,Y_{\ell m}(\hat{\mathbf{k}}^{\star})\,Y^{*}_{\ell^{\prime}m^{\prime}}(\hat{\mathbf{k}}^{\star})\,H(\mathbf{k}^{\star})}{2\omega(\mathbf{k})\,2\omega(\mathbf{P}-\mathbf{k})\,(E-\omega(\mathbf{k})-\omega(\mathbf{P}-\mathbf{k}))}\left(\frac{|\mathbf{k}^{\star}|}{p^{\star}}\right)^{\ell+\ell^{\prime}}\,, \tag{7}\] for later convenience. The term \(r(P)\) in eq. (6) is a sum over a smooth summand, leading to exponentially suppressed finite-volume corrections, which we may neglect. The summand in the first term and, more specifically, the quantities \(\mathcal{S}_{\ell m;\ell^{\prime}m^{\prime}}(P,\mathbf{k};L)\), contain the pole corresponding to the two-particle intermediate state going on-shell, as can be seen explicitly in eq. (7), and thus this term contains all the power-like volume dependence arising from the loop we are considering. In eq. (7), we use \(p^{\star}\equiv\frac{1}{2}\sqrt{s-4M^{2}}\), the scattering particles' centre-of-mass spatial momentum magnitude, as defined in section 2. 
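The pole structure of eq. (7) can be checked directly: in the rest frame (\(\mathbf{P}=0\), \(s=E^{2}\), so \(\mathbf{k}^{\star}=\mathbf{k}\)), the denominator \(E-\omega(\mathbf{k})-\omega(\mathbf{P}-\mathbf{k})\) vanishes exactly at \(|\mathbf{k}^{\star}|=p^{\star}\). A toy numerical check (illustrative values and function names of our own):

```python
import math

def p_star(s, M):
    """Centre-of-mass momentum magnitude p* = (1/2) * sqrt(s - 4 M^2)."""
    return 0.5 * math.sqrt(s - 4.0 * M * M)

def omega(k_mag, M):
    """Single-particle energy omega(k) = sqrt(k^2 + M^2)."""
    return math.sqrt(k_mag * k_mag + M * M)

M, E = 1.0, 2.5                 # rest frame, so s = E^2 (illustrative values)
ps = p_star(E * E, M)
# the pole factor E - omega(k) - omega(P - k) vanishes at |k*| = p*
residual = E - 2.0 * omega(ps, M)
```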
Consequently, setting \(|\mathbf{k}^{\star}|=p^{\star}\) satisfies the intermediate two-particle state on-shell condition \(E=\omega(\mathbf{k})+\omega(\mathbf{P}-\mathbf{k})\). This relation fixes the magnitude of \(\mathbf{k}^{\star}\) at the pole, but not its direction. The barrier factor \(\left(|\mathbf{k}^{\star}|/p^{\star}\right)^{\ell+\ell^{\prime}}\) is introduced to ensure that no singularities arise from the spherical harmonics. The function \(H(\mathbf{k}^{\star})\) is a regulator function which takes a value of \(1\) for \(4\omega(\mathbf{k}^{\star})^{2}<(E^{\star}_{\text{inel}})^{2}\) and of \(0\) for \(4\omega(\mathbf{k}^{\star})^{2}>(E^{\star}_{\text{uv}})^{2}\), where again \(E^{\star}_{\text{inel}}\) is the lowest lying three- or four-particle threshold, and \(E^{\star}_{\text{uv}}\) is some chosen high ultraviolet cut-off. In the region between, \(H(\mathbf{k}^{\star})\) interpolates smoothly between the two values. This regulator function is similar to the one found in the three-body scattering formalism of refs. [22, 23], and corresponds to a separation of low-energy and high-energy parts of the sum.3 Footnote 3: It should be emphasized that we have renormalization and regularization schemes keeping the overall result finite, the regulator function here simply ensures we have a separation of high-energy and low-energy contributions to the sum such that both parts are finite and that the low-energy part, which contains the relevant singular behaviour, is tractable when implemented numerically. \(E_{\text{uv}}\) will simply be a scheme-dependence of the formalism, but should be kept high, as setting it too low will lead to enhanced finite-volume effects. We next reduce eq. 
(6) by expanding the functions \(\mathcal{L}_{\ell m}(P,|\mathbf{k}^{\star}|)\) and \(\mathcal{R}^{*}_{\ell^{\prime}m^{\prime}}(P,|\mathbf{k}^{\star}|)\) about \(|\mathbf{k}^{\star}|=p^{\star}\) and subtracting and adding an integral to reach \[C_{L}^{\text{loop}}(P) =\mathcal{L}_{\ell m}(P,p^{\star})\,iF_{\ell m;\ell^{\prime}m^{ \prime}}(P;L)\,\mathcal{R}^{*}_{\ell^{\prime}m^{\prime}}(P,p^{\star})+r^{ \prime}(P)\,,\] \[=\mathcal{L}_{\text{os}}(P)\,iF(P;L)\,\mathcal{R}^{\dagger}_{ \text{os}}(P)+r^{\prime}(P)\,, \tag{8}\] where we introduce the sum-integral difference \[F_{\ell m;\ell^{\prime}m^{\prime}}(P;L)\equiv\left[\frac{1}{L^{3}}\sum_{\mathbf{k }}-\text{p.v.}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\right]\mathcal{S}_{\ell m; \ell^{\prime}m^{\prime}}(P,\mathbf{k};L)\,. \tag{9}\] Here, p.v. means the integral is evaluated using a principal-value prescription. The remainder term \(r^{\prime}(P)\) differs from \(r(P)\), but still contains the sum of a smooth summand together with p.v. integrals. In the second line of eq. (8), we have defined a compact vector-matrix notation in the angular-momentum index space. Applying this procedure iteratively to all loops in the skeleton expansion diagrams, it can be shown that we may write the finite-volume correlator in the form: \[C_{L}(P) =\sum_{n=0}^{\infty}A(P)\,iF(P;L)\,\left[i{\cal K}(P)\,iF(P;L) \right]^{n}A^{\dagger}(P)+C_{\infty}(P)\] \[=A(P)\,i\left[F^{-1}(P;L)+{\cal K}(P)\right]^{-1}A^{\dagger}(P)+ C_{\infty}(P). \tag{10}\] Here, we have summed the geometric series of the first line to obtain the second line. \(A(P)\) and \(A^{\dagger}(P)\) are vectors in angular momentum index space, originating from the source and sink operators, and \({\cal K}(P)\) is the K-matrix introduced in the previous section4. Given that we neglect exponentially suppressed volume effects, the \(L\)-dependence is entirely contained within the matrix \(F(P;L)\). 
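The geometric-series resummation leading to the second line of eq. (10) can be verified numerically with mock matrices (a sketch only: small random matrices stand in for \(F\) and \(\mathcal{K}\) in a truncated angular-momentum space, chosen small enough that the series converges):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # mock truncated angular-momentum space
# small random matrices so the geometric series converges
F = 0.05 * (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
K = 0.05 * (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
A = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# truncated series:  sum_n  A  iF  [iK iF]^n  A^dagger
series = 0.0 + 0.0j
term = 1j * F
for _ in range(500):
    series += A @ term @ A.conj()
    term = term @ (1j * K) @ (1j * F)

# closed form:  A  i [F^{-1} + K]^{-1}  A^dagger
closed = A @ (1j * np.linalg.inv(np.linalg.inv(F) + K)) @ A.conj()
```

The two expressions agree to machine precision, which is the matrix identity \(F(1+KF)^{-1}=(F^{-1}+K)^{-1}\) used to pass between the two lines of eq. (10).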
Footnote 4: Note \({\cal K}\) is an infinite-volume scalar, and thus only depends on \(s=P^{2}\), but we keep \(P\) as an argument for compactness. Using a spectral representation of the correlator, it is straightforward to show that it must have poles at the energy levels of the finite-volume spectrum \(E_{n}(\mathbf{P};L)\). These poles in \(C_{L}(E,\mathbf{P})\) can only arise from the \(L\)-dependent part of the first term, meaning that we must have \[\det\left[F^{-1}(E_{n}(\mathbf{P};L),\mathbf{P};L)+{\cal K}(E_{n}(\mathbf{P};L),\mathbf{P}) \right]=0\,, \tag{11}\] at all finite-volume energy levels. This is called the _Luscher quantization condition_, and it can be used to determine \({\cal K}\), and hence the scattering amplitude \({\cal M}\), from the knowledge of the finite-volume spectrum. The matrices involved in the condition (11) are formally infinite-dimensional, since the set of possible angular momentum indices \(\ell m\) is infinite. For practical use, we must truncate them to the lowest harmonics, making the approximation that \({\cal K}\) vanishes for \(\ell>\ell_{\text{max}}\). This relies on a fast convergence of the partial-wave expansion of the amplitude, such that keeping the lowest harmonics still leads to a reasonable reconstruction of the amplitude. ## 4 The \(t\)-channel problem The finite-volume spectrum can sometimes include energy levels that drop below the infinite-volume elastic threshold at \(s=(2M)^{2}\). This can occur due to the appearance of a bound state (such that the \(L\to\infty\) limit gives the bound-state mass) as well as to an attractive scattering state (such that the energy approaches \(2M\) for \(L\to\infty\)). In many cases, the Luscher formalism can be analytically continued from \(s>(2M)^{2}\) and the sub-threshold finite-volume energy then provides an important constraint on the K-matrix below threshold. 
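The analytic continuation invoked here amounts to \(p^{\star}=\frac{1}{2}\sqrt{s-4M^{2}}\) becoming purely imaginary below threshold, which a short sketch makes explicit (illustrative values of our own, not from the paper):

```python
import cmath

def p_star(s, M):
    """CM momentum p* = (1/2) sqrt(s - 4 M^2), principal branch below threshold."""
    return 0.5 * cmath.sqrt(complex(s - 4.0 * M * M, 0.0))

M = 1.0
above = p_star(4.2, M)   # s > (2M)^2: p* is real
below = p_star(3.5, M)   # s < (2M)^2: p* = i * kappa, kappa = (1/2) sqrt(4 M^2 - s)
```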
A subtlety arises, however, when the sub-threshold amplitude \({\cal M}(s,t)\), and therefore also the K-matrix, has a nearby \(t\)-channel cut. This is generically the case in baryon-baryon systems, for example, where a light meson can be exchanged in the \(t\)-channel. Taking \(m\) and \(M\) to be the meson and baryon masses, respectively, and assuming \(m\ll M\), one finds that a pole arises in \({\cal M}(s,t)\) at \(t=m^{2}\). For a given fixed choice of the centre-of-mass frame scattering angle \(\theta^{\star}\), this then leads to a pole in \(s\) at \[s=4M^{2}-\frac{t}{\sin^{2}(\theta^{\star}/2)}\bigg{|}_{t=m^{2}}=4M^{2}-\frac{m^{2}}{\sin^{2}(\theta^{\star}/2)}. \tag{12}\] The analytic structure of the scattering amplitude for such systems is shown in figure 3(a). From the expression, one sees that the pole position in \(s\) runs from \(s=-\infty\) up to \(s=4M^{2}-m^{2}\) as \(\theta^{\star}\) is varied from \(0\) to \(\pi\). As a result, the angular-momentum projection of the scattering amplitude leads to a branch cut running over this interval as shown in figure 3(b). Multiple meson exchanges can also occur, leading to additional cuts in both the fixed-\(\theta^{\star}\) and the angular-momentum projected amplitudes. In the latter case, these run along \(s\leq(2M)^{2}-(nm)^{2}\) for \(n\) exchanged mesons. As stressed above, finite-volume energies can arise in the region of the branch cuts (as has recently been identified in ref. [7]) and a naive application of the analytically continued Luscher formalism fails. In this work, we restrict attention to the region \((2M)^{2}-(2m)^{2}<s<(2M)^{2}-m^{2}\), in which only the single-meson cut arises, and derive a modified version of the scattering formalism that resolves this limitation. ## 5 Proposed solution The breakdown in the original formalism can be traced back to the steps between eqs. (6) and (8) in the review of section 3. 
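Before examining that step in detail, the pole kinematics of eq. (12) can be verified with a short numerical sketch (toy masses of our own choosing, not part of the formalism):

```python
import math

def s_pole(theta_star, M, m):
    """Location in s of the t-channel pole t = m^2 at fixed angle, eq. (12)."""
    return 4.0 * M * M - m * m / math.sin(theta_star / 2.0) ** 2

M, m = 1.0, 0.15           # illustrative baryon and meson masses, m << M
branch_point = s_pole(math.pi, M, m)           # theta* = pi gives s = 4 M^2 - m^2
trajectory = [s_pole(t, M, m) for t in (3.0, 2.0, 1.0, 0.5, 0.1)]
# as theta* decreases towards 0, the pole runs off to s -> -infinity
```

Projecting onto definite angular momentum therefore sweeps the pole over the whole interval \((-\infty,\,4M^{2}-m^{2}]\), producing the cut of figure 3(b).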
In the step of replacing \(\mathcal{L}_{\ell m}(P,|\boldsymbol{k}^{\star}|)\) and \(\mathcal{R}^{*}_{\ell^{\prime}m^{\prime}}(P,|\boldsymbol{k}^{\star}|)\) with the on-shell quantities \(\mathcal{L}_{\ell m}(P,p^{\star})\) and \(\mathcal{R}^{*}_{\ell^{\prime}m^{\prime}}(P,p^{\star})\), the derivation assumes that the product of the two-particle pole and a given difference, e.g. \(\mathcal{L}_{\ell m}(P,|\boldsymbol{k}^{\star}|)-\mathcal{L}_{\ell m}(P,p^{\star})\), is a smooth function of \(\boldsymbol{k}^{\star}\). This step fails in the sub-threshold region due to the \(t\)-channel cut. To handle this issue, we separate out the problematic \(t\)-channel exchanges from the Bethe-Salpeter kernel. We define \[ig^{2}T(\boldsymbol{k}^{\star},\boldsymbol{k}^{\prime\star})\equiv-ig^{2}\frac{1}{-(\boldsymbol{k}^{\star}-\boldsymbol{k}^{\prime\star})^{2}-m^{2}+i\epsilon}\,, \tag{13}\] and define a modified kernel by subtracting this from the full Bethe-Salpeter kernel as shown in figure 4. Here, \(g\) denotes the effective baryon-meson-baryon coupling. We emphasize that \(m\) is the physical mass of the meson, and thus that \(-iT\) corresponds to the singular part of the fully-dressed meson propagator. The difference between the bare and dressed propagators is smooth, and is simply absorbed into the modified kernel. Examining the modified kernel, we know that it can be safely evaluated at \(|\boldsymbol{k}^{\star}|=p^{\star}\) and does not possess a singularity or cut in the region \((2M)^{2}-(2m)^{2}<s<(2M)^{2}-m^{2}\). Crucially, we note also that \(ig^{2}T\) is safe if kept partially off shell, namely if we keep \(|\boldsymbol{k}^{\star}|\) and \(|\boldsymbol{k}^{\prime\star}|\) real. Figure 3: (a) Analytic structure of the two-to-two scattering amplitude \(\mathcal{M}(s,t)\) in the complex-\(s\) plane, for fixed centre-of-mass scattering angle \(\theta^{\star}\), in the case where a lighter particle of mass \(m\) couples to the scattering particles of mass \(M\). 
We show the infinite-volume elastic threshold (at \(s=(2M)^{2}\)) and inelastic threshold (at \(s=(2M+m)^{2}\)) and corresponding branch cuts. Below threshold, we see the \(t\)-channel exchange pole, corresponding to \(t=m^{2}\), and a lower branch cut, corresponding to two mesons being exchanged in the \(t\) channel. (b) Analytic structure of the amplitude when projected to definite angular momentum. The \(t\)-channel cut, which runs down from the branch point at \(s=(2M)^{2}-m^{2}\), arises from the \(t\)-channel pole. Recalling expression (8) for the contribution of a skeleton expansion loop, we again emphasize that the finite-volume frame momentum \(\mathbf{k}\), and hence \(\mathbf{k}^{\star}\), are discretized and can be indexed by \(\mathbf{n}\in\mathbb{Z}^{3}\). Thus, we can then treat \(\mathbf{k}^{\star}\) as an extra index, writing \(\mathcal{L}_{\mathbf{k}^{\star}\ell m}\equiv\mathcal{L}_{\ell m}(P,|\mathbf{k}^{\star}|)\), \(\mathcal{R}_{\mathbf{k}^{\star}\ell m}\equiv\mathcal{R}_{\ell m}(P,|\mathbf{k}^{\star}|)\) and defining \(S_{\mathbf{k}^{\star}\ell m;\mathbf{k}^{\prime\star}\ell^{\prime}m^{\prime}}(P;L)\equiv\frac{1}{L^{3}}\,\delta_{\mathbf{k}^{\star}\mathbf{k}^{\prime\star}}\,\mathcal{S}_{\ell m;\ell^{\prime}m^{\prime}}(P,\mathbf{k};L)\) such that we may rewrite (8) as: \[C_{L}^{\text{loop}}(P) =\mathcal{L}_{\mathbf{k}^{\star}\ell m}(P)\,iS_{\mathbf{k}^{\star}\ell m;\mathbf{k}^{\prime\star}\ell^{\prime}m^{\prime}}(P;L)\,\mathcal{R}_{\mathbf{k}^{\prime\star}\ell^{\prime}m^{\prime}}(P)+r(P)\;, \tag{14}\] \[=\mathcal{L}(P)\,iS(P;L)\,\mathcal{R}(P)+r(P)\;. \tag{15}\] In the first line, we are also implicitly summing over the momentum indices. In the second line, we again employ a compact vector-matrix notation, but now in the angular momentum plus spatial loop momentum index space. 
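The combined angular-momentum plus loop-momentum index space can be pictured as a block-diagonal matrix, since the Kronecker delta ties the two momentum indices together. A small numpy sketch (shapes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 6.0
n_k, n_lm = 3, 2     # mock counts of discretized momenta and (l, m) pairs

# one angular-momentum matrix S_{lm;l'm'}(P, k; L) for each discretized momentum k
S_ang = [rng.standard_normal((n_lm, n_lm)) for _ in range(n_k)]

# combined-index matrix: S_{k lm; k' l'm'} = (1/L^3) * delta_{k k'} * S_{lm;l'm'}(P, k; L)
S_big = np.zeros((n_k * n_lm, n_k * n_lm))
for i, block in enumerate(S_ang):
    S_big[i * n_lm:(i + 1) * n_lm, i * n_lm:(i + 1) * n_lm] = block / L ** 3
```

Only the momentum-diagonal blocks are populated; objects like the kernel or the \(t\)-channel exchange, by contrast, generically connect different momentum shells and fill the off-diagonal blocks.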
Applying this to all loops in the skeleton expansion diagrams and rearranging by factors of \(S(P;L)\), we obtain the finite-volume correlator in the form \[C_{L}(P)=\sum_{n=0}^{\infty}A(P)\,iS(P;L)\,\left[(i\bar{K}(P)+ig^{2}T(P))\,iS(P;L)\right]^{n}A^{\dagger}(P)+C_{\infty}^{(i)}(P)\;, \tag{16}\] where all quantities in the first term are vectors or matrices in the angular momentum plus loop momentum index space. Note that \(A(P)\) and \(A^{\dagger}(P)\) are different from those in (10). The matrix \(ig^{2}T\) contains the angular-momentum projections of the \(t\)-channel exchange defined in (13), and \(\bar{K}(P)\) is the sum of all possible smooth contributions one can obtain between \(S(P;L)\) matrices. The second term \(C_{\infty}^{(i)}(P)\) is a collection of \(L\)-independent terms. From the discussion above, we know it is safe to set \(|\mathbf{k}^{\star}|=p^{\star}\) for \(\bar{K}(P)\) and, therefore, we can expand \(\bar{K}(P)\) about the on-shell point. We implement this by making use of a trivial vector \(u\) in the momentum index space, whose elements are \(u_{\mathbf{k}^{\star}}=1\), and making the substitution \(\bar{K}(P)=u\bar{\mathcal{K}}(P)u^{\dagger}+\left[\bar{K}(P)-u\bar{\mathcal{K}}(P)u^{\dagger}\right]\). The matrix \(\bar{\mathcal{K}}(P)\) is a matrix in the angular momentum index space only and corresponds to \(\bar{K}(P)\) with the dependence on the magnitude of spatial momentum (through the momentum index) set to the on-shell momentum, i.e. with \(|\mathbf{k}^{\star}|=p^{\star}\). The difference in brackets leads to terms that are sums of smooth summands and can be shuffled into the remainder term. After summing over the resulting geometric series, we obtain the following for the correlator: \[C_{L}(P)=A(P)\,i\,\left[S^{-1}(P;L)+u\bar{\mathcal{K}}(P)u^{\dagger}+g^{2}T(P)\right]^{-1}A^{\dagger}(P)+C_{\infty}^{(ii)}(P)\;. 
\tag{17}\] Using the same arguments as in section 3, we can derive the quantization condition: \[\det\left[S^{-1}(E_{n}(\mathbf{P},L),\mathbf{P};L)+u\bar{\mathcal{K}}(E_{n}(\mathbf{P},L),\mathbf{P})u^{\dagger}+g^{2}T(E_{n}(\mathbf{P},L),\mathbf{P})\right]=0\,, \tag{18}\] at all finite-volume energy levels \(E_{n}(\mathbf{P},L)\). Given that \(S^{-1}(E_{n}(\mathbf{P},L),\mathbf{P};L)\) and \(T(E_{n}(\mathbf{P},L),\mathbf{P})\) can be calculated numerically, one can use the knowledge of the finite-volume spectrum to obtain \(\bar{\mathcal{K}}(E_{n}(\mathbf{P},L),\mathbf{P})\) as well as the coupling \(g\). This object can then be linked back to the two-to-two scattering amplitude via integral equations, in a similar vein to the procedure used for the three-particle scattering formalism of refs. [22, 23, 31]. We leave further discussion to the upcoming paper. Figure 4: Separation of the Bethe-Salpeter kernel into a modified kernel (square box), which is safe to put on shell, and the \(t\)-channel meson exchange, represented by a meson propagator at physical mass. ## 6 Summary & Outlook In this proceedings, we have described our progress in addressing issues arising in the Luscher finite-volume scattering formalism [10] and extensions [11, 12, 13, 14, 15, 16, 17, 18, 19, 20] in the case of sub-threshold finite-volume energies appearing on the \(t\)-channel cut. This work is motivated by recent lattice calculations in baryon-baryon systems that have observed such energy levels [7]. To present the extension, we first reviewed the standard derivation, following the method of Kim, Sachrajda, and Sharpe [13] for the case of non-identical spin-zero particles. We then identified the step in the derivation that fails to correctly account for the sub-threshold cut and provided a modification to address the issue. 
Our main result is an adapted quantization condition that applies above and below elastic threshold, including on the cut associated with single-meson exchange, though not on lower cuts arising from the exchange of multiple mesons. As we have in mind applications to baryon-baryon scattering, the next step, currently ongoing, is generalizing the derivation to particles with arbitrary intrinsic spin. Once the theoretical work is concluded, future directions include numerical tests on mock data (e.g. in the spirit of refs. [19, 32]) and eventually applications to lattice QCD baryon-baryon data. ## Acknowledgements The authors thank Raul Briceno, John Bulava, Evgeny Epelbaum, Drew Hanlon, Arkaitz Rodas, Fernando Romero-Lopez, Maxim Mai, Steve Sharpe, and Hartmut Wittig for useful discussions, including those in the context of the _Bethe Forum on Multihadron Dynamics in a Box_ that took place at the Bethe Center for Theoretical Physics in Bonn, Germany. M.T.H. is supported by UKRI Future Leader Fellowship MR/T019956/1, and both M.T.H and A.B.R. are partly supported by UK STFC grant ST/P000630/1.
2310.04777
Characterizing the effect of eccentricity on the dynamics of binary black hole mergers in numerical relativity
Many articles have partially studied the configuration of eccentric orbital binary black hole (BBH) mergers. However, there is a scarcity of systematic and comprehensive research on the effect of eccentricity on BBH dynamics. Thanks to the rich and numerous numerical relativistic simulations of eccentric orbital BBH mergers from RIT catalog, this paper aims to investigate the impact of initial eccentricity $e_0$ on various dynamic quantities such as merger time $T_{\text{merger}}$, peak luminosity $L_{\text{peak}}$ of gravitational waves, recoil velocity $V_f$, mass $M_f$, and spin $\alpha_f$ of merger remnants. We cover configurations of no spin, spin alignment, and spin precession, as well as a broad parameter space of mass ratio ranging from 1/32 to 1 and initial eccentricity from 0 to 1. For non-spinning BBH with an initial coordinate separation of $11.3M$ ($M$ is the total mass of BBH), we make the first discovery of a ubiquitous oscillation in the relationship between dynamic quantities $L_{\text{peak}}$, $V_f$, $M_f$, $\alpha_f$, and initial eccentricity $e_0$. Additionally, at $24.6M$, we observe the same oscillation phenomenon in the case of mass ratio $q=1$, but do not see it in other mass ratios, suggesting that this oscillation will be evident in numerical simulations with sufficiently dense initial eccentricity. abbreviated
Hao Wang, Yuan-Chuan Zou, Qing-Wen Wu, Yu Liu, Xiaolin Liu
2023-10-07T11:19:06Z
http://arxiv.org/abs/2310.04777v1
Characterizing the effect of eccentricity on the dynamics of binary black hole mergers in numerical relativity ###### Abstract Many articles have partially studied the configuration of eccentric orbital binary black hole (BBH) mergers. However, there is a scarcity of systematic and comprehensive research on the effect of eccentricity on BBH dynamics. Thanks to the rich and numerous numerical relativistic simulations of eccentric orbital BBH mergers from RIT catalog, this paper aims to investigate the impact of initial eccentricity \(e_{0}\) on various dynamic quantities such as merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\) of gravitational waves, recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of merger remnants. We cover configurations of no spin, spin alignment, and spin precession, as well as a broad parameter space of mass ratio ranging from \(1/32\) to \(1\) and initial eccentricity from \(0\) to \(1\). For non-spinning BBH with an initial coordinate separation of \(11.3M\) (\(M\) is the total mass of BBH), we make the first discovery of a ubiquitous oscillation in the relationship between dynamic quantities \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), \(\alpha_{f}\), and initial eccentricity \(e_{0}\). Additionally, at \(24.6M\), we observe the same oscillation phenomenon in the case of mass ratio \(q=1\), but do not see it in other mass ratios, suggesting that this oscillation will be evident in numerical simulations with sufficiently dense initial eccentricity. By associating the integer numbers of the orbital cycle of \(N_{\rm waves}\) with the peaks and valleys observed in the curves depicting the relationship between the dynamic quantities and the initial eccentricity, we reveal the significant oscillatory behavior attributed to orbital transitions. 
This discovery sheds light on the presence of additional orbital transitions in eccentric BBH mergers, extending beyond the widely recognized transition from inspiral to plunge. We perform an analysis to understand the different behaviors exhibited by the dynamic quantities and attribute them to variations in the calculation formulas. Furthermore, we demonstrate that finely adjusting the initial eccentricity can lead to the remnant black hole becoming a Schwarzschild black hole in the case of spin alignment. In a comprehensive analysis that surpasses previous studies by encompassing cases of no spin, spin alignment, and spin precession, we reveal consistent variations in the correlation between dynamic quantities and initial eccentricity, regardless of the presence of spin. This discovery underscores the universality of the impact of eccentricity on BBH dynamics and carries profound implications for astrophysical research. ## I Introduction Since the groundbreaking detection of the gravitational wave event GW150914 in 2015 [1], gravitational wave astronomy has entered a transformative era. Over time, gravitational wave detection has evolved into a routine practice. Ground-based gravitational wave detectors, namely LIGO [2], Virgo [3], and KAGRA [4] (collectively known as LVK), have successfully observed and recorded 93 gravitational wave events [5]. These events encompass a variety of sources, including binary black holes (BBH), black hole-neutron star (BHNS) systems, and binary neutron star (NSNS) systems. Since its breakthrough in solving the BBH merger problem [6; 7; 8], numerical relativity (NR) has delved into deeper corners of the BBH parameter space. This technique has explored various scenarios, including systems with no spin, spin alignment, spin precession, eccentric orbits, and extreme mass ratios. However, most of the existing research in NR and gravitational wave detection has primarily focused on circular orbits. 
This emphasis on circularization is due to the gravitational wave radiation's circularizing effect [9; 10], which means that BBHs formed through the evolution of isolated binary stars in galaxy fields eventually settle into circular orbits. These events of BBH mergers in circular orbits represent the primary targets for ground-based gravitational wave detectors such as LVK. Nevertheless, there are several mechanisms through which BBH can acquire non-zero eccentricity before merging. In dense regions like globular clusters [11; 12; 13; 14; 15; 16; 17; 18] and galactic nuclei [13; 19; 20; 21; 22; 23; 24; 25], BBH can gain eccentricity through processes [26] such as double-single interactions [27; 28], double-double interactions [29; 30], and gravitational capture [31; 19]. Additionally, in three-body systems [32] involving binary objects orbiting a supermassive black hole, the eccentricity of the inner binary can undergo oscillations due to the Kozai-Lidov mechanism [33; 34; 35; 36; 37; 38; 39]. These eccentric BBHs become detectable once they enter the frequency band of gravitational wave detectors. An example is the GW190521 event [40], which is considered a possible BBH merger with high mass and high eccentricity (\(e=0.69^{+0.17}_{-0.22}\)) [41; 42]. With the continuous improvement in detector sensitivity, future ground-based gravitational wave detectors like the Einstein Telescope (ET) [43] or the Cosmic Explorer (CE) [44] are expected to detect an increasing number of eccentric BBH mergers. Analytical relativity offers various methods to study the dynamics of BBH mergers, such as post-Newtonian (PN) [45], effective one body (EOB) [46; 47], and black hole perturbation theory (BHPT) [48]. These analytical approaches are effective in describing the early adiabatic inspiral phase of BBHs. However, they fall short in capturing the extreme relativistic and nonlinear strong field dynamics, including the plunge and merger stages. 
To understand these crucial phases, we must rely on NR. During the past decades, several NR collaborations, such as SXS [49; 50], RIT [51; 52; 53; 54], and Georgia Tech [55; 56], have conducted numerous simulations of binary compact objects. They have made their simulation catalogs publicly available, contributing significantly to the field. Modeling dynamical quantities such as peak luminosity, recoil velocity, remnant mass, and spin of BBH mergers carries significant astrophysical implications. However, due to the complexity of eccentric orbits, most articles that model these dynamical quantities mainly focus on circular orbits. In the case of precession, Ref. [57] employed the Gaussian process regression (GPR) method to model the peak luminosity. Early estimations of recoil velocity relied on analytical approximation methods, including PN [58; 59], EOB [60], and closed limit approximation [61]. Nowadays, more methods involve direct fitting of formulas with NR data [62; 63; 64; 65; 66; 67; 68; 69]. Similarly, for the mass and spin of the remnant, fitting formulas with NR data [62; 63; 64; 70; 71; 72; 73; 74; 75], analytical approximations [76], and GPR [77; 78] are the commonly used methods. Regarding NR simulations of eccentric orbits, there are currently limited open-source catalogs available, primarily including SXS [79] and RIT [80]. The fourth release of RIT extends simulations to eccentric orbits, covering a wide parameter space [54]. To date, only a few studies have explored the dynamic quantities in eccentric orbits, and most of them are qualitative in nature. 
These studies include investigating the influence of eccentricity on recoil velocity from a PN perspective [81], analyzing the transition from inspiral to plunge in eccentric orbit [82], studying orbital circularization [83], examining the recoil, mass and spin of remnant in low eccentricity orbits by NR [84], exploring kick enhancement caused by eccentricity [85], and investigating anomalies in recoil due to eccentricity [86]. In an attempt to quantitatively model the remnant properties of low-eccentricity BBH mergers, Ref. [87] explores the use of GPR technology. Analytical modeling of these properties is challenging due to the added complexity introduced by eccentricity. This paper aims to uncover the intricate nature of the complexity introduced by eccentricity, which may exceed our initial expectations. However, this complexity also opens the door to future analytical modeling. RIT [80] has conducted extensive and diverse simulations of eccentric orbit BBH mergers, which covered a wide range of parameters. These simulations include various mass ratios, ranging from \(1/32\) to \(1\), eccentricities spanning from \(0\) to \(1\), and consider scenarios with no spin, spin alignment, and spin precession. We provide a comprehensive summary of the relationships between the dynamic quantities of the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), spin \(\alpha_{f}\) of the merger remnants and the initial eccentricity \(e_{0}\) in these three scenarios. Our study provides a systematic investigation and comprehensive analysis of the behavior exhibited by these quantities as the initial eccentricity varies. This article is organized as follows. In Section II, we provide a summary of the numerical methods employed, the NR simulation data utilized for eccentric orbits, and introduce key concepts related to gravitational waves. 
In Section III, we present the NR data for two scenarios: the initial coordinate separation of \(11.3M\) in Section III.1.1 and the initial coordinate separation of \(24.6M\) in Section III.1.2. Furthermore, we conduct an analysis of the observed behavior of the dynamic quantities in Section III.1.3. Then, we make a summary in Section III.1.4. In Section III.2, we explore the relationship between the dynamic quantities and the initial eccentricity for spin alignment, providing a detailed analysis and summary. Additionally, in Section III.3, we investigate the spin precession case. Finally, in Section IV, we present our conclusions and provide an outlook for future research. Throughout this article, we adopt geometric units where \(G=c=1\). The component masses of BBH are represented as \(m_{1}\) and \(m_{2}\), while the total mass is denoted by \(M\). For simplicity, we set the total mass \(M\) at unity (although occasionally we explicitly write it for clarity). The mass ratio \(q\) is defined as \(q=m_{1}/m_{2}\), where \(m_{1}\) is smaller than \(m_{2}\). The dimensionless spin vectors of the black holes are denoted as \(\vec{\chi}_{i}=\vec{S}_{i}/m_{i}^{2}\) for \(i=1,2\), where \(\vec{S}_{i}\) is the spin vector of the \(i\)-th black hole. ## II Eccentric numerical simulations The numerical relativistic simulations of eccentric orbital BBH mergers utilized in this study are obtained from the Rochester Institute of Technology (RIT) catalog. These simulations in the RIT catalog were performed using the LazEv code [88], which implements the moving puncture approach [7] and employs the BSSNOK formalism for evolution systems [89; 90; 91] (except for cases involving highly spinning black holes, where the CCZ4 formalism [92] is used). The LazEv code is integrated within the CACTUS/CARPET [93] infrastructure, which is part of the Einstein Toolkit [94]. To locate apparent horizons, RIT employs AHFinderDirect [95]. 
Initially, RIT measures the amplitude of the horizon spins, denoted as \(S_{H}\), utilizing the isolated horizon algorithm. Subsequently, they calculated the horizon mass using the Christodoulou formula: \(m_{H}=\sqrt{m_{\rm irr}^{2}+S_{H}^{2}/\left(4m_{\rm irr}^{2}\right)}\) where \(m_{\rm irr}\) represents the irreducible mass, defined as \(m_{\rm irr}=\sqrt{A_{H}/(16\pi)}\), with \(A_{H}\) denoting the surface area of the horizon [96]. For generating the numerical initial data, RIT employs the puncture approach [97] in conjunction with the TwoPunctures code [98]. To determine the initial coordinate separation and tangential quasicircular momentum \(p_{t,qc}\) for each eccentric family, RIT utilizes PN techniques as described in [99]. By introducing a new parameter \(\epsilon\), ranging from 0 to 1, the tangential linear momentum is modified as \(p_{t}=p_{t,qc}(1-\epsilon)\). In this approach, the initial positions of BBH are fixed at the apocenter, and the initial orbital eccentricity gradually increases throughout the simulations, spanning from the quasi-circular orbit (\(e=0\)) to the head-on collision limit (\(e=1\)). The corresponding initial orbital frequency (and the (2,2)-modes of the gravitational waves) is reduced by the same factor \(\Omega_{e}=\Omega_{qc}(1-\epsilon)\). Consequently, the initial eccentricity of the orbit can be approximated by \(e=2\epsilon-\epsilon^{2}\), which provides a second order approximation in terms of \(\epsilon\) and correctly captures the limits of \(e=0\) and \(e=1\) at \(\epsilon=0\) and \(\epsilon=1\), respectively. RIT provides waveform data in the form of the Newman-Penrose scalar \(\Psi_{4}\) and the gravitational wave strain \(h\) which can be downloaded in RIT's catalog [80]. These waveforms can be expanded using the spin-weighted spherical harmonic function \({}_{-2}Y_{l,m}(\theta,\phi)\) with spin weight \(s=-2\). 
Specifically, we have the expansion: \[r\Psi_{4}=\sum_{l,m}r\Psi_{4}^{lm}{}_{-2}Y_{l,m}(\theta,\phi), \tag{1}\] and \[rh=r\left(h_{+}-ih_{\times}\right)=\sum_{l,m}rh_{lm}\,{}_{-2}Y_{l,m}(\theta,\phi), \tag{2}\] where \(r\) represents the extraction radius, \(h_{+}\) and \(h_{\times}\) denote the two polarizations of gravitational waves, and \(h_{lm}\) and \(\Psi_{4}^{lm}\) represent the higher harmonic modes of \(h\) and \(\Psi_{4}\), respectively. Furthermore, we recall that the gravitational wave strain \(h\) can be decomposed into amplitude and phase as follows: \[h_{lm}=\mathcal{A}_{lm}(t)\exp\left[-i\Phi_{lm}(t)\right], \tag{3}\] where the amplitude \(\mathcal{A}_{lm}\) and phase \(\Phi_{lm}\) of \(h_{lm}\) can be obtained via \[\mathcal{A}_{lm}=|h_{lm}|, \tag{4}\] \[\Phi_{lm}=-\arg(h_{lm}). \tag{5}\] To facilitate the representation of the parameter space, we introduce the effective spin in the \(z\) direction, which is aligned with the orbital angular momentum \(L\). It is defined as \[\chi_{\rm eff}=\frac{m_{1}\chi_{1,z}+m_{2}\chi_{2,z}}{m_{1}+m_{2}}, \tag{6}\] where \(\chi_{1,z}\) and \(\chi_{2,z}\) represent the dimensionless spins of the two black holes in the \(z\) direction. This measure characterizes the combined spin of the binary system, weighting the individual spins by the respective masses of the black holes. To quantify the precession effect, we adopt the effective precession spin parameter introduced in Ref. [100], defined as: \[\chi_{p}=\frac{S_{p}}{A_{2}m_{2}^{2}}. \tag{7}\] Here, we have the following: \[\begin{split} S_{p}&:=\frac{1}{2}\left(A_{1}S_{1\perp }+A_{2}S_{2\perp}+|A_{1}S_{1\perp}-A_{2}S_{2\perp}|\right)\\ &\equiv\max\left(A_{1}S_{1\perp},A_{2}S_{2\perp}\right),\end{split} \tag{8}\] where \(S_{i\perp}\) (\(i=1,2\)) represents the component of the spin perpendicular to the orbital angular momentum.
The values of \(A_{1}\) and \(A_{2}\) are given by \(A_{1}=2+3q/2\) and \(A_{2}=2+3/(2q)\), respectively. Note that in these expressions \(q=m_{2}/m_{1}\), the inverse of the mass-ratio convention used elsewhere in this paper. The RIT catalog offers a comprehensive dataset that includes both waveform data and accompanying metadata, providing valuable information about the simulations. The metadata encompasses essential details regarding the initial data of the simulation, including the mass ratio, initial distance, initial linear momentum, initial angular momentum, and more. Additionally, the metadata contains pertinent simulation results, such as the final remnant black hole mass, spin, and recoil velocity. It is worth noting that the relaxed initial quantities are measured at \(t_{\rm relax}=200M\), after the initial burst of radiation has substantially dissipated. To facilitate data exploration and visualization, RIT has organized all the information in an interactive table, ensuring convenient access and interpretation of the data set [80]. RIT employs formulas derived from Refs. [101; 102] to quantify the radiated energy, linear momentum, and angular momentum using the radiative Weyl scalar \(\Psi_{4}\). However, instead of utilizing the full \(\Psi_{4}\), RIT decomposes it into \(l\) and \(m\) modes as in Eq. (1) and disregards terms with \(l>6\) when computing the radiated linear momentum. The resulting final recoil velocity is determined from the radiated linear momentum. In all simulations conducted by RIT, it has been ascertained that the waveforms, at the resolutions provided in the catalog, exhibit fourth-order convergence with resolution. The evaluation of quantities related to the black hole horizon, such as the final mass and spin of the remnant, yields errors on the order of 0.1% via the isolated horizon algorithm.
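For concreteness, the spin parameters of Eqs. (6)-(8) can be sketched directly in Python. This is an illustrative sketch (the helper names are ours); following Eq. (8), the `chi_p` helper uses \(q=m_{2}/m_{1}\), and the in-plane spin magnitudes \(S_{1\perp}\), \(S_{2\perp}\) are passed in directly.

```python
def chi_eff(m1, m2, chi1z, chi2z):
    """Effective aligned spin, Eq. (6)."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

def chi_p(m1, m2, s1_perp, s2_perp):
    """Effective precession spin parameter, Eqs. (7)-(8).

    s1_perp, s2_perp are the magnitudes of the spin components
    perpendicular to the orbital angular momentum; note q = m2/m1 here.
    """
    q = m2 / m1
    a1 = 2.0 + 1.5 * q                     # A_1 = 2 + 3q/2
    a2 = 2.0 + 1.5 / q                     # A_2 = 2 + 3/(2q)
    s_p = max(a1 * s1_perp, a2 * s2_perp)  # Eq. (8)
    return s_p / (a2 * m2**2)              # Eq. (7)
```

For an equal-mass binary with a maximal in-plane spin on one hole (\(S_{1\perp}=m_{1}^{2}\), \(S_{2\perp}=0\)), these expressions give \(\chi_{p}=1\), as expected for this normalization.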
Furthermore, radiatively computed quantities, including recoil velocities and peak luminosities, are evaluated with a typical error of 5%. All of these satisfy the precision requirements of this study. The RIT catalog encompasses a broad range of numerical relativity simulations, specifically focusing on eccentric orbital BBH mergers. The fourth release of the catalog introduces an extension to eccentric orbits, featuring a total of 824 eccentric orbital BBH merger simulations. These simulations encompass a diverse parameter space, spanning eccentricities from 0 to 1, mass ratios ranging from 1/32 to 1, and various configurations, including nonspinning, spin-aligned, and spin-precessing setups. It is important to note that some simulations were excluded from our research due to incomplete metadata or the absence of a continuous sequence of eccentricity simulations. All simulations in our study report two measures of the initial distance: the coordinate separation and the proper distance. The initial coordinate separation, representing the coordinate distance between the two centroids of the black holes, is used to characterize the initial distance in our research. The two chosen initial coordinate separations are \(11.3M\) and \(24.6M\). The parameter spaces for all simulations utilized in our study are depicted in FIG. 1. Specifically, we employed a total of 816 eccentric orbital BBH simulations, comprising 510 nonspinning, 197 spin-aligned, and 109 spin-precessing cases. The initial eccentricity, denoted as \(e_{0}\), was estimated by the RIT catalog using the approximation described above. For ease of visualization, FIG. 1 presents the nonspinning, spin-aligned, and spin-precessing simulations in separate panels, employing the effective spin \(\chi_{\rm eff}\) and the effective precession spin parameter \(\chi_{p}\) as characterization metrics.

## III Results

Performing numerical relativity simulations is a computationally demanding task.
Fortunately, the RIT catalog has undertaken a significant number of meticulous simulations focusing on eccentric BBH mergers. These simulations provide us with invaluable insights into the role of eccentricity in BBH merger dynamics. In this section, we present the variations of various dynamical quantities, including the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants, as functions of the initial eccentricity \(e_{0}\). It is important to note that the merger time \(T_{\rm merger}\) represents the duration from \(t_{\rm relax}\) to the time of the peak of the gravitational wave amplitude. We analyze and discuss the behavior exhibited by these dynamical quantities, shedding light on their implications for eccentric BBH mergers. However, it is regrettable that the trajectory information of the BBH system is not provided in the RIT catalog, which limits our ability to study other dynamic quantities, such as the evolution of the coordinate separation \(D(t)\).

### No spin

The RIT catalog contains an extensive set of simulations for nonspinning configurations, encompassing two different initial coordinate separations, \(11.3M\) and \(24.6M\). Specifically, 191 simulation groups were performed with an initial coordinate separation of \(11.3M\), and 319 groups with an initial coordinate separation of \(24.6M\). The simulations at the former separation cover a finer range of initial eccentricities, while those at the latter separation encompass a broader range of mass ratios. In this section we show the relationship between the dynamic quantities and the initial eccentricity in the nonspinning case and analyze it.

#### III.1.1 Initial coordinate separation \(=11.3M\)

The RIT catalog contains detailed simulations for the case where the initial coordinate separation is \(11.3M\), focusing on specific mass ratios.
In particular, RIT has performed fine simulations for the following mass ratios: 0.25 (43 groups), 0.5 (67 groups), 0.75 (41 groups), and 1 (41 groups). The number of simulation groups is of particular importance, as it significantly influences the presentation of the results. In FIG. 2, we present the dynamical quantities of the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants, illustrating their variations as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(11.3M\). In panel (a) of FIG. 2, we present the evolution of the merger time \(T_{\rm merger}\) as a function of the initial eccentricity \(e_{0}\). It is evident that \(T_{\rm merger}\) is influenced by two key factors: the mass ratio and the initial eccentricity. When the initial eccentricity \(e_{0}\) is below approximately 0.23, \(T_{\rm merger}\) decreases rapidly with increasing \(e_{0}\). In contrast, when \(e_{0}\) exceeds 0.23, \(T_{\rm merger}\) remains generally below \(300M\) and gradually decreases, approaching zero. It is important to note that the value \(e_{0}=0.23\) does not correspond to any specific dynamical transition, such as the transition from orbit to plunge. Furthermore, the influence of the mass ratio \(q\) on \(T_{\rm merger}\) is evident, as smaller mass ratios result in longer \(T_{\rm merger}\) durations. In panels (b), (c), (d), and (e) of FIG. 2, we illustrate the variations of the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants as functions of the initial eccentricity \(e_{0}\). Across all four panels, we observe a similar pattern in the behavior of these dynamic quantities. Initially, these quantities remain nearly constant, tracing almost horizontal lines.
Then they display oscillatory behavior, which subsequently intensifies to a maximum or minimum value before eventually converging towards certain values in the head-on limit. Notably, the oscillations in peak luminosity \(L_{\rm peak}\), mass \(M_{f}\), and spin \(\alpha_{f}\) exhibit relatively regular and similar patterns. However, the oscillations in recoil velocity \(V_{f}\) appear more chaotic and less predictable. In a related study, Ref. [86] identified a similar oscillation phenomenon in the recoil velocity during a series of numerical simulations for eccentric orbit BBH mergers at short initial separation. They referred to this phenomenon as "anomalies" and attributed it to the infalling direction of the binary black holes at merger as a potential cause for this observed behavior. In panel (b), the absence of linear momentum radiation in the case of \(q=1\), which represents a completely symmetrical non-spinning configuration, results in a recoil velocity of \(0\). Ref. [103] discovered that for nonspinning circular orbits, the largest gravitational recoil occurs around \(q=0.3\). Consequently, the recoil velocity for \(q=0.25\) in panel (b) is higher compared to other mass ratios, because it is closest to \(0.3\). When the initial eccentricity \(e_{0}\) falls within the range of \([0,0.12]\), the oscillation of the recoil velocity is minimal, almost negligible. In the range of \([0.12,0.24]\), the recoil velocity exhibits a moderately chaotic oscillation. As eccentricity \(e_{0}\) increases within the range of \([0.24,0.5]\), the recoil velocity experiences a sharp increase, reaching its maximum value. For \(e_{0}\) in the range of \([0.5,0.99]\), the recoil velocity gradually decreases from the maximum value to \(0\) at the head-on collision limit. This characteristic holds true for all three mass ratios. In panel (c), we observe a more regular oscillatory behavior compared to the recoil velocity, and it occurs earlier in the evolution. 
For the mass ratio \(q=1\), there is almost no oscillation when the eccentricity \(e_{0}\) ranges from \(0\) to \(0.05\). As the eccentricity increases within the range of \([0.05,0.3]\), the oscillation gradually emerges and intensifies, reaching its maximum value. When the eccentricity is within the range of \([0.3,0.99]\), the peak luminosity gradually decreases from the maximum value to the minimum value. Furthermore, panel (c) reveals that the onset of the oscillation is delayed as the mass ratio decreases: the oscillation for the mass ratio \(q=0.25\) begins at \(e_{0}=0.095\), whereas for \(q=1\) it starts earlier. Moreover, as the mass ratio decreases, the oscillation becomes weaker. Additionally, smaller mass ratios correspond to lower peak luminosities, consistent with the behavior observed in circular orbits. Another noteworthy observation is that the initial eccentricity corresponding to the maximum value of the oscillation increases as the mass ratio increases. In panel (d), we observe a similar oscillatory behavior in the mass of the remnant with respect to the eccentricity \(e_{0}\), resembling the pattern seen in the peak luminosity. Here, however, the extremum opens upward: the remnant mass dips to a minimum rather than rising to a peak. For the mass ratio \(q=1\), there is almost no oscillation when the eccentricity \(e_{0}\) is in the range of \([0,0.02]\). As the eccentricity increases within the range of \([0.02,0.26]\), the oscillation emerges and the remnant mass gradually decreases to its minimum value. Notably, the onset of this oscillation occurs earlier than that of the peak luminosity. When the eccentricity is within the range of \([0.26,0.99]\), the mass of the remnant gradually increases from its minimum value to \(0.98\). It is important to mention that the mass of the remnant is not exactly \(M\) when the eccentricity \(e_{0}=1\), as the binary black holes still radiate energy during the head-on collision, although the amount is relatively small.
Similar to the peak luminosity, the oscillatory behavior of the mass of the remnant becomes weaker as the mass ratio decreases. Furthermore, lower mass ratios correspond to smaller eccentricities at which the oscillation reaches its minimum value. Conversely, larger mass ratios result in more energy being radiated, leading to a smaller mass of the remnant, which is consistent with the behavior observed in circular orbits.

Figure 1: The parameter space used in our study, comprising three configurations (nonspinning, spin-aligned, and spin-precessing) at two initial coordinate separations, \(11.3M\) and \(24.6M\), covering mass ratios \(q\) from \(1/32\) to \(1\) and initial eccentricities \(e_{0}\) from \(0\) to \(1\). The left panel uses the effective spin \(\chi_{\rm eff}\) to label the nonspinning and spin-aligned configurations. The right panel marks the spin-precessing configuration using the effective precession spin parameter \(\chi_{p}\).

In panel (e), we observe that the oscillation of the spin of the remnant, \(\alpha_{f}\), is significantly weaker compared to the recoil velocity \(V_{f}\), peak luminosity \(L_{\rm peak}\), and mass \(M_{f}\). For the mass ratio \(q=1\), there is almost no oscillation when the eccentricity \(e_{0}\) falls within the range of \([0,0.05]\). As the eccentricity increases within the range of \([0.05,0.35]\), the oscillation gradually emerges and intensifies, reaching its maximum value. Subsequently, when the eccentricity is in the range of \([0.35,0.99]\), \(\alpha_{f}\) gradually decreases from the maximum value to 0.1. This residual spin of 0.1 is likely a result of some remaining orbital angular momentum in the initial data. The characteristics of the spin oscillation of the remnant with respect to the mass ratio exhibit similarities with those of the peak luminosity \(L_{\rm peak}\) and mass \(M_{f}\), and will not be discussed further here.
#### III.1.2 Initial coordinate separation \(=24.6M\)

Next, we examine a significantly larger initial coordinate separation of \(24.6M\). RIT has conducted extensive simulations for various mass ratios in this case. The largest set, with 48 groups, is for \(q=1\). Additionally, simulations were performed for mass ratios of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3333, 0.25, 0.2, 0.1667, and 0.1429 with 23 groups each, as well as 0.06667 and 0.03125 with 9 groups each. Simulations at the last two mass ratios lack low and moderate eccentricities.

Figure 2: Variations of the dynamical quantities of the merger time \(T_{\rm merger}\) (panel (a)), peak luminosity \(L_{\rm peak}\) (panel (c)), recoil velocity \(V_{f}\) (panel (b)), mass \(M_{f}\) (panel (d)), and spin \(\alpha_{f}\) (panel (e)) of the merger remnants as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(11.3M\) for the nonspinning configuration with different mass ratios.

In FIG. 3, we present the variations of the dynamical quantities of the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(24.6M\). In panel (a) of FIG. 3, we present the variation of the merger time \(T_{\rm merger}\) as a function of the initial eccentricity \(e_{0}\) for an initial coordinate separation of \(24.6M\). Notably, the relationship between the merger time and the initial eccentricity exhibits a pattern similar to that observed in the \(11.3M\) case. Interestingly, we observe a distinct turning point at approximately \(e_{0}=0.5\) for the \(24.6M\) case, in contrast to the turning point at \(e_{0}=0.23\) observed in the \(11.3M\) case. This discrepancy can be attributed to the larger initial coordinate separation utilized in the \(24.6M\) scenario.
We believe this is a general behavior in which a larger initial coordinate separation leads to a delayed turning point. Furthermore, our findings indicate that the merger time increases with decreasing mass ratio. This trend holds for the \(24.6M\) case as well. However, it is important to note that the mass ratio \(q=0.1667\) shows a significant deviation from the other results due to errors present in the data itself. In panels (b), (c), (d), and (e) of FIG. 3, we present the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) as functions of the initial eccentricity \(e_{0}\). Notably, the overall pattern of the curves aligns closely with the trends observed in the \(11.3M\) case. However, there are notable differences, particularly in the presence or absence of oscillatory behavior among different groups of mass ratios. In panel (b), an intriguing bimodal structure emerges in the relationship between the recoil velocity and the initial eccentricity. This unexpected pattern adds a fascinating layer to our understanding of the merger process and warrants further investigation. It is worth highlighting that, while the general trends remain consistent with the \(11.3M\) case, the oscillation behavior seen in the groups other than the mass ratio \(q=1\) is less pronounced or absent. This discrepancy adds an intriguing dimension to the dynamics of the merger remnants and prompts us to explore the underlying mechanisms responsible for these observations. In panel (b) of FIG. 3, we observe that the largest group, with a mass ratio of \(q=1\), exhibits complete symmetry, resulting in no linear momentum radiation and, consequently, a recoil velocity of \(0\). However, for other mass ratios, a visible oscillatory pattern emerges, commencing at an approximate eccentricity of \(0.44\). The first peak in the recoil velocity occurs at an initial eccentricity of \(0.51\).
Notably, the position of the second peak is not fixed and varies with the mass ratio. Specifically, in the case of a mass ratio of \(q=0.3333\), the second peak manifests at an initial eccentricity of \(0.64\). Subsequently, the recoil velocity progressively decreases to \(0\) as the eccentricity increases. Analyzing the results for the initial coordinate separation of \(24.6M\), we find that, apart from the peculiar mid-range peaks and the subtle oscillation behavior, the overall trends align with those observed in the \(11.3M\) case. Furthermore, the maximum recoil velocity occurs at a mass ratio of \(q=0.3333\), consistent with the findings for the \(11.3M\) scenario. Additionally, we reaffirm the pattern that smaller mass ratios correspond to smaller oscillation or peak values, further validating the observations from the \(11.3M\) case and illustrating this trend across a wider range of mass ratios. These findings not only deepen our understanding of recoil dynamics in merger remnants for an initial coordinate separation of \(24.6M\), but also reinforce and extend the variations observed in the \(11.3M\) case, providing valuable insights across a broader range of mass ratios. In panel (c) of FIG. 3, we observe that the overall behavior of the peak luminosity \(L_{\rm peak}\) is consistent with the findings for the \(11.3M\) scenario. Initially, it remains relatively constant, followed by oscillations that gradually reach a maximum value. Subsequently, the luminosity decreases from its peak to a minimum value. Notably, we clearly observe the oscillation pattern in the case of a mass ratio of \(q=1\), which is supported by a robust dataset of \(48\) groups. However, the oscillation behavior is less apparent for other mass ratios, where the available data is only half the size, comprising \(23\) groups.
The sparser data points for mass ratios other than \(q=1\) smooth out the oscillations due to the coarse graining of the initial eccentricity, making them less discernible. Furthermore, we find that for the mass ratio \(q=1\), the oscillations begin at a higher initial eccentricity of \(0.33\). In contrast, we observe only subtle undulations and peaks for other mass ratios. Despite the absence of clear oscillations for mass ratios other than \(q=1\), we can draw analogous conclusions based on a wider range of mass ratios, similar to the \(11.3M\) case. Specifically, we find that smaller mass ratios correspond to lower peak luminosity, weaker oscillations, and a shift in the position of the maximum oscillation towards lower initial eccentricities. It is important to note that these conclusions are drawn by analogy since we do not observe explicit oscillations for mass ratios other than \(q=1\). However, based on the findings in the \(11.3M\) case, we would expect such oscillations to exist given a sufficient number of data points. In panel (d) of FIG. 3, we present the variation of the remnant mass with respect to the initial eccentricity. The overall behavior closely resembles that of the peak luminosity, exhibiting similar trends. However, for the sake of brevity, we will refrain from delving into further details in this section. In panel (e) of FIG. 3, we observe that the oscillation of the remnant spin is comparatively weaker than the oscillations observed in the other three dynamic quantities, aligning with the findings of the \(11.3M\) scenario. Additionally, we note that as the mass ratio decreases, the maximum value of the oscillation shifts to lower initial eccentricities, consistent with the observations in the \(11.3M\) case. The analysis of other relationships exhibits similar patterns, and we refrain from reiterating them here to avoid redundancy. 
In summary, our study involves a series of numerical simulations in which the initial eccentricity is systematically increased while the initial coordinate separation is fixed at \(11.3M\) or \(24.6M\). From these simulations, we derive several dynamic quantities characterizing the merger process. We observe consistent behaviors of the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants with changes in the initial eccentricity for both cases. The merger time \(T_{\rm merger}\) exhibits an initial rapid decrease, followed by a slower decrease after passing a critical point. The remaining four quantities, the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants, display a universal behavior with the changing initial eccentricity. Initially, they maintain an almost stable horizontal line. Subsequently, they gradually enter an oscillatory phase, with the amplitude of the oscillations intensifying. At the final peak, these quantities reach a maximum or minimum value. Finally, at the extreme eccentricity \(e_{0}=1\), the head-on collision limit, they gradually approach a specific value. This behavior is only observable when the initial eccentricities in the numerical simulations are sampled sufficiently densely. Among these dynamic quantities, the oscillation behavior of the recoil velocity \(V_{f}\) appears relatively irregular and less ordered compared to the relatively regular oscillations observed in \(L_{\rm peak}\), \(M_{f}\), and \(\alpha_{f}\). Furthermore, the magnitude of the oscillations decreases as the mass ratio decreases, and the initial eccentricity corresponding to the maximum or minimum value shifts with the mass ratio.
#### III.1.3 Analysis

Understanding the intricate relationship between the merger time and the initial eccentricity, as well as the impact of the initial coordinate separation and mass ratio, can be accomplished through analytical PN theory [45]. Our investigation reveals a notable turning point in this relationship: once the eccentricity exceeds the critical value, the merger time continues to decrease, although at a slower pace. Visually, a gradual decline in merger time with increasing eccentricity is observed. For a comprehensive treatment, we refer the interested reader to the PN literature [45]. The primary focus of our investigation is to reveal the underlying physical mechanisms responsible for the observed oscillatory behavior in the dynamic quantities of the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants. Additionally, we aim to quantify the extent to which these quantities can be enhanced or diminished by manipulating the initial eccentricity. Through a detailed analysis, we endeavor to elucidate the fundamental factors driving these oscillations and provide insights into the potential impact of varying the initial eccentricity on these dynamic quantities. In the study conducted by Huerta et al. [84], a comprehensive set of 89 eccentric numerical relativity simulations was performed. These simulations covered a wide range of mass ratios from 1 to 10, with corresponding eccentricities ranging from 0 to 0.18. Their analysis focused on establishing the relationship between the mass, spin, and recoil of the merger remnant and the initial eccentricity. Interestingly, Huerta et al. found that these dynamic quantities exhibited minimal changes with respect to the initial eccentricity. Moreover, the dynamic quantities remained relatively constant, forming a nearly horizontal line in the parameter space. It is worth noting that the simulations conducted by Huerta et al.
had initial coordinate separations exceeding \(11.3M\), as indicated by the number of gravitational-wave cycles, and they did not simulate enough eccentricity data points. Consequently, the specific range of eccentricities explored (up to 0.18) did not induce oscillatory behavior in the system. Radia et al. [86] conducted nonspinning eccentric numerical simulations with mass ratios of \(q=1/2\), \(1/3\), and \(2/3\). Their research focused on investigating the recoil velocity of the merger remnant, where they observed intriguing oscillatory behavior. Additionally, their figures exhibited noticeable oscillations in the remnant's spin and radiated energy, although these quantities were not the primary focus of their study. Of particular interest is their examination of cases involving very short initial coordinate separations, close to the point of merger. It is worth noting that their approach to generating eccentricity differs from that of RIT. Initially, they established a quasicircular configuration by fixing the binding energy \(E_{\rm b}\), defined as \[E_{\rm b}=M_{\rm ADM}-M, \tag{9}\] where \(M_{\rm ADM}\) is the ADM mass. Subsequently, they incrementally reduced the initial linear momentum parameter \(p\) to generate a series of eccentric simulations. Importantly, while the eccentricity varied, the initial coordinate separation gradually increased. This offers an alternative perspective, demonstrating that the observed oscillation phenomenon is universal and independent of the initial distance in the simulation. The oscillatory phenomenon in the recoil velocity was explained by Radia et al. [86] as a consequence of the change in infall direction during the BBH merger. However, it should be noted that this oscillation is not limited to the recoil velocity alone: oscillatory behavior is also observed in the peak luminosity \(L_{\rm peak}\), mass \(M_{f}\), and spin \(\alpha_{f}\).
Notably, the peaks and valleys of these oscillations in the dynamic quantities were situated at different initial eccentricity positions and did not correspond to one another. Furthermore, the oscillations exhibited both maximum and minimum values, suggesting a more complex underlying cause. These phenomena cannot be solely attributed to changes in the infall direction. The observed connection between the infall direction and the recoil oscillations is better characterized as a phenomenological outcome rather than as a representation of a singular physical origin. Sopuerta et al. [81] provided a PN perspective, indicating that in the low eccentricity regime, the recoil velocity \(V_{f}\) scales as \(\propto(1+e_{0})\). This PN analysis establishes a relationship between the recoil velocity and eccentricity within this specific regime. Furthermore, references such as Radia et al. [86] and Sperhake et al. [85] demonstrate that nonzero eccentricity can lead to a significant increase in the recoil velocity \(V_{f}\), up to approximately 25% when compared to the quasicircular orbit case. These studies provide valuable insights into the enhancement of the recoil velocity resulting from the presence of eccentricity. While there exist references that have investigated the enhancement of the recoil velocity \(V_{f}\) caused by eccentricity, such as those mentioned earlier, there is limited literature that delves deeply into the amplification of the peak luminosity \(L_{\rm peak}\), mass \(M_{f}\), and spin \(\alpha_{f}\) induced by nonzero initial eccentricity. It is of utmost importance to acknowledge that eccentricity introduces a distinctive oscillatory behavior, resulting in both amplifications and reductions in these dynamic quantities, rather than exclusively leading to amplifications. Therefore, further exploration is necessary to gain a comprehensive understanding of the effects of nonzero eccentricity on the peak luminosity, mass, and spin of the merger remnant. Hinder et al.
[83] conducted a series of numerical simulations focusing on cases of high eccentricity. They analyzed the changes in the spin and mass of the merger remnants and concluded that the orbit becomes circularized when the eccentricity drops below 0.4. It is important to note that the initial separations in their simulations were approximately \(12M\), which is very close to the value \(11.3M\) used in this paper.

Figure 3: Variations of the dynamical quantities of the merger time \(T_{\rm merger}\) (panel (a)), peak luminosity \(L_{\rm peak}\) (panel (c)), recoil velocity \(V_{f}\) (panel (b)), mass \(M_{f}\) (panel (d)), and spin \(\alpha_{f}\) (panel (e)) of the merger remnants as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(24.6M\) for the nonspinning configuration with different mass ratios.

From the perspective of the more refined eccentric numerical simulations conducted by RIT (refer to FIG. 2) and in light of the conclusions of Ref. [83], it is observed that the eccentricity must reach a much lower value (specifically \(e_{0}=0.02\)) before the orbit circularizes completely; above this value, the orbit cannot be fully circularized. This conclusion may seem counterintuitive, as we typically associate circularization with small eccentricities, though not as small as \(0.02\). However, it is crucial to distinguish between the mass and spin of the remnant, which are integral quantities, and the instantaneous circularization state of the orbit. While these two concepts can provide some characterization of each other, they are not entirely equivalent. In fact, the process of circularization is also reflected in the oscillatory phenomena observed in the peak luminosity, recoil velocity, mass, and spin of the merger remnant. Weaker oscillations indicate a stronger degree of circularization. Notably, there is minimal oscillatory effect for initial eccentricities ranging from \(0\) to \(0.02\).
This does not imply complete circularization at eccentricities below \(0.02\), but rather suggests that for \(e_{0}\leq 0.02\), the dynamics and waveforms of these simulations closely resemble those of quasicircular orbits. The appearance of oscillations in peak luminosity, recoil velocities, masses, and spins of merger remnants is an intriguing phenomenon. Understanding the origin of these oscillations is closely tied to the peaks observed within the oscillatory behavior, particularly the last peak, which tends to be the largest and introduces the most significant enhancement effect caused by eccentricity. In a relevant study, Sperhake et al. [82] investigated the transition from inspiral to plunge in eccentric BBH mergers. They explored a wide range of eccentricities from \(0\) to \(1\) and examined the relationship between eccentricity and radiated energy. In particular, they found that near the critical point that marks the transition from orbit to plunge, the spin parameter \(\alpha_{f}\) of the remnant reached a maximum value of \(0.724\). While their study provided valuable insights into the eccentricity dependence of the remnant's spin and its relation to the transition from inspiral to plunge, the oscillatory phenomenon was not observed due to the relatively small number of numerical simulations conducted, amounting to only a dozen sets. Nevertheless, the findings presented in Ref. [82] offer valuable guidance for analyzing the underlying mechanisms responsible for the generation of oscillations in the dynamic quantities. In this study, we draw inspiration from the concept presented in Ref. [82] to consider the orbital transition. The number of orbital cycles \(N\) can be determined from two perspectives: one through the orbital phase of the puncture and the other through the phase of the gravitational waveform. 
However, it should be noted that RIT does not provide orbit trajectory information, restricting us to calculating the number of orbital cycles \(N\) solely through the gravitational waveform. In our analysis, we specifically focus on the 2-2 mode. To calculate the phase difference, we evaluate the expression: \[\Delta\Phi=\Phi\left(t_{\rm merger}\right)-\Phi\left(t_{0}+t_{\rm relax}\right), \tag{10}\] where \(t_{\rm merger}\) represents the time of BBH merger, \(t_{0}\) denotes the initial moment of the waveform, and \(t_{\rm relax}\) signifies the time required for the transition from the initial moment to a physically stable state. For the phase calculation, we adopt \(t_{\rm relax}=20M\) to remove small steps in the phase. While Ref. [82] determines \(t_{\rm merger}\) as the time when a common apparent horizon is formed, the waveform data only give us the time \(t_{\rm merger}\) that corresponds to the maximum amplitude. However, employing this \(t_{\rm merger}\) instead of the precise time of common apparent horizon formation does not introduce a significant error in the phase difference calculation. Subsequently, the number of orbital cycles accomplished by the BBH system can be obtained as: \[N_{\rm waves}=\frac{\Delta\Phi}{4\pi}. \tag{11}\] Here, we divide the phase difference \(\Delta\Phi\) by \(4\pi\) since the waveform phase is twice the orbital phase. FIG. 4 displays the relationship between the integer orbital cycle number \(N_{\rm waves}\) and quantities such as the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the remnant at the initial coordinate separations of \(11.3M\) and \(24.6M\) for different mass ratios. These points, denoted by red "x" markers, lie at, or close to, integer numbers of orbital cycles. 
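In practice, Eqs. (10) and (11) amount to unwrapping the 2-2 mode phase and dividing the accumulated phase by \(4\pi\). The following is a minimal numpy sketch; the function name and the synthetic constant-frequency mode are our own illustrations, and \(t_{\rm merger}\) is proxied by the maximum-amplitude time, as in the text:

```python
import numpy as np

def orbital_cycles(t, psi4_22, t_relax=20.0):
    """Estimate N_waves per Eqs. (10)-(11): accumulated (l=2, m=2) waveform
    phase between t0 + t_relax and the merger time (here approximated by the
    time of maximum amplitude), divided by 4*pi."""
    phase = np.unwrap(np.angle(psi4_22))       # continuous waveform phase
    i_merger = np.argmax(np.abs(psi4_22))      # maximum-amplitude proxy for t_merger
    i_start = np.searchsorted(t, t[0] + t_relax)
    dphi = abs(phase[i_merger] - phase[i_start])
    return dphi / (4.0 * np.pi)                # waveform phase = 2 x orbital phase

# Toy check: a mode at constant angular frequency 0.2 rad/M, with an amplitude
# peaking at t = 900M, accumulates phase 0.2 * 900 rad by "merger".
t = np.linspace(0.0, 1000.0, 20001)
amp = np.exp(-((t - 900.0) / 50.0) ** 2)
mode = amp * np.exp(-1j * 0.2 * t)
print(round(orbital_cycles(t, mode, t_relax=0.0), 3))  # 0.2*900/(4*pi) -> 14.324
```

For real catalog data, `t` and `psi4_22` would be the sampled time array and the complex \(\Psi_{4}^{2,2}\) mode; the logic is otherwise unchanged.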
For the case with an initial coordinate separation of \(24.6M\), we accurately mark the points only for the mass ratio \(q=1\). Due to the limited number of numerical simulation groups available for other mass ratios at this separation, there are significant deviations from integer cycles or instances of excessively discontinuous cycles; the maximum deviation can reach \(0.3\). Therefore, for \(24.6M\) with other mass ratios, we selectively mark in FIG. 4 only the cases where \(N_{\rm waves}\) closely approximates \(1\).

Figure 4: Relationship between the integer orbital cycle number \(N_{\rm waves}\) and quantities such as the peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the remnant at initial coordinate separations of \(11.3M\) and \(24.6M\) for the nonspinning configuration with different mass ratios. The points denoted by red "x" markers lie at, or close to, integer values of the orbital cycle number. Moving from right to left, each red "x" corresponds to successive orbital cycles, starting from cycle 1. The upper four panels correspond to the initial coordinate separation of \(11.3M\), and the lower four panels to \(24.6M\).

We now shift our focus to panels (b), (d), (f) and (h) in FIG. 4, which correspond to the peak luminosity and spin of the remnant. These panels exhibit similarities to the scenarios investigated in Ref. [82]. In particular, we observe that all cases with \(N_{\rm waves}=1\) align precisely with the last peak, indicative of the transition from inspiral to plunge. Additionally, instances with \(N_{\rm waves}=2\) are predominantly positioned near the last valley. Furthermore, cases with \(N_{\rm waves}=3\) are consistently found at the penultimate peak, and this pattern continues for higher values of \(N_{\rm waves}\). We contend that this observed behavior is not coincidental but rather stems from a shared physical origin underlying the generation of both the last peak and the other peaks and valleys. Just as the last peak signifies the transition from inspiral to plunge, the last valley represents the transition from the last two orbits to the last orbit, the penultimate peak corresponds to the transition from the last three orbits to the last two, and so on. Consequently, we gain insight into why dynamic quantities such as recoil velocity, peak luminosity, mass, and spin progressively oscillate from an initial horizontal line, culminating in a maximum peak or deepest valley. Moreover, we ascertain that the transition from inspiral to plunge introduces the most substantial enhancement effect in eccentric BBH mergers, aligning with the conclusion drawn in Ref. [82]. This behavior holds for both initial separations, \(11.3M\) and \(24.6M\), and for mass ratios up to \(q=1\). Nevertheless, in panels (b), (d), (f), and (h), certain data points deviate from the peaks or valleys, with larger deviations occurring the farther they are from the last few peaks. Several factors may contribute to these deviations: (i) Due to the limited simulation data, we are unable to obtain an exact integer value for the cycle number \(N_{\rm waves}\), resulting in deviations from integers. The last few peaks and valleys are more apparent due to the finely sampled simulation eccentricities, allowing for more accurate results. Occasionally, the worst cycle number \(N_{\rm waves}\) deviates from an integer by up to 0.3, leading to significant errors. (ii) The data obtained from the simulations are not finely resolved but rather coarse-grained. Consequently, the peaks and valleys we identify may not precisely align with their true positions but exhibit some level of deviation. (iii) As mentioned in Sec. 
II, the peak luminosities, recoil velocities, masses, and spins that we obtain are subject to errors. Simultaneously, errors arise in the phase used to calculate the cycle number \(N_{\rm waves}\). (iv) In eccentric BBH mergers, strong periastron precession occurs, causing the orbit to precess in a manner similar to the perihelion precession of Mercury [104]. This precession leads to an incomplete orbital phase of the BBH, deviating from \(2\pi\). The greater the number of orbits and the smaller the mass ratio, the more severe the deviation. This effect may be a significant contributor to the observed discrepancies, where many data points do not exactly correspond to the peaks and valleys. It is important to note that while the measurement of eccentricity may be subject to significant errors due to approximate measurement methods, these errors do not affect the positions of the peaks and valleys or the occurrence of oscillatory behavior, since the way in which the initial eccentricity is generated is continuous and physically reasonable. The presence of uncertainties in the eccentricity measurements does not alter the overall pattern observed in the data. Furthermore, it is worth mentioning that there is no analytical formula available for the peak luminosity. However, the similarity observed between the peak luminosity and the spin of the remnant can be attributed to their inherent correlation, as discussed in Ref. [105]. Moving on to the mass of the remnant \(M_{f}\), it is evident that many integer orbital cycles do not align precisely with peaks or valleys. Rather, there are specific deviations. This behavior can be likened to a phase shift, where the remnant mass is shifted in phase relative to the initial eccentricity. This phase shift arises from the specific calculation method employed to determine \(M_{f}\), which differs somewhat from the calculations for peak luminosity and spin. 
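One way to quantify such peak-and-valley structure is to locate the extrema of a sampled quantity and measure their separation in \(N_{\rm waves}\). Below is a minimal numpy sketch on a synthetic oscillation, not the actual simulation data; the function name is our own:

```python
import numpy as np

def extrema(y):
    """Indices of interior local maxima and minima of a sampled curve."""
    d = np.diff(y)
    maxima = np.nonzero((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.nonzero((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return maxima, minima

# Synthetic stand-in: one oscillation per orbital cycle, so successive
# maxima (and successive minima) should be spaced by about one cycle.
n_waves = np.linspace(0.0, 5.0, 501)
quantity = np.cos(2.0 * np.pi * n_waves)
maxima, minima = extrema(quantity)
print(np.diff(n_waves[maxima]))   # peak-to-peak spacing, close to 1 each
```

On the catalog data, `quantity` would be, e.g., the remnant mass sampled against the cycle number inferred from Eq. (11).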
Although the integer cycle points for \(M_{f}\) do not coincide with the peaks and valleys, we observe that the differences in the cycle number \(N_{\rm waves}\) between the peaks or valleys of the remnant mass \(M_{f}\) are approximately 1 when calculated. This finding highlights that the emergence of peaks and valleys in the remnant mass is a result of transitions between orbits, mirroring the behavior observed in peak luminosities and spins. This also shows that our conclusion that the oscillations come from an orbital transition is self-consistent. Before we proceed with the analysis of recoil velocity, let us first recall some formulas from Refs. [106, 86] that are used to calculate the recoil velocity, remnant mass and spin from the gravitational waveform. Although these formulas differ from the RIT using the isolated horizon algorithm, they carry the same physical meaning. The energy of gravitational wave radiation \(E_{\rm rad}(t)\) can be calculated from the Weyl scalar \(\Psi_{4}\)[101, 102]: \[E_{\rm rad}(t)=\lim_{r\to\infty}\frac{r^{2}}{16\pi}\int_{t_{0}}^{t}\;{\rm d}t ^{\prime}\oint_{S_{r}^{2}}\;{\rm d}\Omega\left|\int_{-\infty}^{t^{\prime}}{ \rm d}t^{\prime\prime}\Psi_{4}\right|^{2}. \tag{12}\] Using the orthogonality of \({}_{-2}Y_{l,m}(\theta,\phi)\) and Eq. (1), we can rewrite the radiated energy as \[E_{\rm rad}(t)=\lim_{r\to\infty}\frac{r^{2}}{16\pi}\sum_{l,m}\int_{t_{0}}^{t}\; {\rm d}t^{\prime}\left|\int_{-\infty}^{t^{\prime}}{\rm d}t^{\prime\prime} \Psi_{4}^{lm}\right|^{2}. \tag{13}\] Radiated linear momentum \({\bf P}_{\rm rad}(t)\) can be expressed as \[{\bf P}_{\rm rad}(t)=\lim_{r\to\infty}\frac{r^{2}}{16\pi}\int_{t_{0}}^{t}\;{ \rm d}t^{\prime}\oint_{S_{r}^{2}}\;{\rm d}\Omega\hat{\bf e}_{r}\left|\int_{- \infty}^{t^{\prime}}{\rm d}t^{\prime\prime}\Psi_{4}\right|^{2}, \tag{14}\] where \(\hat{\bf e}_{r}\) is the flat space unit radial vector \[\hat{\bf e}_{r}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta). 
\tag{15}\] Using the orthogonality of \({}_{-2}Y_{l,m}(\theta,\phi)\), Eq. (1) and the property of the radial unit vector \(\hat{\bf e}_{r}\), we can rewrite the radiated linear momentum as \[\begin{split} P_{+}^{\rm rad}&=\lim_{r\to\infty} \frac{r^{2}}{8\pi}\sum_{l,m}\int_{t_{0}}^{t}\;{\rm d}t^{\prime}\left[\int_{- \infty}^{t}dt^{\prime\prime}\Psi_{4}^{l,m}\right.\\ &\times\int_{-\infty}^{t}dt^{\prime\prime}\left(a_{l,m}\bar{\Psi} _{4}^{l,m+1}+b_{l,-m}\bar{\Psi}_{4}^{l-1,m+1}\right.\\ &\left.-b_{l+1,m+1}\bar{\Psi}_{4}^{l+1,m+1}\right)\bigg{]},\end{split} \tag{16}\] \[P_{z}^{\rm rad} =\lim_{r\to\infty}\frac{r^{2}}{16\pi}\sum_{l,m}\int_{t_{0}}^{t}\, \,\mathrm{d}t^{\prime}\left[\int_{-\infty}^{t}\,dt^{\prime\prime}\Psi_{4}^{l,m}\right. \tag{17}\] \[\times\int_{-\infty}^{t}dt^{\prime\prime}\left(c_{l,m}\bar{\Psi}_{ 4}^{l,m}+d_{l,m}\bar{\Psi}_{4}^{l-1,m}\right.\] \[\left.\left.+d_{l+1,m}\bar{\Psi}_{4}^{l+1,m}\right)\right],\] where \(P_{+}^{\rm rad}\) in Eq. (16) is a combination quantity introduced for convenience, which is \(P_{+}^{\rm rad}=P_{x}^{\rm rad}+\mathrm{i}P_{y}^{\rm rad}\). \(P_{x}^{\rm rad}\), \(P_{y}^{\rm rad}\) and \(P_{z}^{\rm rad}\) are the \(x\), \(y\) and \(z\) components of \(\mathbf{P}_{\rm rad}\) respectively. \(\bar{\Psi}_{4}^{l,m}\) is the conjugate complex of \(\Psi_{4}^{l,m}\). The initial time \(t_{0}\) should exclude the nonphysical radiation relaxation time when specifically calculated. The coefficients \((a_{l,m},b_{l,m},c_{l,m},d_{l,m})\) in Eqs. 
(16) and (17) are given by \[\begin{split} a_{l,m}&=\frac{\sqrt{(l-m)(l+m+1)}}{ l(l+1)}\\ b_{l,m}&=\frac{1}{2l}\sqrt{\frac{(l-2)(l+2)(l+m)( l+m-1)}{(2l-1)(2l+1)}}\\ c_{l,m}&=\frac{2m}{l(l+1)}\\ d_{l,m}&=\frac{1}{l}\sqrt{\frac{(l-2)(l+2)(l-m)(l+ m)}{(2l-1)(2l+1)}}.\end{split} \tag{18}\] Finally, the radiated angular momentum \(\mathbf{J}_{\rm rad}(t)\) is given by \[\begin{split}\mathbf{J}_{\rm rad}(t)=&-\lim_{r\to \infty}\frac{r^{2}}{16\pi}\,\mathrm{Re}\int_{t_{0}}^{t}\,\,\mathrm{d}t^{\prime} \left\{\oint_{S_{2}^{\prime}}\left(\int_{-\infty}^{t^{\prime}}\,\mathrm{d}t^{ \prime\prime}\bar{\Psi}_{4}\right)\right.\\ &\left.\times\hat{\mathbf{J}}\left(\int_{-\infty}^{t^{\prime}}\, \mathrm{d}t^{\prime\prime}\int_{-\infty}^{t^{\prime\prime}}\,\mathrm{d}t^{ \prime\prime\prime}\Psi_{4}\right)\mathrm{d}\Omega\right\},\end{split} \tag{19}\] where the angular momentum operator \(\hat{\mathbf{J}}\) for spin weight \(s=-2\) is given by \[\hat{\mathbf{J}}=\left(\mathrm{Re}\,\hat{\mathbf{J}}_{+},\mathrm{Im}\,\hat{ \mathbf{J}}_{+},\frac{\partial}{\partial\phi}\right) \tag{20}\] and \[\hat{\mathbf{J}}_{+}=\mathrm{e}^{\mathrm{i}\phi}\left(\mathrm{i}\frac{\partial }{\partial\theta}-\cot\theta\frac{\partial}{\partial\phi}+2\mathrm{i}\csc\theta \right). \tag{21}\] Again, using the orthogonality of \({}_{-2}Y_{l,m}(\theta,\phi)\), Eq. 
(1) and the property of the angular momentum operator \(\hat{\mathbf{J}}\), we can rewrite the radiated angular momentum as \[\begin{split} J_{x}^{\rm rad}&=-\lim_{r\to\infty} \frac{ir^{2}}{32\pi}\,\mathrm{Im}\left\{\sum_{l,m}\int_{t_{0}}^{t}\left[\int_{- \infty}^{t^{\prime}}\int_{-\infty}^{t^{\prime\prime}}\Psi_{4}^{l,m}dt^{\prime \prime\prime}dt^{\prime\prime}\right.\right.\\ &\left.\times\int_{-\infty}^{t^{\prime}}\left(f_{l,m}\bar{\Psi}_{ 4}^{l,m+1}+f_{l,-m}\bar{\Psi}_{4}^{l,m-1}\right)dt^{\prime\prime}\right]\,\, \mathrm{d}t^{\prime}\right\},\end{split} \tag{22}\] \[\begin{split} J_{y}^{\rm rad}&=-\lim_{r\to\infty} \frac{ir^{2}}{32\pi}\,\mathrm{Re}\left\{\sum_{l,m}\int_{t_{0}}^{t}\left[\int_{- \infty}^{t^{\prime}}\int_{-\infty}^{t^{\prime\prime}}\Psi_{4}^{l,m}dt^{\prime \prime\prime}dt^{\prime\prime}\right.\right.\\ &\left.\times\int_{-\infty}^{t^{\prime}}\left(f_{l,m}\bar{\Psi}_{ 4}^{l,m+1}-f_{l,-m}\bar{\Psi}_{4}^{l,m-1}\right)dt^{\prime\prime}\right]\,\, \mathrm{d}t^{\prime}\right\},\end{split} \tag{23}\] \[\begin{split} J_{z}^{\rm rad}&=-\lim_{r\to\infty} \frac{ir^{2}}{16\pi}\,\mathrm{Im}\left\{\sum_{l,m}m\int_{t_{0}}^{t}\left(\int_{- \infty}^{t^{\prime}}\int_{-\infty}^{t^{\prime\prime}}\Psi_{4}^{l,m}dt^{\prime \prime\prime}\right.\right.\\ &\left.\left.\times dt^{\prime\prime}\int_{-\infty}^{t^{\prime}} \bar{\Psi}_{4}^{l,m}dt^{\prime\prime}\right)\,\mathrm{d}t^{\prime}\right\}, \end{split} \tag{24}\] where the coefficients \(f_{l,m}\) in Eqs. (22) and (23) are given by \[\begin{split} f_{l,m}&:=\sqrt{(l-m)(l+m+1)}\\ &=\sqrt{l(l+1)-m(m+1)}.\end{split} \tag{25}\] The recoil velocity \(V_{f}\) can then be calculated from the radiated linear momentum \[V_{f}=-\frac{\mathbf{P}_{\rm rad}}{M_{f}}, \tag{26}\] where \(M_{f}\) can be calculated from the energy balance: \[M_{f}=M_{\rm ADM}-E_{\rm rad}. 
\tag{27}\] For a nonspinning BBH, by symmetry, the spin of the final remnant points along the \(z\) direction and can be calculated by \[\alpha_{f}=\frac{L-J_{z}^{\rm rad}}{M_{f}^{2}}, \tag{28}\] where \(L\) represents the initial orbital angular momentum. In panel (d) of FIG. 2 and panel (d) of FIG. 3, we observe that prior to the final peak or valley, the change in \(M_{f}\) is negligible. Therefore, the calculation of dynamic quantities such as \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\) is based mainly on \(\mathbf{P}_{\rm rad}\), \(E_{\rm rad}\), and \(J_{z}^{\rm rad}\), which are in turn computed from Eqs. (13), (16), and (24). As we previously mentioned, in the case of a nonspinning BBH, the recoil occurs within the orbital plane, while angular momentum radiation takes place along the \(z\)-direction. Eqs. (13) and (24) demonstrate a similarity in the computation of \(E_{\rm rad}\) and \(J_{z}^{\rm rad}\), except for an additional time integral in \(J_{z}^{\rm rad}\). This additional integral does not affect the physical regularity of the variation in \(J_{z}^{\rm rad}\), as reflected in the similar oscillation patterns of \(E_{\rm rad}\) and \(J_{z}^{\rm rad}\) shown in FIGs. 2 and 3. However, it introduces a phase shift in \(M_{f}\) relative to the initial eccentricity, which could explain the deviation of the integer cycles \(N_{\rm waves}\) from the peaks and valleys in FIG. 4. Now, let us return to the calculation of the radiated linear momentum (recoil velocity) in Eq. (16). If we disregard the last two terms, \(b_{l,-m}\bar{\Psi}_{4}^{l-1,m+1}\) and \(b_{l+1,m+1}\bar{\Psi}_{4}^{l+1,m+1}\), the remaining integral closely resembles Eq. (13). The sum over \(m\) (or over \(m+1\)) and the coefficient \(a_{l,m}\) in Eq. (16) do not affect the regularity of the physics. 
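Eqs. (13), (26), (27), and (28) combine into a short numerical pipeline. The sketch below is illustrative Python; the helper names are our own, and \(\mathbf{P}_{\rm rad}\) and \(J_{z}^{\rm rad}\) would be accumulated from Eqs. (16), (17), and (24) in the same cumulative-integration style:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal time integral, starting from zero."""
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

def radiated_energy(t, psi4_modes, r=1.0):
    """E_rad of Eq. (13): (r^2 / 16 pi) * sum_lm int dt' | int dt'' Psi4^lm |^2.
    `psi4_modes` maps (l, m) -> complex mode array sampled on `t`."""
    integrand = np.zeros(len(t))
    for mode in psi4_modes.values():
        integrand = integrand + np.abs(cumtrapz(mode, t)) ** 2
    return r ** 2 / (16.0 * np.pi) * cumtrapz(integrand, t)[-1]

def remnant_state(m_adm, L, e_rad, p_rad, jz_rad):
    """Final mass (Eq. 27), recoil speed (magnitude of Eq. 26), spin (Eq. 28)."""
    m_f = m_adm - e_rad
    v_f = np.linalg.norm(p_rad) / m_f
    alpha_f = (L - jz_rad) / m_f ** 2
    return m_f, v_f, alpha_f

# Cross-check on a constant toy mode Psi4 = 1 on [0, 1]:
# int Psi4 = t, so E_rad = (1/16pi) * int t^2 dt = 1/(48 pi).
t = np.linspace(0.0, 1.0, 2001)
e = radiated_energy(t, {(2, 2): np.ones_like(t, dtype=complex)})
print(abs(e - 1.0 / (48.0 * np.pi)) < 1e-6)
```

The constant-mode cross-check is of course unphysical; it only verifies that the double time integration reproduces the analytic value.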
However, including \(b_{l,-m}\bar{\Psi}_{4}^{l-1,m+1}\) and \(b_{l+1,m+1}\bar{\Psi}_{4}^{l+1,m+1}\) introduces significant complexity since it involves the superposition of different harmonic modes, resulting in messy and irregular recoil velocities, as depicted in panel (b) of FIGs. 2 and 3. The irregularities in the recoil velocities can be characterized as follows. (i) The distribution of peaks and valleys in the recoil velocities is irregular, without a specific location such as an integer cycle number \(N_{\rm waves}\) of orbital transitions, and they lack uniform sharpness. (ii) The difference in cycle number \(\Delta N_{\rm waves}\) between the peaks and valleys is less than 1 and irregular, varying between 0.2 and 0.7. (iii) The values of the peaks and valleys exhibit irregularity, occasionally causing sudden rises or falls (as seen in panel (b) of FIG. 2), and sometimes even surpassing the preceding peak (as seen in panel (b) of FIG. 3). An example is the bimodal structure depicted in panel (b) of FIG. 3: at first glance, this structure may appear anomalous, but once the irregular nature of the recoil and the coarse graining resulting from the limited simulation data are understood, it becomes apparent why such a structure arises. These formulas also explain the asymptotic behavior of the dynamic quantities at high eccentricities and in the head-on limit. In the scenario after the last valley, \(M_{f}\) gradually increases toward a specific value. Consequently, the recoil velocity \(V_{f}\) and the spin of the remnant \(\alpha_{f}\) in Eqs. (26) and (28) exhibit a rapid decrease, as observed in panels (b) and (e) of FIG. 2 and FIG. 3. On the other hand, the peak luminosity demonstrates a slow decrease, similar to the behavior of \(M_{f}\), as depicted in panels (c) and (d) of FIG. 2 and FIG. 3. It is noteworthy that, as shown in Ref. 
[86], a regular functional relationship between recoil velocity and infall direction was acquired, indicating the presence of an intrinsic correlation between these two quantities. However, due to the absence of trajectory information in the RIT catalog, our investigation of the relationship among recoil velocity, eccentricity, and infall direction remains incomplete. We recognize the need for further research in this area to address this limitation. In summary, the dynamical quantities, including peak luminosity, recoil velocity, mass, and spin of the remnant, display distinct behaviors in their oscillations. Oscillations of peak luminosity, remnant mass, and spin exhibit a more regular pattern, whereas those of recoil velocity appear messy and irregular. This phenomenon can be attributed to their distinct physical origins, specifically the differences in their calculation methods. Furthermore, we delve into the physical origin of these oscillations, which can be attributed to orbital transitions. As evident from FIG. 2 and FIG. 3, the position of the peaks and valleys in the oscillations corresponds to the appearance of orbital transitions. In other words, these transitions introduce excitations that amplify the dynamic quantities, including peak luminosity, recoil velocity, and the mass and spin of the remnant. This effect manifests itself earlier for small initial coordinate separations, later for larger separations, and typically within less than 10 orbital cycles. Each peak and valley here has the same meaning as the transition from inspiral to plunge. They are close to extreme relativistic situations and only appear at close coordinate separations. So, it is a strong field effect that necessitates NR and cannot be captured by analytical PN methods. Moreover, we observe that as the eccentricity gradually decreases, the number of orbital cycles increases, resulting in a gradual reduction of the oscillations. 
This observation provides an alternative perspective, highlighting how orbit averaging can mitigate the impact of eccentricity. However, this effect is only prominent in strong-field regimes, and attempts to study it using analytical PN methods can only capture an average in gravitational wave background over a few gravitational wavelengths [9, 10]. Therefore, the complete manifestation of the orbital averaging effect for eccentricity requires the use of NR. The influence of initial eccentricity on the enhancement or weakening of dynamic quantities such as peak luminosity, recoil velocity, mass, and spin of the remnant has significant astrophysical implications. From a PN perspective, Ref. [81] suggests a proportional relationship between recoil velocity (\(V_{f}\)) and low eccentricity (\(e_{0}\)), i.e., \(V_{f}\propto(1+e_{0})\). However, in the strong-field regime of NR, no obvious proportional relationship between recoil and initial eccentricity has been observed in FIG. 2 and FIG. 3. Previous studies, such as Refs. [86] and [85], have quantitatively analyzed the enhancement effect induced by eccentricity. To quantitatively analyze the relative increment percentage of peak luminosity (\(L_{\rm peak}\)), recoil velocity (\(V_{f}\)), mass (\(M_{f}\)), and spin (\(\alpha_{f}\)) of the remnant relative to the corresponding circular orbit, we express it as: \[\frac{\Delta A}{A_{c}}=\frac{A_{e}-A_{c}}{A_{c}}\times 100\%, \tag{29}\] where \(A\) denotes \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), or \(\alpha_{f}\), and the subscripts \(e\) and \(c\) represent the cases of eccentric and corresponding circular orbits, respectively. For the initial coordinate separation of \(11.3M\), the first set of simulations with zero eccentricity serves as \(A_{c}\). However, for the initial coordinate separation of \(24.6M\), despite an initial eccentricity of \(0.19\) in the first group, the near-horizontal characteristic observed in FIG. 
3 makes it comparable to a circular orbit, allowing us to approximate it as \(A_{c}\). It should be noted that we exclude the recoil for the mass ratio \(q=1\), as well as the cases with mass ratios \(q=0.06667\) and \(q=0.03125\) at the initial coordinate separation of \(24.6M\), because their smallest initial eccentricity (\(e_{0}=0.51\)) is too large to approximate a circular orbit. In FIG. 5, we present the percentage increases of \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\) relative to the corresponding circular orbit for the initial coordinate separations of \(11.3M\) and \(24.6M\).

Figure 5: Increment percentages of \(V_{f}\), \(L_{\rm peak}\), \(M_{f}\), and \(\alpha_{f}\) relative to the corresponding circular orbit at initial coordinate separations of \(11.3M\) and \(24.6M\) for the nonspinning configuration with different mass ratios. The upper four panels correspond to the initial coordinate separation of \(11.3M\), and the lower four panels to \(24.6M\).

Notable observations include: (i) The relative increase of the dynamic quantities is influenced by the initial coordinate separation and the mass ratio. Here we focus solely on the peaks and valleys and exclude discussions of high eccentricity and the head-on collision limit. We find that for the recoil velocity \(V_{f}\), at the initial coordinate separation of \(11.3M\), the maximum relative increase can reach 69% for \(q=0.75\), while at \(24.6M\), the maximum relative increase is 38% for \(q=0.1667\). This is a remarkably significant increase compared to circular orbits. As for the peak luminosity \(L_{\rm peak}\), at the initial coordinate separation of \(11.3M\), the maximum relative increment can reach 20% for \(q=0.75\), and at \(24.6M\), the maximum relative increment is 42% for \(q=0.1667\) (it is worth noting that the \(q=0.1667\) simulation may have errors, given its deviation from simulations with other mass ratios. 
In our previous study [107], we discovered that the waveforms associated with a mass ratio of \(q=0.1667\) exhibit abnormal behavior and peculiar deviations from the expected patterns.). Regarding mass \(M_{f}\), at the initial coordinate separation of \(11.3M\), the minimum relative increment can reach -0.28% for \(q=1\), and at \(24.6M\), the minimum relative increment is -0.5% for \(q=1\). Lastly, for spin \(\alpha_{f}\), at the initial coordinate separation of \(11.3M\), the maximum relative increase can reach 3.1% for \(q=0.25\), while at \(24.6M\), the maximum relative increase is 6.9% for \(q=0.1429\). (ii) In the case of regular oscillations (\(L_{\rm peak}\), \(M_{f}\), \(\alpha_{f}\)), the last orbital transition from orbit to plunge introduces the most significant relative increment or decrement, leading to a substantial change compared to the penultimate peak or valley. This observation is evident in panels (b), (c), (d), (f), (g), and (h) of FIG. 5. For further details and additional features, which are not discussed comprehensively here, please refer to FIG. 5.

#### III.1.4 Summary

In conclusion, in Section III.1, we have provided a comprehensive analysis of the relationship between various dynamic quantities, including merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants, and the initial eccentricity \(e_{0}\) for different initial coordinate separations \(11.3M\) and \(24.6M\). Our findings reveal intriguing oscillatory behaviors, which become evident when the numerical simulation data points are sufficiently dense. In Sec. III.1.1 and Sec. III.1.2, we objectively described the observed phenomenology and oscillations without delving into their physical origins. However, in Sec. III.1.3, we embarked on exploring the underlying causes of these oscillations. 
Using the phase of gravitational waves, we calculated the orbital cycle number \(N_{\rm waves}\) and found a remarkable correlation between the peaks or valleys of the dynamic quantities and orbital transitions. Subsequently, we examined the oscillatory behavior exhibited by the different dynamic quantities through an analysis of the formulas used to calculate them from the gravitational waveform. Our analysis led us to conclude that the distinct oscillation patterns observed in the various physical quantities arise from their different calculation methods. Finally, to address the astrophysical implications of our findings, we computed the percentage increment of each dynamic quantity in eccentric orbits relative to the corresponding circular orbits. This analysis provides valuable insights into the relative enhancements or weakenings of these quantities associated with eccentricity. In general, our study sheds light on the intricate relationship between initial eccentricity and dynamic quantities, revealing oscillatory phenomena and providing a deeper understanding of their physical origins. The calculated percentage increments further contribute to our understanding of the astrophysical implications of eccentric orbits compared to circular orbits.

### Spin alignment

#### III.2.1 Analysis

The analysis of spin-aligned eccentric BBH mergers follows a framework similar to the previous nonspinning case. However, the inclusion of spin introduces additional considerations. Specifically, we need to account for the influence of spin on the merger dynamics. The hangup effect, characterized by spin alignment or anti-alignment with the orbital angular momentum, can either slow down or accelerate the BBH merger compared to the nonspinning scenario [108; 109; 110; 111]. 
This effect fundamentally alters the relationship between the dynamic quantities of the BBH merger, including the merger time \(T_{\rm merger}\), the peak luminosity \(L_{\rm peak}\), the recoil velocity \(V_{f}\), the mass \(M_{f}\), and the spin \(\alpha_{f}\), with respect to the initial eccentricity \(e_{0}\), relative to the nonspinning case. TABLE 1 provides the parameters for eccentric BBH simulations with spin-aligned or anti-aligned configurations (collectively referred to as spin-aligned for simplicity) from RIT [80]. It is important to note that the minimum value \(e_{0,min}\) of the initial eccentricity differs across the simulations, and the maximum value of the initial eccentricity is set to 0.9999, approaching the head-on collision limit. To facilitate representation and analysis, each simulation configuration is assigned a unique ID, as indicated in the first column of TABLE 1. FIG. 6 illustrates the dynamic quantities (merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\)) as functions of the initial eccentricity \(e_{0}\) for the spin-aligned configurations at an initial coordinate separation of \(24.6M\). Additionally, we mark the position where \(N_{\rm waves}\) is approximately equal to one orbit with a red "x" in FIG. 6. Notably, we now consider different effective spin configurations, which introduce variations compared to the nonspinning BBH case. 
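For reference, the \(\chi_{\rm eff}\) column of TABLE 1 is the mass-weighted effective spin. Under the convention \(q=m_{1}/m_{2}\leq 1\) (so the aligned or anti-aligned spin \(\chi_{2z}\) sits on the heavier body), the tabulated values can be reproduced in a few lines; this is an illustrative check, and the function name is our own:

```python
def chi_eff(q, chi1z, chi2z):
    """Mass-weighted effective spin (m1*chi1z + m2*chi2z) / (m1 + m2),
    with q = m1/m2 <= 1 so that m2 is the heavier body."""
    m1, m2 = q / (1.0 + q), 1.0 / (1.0 + q)
    return m1 * chi1z + m2 * chi2z

# Reproduce the chi_eff column for a few configurations of TABLE 1.
print(round(chi_eff(0.25, 0.0, -0.8), 2))    # A7 -> -0.64
print(round(chi_eff(0.3333, 0.0, -0.8), 2))  # A8 -> -0.6
print(round(chi_eff(0.5, 0.0, -0.8), 2))     # A9 -> -0.53
```

For the equal-mass rows (A1 to A6) the weighting is trivially one half per body, e.g. A5 gives \(0.5\times 0.8=0.4\).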
Incorporating spin into the analysis of eccentric BBH mergers enhances our understanding of the complex interaction between spin dynamics and initial eccentricity. The inclusion of different effective spin configurations further enriches the investigation of the orbital hangup effect, highlighting nuances compared to the nonspinning case.

\begin{table} \begin{tabular}{|r|r|r|r|r|r|r|} \hline configuration ID & \(q\) & \(\chi_{1z}\) & \(\chi_{2z}\) & \(\chi_{\rm eff}\) & \(e_{0,min}\) & set number \\ \hline A1 & 1 & -0.5 & -0.5 & -0.5 & 0.19 & 23 \\ \hline A2 & 1 & -0.8 & -0.8 & -0.8 & 0.19 & 23 \\ \hline A3 & 1 & 0.5 & 0.5 & 0.5 & 0.4375 & 21 \\ \hline A4 & 1 & 0.8 & 0.8 & 0.8 & 0.4375 & 21 \\ \hline A5 & 1 & 0 & 0.8 & 0.4 & 0.4375 & 14 \\ \hline A6 & 1 & 0 & -0.8 & -0.4 & 0.19 & 14 \\ \hline A7 & 0.25 & 0 & -0.8 & -0.64 & 0.4375 & 20 \\ \hline A8 & 0.3333 & 0 & -0.8 & -0.6 & 0.36 & 21 \\ \hline A9 & 0.5 & 0 & -0.8 & -0.53 & 0.36 & 16 \\ \hline \end{tabular} \end{table}
Table 1: Parameters for eccentric BBH simulations with spin-aligned configurations, where \(e_{0,min}\) represents the minimum value of the initial eccentricity in the simulation series.

Figure 6: Variations of dynamical quantities of the merger time \(T_{\rm merger}\) (panel (a)), peak luminosity \(L_{\rm peak}\) (panel (c)), recoil velocity \(V_{f}\) (panel (b)), mass \(M_{f}\) (panel (d)), and spin \(\alpha_{f}\) (panel (e)) of the merger remnants as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(24.6M\) for the spin-aligned configurations with different mass ratios. We mark the position where \(N_{\rm waves}\) is approximately equal to one orbit with a red "x". The dashed line in panel (e) represents the Schwarzschild black hole, whose corresponding spin is 0.

In panel (a) of FIG. 6, the relationship between merger time and initial eccentricity exhibits similarities to the overall behavior observed in the previous nonspinning BBH case, as depicted in FIG. 
2 and FIG. 3. However, the hangup effect significantly alters the location of the critical point in the merger time. Specifically, for positive effective spin values, the corresponding initial eccentricity at the critical point is higher, approximately 0.65. On the contrary, for negative effective spin values, the critical point occurs at a lower initial eccentricity, around 0.45. This observation underscores the profound impact of the hangup effect on either accelerating or decelerating the BBH merger process. In particular, a greater effective spin leads to longer merger times, indicating a stronger influence of spin on the dynamics of the system. In panel (b) of FIG. 6, the overall behavior of the recoil velocity aligns with the trends observed in FIG. 2 and FIG. 3. However, because of limited data points and a scarcity of simulations with low initial eccentricity, the oscillatory behavior is not clearly visible, and the final peak is barely discernible. When the mass ratio \(q=1\) and the spins are equal, the BBH system adopts a perfectly symmetric configuration, resulting in zero radiated linear momentum and, consequently, a recoil velocity of 0. Comparing configurations A1, A2, A3, and A4, we observe that the influence of different mass ratios on the recoil velocity persists, similar to the nonspinning case. However, the effect of aligned spin on the recoil velocity is twice as significant as the effect of an asymmetric mass ratio. Previously, the maximum recoil value introduced by an asymmetric mass ratio of \(q=0.3333\) was 226 km/s, but configuration A9 raises this value to 387 km/s. As the spin configuration becomes more asymmetric, the resulting recoil velocity increases. Simultaneously, the hangup effect causes a shift in the initial eccentricity corresponding to the recoil peak, reflecting its role in accelerating or decelerating the BBH dynamics. Notably, the presence of spin amplifies the recoil velocity for circular orbital cases. 
Consequently, the incremental percentage of recoil is reduced in the presence of spin compared to the previous nonspinning scenario. In panel (c) of FIG. 6, the overall behavior of the peak luminosity shows similarities to FIGs. 2 and 3. Some configurations, such as A3 and A4, display slight oscillations, consistent with the previous observations. However, it is important to note that these oscillations are not fully resolved, as the available data points are limited and provide only a coarse-grained picture. The influence of spin on the peak luminosity is significantly greater than the effect of eccentricity. For the simulation sequences in RIT, in the absence of spin and eccentricity, the maximum value of peak luminosity can reach \(5.1\times 10^{56}\) ergs/s. However, with spin and no eccentricity, the maximum value of peak luminosity can reach \(7.0\times 10^{56}\) ergs/s. When both eccentricity and spin are present, as in configuration A4, the maximum value of peak luminosity can reach \(9.3\times 10^{56}\) ergs/s. In panel (c), the impact of the hangup effect on the peak luminosity is also evident, which will not be detailed here. The point where the orbital cycle number \(N_{\rm waves}\approx 1\) is located approximately near the peak, similar to the situation without spin. However, due to the limited data points and inherent uncertainties, this value should be regarded as a reference rather than an exact measurement. The analysis of panel (d) in FIG. 6 follows a similar pattern to the previous panels (b) and (c), and therefore, we will refrain from repeating it here. In panel (e) of FIG. 6, the overall behavior of the spin of the remnant exhibits similarities to FIGs. 2 and 3. However, the presence of spin introduces a new phenomenon when eccentricity is present, namely a final spin transition from positive to negative, passing through the Schwarzschild black hole during the process. 
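This spin transition can be illustrated with a minimal toy model: as the initial eccentricity grows, the tangential momentum, and hence the initial orbital angular momentum, shrinks, so for anti-aligned spins the remnant-spin estimate can cross zero. The sketch below neglects the radiated angular momentum entirely and uses hypothetical parameter values, so it reproduces only the qualitative sign change, not the specific crossing eccentricities reported from the simulations.

```python
import math

def final_spin_toy(e0, pt_qc=0.0504, D=24.6, chi1z=-0.5, chi2z=-0.5, Mf=0.95):
    # Toy remnant-spin estimate: alpha_f ~ L / Mf**2 + chi1z + chi2z,
    # neglecting radiated angular momentum (all parameter values are
    # illustrative assumptions, not RIT simulation inputs).
    eps = 1.0 - math.sqrt(1.0 - e0)   # inverted from e0 = 2*eps - eps**2
    pt = pt_qc * (1.0 - eps)          # reduced tangential momentum
    L = pt * D                        # initial orbital angular momentum
    return L / Mf**2 + chi1z + chi2z

# For anti-aligned spins the estimate is positive on quasi-circular
# orbits and negative near the head-on limit, crossing zero in between.
print(final_spin_toy(0.0) > 0, final_spin_toy(0.99) < 0)  # prints: True True
```

The crossing point in this toy model is far from the simulated values because the radiated angular momentum, which itself depends strongly on eccentricity, is ignored; it only conveys why a zero crossing must exist for sufficiently anti-aligned spins.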
In eccentric BBH simulations, the increase in initial eccentricity is equivalent to the decrease in tangential linear momentum, as can be observed from the relationships \(p_{t}=p_{t,qe}(1-\epsilon)\) and \(e=2\epsilon-\epsilon^{2}\). The initial angular momentum \(L\) of the BBH can be expressed as \[L=p_{t}D. \tag{30}\] In the spin-aligned configuration, the radiated angular momentum \(J_{z}^{\rm rad}(e_{0},q,\chi_{1z},\chi_{2z})\) is in the \(z\) direction and depends on the mass ratio \(q\), the initial eccentricity \(e_{0}\), and the spins \(\chi_{1z}\) and \(\chi_{2z}\). Referring to previous work [76; 82], neglecting effects such as high-order spin-orbit coupling and spin-spin coupling, and assuming that the spin of each black hole remains constant during the evolution of the BBH, the approximate expression for the final spin parameter \(\alpha_{f}\) is given by \[\alpha_{f}=\frac{L(e_{0},q)-J_{z}^{\rm rad}(e_{0},q,\chi_{1z},\chi_{2z})}{M_{f}^{2}(e_{0},q,\chi_{1z},\chi_{2z})}+\chi_{1z}+\chi_{2z}. \tag{31}\] As previously analyzed, when the initial coordinate separation is fixed, both \(L\) and \(M_{f}\) are functions of the initial eccentricity \(e_{0}\) and mass ratio \(q\), with the latter also dependent on the spins \(\chi_{1z}\) and \(\chi_{2z}\). If the spins \(\chi_{1z}\) and \(\chi_{2z}\) are aligned with the orbital angular momentum, or their sum is greater than 0, then regardless of the adjustment of the initial eccentricity \(e_{0}\), the final spin direction remains positive (in accordance with the direction of the orbital angular momentum). On the other hand, if the spins \(\chi_{1z}\) and \(\chi_{2z}\) are anti-aligned with the orbital angular momentum, or their sum is negative with an absolute value greater than the first term on the right side of Eq. 
(31), it is possible to finely adjust the initial eccentricity such that the final spin \(\alpha_{f}\) becomes 0, resulting in a Schwarzschild black hole. This relationship can be qualitatively expressed as: \[0=\frac{L(e_{0S},q)-J_{z}^{\rm rad}(e_{0S},q,\chi_{1z},\chi_{2z})}{M_{f}^{2}(e_{0S},q,\chi_{1z},\chi_{2z})}+\chi_{1z}+\chi_{2z}. \tag{32}\] Accurately determining the initial eccentricity \(e_{0S}\) that leads to a Schwarzschild remnant is challenging with analytical modeling. This difficulty arises from the need to account for the effects of eccentricity as well as the influence of the hangup effect, which makes the problem highly complex. From panel (e) in FIG. 6, it can be observed that the initial eccentricities that eventually result in a Schwarzschild black hole all lie in the plunge stage rather than in the inspiral stage, indicating high eccentricity and complex strong-field dynamics. The corresponding initial eccentricity values that form a Schwarzschild black hole for configurations A1, A2, A6, A7, A8, and A9 are 0.96, 0.91, 0.96, 0.6156, 0.7975, and 0.91, respectively. These eccentricity values do not imply that the final black hole spin is exactly 0, but rather that it is as close as possible to 0. These initial eccentricity values provide insights into the influence of spin and mass ratios on the BBH dynamics. Importantly, it is worth noting that the combined effect of eccentricity and spin does not cause the final black hole's spin to exceed that of an extreme Kerr black hole, whose spin is 1, thus confirming the validity of the cosmic censorship hypothesis [112, 113].

#### iii.2.2 Summary

In summary, in Sec. III.2, we presented a comprehensive analysis of various eccentric spin-alignment configurations in the BBH merger simulations. 
We investigated the relationship between key dynamic quantities, including the merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants, and the initial eccentricity \(e_{0}\) for an initial coordinate separation of \(24.6M\). Our findings demonstrate that the overall behavior of these dynamic quantities follows a similar pattern to the nonspinning case. They start along a horizontal line, gradually exhibit oscillations towards the final peak or valley (although due to limited data points, we observed only a portion of the oscillation), and eventually converge to a certain value as they approach the head-on collision limit. This universal behavior reveals the similar effects of eccentricity on the dynamics, regardless of spin alignment or the absence of spin. In both the nonspinning and spin-aligned scenarios, the percentage increment of these dynamic quantities due to eccentricity, relative to the circular orbit case, remains approximately the same. This observation underscores the universality of eccentricity's influence on BBH dynamics. The hangup effect plays a crucial role in altering the critical points of the merger time \(T_{\rm merger}\), modifying the baseline value of the recoil velocity and the corresponding eccentricity at the final peak for \(V_{f}\), and introducing variations in the peak luminosity \(L_{\rm peak}\) and remnant mass \(M_{f}\) compared to the case of zero eccentricity. Additionally, it can give rise to a critical eccentricity that results in a transition across the Schwarzschild black hole for \(\alpha_{f}\). These effects, characterized by the alterations in the dynamic quantities of BBHs under the influence of spin and eccentricity, have profound astrophysical implications.

### Spin precession

#### iii.3.1 Analysis

When the spin angular momentum and orbital angular momentum directions are misaligned, orbital precession can occur. 
This precession effect introduces intricate modulations on waveforms and dynamics, including amplitude and phase modulation of the waveform and orbital plane precession [114, 115]. The situation becomes even more complex when eccentricity is introduced [107]. In this scenario, the waveform undergoes dual modulation. As discussed in Sec. III.1, this impact on the waveform is equivalent to the impact on dynamic quantities such as \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\). Furthermore, precession affects the merger time \(T_{\rm merger}\). In TABLE 2, we provide the parameters of the eccentric BBH simulations used for spin precession. In FIG. 7, we present the dynamic quantities of the merger remnants, including merger time \(T_{\rm merger}\), peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\), as functions of the initial eccentricity \(e_{0}\) for the spin-precessing configuration at an initial coordinate separation of \(24.6M\). In FIG. 7, we do not mark the position where \(N_{\rm waves}\) is approximately one orbit due to the limited number of data points. In such cases, the cycle number \(N_{\rm waves}\) deviates significantly from an integer value and lacks a reference value. To facilitate comparison with the effective spin previously studied, we introduce the effective precession spin parameter \(\chi_{p}\) in an attempt to quantitatively describe the impact of precession. Due to the vast parameter space, RIT's simulations only cover eccentric precession configurations with a mass ratio of \(q=1\) and some special spin configurations. In panel (a) of FIG. 7, we observe that the variation of merger time with initial eccentricity exhibits a similar trend to the overall behavior of the nonspinning BBH case in FIG. 2 and FIG. 3, as well as the spin-aligned BBH case in FIG. 6. Notably, there are no apparent differences among the effects of different spin precession configurations. 
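The effective precession spin parameter \(\chi_{p}\) mentioned above can be computed from the in-plane spin components. The sketch below uses the common Schmidt et al. single-parameter definition, which is an assumption on our part since RIT's exact convention may differ, although it does reproduce the \(\chi_{p}\) values listed for the equal-mass configurations in TABLE 2.

```python
import math

def chi_p(q, chi1, chi2):
    # Effective precession spin, Schmidt et al. convention (assumed here).
    # q = m2/m1 <= 1; chi1, chi2 are (x, y, z) dimensionless spin vectors.
    chi1_perp = math.hypot(chi1[0], chi1[1])
    chi2_perp = math.hypot(chi2[0], chi2[1])
    return max(chi1_perp, q * (4.0 * q + 3.0) / (4.0 + 3.0 * q) * chi2_perp)

# Configuration P1 from TABLE 2: both in-plane magnitudes are 0.6062.
print(round(chi_p(1.0, (0.0, -0.6062, 0.35), (0.0, 0.6062, 0.35)), 4))  # 0.6062
```

For equal masses the mass-ratio factor reduces to 1, so \(\chi_{p}\) is simply the larger of the two in-plane spin magnitudes, consistent with the table entries.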
The impact is not as pronounced as the changes induced by the hangup effect observed previously. The critical turning point of \(T_{\rm merger}\) aligns closely with the nonspinning case, occurring at approximately 0.5. Moreover, the effective precession spin parameter exhibits comparable values in all configurations. However, due to the limited number of data points, it is challenging to discern any significant correlations. In panel (b) of FIG. 7, we observe that the variation of the recoil velocity with initial eccentricity follows a trend similar to the overall behavior of the nonspinning BBH case in FIG. 2 and FIG. 3, as well as the spin-aligned BBH case in FIG. 6. However, the oscillations in panel (b) appear more chaotic compared to both the nonspinning and spin-aligned cases.

\begin{table} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|} \hline configuration ID & \(q\) & \(\chi_{1x}\) & \(\chi_{1y}\) & \(\chi_{1z}\) & \(\chi_{2x}\) & \(\chi_{2y}\) & \(\chi_{2z}\) & \(\chi_{p}\) & \(e_{0,min}\) & set number \\ \hline P1 & 1 & 0 & -0.6062 & 0.35 & 0 & 0.6062 & 0.35 & 0.6062 & 0.51 & 7 \\ \hline P2 & 1 & 0 & -0.7 & 0 & 0 & 0.7 & 0 & 0.7 & 0.51 & 7 \\ \hline P3 & 1 & 0.5 & 0 & 0 & 0.5 & 0 & 0 & 0.5 & 0.5775 & 5 \\ \hline P4 & 1 & 0.6 & 0 & 0 & 0.6 & 0 & 0 & 0.6 & 0.5775 & 5 \\ \hline P5 & 1 & 0.7 & 0 & 0 & 0.7 & 0 & 0 & 0.7 & 0.19 & 15 \\ \hline P6 & 1 & 0.6062 & 0.35 & 0 & 0.7 & 0 & 0 & 0.7 & 0.51 & 11 \\ \hline P7 & 1 & 0.35 & 0.6062 & 0 & 0.7 & 0 & 0 & 0.7 & 0.51 & 11 \\ \hline P8 & 1 & 0 & 0.7 & 0 & 0.7 & 0 & 0 & 0.7 & 0.51 & 11 \\ \hline P9 & 1 & -0.35 & 0.6062 & 0 & 0.7 & 0 & 0 & 0.7 & 0.51 & 11 \\ \hline P10 & 1 & -0.6062 & 0.35 & 0 & 0.7 & 0 & 0 & 0.7 & 0.51 & 11 \\ \hline P11 & 1 & -0.7 & 0 & 0 & 0.7 & 0 & 0 & 0.7 & 0.19 & 15 \\ \hline \end{tabular} \end{table} Table 2: Parameters for eccentric BBH simulations with spin precession configurations, where \(e_{0,min}\) represents the minimum value of the initial eccentricity in the simulation series.

Figure 7: Variations of dynamical quantities of the merger time \(T_{\rm merger}\) (panel (a)), peak luminosity \(L_{\rm peak}\) (panel (c)), recoil velocity \(V_{f}\) (panel (b)), mass \(M_{f}\) (panel (d)), and spin \(\alpha_{f}\) (panel (e)) of the merger remnants as a function of the initial eccentricity \(e_{0}\) at the initial coordinate separation of \(24.6M\) for spin precession configurations with mass ratio \(q=1\).

It is worth noting that certain configurations, such as P4 and P5, exhibit a recoil velocity of 0 due to symmetry in mass ratio and spin. First, we observe that the magnitude of the recoil velocity is approximately an order of magnitude larger than in the previous nonspinning and spin-aligned cases. This phenomenon can be attributed to the increased asymmetry exhibited by the precession configurations in comparison to the spin-aligned and nonspinning cases, leading to higher recoil velocities. Among the configurations, P1 reaches a maximum recoil velocity of 3653.64 km/s at an eccentricity of 0.64, while the smallest, such as P6, reaches a maximum recoil velocity of 674.81 km/s at an eccentricity of 0.5775. Second, the initial eccentricities at which the maximum recoil values occur are not consistent across configurations, contributing to the visual complexity in panel (b). As discussed in Sec. III.1, we already understand the origin of these chaotic oscillations in the recoil velocity. The messy appearance in panel (b) is a combined effect of eccentricity and spin precession, with spin playing a more dominant role in the observed behavior than eccentricity. Furthermore, we can observe from P11 that the maximum recoil caused by eccentricity is 722 km/s larger than the recoil observed in the corresponding circular orbit. This difference corresponds to a maximum percentage increase of 25.5%, which is consistent with the findings of the previous nonspinning and spin-aligned cases. 
This quantitative concept holds significant astrophysical significance and provides valuable insights into the dynamics of eccentric BBH systems. In panels (c), (d), and (e) of FIG. 7, we observe that the variations of the peak luminosity \(L_{\text{peak}}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants with initial eccentricity follow a trend similar to the overall behavior observed in the nonspinning BBH case in FIG. 2 and FIG. 3, as well as the spin-aligned BBH case in FIG. 6. From configurations P5 and P11, we can see that in the presence of spin precession, the incremental percentages of the dynamic quantities \(L_{\text{peak}}\), \(M_{f}\), and \(\alpha_{f}\) relative to their values in a circular orbit are essentially consistent with the findings in the nonspinning and spin-aligned cases. These observations indicate that, regardless of the inclusion of spin, the effect of eccentricity on the dynamics of BBHs remains universal. Furthermore, these findings highlight the fact that eccentricity exerts a consistent influence on BBH dynamics, regardless of the presence or absence of spin, underscoring the universal nature of the eccentricity-induced effects and providing further insight into the behavior of eccentric BBH systems. The remaining detailed analysis parallels the previous nonspinning and spin-aligned cases, so we do not repeat it here (refer to FIG. 7).

#### iii.3.2 Summary

In summary, Sec. III.3 presents a collection of representative simulations of eccentric spin precession configurations in BBH systems. We investigate the relationship between the initial eccentricity \(e_{0}\) and several dynamic quantities of the merger remnants, namely the merger time \(T_{\text{merger}}\), peak luminosity \(L_{\text{peak}}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\), for an initial coordinate separation of \(24.6M\). 
Our analysis reveals that the overall behavior of these dynamic quantities closely resembles that observed in the previous nonspinning and spin-aligned cases. However, it is important to note that, due to limitations in the available data points, we do not observe oscillatory patterns similar to those depicted in FIGs. 2 and 3. We conduct an analysis to understand the reasons behind the intricate nature of the recoil in panel (b), as illustrated in FIG. 7, and propose that it arises from the combined effects of spin precession and eccentricity. Notably, we find that in the presence of spin precession, the percentage increment of the dynamic quantities with respect to the initial eccentricity remains consistent with that observed in both the nonspinning and spin-aligned scenarios. These findings highlight the universality of the influence of eccentricity on BBH dynamics, which has significant astrophysical implications.

## IV Conclusion and Outlook

Thanks to the extensive collection of numerical relativity simulations of eccentric orbital BBH mergers conducted by RIT, we investigated the effect of the initial eccentricity \(e_{0}\) on various dynamic quantities, including the merger time \(T_{\text{merger}}\), peak luminosity \(L_{\text{peak}}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\) of the merger remnants. Our study encompasses configurations involving no spin, spin alignment, and spin precession, as well as a wide parameter space covering mass ratios from 1/32 to 1 and initial eccentricities from 0 to 1. In the case of nonspinning BBH systems, we conducted a detailed investigation using two fixed initial coordinate separations, \(11.3M\) and \(24.6M\). 
For the \(11.3M\) separation, we make a significant discovery regarding the presence of a widespread oscillation phenomenon in the relationship between the dynamic quantities \(L_{\text{peak}}\), \(V_{f}\), \(M_{f}\), \(\alpha_{f}\), and the initial eccentricity \(e_{0}\). This observation represents the first identification of such universal oscillations in this context. Furthermore, in the case of a mass ratio of \(q=1\) and the \(24.6M\) separation, we also observe similar oscillatory behavior, leading us to conclude that this phenomenon will manifest itself in numerical simulations featuring sufficiently dense sampling of the initial eccentricity. We further analyze the role played by the mass ratio in these oscillations. To gain further insight into these oscillations, we calculate the orbital cycle number \(N_{\text{waves}}\) by examining the phase of the gravitational waves. We establish a connection between the integer values of \(N_{\text{waves}}\) and the peaks and valleys observed in the curves of the dynamic quantities. This association leads us to infer that the oscillation phenomenon arises from orbital transitions. This study presents a groundbreaking discovery of the dynamic effects arising from additional orbital transitions in eccentric BBH mergers, beyond the well-known transition from inspiral to plunge [82]. Subsequently, we analyze the formulas used to calculate \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\) from the gravitational waveform. We propose that the chaotic behavior observed in the recoil velocity \(V_{f}\) and the regular behavior observed in \(M_{f}\) and \(\alpha_{f}\) are the result of differences in the calculation formulas. To facilitate astrophysical applications, we quantitatively evaluate the percentage increment of the dynamic quantities \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\) relative to their circular orbit counterparts. 
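The percentage increments quoted throughout are straightforward to compute; a small helper is sketched below. The circular-orbit baseline in the example is back-computed from the 722 km/s excess and 25.5% increase quoted for configuration P11, not read directly from the dataset.

```python
def pct_increment(eccentric_peak, circular_value):
    # Percentage increase of a dynamic quantity at its eccentric-orbit
    # peak relative to its circular-orbit value.
    return 100.0 * (eccentric_peak - circular_value) / circular_value

# Recoil example for configuration P11: a 722 km/s excess over an
# inferred circular-orbit recoil of about 2831 km/s.
print(round(pct_increment(2831.0 + 722.0, 2831.0), 1))  # 25.5
```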
This analysis provides a useful measure of the deviations of the dynamic quantities from their circular-orbit values and the impact of eccentricity on the dynamical properties of the system. In the spin-aligned case, we observe a similarity in the overall behavior of the dynamic quantities compared to the nonspinning scenario. However, the presence of the hangup effect introduces modulations in the relationship between the initial eccentricity and the dynamical quantities, relative to the nonspinning case. In particular, we make a significant discovery in this context: when the spin angular momentum and orbital angular momentum are anti-aligned, adjusting the initial eccentricity, which is equivalent to modifying the initial tangential momentum, allows the spin of the final remnant \(\alpha_{f}\) to undergo a transition from positive to negative, passing through the Schwarzschild black hole configuration along the way. Furthermore, we discover that the percentage increments of the dynamic quantities with respect to the initial eccentricity in the spin-aligned BBH systems are similar to those observed in the nonspinning case. This finding highlights the consistency in the effects of eccentricity on the dynamics of both spin-aligned and nonspinning BBH systems. In the spin-precessing case, we also observe a general similarity in the overall behavior of the dynamic quantities compared to the nonspinning and spin-aligned cases. However, we note distinct characteristics in the recoil velocities, which exhibit larger magnitudes and more intricate curves compared to the previous two scenarios. Through a comprehensive analysis, we conclude that these complex recoil behaviors arise from the combined influence of spin precession and eccentricity. Furthermore, we find that the percentage increment of the dynamic quantities with respect to the initial eccentricity follows a pattern similar to that observed in the nonspinning and spin-aligned cases. 
These observations underscore the universality of the effect of eccentricity on the dynamics of BBH systems, regardless of the presence or absence of spin. All in all, our comprehensive analysis reveals universal behavior in the influence of eccentricity on BBH dynamics. This behavior can be described as follows: Initially, the effect of eccentricity is minimal, resulting in nearly horizontal straight-line trajectories. As eccentricity increases, the dynamic quantities, including peak luminosity \(L_{\rm peak}\), recoil velocity \(V_{f}\), mass \(M_{f}\), and spin \(\alpha_{f}\), exhibit gradual oscillations, reaching peaks or valleys at certain points. As eccentricity further increases, under high eccentricity and head-on collision limits, the dynamic quantities tend to converge towards specific values. This unified model provides a comprehensive understanding of how the initial eccentricity influences the various dynamic quantities in BBH systems of different mass ratios and spin configurations, encompassing the entire range from low to high eccentricities. However, it is essential to acknowledge the limitations of our current study. While we have made significant progress, it remains incomplete. For the initial coordinate separation of \(11.3M\), although we have a substantial number of data points, the density is still not sufficient to draw definitive conclusions. Similarly, in the case of \(24.6M\), the initial eccentricity is not small enough, and the number of data points is limited. To develop a more comprehensive understanding, it is necessary to investigate other coordinate separations and analyze the unified behavior of the influence of eccentricity on the dynamic quantities. Additionally, various factors, including errors arising from insufficient data point density, uncertainties in measured eccentricity, numerical inaccuracies, and the effects of periastron precession, need to be thoroughly addressed. 
Therefore, further research utilizing eccentric orbital numerical simulations is needed to verify these findings and address these challenges. Furthermore, the absence of trajectory information in the RIT dataset hinders our ability to fully analyze the dynamic origins of the observed oscillations. Incorporating trajectory information into future studies will be important to gain deeper insight into this phenomenon. Moreover, the cases of spin alignment and spin precession explored in this study do not cover a sufficiently wide parameter space in terms of spin, initial eccentricity, and mass ratio. The limited number of numerical simulation data points in these cases may restrict the generalizability of the results. Moving forward, as numerical relativity simulations of eccentric orbit BBH mergers continue to advance, the influence of eccentricity on the dynamics will gradually be revealed. A more practical approach for astrophysical applications would be to develop analytical models that describe the relationship between the dynamic quantities, such as \(T_{\rm merger}\), \(L_{\rm peak}\), \(V_{f}\), \(M_{f}\), and \(\alpha_{f}\), in terms of the initial eccentricity \(e_{0}\). Investigating and constructing such unified models will be the main focus of our future research endeavors.

###### Acknowledgements.

The authors are very grateful to the RIT collaboration for the numerical simulation of eccentric BBH mergers, and thank Yan-Fang Huang, Zhou-Jian Cao, Duan-Yuan Gao, Lin Zhou, Yuan-Yuan Zuo, Jun-Yi Shen, Dong-Jie Liu and Shi-Yan Tian for their helpful discussions. The computation was partially completed on the HPC Platform of Huazhong University of Science and Technology. The language was polished by ChatGPT during the revision of the draft. This work is supported by the National Key R&D Program of China (2021YFA0718504).
2310.03018
Zero Resource Code-switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages
We introduce a new zero resource code-switched speech benchmark designed to directly assess the code-switching capabilities of self-supervised speech encoders. We showcase a baseline system of language modeling on discrete units to demonstrate how the code-switching abilities of speech encoders can be assessed in a zero-resource manner. Our experiments encompass a variety of well-known speech encoders, including Wav2vec 2.0, HuBERT, XLSR, etc. We examine the impact of pre-training languages and model size on benchmark performance. Notably, though our results demonstrate that speech encoders with multilingual pre-training, exemplified by XLSR, outperform monolingual variants (Wav2vec 2.0, HuBERT) in code-switching scenarios, there is still substantial room for improvement in their code-switching linguistic abilities.
Kuan-Po Huang, Chih-Kai Yang, Yu-Kuan Fu, Ewan Dunbar, Hung-yi Lee
2023-10-04T17:58:11Z
http://arxiv.org/abs/2310.03018v3
# Zero Resource Code-Switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages

###### Abstract

We introduce a new zero resource code-switched speech benchmark designed to directly assess the code-switching capabilities of self-supervised speech encoders. We showcase a baseline system of language modeling on discrete units to demonstrate how the code-switching abilities of speech encoders can be assessed in a zero-resource manner. Our experiments encompass a variety of well-known speech encoders, including Wav2vec 2.0, HuBERT, XLSR, etc. We examine the impact of pre-training languages and model size on benchmark performance. Notably, though our results demonstrate that speech encoders with multilingual pre-training, exemplified by XLSR, outperform monolingual variants (Wav2vec 2.0, HuBERT) in code-switching scenarios, there is still substantial room for improvement in their code-switching linguistic abilities.

Kuan-Po Huang\({}^{1*}\), Chih-Kai Yang\({}^{2*}\), Yu-Kuan Fu\({}^{3}\), Ewan Dunbar\({}^{4}\), Hung-yi Lee\({}^{5}\)

\({}^{1235}\)National Taiwan University \({}^{1}\)ASUS Intelligent Cloud Services \({}^{4}\)University of Toronto

Code-switch, Multilingual, Discrete unit, Zero resource, Self-supervised

## 1 Introduction

Code-switching is a common phenomenon happening in our daily lives, especially in conversations between people from different regions or countries that have multiple official languages. In speech processing, there are also various kinds of tasks where code-switching might be involved, for example, speech recognition [1, 2], speech translation [3, 4], text-to-speech synthesis [5], etc. With the huge advantage of using heavily parameterized self-supervised speech encoders such as Wav2vec 2.0 [6], HuBERT [7], and XLSR [8, 9], many speech processing tasks are performed on the representations extracted by these speech encoders, and thus code-switching abilities become essential for their applicability to tasks involving code-switching. 
However, to the best of our knowledge, there is no existing benchmark or corpus that allows the speech community to directly evaluate the inherent code-switching abilities of these commonly used speech encoders. Hence, we propose a zero resource code-switched speech benchmark to address this issue. The advantages of directly assessing the code-switching ability of speech encoders in a zero-shot manner are twofold: first, no additional downstream-model parameters are needed; second, no paired training data or labels are required. This not only relieves the burden of training multiple downstream models when there are many downstream tasks but also allows us to utilize unlabeled speech data as the training data during the assessment process, instead of having to collect paired training data, which is extremely difficult in the code-switching scenario. The Zero Resource Speech Challenge 2021 [10] established a baseline system demonstrating how speech encoders could be evaluated through spoken language modeling directly from speech without the need for text transcripts or task labels. One of the evaluation metrics, sBLIMP, assesses the syntactic ability of speech encoders by having models assign probabilities to a pair of speech utterances where one of them contains grammatical errors. To show great syntactic ability, a speech encoder should assign a higher probability to the correct utterance than the incorrect one. In our work, we extended this metric into a code-switched version and also allowed semantic errors in the incorrect utterance. A speech encoder would have to attain both semantic and syntactic linguistic abilities in a code-switching scenario to obtain good results on this newly proposed metric. 
Correspondingly, we also looked into the code-switching ability of multilingual self-supervised speech encoders. Unfortunately, our results indicated that in terms of code-switching ability, the evaluated speech models still have a long way to go. Overall, the contributions of our zero resource code-switched speech benchmark are: (1) Proposing a new zero resource code-switched speech task for assessing the syntactic and semantic linguistic abilities of self-supervised speech models in code-switching scenarios, (2) Highlighting that there is significant room for improvement for several existing multilingual speech models in such a task. Data samples and code of our baseline systems are available at [https://github.com/nobel861017/cs_zs_baseline](https://github.com/nobel861017/cs_zs_baseline).

## 2 Zero Resource Code-Switched Speech Task

We establish a brand new zero resource code-switched speech benchmark, a zero-shot evaluation, to assess the linguistic abilities of speech encoders on code-switched speech. The original BLIMP (The Benchmark of Linguistic Minimal Pairs) [14] task in the Natural Language Processing field consists of pairs of sentences, where one sentence in each pair is grammatically correct while the other is grammatically incorrect. The goal of this task is to evaluate the linguistic ability of text-based language models by having them assign a higher probability to the grammatically correct sentence. Later on, [15] proposed a zero resource speech benchmark, including a speech version of BLIMP, namely, sBLIMP. Similar to BLIMP, this task also contains pairs of sentences, but in the form of speech. The key difference between the baseline systems of sBLIMP and BLIMP is that the former takes discrete units quantized from speech representations as input while the latter takes text as input. 
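The discrete-unit pipeline just described can be sketched end to end: quantize speech representations into unit IDs, train a language model on the units, and score each utterance of a pair. The snippet below is a deliberately minimal stand-in, an add-one-smoothed bigram model with length-normalized scores rather than the language model used in the actual baseline systems, and all names are illustrative.

```python
import math
from collections import Counter

def train_bigram_lm(unit_seqs, vocab_size, alpha=1.0):
    # Add-alpha smoothed bigram LM over discrete unit sequences.
    bigrams, unigrams = Counter(), Counter()
    for seq in unit_seqs:
        for a, b in zip(seq, seq[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    def logprob(seq):
        # Length-normalized log-probability of a unit sequence.
        lp = sum(math.log((bigrams[(a, b)] + alpha) /
                          (unigrams[a] + alpha * vocab_size))
                 for a, b in zip(seq, seq[1:]))
        return lp / max(len(seq) - 1, 1)
    return logprob

def pair_accuracy(pairs, logprob):
    # Fraction of (correct, wrong) pairs where the correct utterance
    # receives the higher score -- the benchmark's decision rule.
    return sum(logprob(good) > logprob(bad) for good, bad in pairs) / len(pairs)
```

In the real benchmark the unit sequences would come from clustering a speech encoder's frame-level features (e.g., with k-means); here any integer sequences work, which keeps the scoring logic separate from the choice of encoder.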
The goal of sBLIMP is to evaluate the syntactic ability of speech encoders, while that of BLIMP is to evaluate the syntactic ability of text-based language models. Our proposed zero resource code-switched speech task is similar to sBLIMP. Each pair of data consists of two spoken utterances, a correct one and a wrong one. The goal is to assign a higher score to the correct utterance. Slightly different from previous works, the term "correct" in this scenario means that the content of an utterance makes sense, and is meaningful and grammatically acceptable. Take the input and output sentences in the lower part of Fig. 1 for example. To understand the input sentence, the system should have multilingual understanding. Specifically, it needs English ability to understand what "water" is and Chinese ability to know that the Chinese part of the sentence means "This does not dissolve in something". Furthermore, cross-lingual understanding is necessary for it to combine its semantic understanding in the two languages and recognize that the sentence means "This does not dissolve in water.". Similarly, the system should use multilingual and cross-lingual capabilities to understand the other sentence as "This does not dissolve in fire". Finally, as the first one is more meaningful, the assigned probability should be higher than that assigned to the other one. The aforementioned example shows that to achieve good performance on the proposed task, the model needs multilingual and cross-lingual syntactic and semantic understanding. Thus, we expect our proposed task to provide a way to assess the linguistic ability of self-supervised speech models on code-switched speech. We note that there are many linguistic theories of code-switching that attempt to explain, among other things, why some grammatical positions are impossible for code-switching [16, 17]. 
While some of our illegal sentences are indeed grammatically inappropriate (as confirmed by our human evaluations), our benchmark does not require us to have an answer to when code-switching is grammatically allowed. In many cases, the illegal sentence simply generates semantic incoherence. Nevertheless, the benchmark measures a model's ability to do language modeling in the presence of code-switching. ### Data generation and validation To generate pairs of correct and wrong utterances, we first utilized the well-known LLM released by OpenAI, ChatGPT, to generate code-switched sentences in which English (en) is mixed with either Spanish (es), French (fr), or Chinese (zh). As shown in Fig. 1, we prompted ChatGPT by first defining code-switching as suggested in [11] and asking it to generate a code-switched sentence based on a given monolingual sentence in language \(X\) from Common Voice [18], where \(X\in\{\text{es, fr, zh}\}\), to restrict the content of the resulting sentence to some extent (Step 1 in Fig. 1). The generated sentence with English mixed with language \(X\) was used as the presumed correct sentence, and the corresponding wrong sentence was generated by requiring ChatGPT to replace or switch at most three words in the presumed correct sentence so that the resulting sentence would be more meaningless or erroneous than the original one (Step 2 in Fig. 1) while preserving the overall similarity between the two sentences. We found that the wrong sentences generated in this way indeed tend to make no sense, be meaningless, or be grammatically unacceptable. Finally, to synthesize the code-switched speech pairs, we adopted the off-the-shelf Amazon Polly system [19] to synthesize bilingual speech utterances. As suggested in [11], we conducted human validations with multiple bilingual speakers. 
Each human annotator was required to label whether the paired sentences were valid, meaning that the presumed correct sentence in each pair should: (1) actually make sense and be meaningful and grammatically acceptable, (2) be indeed better than the presumed wrong one on the aforementioned aspects. Pairs failing to meet the above two requirements would be labeled as invalid ones. To ensure the annotation quality, the hired annotators were required to complete an annotation trial on some sampled paired sentence data with pre-defined ground truths. Human annotators were required to get at least \(95\%\) accuracy before proceeding to the data annotating process. A pair of correct and wrong sentences was included in the task if the majority of annotators labeled it to be valid. ### Code-switched data statistics The three tracks in the zero resource code-switching task are based on three code-switched language pairs, including Spanish-English (es-en), French-English (fr-en), and Chinese-English (zh-en), with 7263, 4020, and 3176 human-validated data samples, respectively. For each language pair, all available bilingual speaker configurations were adopted from the Amazon Polly text-to-speech system to synthesize each utterance. All the synthesized speech utterances had a sample rate of 22.5kHz originally and were later resampled to 16kHz to match the configurations of the speech encoders. ### Baseline systems Our speech-based baseline systems are depicted in Fig. 2, which consist of three main modules: the speech encoder, the quantization module, and the unit language model (Unit LM). Given a speech dataset, representations of part of the dataset are first extracted by the speech encoder and are then formed into \(k\) clusters via the k-means algorithm. The resulting k-means clusters will further serve as the quantization module for the whole training split of the speech dataset. 
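The clustering stage just described can be sketched in a few lines of Python. This is a minimal illustration on random stand-in vectors: the actual benchmark fits \(k=100\) clusters on real encoder representations, while the tiny \(k\), the toy data, and the plain Lloyd's-iteration k-means below are our simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frame-level hidden representations extracted by a speech
# encoder: a list of utterances, each of shape (num_frames, feature_dim).
utterances = [rng.normal(size=(n, 8)) for n in (40, 60, 50)]
frames = np.concatenate(utterances, axis=0)

def kmeans_fit(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: returns k cluster centers."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def quantize(reps, centers):
    """Assign each frame the ID of its closest cluster center."""
    dists = np.linalg.norm(reps[:, None, :] - centers[None, :, :], axis=-1)
    return dists.argmin(axis=1)

centers = kmeans_fit(frames, k=4)  # k = 100 in the benchmark
unit_sequences = [quantize(u, centers) for u in utterances]
```

Each continuous utterance thus becomes a sequence of integer unit IDs, one per frame, which the Unit LM later consumes.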
For each representation, the quantization is done by assigning the ID of the cluster the representation vector belongs to, and thus the originally continuous waveforms become sequences of discrete units. Following previous works using speech units [20, 21], after the quantization of the whole training split, a deduplication operation is performed to ensure that there are no successive identical units in the unit sequences. Note that this operation is not performed in the original spoken language modeling system in [15]. Finally, the collected unit sequences are used as the training data to train the Unit LM. After the training, the testing set is discretized by the quantization module, and the Unit LM is used to compute the probabilities (the span-PP score mentioned in Section 2.4) of the correct and wrong utterances of the pairs for evaluation. For reference, we provide some direct-inference results of pre-trained text-based language models from fairseq [22], including XLM-R Base [23] and XGLM 1.7B [24]. We also include a random baseline derived by utilizing randomly assigned units and a randomly weighted Unit LM. ### Evaluation metric The performance on this code-switched speech task is measured in accuracy, where a hit occurs when the Unit LM assigns a higher span-masked pseudo-probability (span-PP) score [15] to the correct utterance. Given a discrete unit sequence of a quantized speech utterance \(\mathbf{u}=u_{1},u_{2},\cdots,u_{T}\), the span-PP score is defined as follows: \[\text{span-PP}_{w,s}(\mathbf{u})=\prod_{i=1+j\cdot s}P(u_{i}\cdots u_{i+w}|u_{1}\cdots u_{i-1}u_{i+w+1}\cdots u_{T}) \tag{1}\] where \(w\) is the decoding span size, \(s\) is the stride, and \(0\leq j\leq\lfloor(T-1)/s\rfloor\). In our experiments, \(w\) and \(s\) are set to 15 and 5, respectively. Figure 1: Code-switched text data generation by prompting ChatGPT. 
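In code, the deduplication step and span-PP scoring just described look roughly as follows. This is our 0-based restatement of Eq. (1): `span_logprob` stands in for the trained Unit LM's span log-probability, and `toy_logprob` is a made-up scorer included only so the sketch runs; neither is the paper's actual model.

```python
def deduplicate(units):
    # Collapse successive identical units, e.g. [3, 3, 5, 5, 5, 2] -> [3, 5, 2].
    out = [units[0]]
    for u in units[1:]:
        if u != out[-1]:
            out.append(u)
    return out

def log_span_pp(units, span_logprob, w=15, s=5):
    # Log of the span-PP score: spans start every s tokens and cover
    # units[i : i + w + 1], each scored conditioned on the remaining tokens.
    total = 0.0
    for i in range(0, len(units), s):
        total += span_logprob(units, i, min(i + w + 1, len(units)))
    return total

def pair_accuracy(pairs, span_logprob):
    # A hit occurs when the correct utterance receives the higher score.
    hits = sum(log_span_pp(deduplicate(good), span_logprob)
               > log_span_pp(deduplicate(bad), span_logprob)
               for good, bad in pairs)
    return hits / len(pairs)

def toy_logprob(units, lo, hi):
    # Stand-in for the Unit LM's span log-probability; NOT a real model.
    return sum(-0.1 if u == 0 else -1.0 for u in units[lo:hi])
```

Note that, as the Results section discusses, comparing raw products of probabilities favors shorter sequences, so length differences between the paired utterances can bias this score.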
## 3 Experimental Setup ### Training set The training sets in our experiments were sampled from the following speech corpora: LibriSpeech [25] for English (en), Multilingual LibriSpeech [26] for Spanish (es) and French (fr), and MAGICDATA Mandarin Chinese Read Speech Corpus [27] for Chinese (zh). Note that as our experiments aimed to assess the inherent code-switching ability of the pre-trained multilingual and monolingual speech encoders and served as the baselines of the benchmark, we didn't use any code-switched data for training to prevent potential learning of code-switching abilities from those data and the possible bias in the resulting performance. ### Speech encoders, Quantization modules, and Unit LMs **Speech encoders** In our baselines, we picked several widely-used pre-trained speech models publicly available at fairseq and S3PRL [28], including XLS-R 1B, XLS-R 0.3B [9], XLSR-53 [8], Wav2vec 2.0 Large [6], HuBERT X-Large, HuBERT Base [7], and mHuBERT [21] as the speech encoders to investigate if they can solve a code-switching task even though code-switched data were absent during pre-training. As the generalizability to the code-switching task of these models and the underlying relationship between such abilities and the layers of the models remain unexplored, in our baselines, only the hidden representations of the last layer of the encoders were extracted for the training of the quantization module and the discretization of the dataset, and we leave the layer-wise analysis of these models' performance on the proposed task as future work. **Quantization modules** For the quantization modules required in our baseline systems, we sampled monolingual data from the speech corpora mentioned in Section 3.1, forming different sets of training data for each speech encoder. Each set resulted in 100 hours of monolingual speech in total and consisted of the languages the corresponding speech encoder had seen during its pre-training phase. 
For each speech encoder, a k-means model with \(k=100\) was trained with its corresponding set of monolingual speech and served as the quantization module by assigning the ID numbers of the closest cluster centers to the vectors at each time step. **Unit LMs** Similar to the training data of the quantization modules, we sampled monolingual data of the languages involved in the pre-training of the speech encoders and formed a training set containing 400 hours in total. The training set was further discretized with the quantization modules to obtain the training set for the Unit LMs. We then trained BERT Base models on the discretized training set to serve as the Unit LM, with masked token prediction as the training objective. Following [29] and [15], spans of \(M\) consecutive tokens were masked for the model to predict, where \(M\sim\mathcal{N}(10,100)\). The training was done with a total batch size of 2.6M tokens, and the learning rate was warmed up to the peak value of \(10^{-4}\) and polynomially decayed afterward. The implementation here was based on fairseq. ## 4 Results The overall results are listed in Table 1, with the number of pre-training languages of the multilingual speech encoders, the corpora the monolingual speech encoders were pre-trained on, and the number of parameters of these encoders included for reference. Although we tried to restrict the length difference between the correct and wrong sentences to balance the lengths of the synthesized utterances, their lengths were still not exactly matched. Therefore, the use of direct likelihood comparisons in the measure may lead to a bias in favor of the shorter sentence, which was generally the wrong one. 
While most of the speech-based baselines were not significantly influenced by this, perhaps because of their informative units, and could apparently distinguish the two utterances as the span-PP scores of the two utterances differed a lot, the random baseline was heavily misled since its units were randomly assigned. Thus the resulting performance of the random baseline is below 50%, as shown in Table 1. ### Multilingual pre-training Comparing the results of the baseline systems with multilingual speech encoders (the uppermost block in Table 1) and those with monolingual ones (the middle block in Table 1), it is obvious that the systems with multilingual speech encoders substantially outperform those with their monolingual counterparts in es-en and fr-en tracks. As for the zh-en track, except for XLS-R 1B, all the models that included Chinese in their pre-training slightly outperform their monolingual counterpart (Wav2Vec2.0 Large), though the differences are insignificant. This may be a result of relatively inadequate pre-training data in Chinese compared with the Spanish and French pre-training data. For mHuBERT, the performance on the zh-en track is quite close to that of HuBERT Base since Chinese speech data were absent during its pre-training stage. Overall, the results show that multilingual pre-training does help in the proposed task and serves as evidence that our benchmark can effectively distinguish the models' multilingual abilities. Figure 2: Illustration of our speech-based baseline systems with discrete unit language modeling. ### Model size and pre-training languages Comparing the performance of baseline systems with XLSR-53, XLS-R 0.3B, and XLS-R 1B in Table 1, we first observe that systems with XLSR-53 and XLS-R 0.3B as speech encoders consistently outperform that with XLS-R 1B in all the tracks, even though these two models have much fewer parameters than XLS-R 1B has. 
However, we do not observe a similar trend in the comparison between systems trained with HuBERT Base and with HuBERT X-Large. This suggests that a model with a smaller size may extract representations that generalize better to a task requiring out-of-domain code-switching knowledge, but such an advantage appears only if the model meets the minimal requirements of the abilities needed to solve the task (multilingual ability, in this case). Next, we find that the system with XLS-R 0.3B significantly outperforms that with XLSR-53, which may imply that multilingual pre-training with broader coverage of languages provides better generalizability for code-switched speech and thus induces better performance on our benchmark. Note that these two observations are similar to those discovered in [30]. As XLS-R 0.3B benefits from both model size and the wide coverage of pre-training languages, the baseline system based on it achieves the best performance among all the speech-based baselines. ### Deduplication Comparing the performance of each XLSR model without unit deduplication to its counterpart with unit deduplication in Table 2, we find that deduplication always benefits performance on the es-en and fr-en tracks, while performance degradation is observed on the zh-en track. The reason for this degradation requires further investigation in the future. However, considering the average performance over the three testing sets, the deduplication operation is still useful in improving the performance on this task. ### Gap between speech-based and text-based systems The lowermost block of Table 1 shows the performance of evaluating text-based language models on the transcripts of the testing set. 
We find that the pre-trained XLM-R Base, which has the same architecture as all the Unit LMs of the speech-based baselines and has been pre-trained on a large amount of multilingual data, cannot obtain satisfactory performance, indicating that this task is not easy for a multilingual text-based model of moderate size. The task is difficult because it requires faithful encoding of not only the phonetics but also the semantic and grammatical properties of words in two different languages. However, even this unsatisfactory performance outperforms most of our speech-based baselines built on commonly used speech encoders that have been reported to be powerful in several downstream tasks. This implies that this task is even harder for existing speech encoders. We also notice that there is a tremendous gap between the best performance of the speech-based baselines and that of the text-based models, suggesting that there is still room for these speech models to improve on this code-switching task and hence on code-switching syntactic and semantic abilities. These phenomena are likely due to the overall limitations of unit quality in current systems, which also affect the performance of monolingual language modeling on previous monolingual syntactic (sBLIMP) and semantic evaluations in the Zero Resource Speech Challenge [10]. ## 5 Conclusion This paper introduces a novel benchmark to assess the code-switching capability of self-supervised speech models in a zero-shot manner. Our results show that the size of speech models and the coverage of pre-training languages have considerable influences on the models' generalization ability for this out-of-domain code-switching task. In addition, the results unveil that most of the evaluated speech models do not exhibit strong code-switching ability compared to the text-based language models and still have a long way to go. 
We invite the speech community to participate in this benchmark and encourage further research on broadening the speech processing technology for code-switching. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Speech encoder & \# param. (B) & km: 100 cluster & Unit LM (RoBERTa) & dedup & es-en & fr-en & zh-en & avg \\ & & mono speech (hr) & mono speech (hr) & & Acc \(\uparrow\) & Acc \(\uparrow\) & Acc \(\uparrow\) & Acc \(\uparrow\) \\ \hline \hline \multicolumn{9}{c}{Multilingual Speech Encoders} \\ \hline XLSR-53 (53 lang) & 0.3 & es, fr, zh, en 25 each & es, fr, zh, en 100 each & V & 33.74 & 45.25 & 47.20 & 42.06 \\ XLS-R 0.3B (128 lang) & 0.3 & es, fr, zh, en 25 each & es, fr, zh, en 100 each & V & 75.16 & 59.30 & 43.18 & 59.21 \\ XLS-R 1B (128 lang) & 1 & es, fr, zh, en 25 each & es, fr, zh, en 100 each & V & 33.30 & 38.66 & 39.22 & 37.06 \\ mHuBERT (es, fr, en) & 0.09 & es, fr, en 33 each & es, fr, en 133 each & V & 29.55 & 30.42 & 40.33 & 33.43 \\ \hline \hline \multicolumn{9}{c}{Monolingual Speech Encoders} \\ \hline Wav2vec 2.0 LARGE (ll60k) & 0.3 & en 100 & en 400 & V & 13.11 & 25.35 & 42.41 & 26.96 \\ HuBERT X-LARGE (ll60k) & 1 & en 100 & en 400 & V & 24.54 & 25.60 & 38.60 & 29.58 \\ HuBERT Base (LS960) & 0.09 & en 100 & en 400 & V & 22.26 & 25.30 & 40.24 & 29.27 \\ \hline \hline random & - & random & random & - & 23.63 & 32.11 & 37.47 & 31.07 \\ \hline \hline XLM-RoBERTa Base (text-based) & 0.125 & - & - & - & 54.62 & 55.12 & 55.16 & 54.97 \\ XGLM 1.7B (text-based) & 1.7 & - & - & - & 90.91 & 88.38 & 92.03 & 90.44 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of the speech encoders, text-based models, and the random baseline in accuracy (%) on the es-en, fr-en, and zh-en tracks. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Speech encoder & dedup & es-en & fr-en & zh-en & avg \\ \hline XLSR-53 (53 lang) & X & 32.27 & 42.19 & **49.65** & 41.37 \\ XLS-R 0.3B (128 lang) & X & 68.87 & 50.87 & **44.21** & 54.65 \\ XLS-R 1B (128 lang) & X & 29.57 & 35.30 & **41.91** & 35.59 \\ \hline XLSR-53 (53 lang) & V & **33.74** & **45.25** & 47.20 & **42.06** \\ XLS-R 0.3B (128 lang) & V & **75.16** & **59.30** & 43.18 & **59.21** \\ XLS-R 1B (128 lang) & V & **33.30** & **38.66** & 39.22 & **37.06** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation studies of deduplication for XLSR models.
2303.03120
High-Order Elasticity Interpolants for Microstructure Simulation
We propose a novel formulation of elastic materials based on high-order interpolants, which accurately fits complex elastic behaviors while remaining conservative. The proposed high-order interpolants can be regarded as a high-dimensional extension of radial basis functions, and they allow the interpolation of derivatives of elastic energy, in particular stress and stiffness. Given the proposed parameterization of elasticity models, we devise an algorithm to find optimal model parameters based on training data. We have tested our methodology for the homogenization of 2D microstructures, and we show that it succeeds in matching complex behaviors with high accuracy.
Antoine Chan-Lock, Jesus Perez, Miguel Otaduy
2023-02-22T10:42:35Z
http://arxiv.org/abs/2303.03120v1
# High-Order Elasticity Interpolants for Microstructure Simulation ###### Abstract We propose a novel formulation of elastic materials based on high-order interpolants, which accurately fits complex elastic behaviors while remaining conservative. The proposed high-order interpolants can be regarded as a high-dimensional extension of radial basis functions, and they allow the interpolation of derivatives of elastic energy, in particular stress and stiffness. Given the proposed parameterization of elasticity models, we devise an algorithm to find optimal model parameters based on training data. We have tested our methodology for the homogenization of 2D microstructures, and we show that it succeeds in matching complex behaviors with high accuracy. \(\bullet\)**Computing methodologies \(\rightarrow\)**Physical simulation; ## 1 Introduction Accurately representing the elastic response of complex materials is an ongoing challenge across computer graphics and computational mechanics. This problem has applications in fitting material models to physical tests of real-world objects [1, 2], developing mesoscale models for microscale materials [3], or designing simulation models with nonlinear response [14]. A common approach to designing complex elastic material behaviors is to define elastic energy or parameters of stress-strain functions using weighted scalar basis functions [1, 2, 3, 4]. However, as we demonstrate in this paper, this approach suffers from various problems. Some variants fail to represent the elastic behavior accurately, while other variants lack fundamental properties of elasticity, such as energy conservation. In this paper, we develop a novel formulation of elastic materials based on high-order interpolants, which accurately fits complex elastic behaviors while remaining conservative. The contributions of our work are: 1. The design of tensor basis functions to interpolate derivatives of elastic energy (Section 3). 
These basis functions can be regarded as a high-dimensional extension of radial basis functions (RBFs). 2. Based on the tensor interpolants, we design a parameterization of elasticity models (Section 4). This parameterization provides suitable degrees of freedom to fit both the stress and stiffness behavior of complex materials. 3. An algorithm to optimize the parametric elasticity model based on training data (Section 5), which finds the control points and coefficients of the elasticity interpolants. 4. The application of the methodology to homogenization of 2D microstructures (Section 6). This includes the generation of representative training data and the application of the estimation algorithm mentioned above. In the paper, we evaluate the accuracy of our method, we compare it to other variants, and we analyze the effect of various design choices. As a conclusion, the proposed methodology for the design of elasticity models succeeds at capturing complex behaviors, such as those shown in Fig. 1. We have tested the methodology on 11 2D microstructures with different deformation behaviors, and we discuss the full results. ## 2 Related Work ### Elasticity Interpolation The baseline approach to modeling elastic behaviors is to design expressive constitutive models. Research in this direction is ample, covering both the ability to reproduce interesting behaviors (nonlinearity, anisotropy, volume conservation), as well as robustness [1, 13, 14, 15]. However, designing constitutive models is built on the inherent assumption of homogeneous materials, and is not meant to accurately represent the complex nonlinearities of heterogeneous materials. The common approach in computer graphics to represent complex nonlinearities and anisotropy is to interpolate elasticity models. There is a large variety of methods to do so, with different features. Some methods model nonlinear stress-strain relationships through interpolation. 
Examples include RBF interpolation of material parameters [1], interpolation of stiffness values at control points in strain domain [23], or stress interpolation based on RBFs [24]. Unfortunately, modeling the stress-strain function through interpolation lacks energy conservation, as the stress-strain function is not integrable. This can produce artifacts through energy gain or loss, and prevents the use of attractive optimization-based numerical integrators [16]. One exception [15] models the stress-strain curve for individual strain values, hence it remains conservative, but it largely limits the expressiveness of the material. Other methods model the elastic energy function through interpolation, and therefore remain conservative by construction. Examples include formulating energy addends that depend on different subdomains of strain [12], and modeling such energy addends using spline interpolation [25]. Xu et al. [17] used spline interpolation to model energy addends within the Valanis-Landel isotropy assumption. They handled anisotropy separately, but with limited expressiveness. When modeling microscale heterogeneous materials, numerical coarsening [14, 15, 16, 17] is an alternative to elasticity model design. In numerical coarsening, the material models are evaluated at high-resolution spatial discretization, respecting the heterogeneous material distribution. However, the simulation is computed at a coarse mesoscale and interpolated to the microscale through complex nonlinear shape functions. ### Microstructure Simulation 2D and 3D microstructures are a powerful way of controlling mesoscale deformation behavior under limited material choices, and have therefore become a major tool in computational fabrication of deformable objects [1, 18, 19, 20]. However, the simulation of large objects at microscale resolution is computationally costly, and it challenges the use of microstructures within design optimization algorithms. 
Homogenization is a powerful tool for computational design with microstructures, as it fits mesoscale material models that accurately represent the aggregate microscale behavior [10]. We test our material modeling approach in the context of material homogenization for microstructures, and in this regard we follow a popular homogenization methodology. Same as previous works [18, 25], we simulate microstructures under periodic boundary conditions. This makes the mesoscale strain uniform, and enables easy transfer of training data from microstructure simulation to mesoscale. ## 3 Conservative Derivative Interpolation We want to design a parametric function (the elastic energy) such that it interpolates given values of its derivatives, i.e., its gradient and Hessian (stress and stiffness). To do this, we leverage RBF interpolation, but we face the question of designing a good parameterization such that the resulting function is conservative and interpolates derivative values. To answer this question, in this section we analyze matrix-valued RBFs for gradient interpolation. We conclude that this formulation can be generalized and extended to the interpolation of arbitrary higher-order derivatives. By leveraging these conclusions, we will later show how to design a good parameterization for RBF energies. ### Matrix-Valued RBFs for Gradient Interpolation In our exposition of the fundamentals of high-order RBF interpolants, we denote the domain of RBF interpolation as \(x\). With RBF center \(x_{i}\), radial vector \(\Delta x_{i}=x-x_{i}\), and RBF radius \(r_{i}=\|\Delta x_{i}\|\), we express the corresponding RBF as \(\phi_{i}\equiv\phi(r_{i})\). Appendix A lists some derivatives of RBFs that we use throughout the paper. Matrix-valued RBFs can be constructed from scalar-valued RBFs \(\phi\) through a double differentiation process, \(\left(\alpha\nabla^{2}I+\beta\nabla\nabla^{T}\right)\phi\), with \(\alpha\) and \(\beta\) scalar coefficients. 
Vector-valued RBF coefficients \(w_{i}\) yield a vector field: \[v(x)=\sum_{i}\left(\alpha\nabla^{2}I+\beta\nabla\nabla^{T}\right)\phi_{i}w_{i}. \tag{1}\] When interpolating vector values, matrix-valued RBFs yield positive-definite systems [20]. Thanks to a Helmholtz-Hodge decomposition [1], the matrix-valued RBF interpolation can be decomposed into curl-free and divergence-free vector fields [14]: \[v_{\text{curl-free}}(x) =\sum_{i}\nabla\nabla^{T}\phi_{i}w_{i}, \tag{2}\] \[v_{\text{div-free}}(x) =\sum_{i}\left(\nabla^{2}I-\nabla\nabla^{T}\right)\phi_{i}w_{i}. \tag{3}\] Moreover, it is easy to show that the curl-free vector field can be derived from a potential function \(f(x)\), hence concluding that the vector field is also conservative: \[v_{\text{curl-free}}(x)=\nabla f(x),\text{ with }f(x)=\sum_{i}w_{i}^{T}\nabla \phi_{i}. \tag{4}\] In Appendix B, we demonstrate that RBF interpolants based on RBF gradients can be recast based on RBFs directly. Then, the interpolant \(w_{i}^{T}\nabla\phi_{i}\) in (4) can be recast as \(\phi_{i}w_{i}^{T}\Delta x_{i}\), with some other choice of RBF. As a result, the curl-free vector field in (4) can be rewritten as: \[v_{\text{curl-free}}(x)=\nabla f(x),\text{ with }f(x)=\sum_{i}\phi_{i}w_{i}^{T} \Delta x_{i}. \tag{5}\] ### Generalization to High-Order Derivatives In the previous section, we observe that the key property to interpolate gradients with conservative functions is that the RBF interpolants are expressed as the inner product of the radial vector \(\Delta x_{i}\) and a vector of RBF coefficients \(w_{i}\) with the same dimensionality as the target gradients. In fact, this observation can be generalized to arbitrary high-order n-th derivatives. 
The sufficient and necessary condition for interpolation of n-th derivatives with a conservative function is that the RBF interpolants are expressed as the tensor contraction of n tensor products of the radial vector \(\Delta x_{i}\) with an n-th dimensional tensor of RBF coefficients \({}_{n}w_{i}\). Formally: \[f(x)=\sum_{i}\phi_{i}\,{}_{n}w_{i}\colon\underbrace{\left(\Delta x_{i}\otimes \Delta x_{i}\cdots\otimes\Delta x_{i}\right)}_{\text{n times}} \tag{6}\] is a function whose n-th derivative can interpolate n-th dimensional tensor data, i.e., the target n-th dimensional derivatives. Conservative interpolation of gradients (first derivatives) and Hessians (second derivatives), for example, reduces to defining interpolants of the form: Gradient: \[\phi_{i}\,{}_{1}w_{i}\colon\Delta x_{i}=\phi_{i}\,{}_{1}w_{i}^{T} \Delta x_{i},\] (7) Hessian: \[\phi_{i}\,{}_{2}w_{i}\colon\left(\Delta x_{i}\otimes\Delta x_{i} \right)=\phi_{i}\Delta x_{i}^{T}\,{}_{2}w_{i}\Delta x_{i},\] (8) with \({}_{1}w_{i}\) a vector of RBF coefficients and \({}_{2}w_{i}\) a matrix of RBF coefficients, respectively. Fig. 2 shows examples of first-order and second-order interpolants for some representative choices of the RBF coefficients \({}_{1}w_{i}\) and \({}_{2}w_{i}\). We can see that the first-order interpolants provide local control of the gradient (both value and direction) of the energy, and the second-order interpolants provide local control of the curvature of the energy. In the next section, we leverage these interpolants in the definition of RBF elastic energy functions. Figure 2: Our proposed first-order interpolant (7) and second-order interpolant (8) provide local control, respectively, of the gradient (stress) and curvature (stiffness) of energy functions. The images show energies in the neighborhood of an RBF center, for representative choices of the RBF coefficients. We used a multiquadric RBF, and blue denotes low energy while red denotes high energy. 
Notice how the direction of the gradient is controlled on the top row, and this dictates the local stress. Notice also how the second-order interpolant allows modeling isotropic stiffness (left), directional stiffness (right), or also saddle-point configurations (center). ## 4 RBF Elastic Energy We want to define nonlinear and anisotropic elastic materials that are parameterized by the current deformation, and we do this following a scattered data interpolation strategy using RBFs. We start the section with some definitions and a discussion of desired properties. Then, we introduce our energy parameterization, leveraging the high-order RBF interpolants derived in Section 3. ### Definitions and Desiderata When designing elastic materials, we want to preserve the stress-strain response. This includes both the stress value at a certain strain, and its derivative or tangent stiffness. We choose Green strain \(\epsilon=\frac{1}{2}\left(F^{T}F-I\right)\) as representation of strain or deformation, with \(F\) the deformation gradient. For convenience, we write the strain in Voigt notation \(E\), which becomes the interpolation domain \(E=x\) for our RBF interpolation method. In our 2D examples, we have \(E=(\epsilon_{xx},\epsilon_{yy},2\,\epsilon_{xy})\). Following the Voigt notation of Green strain \(E\), and with elastic energy density \(\Psi\), we define stress as the energy gradient wrt strain, \(s=\nabla\Psi=\frac{\partial\Psi}{\partial E}^{T}\), which is a vector form of the 2nd Piola-Kirchhoff stress. We also define the tangent stiffness as the Hessian of the energy wrt strain, \(K=\nabla\nabla^{T}\Psi=\frac{\partial^{2}\Psi}{\partial E^{2}}\). We seek a material model \(\Psi=f(E,\{w_{i}\})\) that relates strain to energy according to some material parameters \(\{w_{i}\}\). In designing a good parameterization for elasticity models, we pay attention to the properties of the magnitudes we wish to match, namely the stress and the tangent stiffness. 
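As a small concrete illustration of this strain convention (our own NumPy sketch, not code from the paper):

```python
import numpy as np

def green_strain_voigt(F):
    """Green strain of a 2D deformation gradient F in Voigt notation,
    E = (eps_xx, eps_yy, 2*eps_xy)."""
    eps = 0.5 * (F.T @ F - np.eye(2))
    return np.array([eps[0, 0], eps[1, 1], 2.0 * eps[0, 1]])

# The identity deformation produces zero strain.
assert np.allclose(green_strain_voigt(np.eye(2)), 0.0)

# A pure rotation also produces zero strain: Green strain is rotation-invariant.
c, s = np.cos(0.3), np.sin(0.3)
assert np.allclose(green_strain_voigt(np.array([[c, -s], [s, c]])), 0.0)

# A stretch by 2 along x gives eps_xx = (2**2 - 1) / 2 = 1.5.
assert np.allclose(green_strain_voigt(np.diag([2.0, 1.0])), [1.5, 0.0, 0.0])
```

Rotation invariance is one reason Green strain is a convenient interpolation domain: rigid motions map to the origin of strain space.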
A naive solution for the design of a stress(strain) function would be to formulate scalar basis functions in the strain domain (e.g., RBFs), together with vector-type basis coefficients. Unfortunately, the resulting function is not guaranteed to produce a conservative field. Most importantly, conservativeness cannot be enforced through an appropriate choice of basis coefficients; the lack of conservativeness is an inherent limitation of the formulation. The key to enforce conservativeness of the stress field is to regard stress as the gradient of an energy field. Then, fitting a stress field can be posed as a gradient interpolation problem, with the stress the gradient of the underlying energy field. Similarly, fitting a stiffness field can be posed as a Hessian interpolation problem, with the stiffness the Hessian of the underlying energy field. To this end, we look at the high-order RBF interpolants of Section 3. ### Energy and its Derivatives We design an RBF energy formulation that is equipped with conservative gradient interpolants (7), to fit a target stress field, and with conservative Hessian interpolants (8), to fit a target tangent stiffness field. We denote each gradient interpolant as \(\Psi_{\text{GI},i}\), with RBF center \(E_{i}\) and vector RBF coefficients \(w_{i}\) (the \(1w_{i}\) in (7)). Similarly, we denote each Hessian interpolant as \(\Psi_{\text{HI},i}\), with RBF center \(E_{i}\) and matrix RBF coefficients \(W_{i}\) (the \(2w_{i}\) in (8)). We also add to the energy formulation two offset terms \(\Psi_{\text{GO}}\) and \(\Psi_{\text{HO}}\) that produce, respectively, a stress offset \(s_{\text{O}}\) and a stiffness offset \(K_{\text{O}}\). We add the stress offset to easily enforce zero stress at zero strain, and the stiffness offset to easily fit the average stiffness. In this way, the RBF interpolants act as corrections with respect to offset terms. 
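The conservativeness argument can be checked numerically. The following sketch (an illustration with hypothetical RBF centers and coefficients, not the paper's code) compares the Jacobian of a naively interpolated stress field \(s(E)=\sum_i\phi_i w_i\) against a stress field obtained as the gradient of an energy built from gradient interpolants; only the latter has a symmetric Jacobian, i.e., is conservative:

```python
import numpy as np

r0 = 0.5
phi = lambda r: np.sqrt(r * r + r0 * r0)               # multiquadric RBF

# Hypothetical RBF centers and vector coefficients (illustration only).
centers = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.1]])
coeffs = np.array([[1.0, -2.0, 0.5], [0.3, 1.0, -1.0]])

def stress_naive(E):
    """Naive interpolation s(E) = sum_i phi_i w_i: NOT conservative in general."""
    return sum(phi(np.linalg.norm(E - c)) * w for c, w in zip(centers, coeffs))

def energy(E):
    """Energy built from gradient interpolants, Psi = sum_i phi_i w_i^T (E - E_i)."""
    return sum(phi(np.linalg.norm(E - c)) * (w @ (E - c)) for c, w in zip(centers, coeffs))

def stress_cons(E, h=1e-4):
    """Conservative stress: finite-difference gradient of the energy."""
    return np.array([(energy(E + h * e) - energy(E - h * e)) / (2 * h) for e in np.eye(3)])

def jacobian(f, E, h=1e-4):
    return np.column_stack([(f(E + h * e) - f(E - h * e)) / (2 * h) for e in np.eye(3)])

E0 = np.array([0.05, 0.05, 0.05])
asym = lambda J: np.linalg.norm(J - J.T)
print(asym(jacobian(stress_naive, E0)))  # O(1): non-symmetric, non-zero curl
print(asym(jacobian(stress_cons, E0)))   # ~0: symmetric Jacobian by construction
```

No choice of the vector coefficients makes the naive field conservative for generic center placements, which is exactly the limitation the gradient-interpolation viewpoint removes.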
The full energy formulation is summarized as: \[\Psi =\Psi_{\text{GO}}+\Psi_{\text{HO}}+\sum_{i}\Psi_{\text{GI},i}+ \sum_{i}\Psi_{\text{HI},i}, \tag{9}\] \[\Psi_{\text{GO}} =s_{\text{O}}^{T}E,\] \[\Psi_{\text{HO}} =\frac{1}{2}E^{T}K_{\text{O}}E,\] \[\Psi_{\text{GI},i} =\phi_{i}w_{i}^{T}\,\Delta E_{i},\] \[\Psi_{\text{HI},i} =\phi_{i}\Delta E_{i}^{T}\,W_{i}\Delta E_{i}.\] Note that the formulation above (and also our implementation) uses the same RBF function \(\phi\) and RBF centers \(\{E_{i}\}\) for gradient and Hessian interpolants, but these could be different in practice. From the energy definition (9), we obtain the stress and the tangent stiffness. \[s=\nabla\Psi_{\text{GO}}+\nabla\Psi_{\text{HO}}+\sum_{i}\nabla\Psi_{\text{GI},i}+\sum_{i}\nabla\Psi_{\text{HI},i}, \tag{10}\] \[\nabla\Psi_{\text{GO}} =s_{\text{O}},\] \[\nabla\Psi_{\text{HO}} =K_{\text{O}}E,\] \[\nabla\Psi_{\text{GI},i} =\phi_{i}w_{i}+w_{i}^{T}\,\Delta E_{i}\,\frac{\partial\phi_{i}}{ \partial E}^{T},\] \[\nabla\Psi_{\text{HI},i} =2\phi_{i}W_{i}\Delta E_{i}+\Delta E_{i}^{T}\,W_{i}\Delta E_{i} \,\frac{\partial\phi_{i}}{\partial E}^{T}.\] \[K=\nabla\nabla^{T}\Psi_{\text{HO}}+\sum_{i}\nabla\nabla^{T}\Psi_{\text{GI},i}+ \sum_{i}\nabla\nabla^{T}\Psi_{\text{HI},i}, \tag{11}\] \[\nabla\nabla^{T}\Psi_{\text{HO}} =K_{\text{O}},\] \[\nabla\nabla^{T}\Psi_{\text{GI},i} =w_{i}\frac{\partial\phi_{i}}{\partial E}+\frac{\partial\phi_{i}}{ \partial E}^{T}\,w_{i}^{T}+w_{i}^{T}\,\Delta E_{i}\,\frac{\partial^{2}\phi_{i} }{\partial E^{2}},\] \[\nabla\nabla^{T}\Psi_{\text{HI},i} =2W_{i}\Delta E_{i}\,\frac{\partial\phi_{i}}{\partial E}+2\,\frac{ \partial\phi_{i}}{\partial E}^{T}\,\Delta E_{i}^{T}\,W_{i}\] \[\quad+2\phi_{i}\,W_{i}+\Delta E_{i}^{T}\,W_{i}\Delta E_{i}\,\frac{ \partial^{2}\phi_{i}}{\partial E^{2}}.\] The first and second partial derivatives of the RBFs are listed in Appendix A. 
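As a sanity check on these derivative formulas (a sketch assuming a symmetric, hypothetical coefficient matrix \(W_i\) and a multiquadric RBF, whose gradient is \(\Delta E_i/\phi\)), the analytic gradient of a single Hessian-interpolant term can be verified against finite differences:

```python
import numpy as np

r0 = 0.5
W = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, -0.2],
              [0.0, -0.2, 0.5]])          # hypothetical symmetric coefficients W_i
Ei = np.array([0.1, 0.0, 0.05])           # hypothetical RBF center E_i

def psi_HI(E):
    """One Hessian interpolant term: Psi_HI = phi(|dE|) dE^T W dE, cf. (9)."""
    dE = E - Ei
    return np.sqrt(dE @ dE + r0 * r0) * (dE @ W @ dE)

def grad_psi_HI(E):
    """Analytic gradient from (10): 2 phi W dE + (dE^T W dE) dphi/dE."""
    dE = E - Ei
    phi = np.sqrt(dE @ dE + r0 * r0)
    dphi = dE / phi                        # gradient of the multiquadric RBF
    return 2.0 * phi * (W @ dE) + (dE @ W @ dE) * dphi

E = np.array([0.2, -0.1, 0.0])
h = 1e-6
fd = np.array([(psi_HI(E + h * e) - psi_HI(E - h * e)) / (2 * h) for e in np.eye(3)])
print(np.max(np.abs(fd - grad_psi_HI(E))))  # agrees up to finite-difference error
```

The same pattern checks the gradient-interpolant and Hessian expressions term by term.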
Our energy model (9) is parameterized by the stress and stiffness offsets \(s_{\text{O}},K_{\text{O}}\), and the gradient and Hessian interpolant centers and coefficients \(\{E_{i},w_{i},W_{i}\}\). Note that, thanks to our conservative RBF interpolants, both the stress (10) and the tangent stiffness (11) are expressed as the sum of weighted basis functions (i.e., they are linear with respect to the basis coefficients), each RBF introduces degrees of freedom with the same dimensionality as the stress and/or the stiffness, and the formulation remains conservative by construction. ## 5 Material Fitting Algorithm Having defined our RBF energy model in the previous section, we now describe how we estimate its parameters. Our algorithm comprises two parts: the optimization of the energy coefficients, and the optimization of metaparameters (i.e., RBF centers and radius/smoothness parameters). ### Optimization of RBF Coefficients We assume we have target stress and stiffness data \(\{s_{j},K_{j}\}\) available for a set of known strains \(\{E_{j}\}\). In Section 6, we describe how we obtain representative target data for 2D microstructures. At this point, we also assume that the energy RBF centers \(\{E_{i}\}\) are given. In the next subsection, we discuss how these centers are optimized. The energy function includes the following parameters to be optimized (see Section 4.2): stress and stiffness offsets \(s_{\mathrm{O}},K_{\mathrm{O}}\), and coefficients of stress and stiffness interpolants \(\{w_{i}\},\{W_{i}\}\). The stress offset is implicitly defined by constraining the stress to be zero at zero strain. From (10), we get: \[s(0)=s_{\mathrm{O}}+\sum_{i}\nabla\Psi_{\mathrm{GL},i}(0)+\sum_{i }\nabla\Psi_{\mathrm{HI},i}(0)=0\Rightarrow\] \[s_{\mathrm{O}}=-\sum_{i}\nabla\Psi_{\mathrm{GL},i}(0)-\sum_{i} \nabla\Psi_{\mathrm{HI},i}(0). \tag{12}\] We denote the remaining parameter set as \(p=(K_{\mathrm{O}},\{w_{i}\},\{W_{i}\})\). 
We compute these parameters by minimizing the difference between target and estimated stress and stiffness values. This is expressed formally as: \[p=\arg\min_{p}\sum_{j}\frac{1}{s_{\mathrm{RMS}}^{2}}\left\|s(E_{j},p)-s_{j} \right\|^{2}+\frac{1}{K_{\mathrm{RMS}}^{2}}\left\|K(E_{j},p)-K_{j}\right\|^{2}. \tag{13}\] Note that we normalize the stress and stiffness errors by the root-mean-square of target stress and target stiffness values, respectively. This optimization is a simple linear least squares problem, which yields a positive definite linear system for the solution of the parameters \(p\). ### RBF Metaparameters In addition to RBF coefficients, the elastic energy function is also parameterized by the number of RBFs, their centers, and other RBF-specific smoothness or support parameters (e.g., the variance of Gaussian RBFs). We have followed a greedy algorithm to optimize these metaparameters. We start with no RBFs, and we progressively add RBFs until the energy fitting error as defined in (13) is smaller than a target threshold. Given \(k\) RBFs, we first optimize the locations of the RBF centers \(\{E_{i},1\leq i\leq k\}\). We do this by clustering the target strain values \(\{E_{j}\}\) into \(k\) clusters using \(k\)-means clustering. Then we solve the optimization (13) while sweeping smoothness or support parameters, and we choose the optimal result. Fig. 3 shows example results of \(k\)-means clustering for two different microstructures. In some cases (e.g., top of Fig. 3), the target strains \(\{E_{j}\}\) are evenly distributed and a small number of RBFs may cover the domain well. In other cases (e.g., bottom of Fig. 3), the target strains show discontinuities, e.g., due to buckling of the microstructures, and a larger number of RBFs may be necessary. Our approach for selecting the RBF centers is not optimal, as the clustering algorithm does not account for local error. 
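A minimal sketch of this fitting pipeline (toy data and hypothetical parameters; the paper's full system also stacks stiffness equations from (11)) places centers by \(k\)-means and solves the linear least-squares problem for the gradient-interpolant coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
r0 = 0.5
phi = lambda d: np.sqrt(d @ d + r0 * r0)   # multiquadric
dphi = lambda d: d / phi(d)                # its gradient

# Toy training strains standing in for the homogenized samples {E_j}.
E_train = rng.uniform(-0.3, 0.3, size=(60, 3))

# Step 1 (Sec. 5.2): place k RBF centers with a few iterations of k-means.
k = 5
centers = E_train[rng.choice(len(E_train), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((E_train[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([E_train[labels == i].mean(0) if np.any(labels == i)
                        else centers[i] for i in range(k)])

# Step 2 (Sec. 5.1): the stress of gradient interpolants is LINEAR in {w_i}:
#   grad Psi_GI(E) = sum_i [phi_i I + dphi_i dE_i^T] w_i
def row_blocks(E):
    return np.hstack([phi(E - c) * np.eye(3) + np.outer(dphi(E - c), E - c)
                      for c in centers])

A = np.vstack([row_blocks(E) for E in E_train])   # (3J, 3k) system matrix
w_true = rng.normal(size=3 * k)                   # hypothetical ground-truth coefficients
b = A @ w_true                                    # target stress samples
w, *_ = np.linalg.lstsq(A, b, rcond=None)         # linear least-squares solve
print(np.linalg.norm(A @ w - b))                  # tiny residual: targets are recovered
```

Because the target here is generated by the model itself, the solve recovers it exactly; with real microstructure data the residual is the (normalized) fitting error of (13).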
There are other possible approaches for optimizing the metaparameters of the RBFs, such as recursive orthogonal least squares [1, 1], but we leave this to future work. In Section 7 we discuss how the fitting error is affected by the choice of metaparameters and RBF functions. ## 6 Homogenization of 2D Microstructures We apply our parametric energy model (Section 4) and estimation algorithm (Section 5) to design homogeneous mesoscale elasticity models for 2D microstructures. In doing so, we pay special attention to the generation of representative strain, stress and stiffness data for the estimation algorithm. To generate training data, we simulate 2D microstructures under planar deformations with periodic boundary conditions (PBCs), similar to the work of Schumacher et al. [1]. Specifically, to simulate the high-resolution microstructures with PBCs, we follow the method by Sperl et al. [1]. We model a repeatable tile of microstructure at high resolution, using a finite-element mesh. The positions of the mesh nodes are grouped as \(\mathbf{x}\), and they are governed by the combination of a coarse homogeneous deformation \(E\) and local mesh displacements \(\mathbf{u}\). Following Sperl et al., we apply a known coarse deformation \(E\), and we solve for mesh displacements that minimize the tile's elastic energy density \(\Psi\) under PBCs. Formally, this is expressed as: \[\mathbf{u}=\arg\min\Psi(\mathbf{x}(E,\mathbf{u})),\ \ \text{s.t.}\ \mathbf{c}( \mathbf{u})=0, \tag{14}\] where \(\mathbf{c}(\mathbf{u})\) includes PBCs as well as constraints to avoid net rigid motion of the tile. We produce training data in a controlled way, generating microstructure deformations that span planar uniaxial stretch deformations in all directions. These cover situations where there is a dominant direction of deformation, and were also the main focus of attention of several previous works [1, 18, 22]. Figure 3: Several results of \(k\)-means clustering for the computation of RBF centers. The plots show the distribution of training strains for two different microstructures, and RBF centers with 5 vs. 10 clusters. Microstructure 9 exhibits buckling effects that make the training data discontinuous, and it requires more RBFs to cover the domain well. 
The rotation-invariant part of the deformation gradient can be defined as \(F=\text{Rot}(\theta)\operatorname{diag}(\lambda_{1},\lambda_{2})\operatorname{ Rot}(\theta)^{T}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are principal stretches, and \(\theta\) is the direction of stretch. We regularly sample the first principal stretch \(\lambda_{1}\) in the range \(0.9\)--\(2.0\), and the stretch direction \(\theta\) in the range \(0\)--\(\pi\). For each combination \((\lambda_{1},\theta)\), we simulate the microstructure and we search for the orthogonal stretch \(\lambda_{2}\) that produces zero orthogonal stress, i.e., \(\frac{\partial\Psi}{\partial\lambda_{2}}=0\). Then, we add two more deformations by changing the orthogonal stretch by \(\pm 0.05\). We collect the full set of deformations and compile the coarse strain, stress, and tangent stiffness for each deformation, \(\{E_{j},s_{j},K_{j}\}\). The training data is roughly centered around uniaxial stretches in all directions. Fig. 4 shows training data for two microstructures with very different behavior; the one on the left shows negative Poisson's ratio, and hence the training data populates a very different region in the strain domain. Please watch the accompanying video for animations of the training data generation and 3D visualizations of the data in the strain domain. ## 7 Experiments and Discussion We have tested our high-order elasticity interpolation methodology on the homogenization of 11 different periodic microstructures. All microstructures are shown in Fig. 6. 
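The uniaxial-stretch sampling of Section 6 can be sketched as follows; here a toy isotropic St. Venant-Kirchhoff energy with hypothetical Lame parameters stands in for the FEM microstructure solve of (14), so only the sampling logic is representative:

```python
import numpy as np

# Toy isotropic St. Venant-Kirchhoff energy standing in for the microstructure
# simulation (the paper instead solves (14) on an FEM tile); Lame parameters
# mu, lam are hypothetical.
mu, lam = 1.0, 0.5

def psi(l1, l2):
    e = np.diag([(l1**2 - 1) / 2, (l2**2 - 1) / 2])   # Green strain of diag(l1, l2)
    return mu * np.trace(e @ e) + lam / 2 * np.trace(e) ** 2

def find_l2(l1, lo=0.3, hi=2.5):
    """Bisection for the orthogonal stretch with zero orthogonal stress, dPsi/dl2 = 0."""
    d = lambda l2: (psi(l1, l2 + 1e-6) - psi(l1, l2 - 1e-6)) / 2e-6
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if d(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

samples = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):   # stretch directions in [0, pi)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    for l1 in np.linspace(0.9, 2.0, 12):                 # first principal stretch
        l2 = find_l2(l1)
        for dl2 in (0.0, 0.05, -0.05):                   # +/- orthogonal perturbations
            samples.append(R @ np.diag([l1, l2 + dl2]) @ R.T)
print(len(samples))  # 8 directions x 12 stretches x 3 = 288 deformation gradients
```

Each sampled deformation gradient would then be converted to Voigt strain and paired with the simulated stress and tangent stiffness to form \(\{E_j, s_j, K_j\}\).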
They exhibit diverse nonlinearities and anisotropic behavior, including auxetic response. We start the section discussing choice and estimation of metaparameters. Then we analyze the fitting error across all microstructures, and we discuss the result of validation tests. We conclude with a discussion of comparisons to other methods. ### Metaparameters Our first test evaluates what type of RBF provides highest accuracy under the same number of parameters. We have tested four RBFs that are smooth, to ensure Hessians are well defined: multiquadric, Gaussian, inverse quadratic, and inverse multiquadric. \begin{table} \begin{tabular}{c|c} RBF & Stress and stiffness error (\%) \\ \hline Multiquadric & 6.53 \\ Gaussian & 6.89 \\ Inverse quadratic & 6.66 \\ Inverse multiquadric & 6.62 \\ \end{tabular} \end{table} Table 1: We fit all microstructures using four different RBFs with the same number of RBF centers (10), and the error differences are minimal. Figure 4: The plots compare the training data (colored according to the norm of stress) in strain domain, for two different microstructures. We also highlight a directional stretch from rest (blue) to a deformed configuration (red). Projecting the data to 2D we can clearly see the extremely different behavior of these two microstructures; the one on the right shows negative Poisson’s ratio for this stretch, and the training data populates a different region of the strain domain. Figure 5: This figure shows the fitting error for microstructure 1 as we sweep the smoothness radius of a multiquadric RBF. Note that the optimal radius gets smaller as we grow the number of RBFs. The plots are interrupted when the estimation problem becomes ill-conditioned. We have estimated all 11 microstructures following the method described in Section 5, with 10 RBF centers. As shown in Table 1, the differences across RBF types are minimal. This result concurs with previous experiments [1]. 
Given its slight advantage, we choose the multiquadric RBF \(\phi(r)=\sqrt{r^{2}+r_{0}^{2}}\). As discussed in Section 5.2, as part of our estimation algorithm, we sweep radius/smoothness parameters of the RBF. With the multiquadric RBF, this is the radius \(r_{0}\). Fig. 5 shows the total error for microstructure 1 as a function of \(r_{0}\), for different numbers of RBF centers. The error is not shown after a certain radius \(r_{0}\), because the fitting problem becomes ill-conditioned. Note that ill-conditioning occurs at smaller \(r_{0}\) as we add more RBFs and they get closer. For this reason, it is not possible to choose a single optimal value of \(r_{0}\) for all numbers of RBFs. ### Fitting Error and Validation We have fitted the training stress and stiffness of all test microstructures. We increase the number of RBFs until we reach an average error of 5% between stress and stiffness, but we stop the process if we reach 19 RBFs. See (13) for the error definition and normalization based on RMS values. Table 2 summarizes the fitting quality across all materials. The stress error is below or just above 5% for all materials, and the stiffness error is below 10% for all materials except two (which suffer errors above 20%). For some materials, adding more RBFs produced only a marginal gain. Those cases probably require higher local control, with non-uniform selection of RBF centers and smoothness radius. Detailed fitting results for all materials are shown in Table 5 and Table 6. These tables show the norm of all values of stress and stiffness in the training data, the fitted values, and the error percentage (normalized with respect to RMS values). Interestingly, the error in stress remains low and is spread across the domain for many materials, although it shows high local values for some materials. On the other hand, the error in stiffness shows some high spikes for most of the materials. 
This again suggests that higher local control is needed for higher accuracy. ### Extrapolation The training data for the model consists of uniform uniaxial stretch data within a prescribed range. Therefore, we regard and test extrapolation in multiple ways. One is to extrapolate the energy behavior outside the range of strains in the training data. We have no a priori expectation for the model to succeed in this though, as the behavior of the microstructure materials may be unpredictable outside the training range. The other one is to extrapolate to non-uniform deformations. This, in contrast, is an expected and critical behavior, as it makes the model practical for real applications. To evaluate extrapolation outside the range of strains, we have performed two tests. First, we trained using data from the lower half of the stretch range, and tested extrapolation to the upper half. The error on the training data was 5.27% \(\pm\) 3.48% across all 11 materials, and on the test data it was 114.44% \(\pm\) 60.70%. Second, we trained using data from half of the stretch directions, and tested extrapolation to the other half. The error on the training data was 12.39% \(\pm\) 9.10% across all 11 materials, and on the test data it was 147.08% \(\pm\) 131.45%. As expected, the models fail to extrapolate. But this is not a limitation of the methodology; it is an inherent challenge of the problem, because the behavior outside the training range may be highly nonlinear and unpredictable. For this reason, we exhaustively sample the expected deformation range as part of training. To evaluate extrapolation to non-uniform strains, we have simulated large patches of microstructures, both with high-resolution FEM simulations, and with coarse simulations using our fitted energy models. Fig. 7 shows two comparisons, for two different microstructures. 
We demonstrate that, thanks to the fitted energy models, we can replicate the behavior of complex microstructure patches (4252 and 5786 finite elements each) with coarse simulation meshes (36 and 48 elements each). ### Comparisons Our first comparison analyzes whether some terms of our elasticity model are more relevant. To this end, we compared (a) fitting stress data only using stress interpolants only, (b) fitting stiffness data only using stiffness interpolants only, and (c) our full method fitting both stress and stiffness data using both stress and stiffness interpolants, on microstructure 1. For a fair comparison, we used the same number of parameters (36) in all cases: (a) 11 RBFs plus stress offset, (b) 5 RBFs plus stiffness offset, and (c) 3 RBFs and both stress and stiffness offsets. As shown in Table 3, our method achieves the best balance in fitting both stress and stiffness. Fitting stiffness only leads to higher stress error. Fitting stress only produces high stiffness error, but most importantly there is no control over the quality of the stiffness, which can lead to unstable material models. We have also compared our method to models that interpolate the elastic energy directly, hence they do not provide direct control for stress and stiffness as in our method. We designed an interpolated energy model of the form \(\Psi=\sum_{i}\phi_{i}w_{i}\), with scalar RBF coefficients \(w_{i}\)[10]. We compared (a) fitting energy data with energy interpolation, (b) fitting our stress and stiffness metric with energy interpolation, and (c) our method. For a fair comparison, we used the same number of parameters (54) in all cases: (a) and (b) 54 RBFs, and (c) 5 RBFs and both stress and stiffness offsets. As shown in Table 4, our approach achieves the highest accuracy in all cases. Finally, we also tried fitting the stress of microstructure 1 using a \begin{table} \begin{tabular}{c|c|c} & Stress error (\%) & Stiff. error (\%) \\ \hline Energy fit, energy interp. 
& 17.54 & 65.67 \\ Our fit, energy interp. & 6.43 & 16.56 \\ Our method & 3.06 & 9.21 \\ \end{tabular} \end{table} Table 4: Under the same number of parameters, we have evaluated the accuracy of fitting energy data with energy interpolation, fitting our stress and stiffness metric with energy interpolation, and our method. Our approach achieves the highest accuracy. \begin{table} \begin{tabular}{c|c|c} & Stress error (\%) & Stiffness error (\%) \\ \hline Stress fit & 7.67 & 30.23 \\ Stiffness fit & 19.72 & 18.98 \\ Stress + stiffness fit & 12.28 & 19.40 \\ \end{tabular} \end{table} Table 3: Under the same number of parameters, we have evaluated the accuracy of fitting stress data only, stiffness data only, or our combined stress and stiffness fitting. Our approach keeps the best balance in stress and stiffness error. Figure 7: Here, we evaluate our model on non-uniform strain deformations. We compare the simulation of high-resolution FEM microstructure models with coarse FEM models using our fitted energies. The images show two of the microstructures in the data set, under Dirichlet conditions on part of the boundary, and zero-traction Neumann conditions on the rest. The tested microstructure patches consist of 63 and 48 tiles, and were simulated using FEM models with 4252 and 5786 elements, respectively. The coarse models use 36 and 48 quad meshes. As shown in the overlays, the match between our fitted model and the full microstructure simulations is practically perfect. non-conservative stress interpolation method. In particular, we formulated the stress \(s=\sum_{i}\phi_{i}w_{i}\), with vector RBF coefficients \(w_{i}\), as done by Wang et al. [20]. The optimization required 20 RBFs to reach a stress error below 5%. Fig. 8-left shows the distribution of stress error. Most importantly, we quantified the curl of stress, \(\nabla\times s\), and we normalized it by the RMS of stiffness. Note that the curl measures the non-symmetry of the Hessian. As shown in Fig. 
8-right, the curl of stress reached over 30% of the RMS of stiffness at times. ## 8 Conclusions and Future Work In this paper, we have presented a novel formulation of elastic energy models based on high-order interpolants. The interpolants extend scalar RBFs to provide local control over derivatives of the energy function, namely stress and stiffness. We have shown that, when applied to the homogenization of 2D microstructures, our formulation provides higher accuracy than previous approaches. The design of optimal high-order RBF interpolants is still an active research topic in numerical analysis [5], and our methodology could see applicability in general high-order interpolation problems, beyond elastic simulation. To help with reproducibility, a sample implementation is available in the project webpage [http://mslab.es/projects/HiOinterp](http://mslab.es/projects/HiOinterp). We have also identified limitations that could motivate future work. In particular, our current estimation methods appear limited when the stress or stiffness have strong local discontinuities. This could be addressed by distributing RBF centers with non-uniform density and non-uniform radius. Similarly, it would be beneficial to sample the deformation range in an adaptive manner, adding training samples where nonlinearity appears higher. In general, it would be advantageous to find ways to make the parameterization of the resulting energy more compact. We have applied our formulation and methodology only to in-plane deformation of 2D microstructures. The possible extensions include: 3D microstructures, the bending response of thin shells (necessary to apply the method to 3D cloth simulation), plasticity, and/or viscosity. Some of the extensions may be straightforward, such as 3D microstructures or modeling viscosity by interpolating dissipation potentials [2]; others are unclear. 
Finally, it would be interesting to use our methodology in the context of other applications beyond example-based homogenization. These could include estimating materials from other types of data (e.g., force-deformation examples, or sparse observations of space-time deformations), or using the model in the context of material exploration. Obtaining homogenized strain from real-world force-deformation examples is straightforward. Stress can be obtained based on boundary forces [14]. Stiffness is not immediate, but it could be obtained through finite-difference approximation using incremental deformations. **Acknowledgments.** We would like to thank the anonymous reviewers for their feedback. We also want to thank Igor Santestenban for help with the rendering pipeline. This work was funded in part by the European Research Council (ERC-2017-CoG-772738 TouchDesign).
2306.06686
UAV Trajectory and Multi-User Beamforming Optimization for Clustered Users Against Passive Eavesdropping Attacks With Unknown CSI
This paper tackles the fundamental passive eavesdropping problem in modern wireless communications in which the location and the channel state information (CSI) of the attackers are unknown. In this regard, we propose deploying an unmanned aerial vehicle (UAV) that serves as a mobile aerial relay (AR) to help ground base station (GBS) support a subset of vulnerable users. More precisely, our solution (1) clusters the single-antenna users in two groups to be either served by the GBS directly or via the AR, (2) employs optimal multi-user beamforming to the directly served users, and (3) optimizes the AR's 3D position, its multi-user beamforming matrix and transmit powers by combining closed-form solutions with machine learning techniques. Specifically, we design a plain beamforming and power optimization combined with a deep reinforcement learning (DRL) algorithm for an AR to optimize its trajectory for the security maximization of the served users. Numerical results show that the multi-user multiple input, single output (MU-MISO) system split between a GBS and an AR with optimized transmission parameters without knowledge of the eavesdropping channels achieves high secrecy capacities that scale well with increasing the number of users.
Aly Sabri Abdalla, Ali Behfarnia, Vuk Marojevic
2023-06-11T14:01:15Z
http://arxiv.org/abs/2306.06686v2
UAV Trajectory and Multi-User Beamforming Optimization for Clustered Users Against Passive Eavesdropping Attacks With Unknown CSI ###### Abstract This paper tackles the fundamental passive eavesdropping problem in modern wireless communications in which the location and the channel state information (CSI) of the attackers are unknown. In this regard, we propose deploying an unmanned aerial vehicle (UAV) that serves as a mobile aerial relay (AR) to help ground base station (GBS) support a subset of vulnerable users. More precisely, our solution (1) clusters the single-antenna users in two groups to be either served by the GBS directly or via the AR, (2) employs optimal multi-user beamforming to the directly served users, and (3) optimizes the AR's 3D position, its multi-user beamforming matrix and transmit powers by combining closed-form solutions with machine learning techniques. Specifically, we design a plain beamforming and power optimization combined with a deep reinforcement learning (DRL) algorithm for an AR to optimize its trajectory for the security maximization of the served users. Numerical results show that the multi-user multiple input, single output (MU-MISO) system split between a GBS and an AR with optimized transmission parameters without knowledge of the eavesdropping channels achieves high secrecy capacities that scale well with increasing the number of users. Index Terms: UAV-assisted, beamforming, DRL, eavesdropping, MU-MISO, physical layer security, power control, trajectory optimization. ## I Introduction Unmanned aerial vehicles (UAVs) are envisioned to improve the next generation of wireless communication systems, 6G and beyond, by providing flexible, intelligent, secure, and limitless connectivity [1, 2, 3]. Steps to identify the challenges and solutions of emerging cellular networks to serve UAVs are being undertaken by the 3rd Generation Partnership Project (3GPP) [4]. 
A prominent use case for a UAV is the aerial relay (AR), which supports extended coverage or higher system capacity at low cost. However, the largely line of sight (LoS) air-to-ground (A2G) communications between UAVs and user equipment (UEs) make the system vulnerable to a variety of attacks [5]. Eavesdropping is a major passive attack that can compromise communications channels and gain access to private and sensitive user information. Physical layer security (PLS) has been introduced as a powerful tool to secure communication links by using the physical characteristics of wireless communication channels [6, 7, 8]. UAVs can benefit from physical layer security by applying the latest technologies such as artificial intelligence (AI) methods as well as various communication techniques to mitigate malicious behavior in the network. However, applying such techniques is complicated by three factors: (i) the need to coordinate between a ground base station (GBS) and the AR to determine which users should be served by which station, (ii) handling the resource limitations and trajectory of ARs to serve specific UE(s), and (iii) choosing well-suited learning and communication techniques for the GBS and the AR to dynamically maximize the security metrics. ### _Related Work_ The recent related works can be classified into three groups: i) beamforming-aided secure communications; ii) UAV-aided secure communications; iii) beamforming and UAV-aided secure communications. Table I provides a summary of the prior art and proposed research related to the work presented in this paper. **Beamforming-aided Secure Communications:** Transmit beamforming limits the radio frequency (RF) propagation footprint and thus implicitly secures the propagation channel without high computational requirements at the receiver, as compared to cryptographic security schemes. 
Carefully designing the beam patterns of antennas at the transmitter, receiver, or both can enhance system performance and security parameters, such as the signal-to-interference plus noise ratio (SINR) and secrecy rate, respectively. Researchers have studied how to leverage and optimize beamforming for improving the PLS of current and future wireless communication networks. The work presented in [9] investigates the achieved secrecy sum rate for a multi-cell multiple-input multiple-output (MIMO) system which is under a passive eavesdropping attack. The power allocation between artificial noise (AN) and the information signal is managed to maximize the sum secrecy rate with imperfect channel state information (CSI), which is derived using regularized channel inversion (RCI) precoding. Reference [10] proposes strategies of combining AN and beamforming to achieve high secrecy performance for massive MIMO systems in spite of single-antenna active eavesdropping attacks that attempt to spoil the channel estimation acquisition at the BS. Reference [11] derives the multiple-input, single-output (MISO) beamforming design for random wireless networks with statistical CSI in an environment with eavesdroppers and interferers. **UAV-aided Secure Communications:** Reference [12] demonstrates the applicability of maximizing the achievable average secrecy rate by optimizing the AN transmission and UAV trajectory. Reference [13] proposes using the UAV as a relay to improve the secrecy rate by jointly optimizing the source/relay transmit power and the UAV trajectory. In our previous work [14], we proposed a deep Q-learning (DQL) algorithm to optimize the secrecy rate by optimizing the trajectory of the UAV relay and the transmit power without the availability of the CSI of the wiretap channel. **Beamforming Plus UAV-Aided Secure Communications:** Combining beamforming and UAVs has been considered an enhanced PLS technique in advanced wireless communications. 
For example, [15] introduces the multi-objective dragonfly algorithm (MODA) to solve the multi-objective optimization problem for enhancing the minimum secrecy rate between the UAV node and a single UE for different clusters. The work presented in that paper assumes perfect CSI conditions for the BS and focuses on optimizing the UAV performance. Reference [16] proposes a multi-agent deep reinforcement learning (DRL) algorithm to maximize the secrecy capacity of a multi-user system by optimizing the trajectory of the aerial BS and the beamforming matrix of the jammer UAV interfering with the eavesdroppers. The authors of [17] propose an iterative optimization approach that alternately optimizes the beamforming of satellite transmitters and the power allocation of the UAV acting as an aerial relay and friendly jammer supporting multi-beam satellite-enabled vehicle communication in the presence of eavesdropping. ### _Contribution_ In this paper, we aim to mitigate passive eavesdropping attacks, where an eavesdropper illegitimately wiretaps the legitimate wireless communication links. To this end, we propose a combination of machine learning, deep reinforcement learning, and multi-antenna techniques at the BS and the AR to maximize the security of UEs in a wireless communication network. The contributions of this paper are: * We define a practical optimization problem to maximize the channel secrecy capacity without CSI knowledge of the wiretap channel. * We introduce a framework for effectively solving this problem by means of user clustering, beamforming and power control, and AR trajectory optimization. We design a DRL solution for the trajectory optimization and leverage closed-form solutions for the beamforming and transmit and relay power allocation. * We provide a comprehensive numerical analysis that demonstrates the effectiveness of the proposed tools. 
\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Category** & **Ref.** & **Objective Metric** & **Attack type** & **Strategy** & **Attackers’ CSI** \\ \hline Beamforming & [9] & Secrecy sum rate & Passive eavesdroppers & RCI is adopted to derive the power allocation between AN and the information signal. & Perfect CSI with channel errors \\ \cline{2-6} & [10] & Secrecy rate & Active eavesdropper & Analytical framework to find the best combination of AN and beamforming. & Perfect CSI \\ \cline{2-6} & [11] & Secrecy rate & Active and passive eavesdroppers & Analytical framework to design the beamforming. & Statistical CSI \\ \hline \hline UAV & [12] & Average secrecy rate & Passive eavesdropper & Optimizing the UAV’s trajectory and AN allocation via an iterative algorithm. & Perfect CSI \\ \cline{2-6} & [13] & Secrecy rate & Passive eavesdropper & Jointly optimizing the source/UAV relay transmit power and the UAV trajectory through an iterative algorithm. & Perfect CSI \\ \cline{2-6} & [14] & Secrecy rate & Passive eavesdropper & The UAV’s trajectory and transmit power allocation are jointly optimized by applying a DQL algorithm. & Unknown CSI \\ \hline \hline Beamforming and UAV & [15] & Minimum secrecy rate & Passive eavesdropper & Jointly optimizing the UAV beamforming and position to enhance the UE’s secrecy rate by applying the multi-objective dragonfly algorithm. & Perfect CSI \\ \cline{2-6} & [16] & Secrecy capacity & Passive eavesdroppers & A DRL algorithm is proposed to optimize the UAV trajectory and the beamforming of the transmitter and jammer UAVs. & Perfect CSI \\ \cline{2-6} & [17] & Secrecy rate & Passive eavesdropper & Jointly optimize the beamforming of the multi-beam satellite and the power allocation of the UAV through an iterative alternating optimization approach. 
& Perfect CSI \\ \hline \hline This work & & Secrecy sum capacity & Passive eavesdroppers & User clustering for association with the GBS and AR, where a DQL is designed to optimize the UAV trajectory, beamforming, and power control without knowledge of the wiretap CSI. & Unknown CSI \\ \hline \end{tabular} \end{table} Table I: Prior Art and Proposed Research. The rest of the paper is organized as follows. Section II presents the system model. Section III formulates the problem and defines the relevant metrics. Section IV derives the solution. Numerical results and analyses are presented in Section V. Section VI provides the concluding remarks. ## II System Model We consider a ground base station (GBS) serving ground UEs whose communication links are subject to passive eavesdropping attacks. The eavesdroppers have a radio receiver and can wiretap the downlink transmission. A UAV acting as an AR is dispatched to support secure communications. This scenario is illustrated in Fig. 1. We use the following notation: lower-case letters represent scalars and bold lower-case letters denote vectors. Bold upper-case letters are used for matrices. Tr(\(\mathbf{S}\)) and \(\mathbf{S}^{-1}\) represent the trace and the inverse of a square matrix \(\mathbf{S}\), respectively. The operator (.)\({}^{T}\) denotes transpose, and the operator (.)\({}^{\dagger}\) denotes conjugate transpose. \(\mathbf{S}(i,j)\) denotes the \((i,j)\)th element of matrix \(\mathbf{S}\) and Rank(\(\mathbf{S}\)) denotes the rank of the matrix. \(||\mathbf{v}||\) represents the Euclidean norm of a complex vector \(\mathbf{v}\). Also, \(|v|\) denotes the norm of a complex number \(v\). \(\mathcal{C}^{a\times b}\) denotes a complex vector or matrix of dimension \(a\times b\). A complex normal random vector with mean vector \(\mathbf{m}\) and covariance matrix \(\mathbf{\Sigma}\) is denoted by \(\mathcal{CN}(\mathbf{m},\mathbf{\Sigma})\), and \(\sim\) implies "distributed as". 
### _Channel model_ #### Ii-A1 Air-to-ground To model the A2G communication channel between the UAV and the ground receivers, we consider small-scale Rician fading, where the line-of-sight (LoS) component coexists with non-LoS (NLoS) components [18]. The GBS and the AR are each equipped with a uniform linear array (ULA) of \(M\) and \(N\) antennas, respectively. The A2G channel model, \[\mathbf{G}_{TR}=\frac{\sqrt{\lambda_{0}}}{d_{TR}^{\alpha}}\bigg{(}\sqrt{\frac{ \beta}{1+\beta}}\ \mathbf{G}_{\mathbf{TR}}^{\mathbf{LoS}}+\sqrt{\frac{1}{\beta+1}}\ \mathbf{G}_{\mathbf{TR}}^{\mathbf{NLoS}}\bigg{)}, \tag{1}\] is obtained as the superposition of the LoS and NLoS channel components, where \(\lambda_{0}\) is the path loss at the reference distance of \(1\)\(m\), \(d_{TR}\) is the 3D distance between the GBS and the AR, \(\alpha\) is the path loss exponent, and \(\beta\) is the Rician factor. Without loss of generality, the entries of \(\mathbf{G}_{\mathbf{TR}}^{\mathbf{NLoS}}\) are assumed to be independent and identically distributed (i.i.d.) zero-mean and unit-variance circularly symmetric complex Gaussian (CSCG), i.e., \(\sim\mathcal{CN}(0,1)\). The LoS component, \[\mathbf{G}_{TR}^{LoS}=\mathbf{g}_{\mathbf{TR}}^{\mathbf{(A)}}\ \mathbf{g}_{\mathbf{TR}}^{\mathbf{(D)}}, \tag{2}\] where \[\mathbf{g}_{\mathbf{TR}}^{\mathbf{(A)}}=\Big{[}1,e^{-j\frac{2\pi}{\lambda}\Upsilon\Lambda^{TR}},\cdots,e^{-j\frac{2\pi}{\lambda}(N-1)\Upsilon\Lambda^{TR}}\Big{]} \tag{3}\] and \[\mathbf{g}_{\mathbf{TR}}^{\mathbf{(D)}}=\Big{[}1,e^{-j\frac{2\pi}{\lambda}\Upsilon\Gamma^{TR}},\cdots,e^{-j\frac{2\pi}{\lambda}(M-1)\Upsilon\Gamma^{TR}}\Big{]} \tag{4}\] correspond to the channel contributions from the angle-of-arrival (AoA) and angle-of-departure (AoD) between the GBS and the AR. 
Parameter \(\lambda\) is the carrier wavelength, \(\Upsilon\) is the antenna separation, \(\Lambda^{TR}=\cos\Theta\,\sin\varphi\) is the AoA component (\(\Theta\)-azimuth and \(\varphi\)-elevation AoA), and \(\Gamma^{TR}=\sin\vartheta\,\cos\psi\) is the AoD component (\(\vartheta\)-elevation and \(\psi\)-azimuth AoD) of the transmitted signal from the GBS to the AR. The A2G channel between the AR and the ground users, \[\mathbf{G}_{RK}=\frac{\sqrt{\lambda_{0}}}{\mathbf{d}_{\mathbf{RK}}^{\mathbf{\alpha}}}\bigg{(} \sqrt{\frac{\beta}{1+\beta}}\ \mathbf{g}_{\mathbf{RK}}^{\mathbf{LoS}}+\sqrt{\frac{1}{\beta+1}}\ \mathbf{G}_{\mathbf{RK}}^{\mathbf{NLoS}}\bigg{)}, \tag{5}\] has an LoS and an NLoS term, where \(\mathbf{d}_{\mathbf{RK}}\) is the 3D distance between the AR and the ground user cluster. The \(\mathbf{G}_{\mathbf{RK}}^{\mathbf{NLoS}}\) entries follow the same CSCG distribution as \(\mathbf{G}_{\mathbf{TR}}^{\mathbf{NLoS}}\). The LoS term, \[\mathbf{g}_{\mathbf{RK}}^{\mathbf{LoS}}=\Big{[}1,e^{-j\frac{2\pi}{\lambda}\Upsilon\chi^{RK}},\cdots,e^{-j\frac{2\pi}{\lambda}(N-1)\Upsilon\chi^{RK}}\Big{]}, \tag{6}\] contains the AoD component \(\chi^{RK}=\cos\Phi\,\sin\Omega\) (\(\Phi\)-azimuth and \(\Omega\)-elevation AoD) of the transmitted signal from the ULA of the AR to the single-antenna users. #### Ii-A2 Ground-to-ground The alpha-beta-gamma (ABG) [19] channel model is adopted for the ground-to-ground (G2G) communication channels between the GBS and the eavesdropper and between the UEs and the eavesdropper. It is the closest path-loss model approximation to actual 5G ground communication measurement results and it is employed by standard organizations such as ITU-R, 3GPP, mmMAGIC, and QuaDRiGa [20]. 
It is defined as \[h_{G2G}(f,d) =10\ \rho_{G}\,\log_{10}\Big{(}\frac{d_{gg}}{1\,m}\Big{)}+J \tag{7}\] \[+10\ \gamma_{G}\,\log_{10}\Big{(}\frac{f_{c}}{1\,GHz}\Big{)}+\chi_{ \sigma}^{G2G},\] where \(d_{gg}\) is the 2D distance between the transmitter and receiver nodes, \(J\) is the intercept, and \(\rho_{G}\) and \(\gamma_{G}\) are the distance- and frequency-dependent exponents, respectively. Shadow fading, \(\chi_{\sigma}^{G2G}\), is modeled as a Gaussian random variable with zero mean and standard deviation \(\sigma_{sh}\). ### _Communication Model for Legitimate Users_ The \(M\)-antenna GBS can communicate with the \(K\) single-antenna UEs either directly or via the \(N\)-antenna AR at the same frequency, employing space-division multiple access (SDMA) and time-division multiple access (TDMA) [21]. The GBS serves \(K_{b}\) users directly and \(K_{r}\) users via the AR, where \(K=K_{b}+K_{r}\). In what follows, we provide the corresponding communication models and channel capacities. #### Ii-B1 Direct communication from GBS For the direct communication, the GBS forms multiple simultaneous beams to spatially separated users employing SDMA. Figure 1: System model. The transmit beamforming assigns one beam vector to each user. However, transmit power leakage can occur between beams, causing multi-user interference. We consider the downlink transmission, where the GBS transmits \(K_{b}\) data streams to \(K_{b}\) users. The transmitted signal model is \[\mathbf{x}_{b}=\sum\limits_{k=1}^{K_{b}}\mathbf{w}_{b,k}\;s_{k}, \tag{8}\] where \(\mathbf{x}_{b}\in\mathbb{C}^{M\times 1}\), \(\mathbf{w}_{b,k}\in\mathbb{C}^{M\times 1}\) is the beamforming vector and \(s_{k}\) the transmitted information symbol for the \(k\)th user. The beamforming, or precoding, matrix of the GBS contains the \(K_{b}\) beamforming vectors, \(\mathbf{W}_{bk}\in\mathbb{C}^{M\times K_{b}}\), where \(\mathbf{W}_{bk}=[\mathbf{w}_{b,1},\cdots,\mathbf{w}_{b,K_{b}}]\). 
The allocated transmit power for the \(k\)th user can then be calculated as the squared norm of the beamforming vector, \(\parallel\mathbf{w}_{b,k}\parallel^{2}\). The received signal at the \(K_{b}\) users is expressed as \[\mathbf{y}_{0}=\mathbf{H}_{0}\,\mathbf{x}_{b}\,+\,\mathbf{n}_{0}, \tag{9}\] where \(\mathbf{y}_{0}\in\mathbb{C}^{K_{b}\times 1}\), \(\mathbf{H}_{0}\in\mathbb{C}^{K_{b}\times M}\) represents the channel between the \(M\) antennas of the GBS and the \(K_{b}\) single-antenna users, and \(\mathbf{n}_{0}\in\mathbb{C}^{K_{b}\times 1}\) represents noise. It is assumed that the noise at each user follows a complex normal distribution with zero mean and unit variance, i.e., \(n_{k}\sim\mathcal{CN}(0,1)\). The received signal at user \(k\), \[y_{0,k} =\mathbf{h}_{0,k}\,\mathbf{x}_{b}+\,n_{k},\] \[=\mathbf{h}_{0,k}\Big{(}\sum\limits_{i=1}^{K_{b}}\mathbf{w}_{b,i}\;s_{i} \Big{)}+\,n_{k},\] \[=\mathbf{h}_{0,k}\mathbf{w}_{b,k}\,s_{k}\,+\,\mathbf{h}_{0,k}\Big{(}\sum \limits_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{K_{b}}\mathbf{w}_{b,i}\,s_{i}\Big{)}\,+\,n_{k} \tag{10}\] has the signal-to-interference-plus-noise ratio (SINR) \[\gamma_{b,k}=\frac{\mid\mathbf{h}_{0,k}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k }\mid\mathbf{h}_{0,k}\mathbf{w}_{b,i}\mid^{2}+1}\,, \tag{11}\] where \(\mathbf{h}_{0,k}\in\mathbb{C}^{1\times M}\) denotes the MISO channel from the GBS to the \(k\)th user. The channel capacity of the direct link is obtained from \[C_{b,k}=\log_{2}\big{(}1+\gamma_{b,k}\big{)}. \tag{12}\] #### Ii-B2 Indirect communication via AR We assume time-slot-based synchronization between the GBS transmission and the AR transmission [22, 23]. In the odd time slots (phases), the BS transmits \(K_{r}\) data streams to the AR, each of which is intended for one UE. The transmission between the GBS and the AR can be modeled as a standard point-to-point MIMO channel. 
The received signal at the AR can be written as \[\mathbf{y}_{1} =\mathbf{H}_{1}\mathbf{x}_{b}+\mathbf{n}_{1},\] \[=\mathbf{H}_{1}\Big{(}\sum\limits_{k=1}^{K_{r}}\mathbf{w}_{b,k}\;s_{k} \Big{)}+\,\mathbf{n}_{1}, \tag{13}\] where \(\mathbf{y}_{1}\in\mathbb{C}^{N\times 1}\), \(\mathbf{H}_{1}\in\mathbb{C}^{N\times M}\) is the MIMO communication channel between the BS and the AR, and \(\mathbf{n}_{1}\sim\mathcal{CN}(0,\mathbf{I})\in\mathbb{C}^{N\times 1}\) is the noise vector. In the even time slots, the AR transmits \[\mathbf{x}_{r}=\mathbf{W}_{r}\;\mathbf{y}_{1}, \tag{14}\] where \(\mathbf{x}_{r}\in\mathbb{C}^{N\times 1}\) and \(\mathbf{W}_{r}\in\mathbb{C}^{N\times N}\) is the beamforming matrix. The received signals at the \(K_{r}\) UEs are modeled as \[\mathbf{y}_{2} =\mathbf{H}_{2}\;\mathbf{x}_{r}+\mathbf{n}_{2},\] \[=\mathbf{H}_{2}\left(\mathbf{W}_{r}\left(\mathbf{H}_{1}\Big{(}\sum\limits_{k= 1}^{K_{r}}\mathbf{w}_{b,k}\;s_{k}\Big{)}+\,\mathbf{n}_{1}\right)\right)+\mathbf{n}_{2}, \tag{15}\] where \(\mathbf{y}_{2}\in\mathbb{C}^{K_{r}\times 1}\), \(\mathbf{H}_{2}\in\mathbb{C}^{K_{r}\times N}\) is the A2G communication channel between the AR and the \(K_{r}\) UEs, and \(\mathbf{n}_{2}\sim\mathcal{CN}(0,\mathbf{I})\in\mathbb{C}^{K_{r}\times 1}\) is the noise vector. The \(k\)th user receives \[y_{2,k} =\mathbf{h}_{2,k}\mathbf{W}_{r}\;\mathbf{H}_{1}\;\mathbf{w}_{b,k}\;s_{k}\] \[+\mathbf{h}_{2,k}\mathbf{W}_{r}\;\mathbf{H}_{1}\Big{(}\sum\limits_{i\neq k} \mathbf{w}_{b,i}\;s_{i}\Big{)}\] \[+\mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{n}_{1}+n_{2,k}, \tag{16}\] where \(\mathbf{h}_{2,k}\in\mathbb{C}^{1\times N}\) denotes the MISO channel from the AR to the \(k\)th UE and \(n_{2,k}\sim\mathcal{CN}(0,1)\) is the additive noise. 
The SINR of this relayed communication link from the BS to the \(k\)th UE via the AR can then be calculated as \[\gamma_{r,k}=\frac{\mid\mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,k}\mid^{2}}{ \sum\limits_{i\neq k}\mid\mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,i}\mid^{2} +\parallel\mathbf{h}_{2,k}\mathbf{W}_{r}\parallel^{2}+1}\,. \tag{17}\] The channel capacity \(C_{r,k}\) of the indirect link is obtained from (12) using \(\gamma_{r,k}\) instead of \(\gamma_{b,k}\). Note that \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are directly influenced by UAV mobility due to changes in distance, altitude, and orientation relative to the ground receivers. ### _Communication Model for Eavesdroppers_ #### Ii-C1 Eavesdropping on the direct communication link The eavesdropper listens on the direct link between the GBS and the associated UEs and receives \[y_{0,e} =\mathbf{h}_{0,e}\;\mathbf{x}_{b}+n_{e},\] \[=\mathbf{h}_{0,e}\;\big{(}\sum\limits_{k=1}^{K_{b}}\mathbf{w}_{b,k}\;s_{k} \big{)}+n_{e},\] \[=\mathbf{h}_{0,e}\mathbf{w}_{b,k}\,s_{k}\;+\mathbf{h}_{0,e}\Big{(}\sum\limits_{ \begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{K_{b}}\mathbf{w}_{b,i}\;s_{i}\Big{)}+n_{e}, \tag{18}\] where \(\mathbf{h}_{0,e}\in\mathbb{C}^{1\times M}\) is the G2G communication channel between the BS and the eavesdropper and \(n_{e}\sim\mathcal{CN}(0,1)\) is the noise at the eavesdropper. The SINR associated with the direct link between the GBS and the eavesdropper--for the beam formed to user \(k\)--can be calculated as \[\gamma_{b,e,k}=\frac{\mid\mathbf{h}_{0,e}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k }\mid\mathbf{h}_{0,e}\mathbf{w}_{b,i}\mid^{2}+1}\,. 
\tag{19}\] Consequently, the capacity of the eavesdropper associated with the direct link from the BS to the \(k\)th user can be derived as \[C_{b,e,k} =\log_{2}\big{(}1+\gamma_{b,e,k}\big{)},\] \[=\log_{2}\bigg{(}1+\frac{\mid\mathbf{h}_{0,e}\mathbf{w}_{b,k}\mid^{2}}{ \sum\limits_{i\neq k}\mid\mathbf{h}_{0,e}\mathbf{w}_{b,i}\mid^{2}+1}\bigg{)}. \tag{20}\] #### Ii-C2 Eavesdropping on the relay communication link The eavesdropper can also wiretap the A2G relay communication link between the UAV and the UEs. Similar to Section II-B2, the capacity of the eavesdropper associated with the relay link can be derived as \[C_{r,e}=\log_{2}(1+\gamma_{r,e})\] \[=\log_{2}\bigg{(}1+\frac{\mid\mathbf{h}_{2,e}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w} _{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid\mathbf{h}_{2,e}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,i}\mid^{2}+\parallel\mathbf{h}_{2,e}\mathbf{W}_{r}\parallel^{2}+1}\bigg{)}, \tag{21}\] where \(\gamma_{r,e}\) is the corresponding SINR, \(\mathbf{h}_{2,e}\in\mathbb{C}^{1\times N}\) denotes the A2G channel between the UAV and the eavesdropper, and \(n_{2,e}\sim\mathcal{CN}(0,1)\) is the noise at the eavesdropper. ### _Secrecy Capacity_ The term _secrecy capacity_ is a measure of the information rate that can be transmitted securely without being intercepted. It is obtained as the difference between the achievable data rate of a legitimate receiver and the achievable data rate of an eavesdropper, taking into account the channel conditions and the employed security measures. It corresponds to the rate at which no data can be decoded by the eavesdropper [24]. 
For the system model of Section II, the average sum-secrecy capacity of the \(K_{b}\) UEs that are directly served by the GBS over \(T\) time slots is \[C_{sec,b} =\frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K_{b}}\left(C_{b,k}-C_{b,e,k}\right)^{+}\] \[=\frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K_{b}}\Bigg{[}\log_{2} \left(1+\frac{\mid\mathbf{h}_{0,k}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid\mathbf{h}_{0,k}\mathbf{w}_ {b,i}\mid^{2}+1}\right)-\] \[\log_{2}\left(1+\frac{\mid\mathbf{h}_{0,e}\mathbf{w}_{b,k}\mid^{2}}{\sum \limits_{i\neq k}\mid\mathbf{h}_{0,e}\mathbf{w}_{b,i}\mid^{2}+1}\right)\Bigg{]}^{+}. \tag{22}\] Likewise, the average sum-secrecy capacity of the \(K_{r}\) UEs served via the AR over \(T\) time slots is obtained as \[C_{sec,r} =\frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K_{r}}\left(C_{r,k}-C_{r,e }\right)^{+}=\frac{1}{T}\sum_{t=1}^{T}\] \[\sum_{k=1}^{K_{r}}\Bigg{[}\log_{2}\left(1+\frac{\mid\mathbf{h}_{2,k} \mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid\mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{H }_{1}\mathbf{w}_{b,i}\mid^{2}+\parallel\mathbf{h}_{2,k}\mathbf{W}_{r}\parallel^{2}+1}\right)\] \[-\log_{2}\left(1+\frac{\mid\mathbf{h}_{2,e}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w }_{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid\mathbf{h}_{2,e}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,i}\mid^{2} +\parallel\mathbf{h}_{2,e}\mathbf{W}_{r}\parallel^{2}+1}\right)\Bigg{]}^{+}, \tag{23}\] where \([\omega]^{+}\triangleq\max(\omega,0)\). The total secrecy capacity is \[C_{T}=C_{sec,b}+C_{sec,r}. \tag{24}\] Formulas (22)-(24) follow from information theory and provide quantitative measures of the level of secrecy achieved in the communication channels according to the system model of Fig. 1. The secrecy capacity is maximized by maximizing the SINRs at the legitimate receivers and minimizing the SINRs at the eavesdroppers. ## III Problem Formulation This paper aims to maximize the total secrecy capacity of the UEs, whether they are directly served by the GBS or through the AR. 
Considering the degrees of freedom for serving UEs directly or via the AR, we formulate two optimization problems. #### Iii-1 Direct communication For the directly served UEs, the optimization problem is defined as \[\underset{\mathbf{w}_{b,k}}{\text{max}} C_{sec,b}\] (25) subject to (s.t.) \[P_{b}\leq P_{b,max},\] where \(C_{sec,b}\) is the secrecy capacity defined in (22), \(P_{b,max}\) is the maximum transmit power of the GBS, and \(P_{b}=\text{Tr}\left(\mathbf{x}_{b}\mathbf{x}_{b}^{\dagger}\right)\) is the transmit power of the GBS. Problem (25) requires knowledge of the eavesdropping channel. We assume the location of the eavesdropper, and thus its CSI, to be unknown. This is the scenario of interest in practice, where it is difficult to detect or estimate the presence, location, or channel of eavesdroppers because of their passive nature. Therefore, we can only consider the capacity of the legitimate users and reformulate the optimization problem: \[\underset{\mathbf{w}_{b,k}}{\text{max}} \frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K_{b}}\Bigg{[}\log_{2}\left(1+ \frac{\mid\mathbf{h}_{0,k}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid\mathbf{h}_{0,k}\mathbf{w}_{b,i} \mid^{2}+1}\right)\Bigg{]}\] (26) s.t. \[\text{Tr}\big{(}\mathbf{x}_{b}\mathbf{x}_{b}^{\dagger}\big{)}\leq P_{b,max}.\] The eavesdropper location and channel are used only for calculating the resulting secrecy capacity for performance evaluation. 
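To make this evaluation step concrete, the per-user secrecy terms of (22), built from the legitimate SINR (11) and the eavesdropper SINR (19), can be computed as in the following numpy sketch; the function name and interface are illustrative, not taken from the paper, and unit-variance noise is assumed as in the system model.

```python
import numpy as np

def direct_link_secrecy(H0, h0e, W):
    """Sum-secrecy capacity of the direct GBS link, per (11), (19)-(20), (22).

    H0 : (K_b, M) user channels, h0e : (M,) eavesdropper channel,
    W  : (M, K_b) beamforming matrix; unit-variance noise is assumed.
    """
    Kb = H0.shape[0]
    sec = np.zeros(Kb)
    for k in range(Kb):
        sig_u = np.abs(H0[k] @ W[:, k]) ** 2
        int_u = sum(np.abs(H0[k] @ W[:, i]) ** 2 for i in range(Kb) if i != k)
        sig_e = np.abs(h0e @ W[:, k]) ** 2
        int_e = sum(np.abs(h0e @ W[:, i]) ** 2 for i in range(Kb) if i != k)
        C_u = np.log2(1 + sig_u / (int_u + 1))   # legitimate capacity (12)
        C_e = np.log2(1 + sig_e / (int_e + 1))   # eavesdropper capacity (20)
        sec[k] = max(C_u - C_e, 0.0)             # the [.]^+ operator
    return sec.sum()
```

For instance, with orthogonal unit-gain user channels and a vanishing eavesdropper channel, the secrecy capacity reduces to the legitimate sum rate.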
#### Iii-2 Relay communication For the UEs that are served via the AR, the optimization problem is defined as \[\underset{\{\mathbf{w}_{b,k},\mathbf{W}_{r},x_{r},y_{r},z_{r}\}}{\text{max}} C_{sec,r} \tag{27}\] s.t. \[P_{r}\leq P_{r,max}\] \[(x_{r},y_{r},z_{r})\leq(L_{x},L_{y},L_{z}),\] where \(C_{sec,r}\) is the secrecy capacity defined in (23), \(P_{r}=\text{Tr}\left(\mathbf{x}_{r}\mathbf{x}_{r}^{\dagger}\right)\) is the transmit power of the AR, \(P_{r,max}\) is its maximum value, and \((x_{r},y_{r},z_{r})\) are the 3D coordinates of the UAV, bounded by \((L_{x},L_{y},L_{z})\). Because the eavesdropper location and CSI are unknown, the optimization problem is rewritten as \[\underset{\{\mathbf{W}_{r},x_{r},y_{r},z_{r}\}}{\text{max}} \frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K_{r}}\Bigg{[}\log_{2}\left(1+\frac{\mid\mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,k}\mid^{2}}{\sum\limits_{i\neq k}\mid \mathbf{h}_{2,k}\mathbf{W}_{r}\mathbf{H}_{1}\mathbf{w}_{b,i}\mid^{2}+\parallel\mathbf{h}_{2,k}\mathbf{W}_{r} \parallel^{2}+1}\right)\Bigg{]}\] (28) s.t. \[\text{Tr}\big{(}\mathbf{x}_{r}\mathbf{x}_{r}^{\dagger}\big{)}\leq P_{r,max}\] \[(x_{r},y_{r},z_{r})\leq(L_{x},L_{y},L_{z}).\] Although we have incorporated practical system constraints in our model, we acknowledge that there are additional operational aspects, such as UAV energy consumption, flight time, and speed [25], which are not optimized in this paper. ## IV Proposed Solution Given the available resources, namely one multi-antenna GBS and one multi-antenna AR, the secrecy capacity optimization problem becomes a user association and transmission parameter optimization problem. We perform UE clustering for user association, followed by GBS and AR beamforming and transmit power control, and UAV trajectory optimization. Figure 2 illustrates this. 
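For reference, the relayed-link SINR (17), which underlies the relay-side objective in (28), can be evaluated numerically as in this sketch; the function name and argument shapes are our own illustrative choices, and unit-variance noise at the AR and the UEs is assumed as in the system model.

```python
import numpy as np

def relay_sinr(H2, Wr, H1, Wb, k):
    """SINR of the GBS -> AR -> UE_k link, per (17).

    H2 : (K_r, N) AR-to-UE channels, Wr : (N, N) AR beamformer,
    H1 : (N, M) GBS-to-AR channel,  Wb : (M, K_r) GBS beamformers.
    The squared norm of h_{2,k} W_r accounts for the amplified relay noise.
    """
    eff = H2[k] @ Wr @ H1                       # effective channel to UE k
    sig = np.abs(eff @ Wb[:, k]) ** 2
    interf = sum(np.abs(eff @ Wb[:, i]) ** 2
                 for i in range(Wb.shape[1]) if i != k)
    relay_noise = np.linalg.norm(H2[k] @ Wr) ** 2
    return sig / (interf + relay_noise + 1.0)
```

With all-identity channels and beamformers, for example, the amplified relay noise halves the effective SNR relative to the direct link.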
It is important to mention that the beamforming/power control and the UAV trajectory optimization are carried out through an iterative process. That is, the algorithm obtains the optimal power coefficients for every 3D location of the UAV. Hence, the beamforming and transmit power control of the UAV affect its trajectory adjustment. The details are discussed in Sections IV-B and IV-C. ### _User Clustering_ The goal of user clustering is to divide the \(K\) users into two clusters, one to be served by the GBS and the other by the UAV. For solving the user clustering problem, we could employ an exhaustive search, but it entails a high computational complexity that increases exponentially with the number of users. We instead apply K-means clustering, an unsupervised machine learning algorithm used for grouping a set of objects so that the similarity of members within a group and their dissimilarity with members of other groups are maximized. K-means works with any single- or multi-dimensional metric captured by the data and a user-defined target number of clusters [26]. It is a computationally efficient method compared to other techniques such as graph-theoretic clustering, fuzzy c-means clustering, and hierarchical clustering [27]. We consider the characteristics of wireless communication systems to determine the similarities of the data points. Because the objective is to associate users with base stations, one fixed (GBS) and one mobile (UAV), we take the normalized channel coefficients between the UEs and the GBS as the data points of the K-means clustering algorithm. This captures the variations of channel gains resulting from different RF propagation effects such as small-scale fading and shadow fading. 
Hence, we can define \[h^{n}_{b,k}=\frac{h_{b,k}}{\parallel h_{b,k}\parallel_{2}}, \tag{29}\] where \(h^{n}_{b,k}\) is the normalized channel gain, \(h_{b,k}\) is the channel gain between the GBS and the \(k\)th UE, and \(\parallel.\parallel_{2}\) is the \(L_{2}\) vector norm. Having these channel gains as data points, we apply the K-means clustering algorithm to determine the cluster centers, or centroids, and consequently the UEs associated with each centroid. The goal is to leverage the similarity of the channels between the GBS and the UEs to create two UE clusters, where the UEs of one cluster are to be served by the GBS and the UEs of the other cluster by the UAV. This approach applies the same clustering principle as other studies in the literature [28, 29, 30, 31]. The K-means clustering algorithm proceeds as follows [32, 33]: (i) The initial centroids \(C=\{c_{1},c_{2},\ldots c_{n}\}\) are randomly selected as the \(n\) cluster centers among the \(K\) available data points \(U=\{u_{1},u_{2},\ldots,u_{k},\ldots,u_{K}\}\). Here, we consider two clusters \(c_{1}\) and \(c_{2}\) for the GBS and the UAV, and \(K\) users, where \(u_{k}=h^{n}_{b,k}\) is the data point of the \(k\)th user. (ii) The distance between each data point (e.g., channel state) and the cluster centers is calculated to assign the data point to the nearest center. Different metrics can be used to measure the distance between data points, such as the Euclidean distance, the Manhattan distance, etc. In this paper, we use the squared Euclidean (\(L_{2}^{2}\)) distance. (iii) The centroids are updated to minimize the sum of squared distances between each user and its centroid, \[\min_{C}\sum_{k}\min_{r\in\{1,\ldots,R\}}\ d_{r,k} \tag{30}\] where \(d_{r,k}=\parallel u_{k}-c_{r}\parallel_{2}^{2}\) is the squared Euclidean distance between data point \(u_{k}\) and centroid \(c_{r}\), \(C\triangleq\{c_{r}\mid r=1,\ldots,R\}\), and \(R=2\) is the number of clusters. 
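Steps (i)-(iii) can be sketched in a few lines of numpy. In the sketch below, the inputs are real-valued channel-gain vectors, and initializing the centroids with the first \(R\) data points (rather than randomly) is an illustrative simplification for reproducibility; the function name is our own.

```python
import numpy as np

def kmeans_channel_clustering(h, R=2, iters=100):
    """K-means over normalized channel gains, per (29)-(30).

    h : (K, M) array of real channel gains h_{b,k}; each row is normalized
    by its L2 norm, and points are compared with squared Euclidean distance.
    Returns (labels, centroids).
    """
    feats = h / np.linalg.norm(h, axis=1, keepdims=True)     # eq. (29)
    cent = feats[:R].copy()                                  # illustrative init
    for _ in range(iters):
        # squared Euclidean distances d_{r,k} of every point to every centroid
        d = ((feats[:, None, :] - cent[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                            # nearest centroid
        new = np.array([feats[labels == r].mean(axis=0) if np.any(labels == r)
                        else cent[r] for r in range(R)])
        if np.allclose(new, cent):                           # members unchanged
            break
        cent = new
    return labels, cent
```

On two well-separated channel-gain groups, the routine recovers the expected GBS/UAV partition in a couple of iterations.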
For example, the distance between the normalized channel gains and the centroids is \(d_{r,k}=\parallel h^{n}_{b,k}-c_{r}\parallel_{2}^{2}\). Algorithm 1 presents pseudocode for the K-means clustering algorithm. The application of Algorithm 1 to one scenario is shown in Figure 3, wherein the data points are \(h^{n}_{b,k}\). In addition to the UEs' channel state, other data points can also be considered in the algorithm. For example, distance-based or rate-based clustering would group the UEs with the smallest distance to the GBS or the highest downlink rates, respectively. Each choice can affect the system performance and should be made based on the objective and the simplicity of obtaining the necessary information to calculate the value for each UE. In Section V.C, we discuss the results of different scenarios. Figure 2: Proposed solution flowchart. ```
Input: \(U\) and \(C\)
Output: the cluster members and cluster heads \(\mathcal{CH}_{c}\), \(\forall\,c\in C\)
1 Initialize the cluster head set \(C_{\mathcal{CH}}=\emptyset\) and \(c=1\);
2 while \(c\leq|C|\) do
3   Randomly select a cluster head \(\mathcal{CH}_{c}\) from \(U\);
4   Update \(C_{\mathcal{CH}}=\{\mathcal{CH}_{c},\forall\,c\in C\}\);
5   \(c=c+1\);
6 end while
7 repeat
8   For each user \(m\in U\), calculate the distance from each \(\mathcal{CH}_{c}\);
9   Fit each user to the closest cluster;
10  Update the cluster head \(\mathcal{CH}_{c}\) by taking the average of all the users \(m\) in the cluster;
11 until The cluster members do not change;
``` **Algorithm 1** K-means user clustering in the MU-MISO environment. ### _Optimal Beamforming and Power Control_ Problem (26) is a traditional beamforming and power control problem between a GBS and its users. This problem has been extensively studied in the literature [34, 35]. The beamforming vectors of the GBS are obtained by applying the weighted minimum mean square error (WMMSE) algorithm [36]. 
The WMMSE is an iterative closed-form algorithm that optimizes the transmitter and receiver precoding vectors to maximize the sum rate of all UEs under a GBS power constraint. The precoding solution for the direct communication links is then [37] \[\boldsymbol{W}_{\boldsymbol{bk}}=\Big{(}\boldsymbol{H}_{\boldsymbol{0}}^{H} \boldsymbol{Q}^{H}\boldsymbol{F}\boldsymbol{Q}\boldsymbol{H}_{\boldsymbol{0}} +\frac{\text{Tr}(\boldsymbol{FQQ}^{H})}{P_{b,max}}\boldsymbol{I}_{\boldsymbol{ M}}\Big{)}^{-1}\boldsymbol{H}_{\boldsymbol{0}}^{H}\boldsymbol{Q}^{H} \boldsymbol{F} \tag{31}\] where \(Q=diag\{q_{1},\cdots,q_{K_{b}}\}\) contains the receiver gains, \(F=diag\{f_{1},\cdots,f_{K_{b}}\}\) is the weight matrix, and \(\boldsymbol{I}_{\boldsymbol{M}}\) is the \(M\times M\) identity matrix. The beamforming and power control for the relay communication problem is solved in the remainder of this section. Inspired by the zero-forcing (ZF) criterion and the channel singular value decomposition (SVD) based structure introduced in [21], we first propose a beamforming matrix structure for the UAV (i.e., \(\boldsymbol{W}_{r}\)) to eliminate interference among users. This converts the optimization problem (28) into a simplified convex optimization problem. Then, we solve the modified optimization problem using the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions to obtain the UAV's optimal beamforming matrix. #### Iv-B1 Beamforming Matrices Beamforming is done at the GBS and the AR, each serving a distinct set of users. By using the concepts of channel inversion, ZF, and linear algebra, the multi-user interference can be minimized. 
From (15), the received signal at the \(K\) UEs served by the UAV can be written as \[\boldsymbol{y}_{2}=\boldsymbol{H}_{2}\boldsymbol{W}_{r}\boldsymbol{H}_{1} \boldsymbol{W}_{br}\,\boldsymbol{s}_{K}\,+\,\boldsymbol{H}_{2}\boldsymbol{W}_{ r}\,\boldsymbol{n}_{1}\,+\,\boldsymbol{n}_{2}, \tag{32}\] where \(\boldsymbol{s}_{K}\in\mathbb{C}^{K\times 1}\) corresponds to the \(K\) transmit signals to the \(K\) UEs. The ZF criterion requires \(\boldsymbol{H}_{2}\boldsymbol{W}_{r}\boldsymbol{H}_{1}\boldsymbol{W}_{br}\) to be a diagonal matrix with rank \(K\), which implies that \(\text{Rank}(\boldsymbol{H}_{1})\geq K\) and \(\text{Rank}(\boldsymbol{H}_{2})\geq K\)[21]. By applying the SVD, \(\boldsymbol{H}_{1}\) and \(\boldsymbol{H}_{2}\) can be expressed as \[\boldsymbol{H}_{1}=\boldsymbol{U}_{1}\,\boldsymbol{\Sigma}_{1} \,\boldsymbol{V}_{1}^{\dagger}, \tag{33}\] \[\boldsymbol{H}_{2}=\boldsymbol{U}_{2}\,\boldsymbol{\Sigma}_{2} \,\boldsymbol{V}_{2}^{\dagger}, \tag{34}\] where \(\boldsymbol{U}_{i}\) and \(\boldsymbol{V}_{i}\), for \(i=1,2\), are unitary matrices and \(\boldsymbol{\Sigma}_{i}\in\mathbb{C}^{K\times K}\) is a diagonal matrix with positive diagonal elements. Knowing the channel coefficients at the BS and at the UAV, to satisfy the ZF criterion, we propose the following beamforming matrices for the GBS and the UAV \[\boldsymbol{W}_{br}=\boldsymbol{V}_{1}\,\boldsymbol{\Lambda}_{b} \,\boldsymbol{U}_{1}^{\dagger}, \tag{35}\] \[\boldsymbol{W}_{r}=\boldsymbol{V}_{2}\,\boldsymbol{\Lambda}_{r} \,\boldsymbol{U}_{1}^{\dagger}, \tag{36}\] where \(\boldsymbol{\Lambda}_{b}\) and \(\boldsymbol{\Lambda}_{r}\) are \(K\times K\) diagonal matrices. Without loss of generality, it can be assumed that the elements of these two diagonal matrices are non-negative, representing the allocated beamforming power at the BS and the UAV, respectively. 
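The structure of (33)-(36) can be checked numerically. The sketch below assumes square channels (\(M=N=K\)), so that all SVD factors are unitary, and reads the relay beamformer as \(\boldsymbol{W}_{r}=\boldsymbol{V}_{2}\boldsymbol{\Lambda}_{r}\boldsymbol{U}_{1}^{\dagger}\); it verifies that the unitary factors cancel pairwise in the end-to-end channel, which is the algebraic mechanism the proposed structure relies on.

```python
import numpy as np

# Sketch of the SVD-based beamforming structure in (33)-(36); the square-
# channel assumption (M = N = K) and the random power loadings are ours.
rng = np.random.default_rng(1)
K = 4
H1 = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))  # GBS -> AR
H2 = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))  # AR -> UEs

U1, s1, V1h = np.linalg.svd(H1)      # H1 = U1 diag(s1) V1^dagger, eq. (33)
U2, s2, V2h = np.linalg.svd(H2)      # H2 = U2 diag(s2) V2^dagger, eq. (34)

lam_b = np.diag(rng.uniform(0.5, 1.5, K))   # Lambda_b: GBS power loading
lam_r = np.diag(rng.uniform(0.5, 1.5, K))   # Lambda_r: AR power loading

W_br = V1h.conj().T @ lam_b @ U1.conj().T   # eq. (35)
W_r = V2h.conj().T @ lam_r @ U1.conj().T    # eq. (36)

# End-to-end channel: V1^dagger V1, U1^dagger U1, and V2^dagger V2 cancel,
# leaving H2 W_r H1 W_br = U2 Sigma_2 Lambda_r Sigma_1 Lambda_b U1^dagger.
eff = H2 @ W_r @ H1 @ W_br
pred = U2 @ np.diag(s2) @ lam_r @ np.diag(s1) @ lam_b @ U1.conj().T
assert np.allclose(eff, pred)
```

Note that `numpy.linalg.svd` returns the conjugate-transposed factor `Vh`, so \(\boldsymbol{V}\) itself is recovered as `Vh.conj().T`.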
#### Iv-B2 Optimal AR Transmit Power Allocation **Lemma IV.1**.: _The objective function defined in (28) can be written as_ \[\frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{K}\log_{2}\Big{(}1+\frac{\lambda_{r,k}^{2 }}{\lambda_{r,k}^{2}+1}\Big{)}, \tag{37}\] _where \(\lambda_{r,k}=\boldsymbol{\Lambda}_{r}(k,k)\)._ **Lemma IV.2**.: _The beamforming power constraint defined in (28) can be expressed as_ \[2\sum_{m=1}^{K}\sum_{n=1}^{K}|\boldsymbol{U}_{2}(m,n)|^{2}\,\sigma_{2,n}^{-2} \,\lambda_{r,m}^{2}\leq P_{r,max}, \tag{38}\] _where \(\sigma_{2,n}=\boldsymbol{\Sigma}_{2}(n,n)\)._ The lemmas are proved in the Appendix (Section VI). Leveraging (37) and (38), the beamforming power optimization problem for the UAV at any location can be written as \[\underset{\{\lambda_{r,m}\}}{\text{max}} \frac{1}{T}\sum_{t=1}^{T}\sum_{m=1}^{K}\log_{2}\Big{(}1+\frac{ \lambda_{r,m}^{2}}{\lambda_{r,m}^{2}+1}\Big{)}\] (39) s.t. \[2\sum_{m=1}^{K}\sum_{n=1}^{K}|\boldsymbol{U}_{2}(m,n)|^{2}\, \sigma_{2,n}^{-2}\,\lambda_{r,m}^{2}\leq P_{r,max}, \tag{40}\] \[0\leq\lambda_{r,m}\leq\lambda_{r,max},\quad m\in\{1,...,K\}, \tag{41}\] where \(P_{r,max}\) is the maximum available transmit power at the UAV, and \(\lambda_{r,max}\) is the maximum allocated power for each antenna. Figure 3: Channel-based clustering. The Lagrangian function of the optimization problem can be expressed as \[\mathcal{L}(\lambda_{r,l},\alpha_{1},\alpha_{2,l},\alpha_{3,l})=\sum_{l=1}^{K}\log_{2}\Big{(}1+\frac{\lambda_{r,l}^{2}}{\lambda_{r,l}^{2}+1}\Big{)}\] \[-\alpha_{1}\bigg{(}2\sum_{l=1}^{K}\sum_{n=1}^{K}|\boldsymbol{U}_{2}(l,n)| ^{2}\,\sigma_{2,n}^{-2}\,\lambda_{r,l}^{2}-P_{r,max}\bigg{)}\] \[-\sum_{l=1}^{K}\alpha_{2,l}\big{(}\lambda_{r,l}-\lambda_{r,max}\big{)}-\sum_{l=1}^{K}\alpha_{3,l}\big{(}-\lambda_{r,l}\big{)}, \tag{42}\] where \(\alpha_{1}\), \(\alpha_{2,l}\), and \(\alpha_{3,l}\) are the non-negative Lagrange multipliers corresponding to the power constraint (40) and the upper and lower bounds in (41), respectively. 
**Theorem IV.3**.: _The optimal beamforming power for the \(l\)th antenna of the UAV can be obtained as_ \[\lambda_{r,l}^{*}=\begin{cases}0&\lambda_{r,l}^{\dagger}\leq 0,(\alpha_{1}^{*} \,F\,\text{ln}\,2)>0.25\\ \lambda_{r,l}^{\dagger}&0<\lambda_{r,l}^{\dagger}<\lambda_{r,max}\\ \lambda_{r,max}&\lambda_{r,l}^{\dagger}\geq\lambda_{r,max}\end{cases}\] _in which_ \[\lambda_{r,l}^{\dagger}=\sqrt{\frac{1}{4}\Big{(}\sqrt{1+\frac{2}{ \alpha_{1}^{*}\,F\,\text{ln}\,2}}-3\Big{)}}, \tag{43}\] _where \(F\) is a constant equal to \(\sum\limits_{n=1}^{K}|\mathbf{U}_{2}(l,n)|^{2}\,\sigma_{2,n}^{-2}\), and the Lagrangian multiplier \(\alpha_{1}^{*}\) can be obtained by substituting (43) into the power constraint (40) when the equality holds._ The optimal beamforming matrix and transmit power formulations for the AR defined above are used for the UAV trajectory optimization. ### _UAV Trajectory Optimization_ The objective function of (28) is non-convex with respect to the parameters \(x_{r}\), \(y_{r}\), \(z_{r}\), and \(P\), as are the constraints, and the problem is NP-hard [38, 39, 21, 40]. We, therefore, propose a machine learning solution where the UAV trajectory is updated through a transition process based on the current system state. Since the next system state depends only on the current state and action, the process can be modeled as a Markov decision process (MDP). To cope with the intractably large state-action space, we propose a DQN. It is noteworthy that the proposed DQN builds on basic reinforcement learning algorithms such as Q-learning and deep reinforcement learning. The aim is to use the DQN as an alternative tool for solving this NP-hard optimization problem while consuming less power and fewer computational resources [41, 42, 43]. Depending on the application, one can extend the following framework to more advanced learning models that suit particular use cases. 
#### IV-C1 MDP Settings The MDP for the UAV agent is composed of the state space \(\mathcal{S}\), the action space \(\mathcal{A}\), the reward space \(\mathcal{R}\), and the transition probability space \(\mathcal{T}\). At time slot \(t\), the agent observes the state \(s_{t}\in\mathcal{S}\) and takes action \(a_{t}\in\mathcal{A}\) based on its policy. Depending on the distribution of the transition probability \(\mathcal{T}(s_{t+1}|s_{t},a_{t})\), the agent is then transferred to the new state \(s_{t+1}\). Since the transition probability is specific to the operational environment, we choose the Q-learning method as a model-free algorithm to find the best policy for each action in each state. This means that we do not need to know \(\mathcal{T}\), but we do need to carefully define the states, the actions, and the reward. **State:** The set of states is defined as \(\mathcal{S}=\{s_{1},s_{2},...,s_{t},...,s_{T}\},\) where \(t\) is the time slot index. Each state \(s_{t}\) corresponds to the 3D coordinates of the UAV and the users served by the AR. **Action:** The states transition according to the set of actions \(\mathcal{A}=\{a_{1},a_{2},...,a_{t},..,a_{T}\}\), where each action consists of three parts related to the UAV movement, \(a_{t}=\{\delta_{x},\delta_{y},\delta_{z}\}\), where \(\delta_{x}\), \(\delta_{y}\), and \(\delta_{z}\) represent the movement in the \(x\), \(y\), and \(z\) directions. Along each axis, the UAV can move in the positive or negative direction or remain in place. Hence, we consider \(3\) possible directional movements for each of the \(3\) axes of the AR trajectory, resulting in \(27\) possible actions for the AR. **Reward:** After taking action \(a_{t}\) in state \(s_{t}\), the UAV agent receives a reward \(R_{t}(s_{t},a_{t})\). The UAV gets more reward for actions that lead to higher legitimate user rates. 
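The action space above can be sketched by enumerating the \(3^{3}=27\) per-axis moves; the step sizes used here are the trajectory steps listed in Table II.

```python
import itertools

# Sketch of the UAV action space: -step, 0, or +step along each of the
# x, y, and z axes, i.e., 3^3 = 27 actions (Table II step sizes).
STEP_XY, STEP_Z = 0.5, 2.0   # metres

ACTIONS = [
    (dx * STEP_XY, dy * STEP_XY, dz * STEP_Z)
    for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3)
]

def apply_action(pos, action):
    """Transition the UAV part of the state: move (x, y, z) by the action."""
    return tuple(p + d for p, d in zip(pos, action))

print(len(ACTIONS))                                     # 27
print(apply_action((10.0, 10.0, 40.0), (0.5, -0.5, 2.0)))  # (10.5, 9.5, 42.0)
```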
We define the reward function accordingly: \[R_{t}(s_{t},a_{t})=\sum_{k=1}^{K_{r}}C_{r,k}. \tag{44}\] #### IV-C2 Deep Q-Network Method The DQN, initially proposed by Google DeepMind [44], integrates RL with deep learning methods. This technique uses the power of nonlinear function approximators, specifically DNNs, to approximate the Q-values and handle high-dimensional state-action problems. There are two DNNs of the same structure: a training network and a target network. The training network outputs the Q-values associated with the actions of the UAV in each state. The target network supervises the training network by providing the target Q-values obtained from the Bellman equation [45], \[Q^{*}(s,a)=E_{s^{\prime}}\bigg{[}R(s,a)+\gamma\times\max_{a^{\prime}\in \mathcal{A}}\ Q(s^{\prime},a^{\prime})\bigg{]}, \tag{45}\] which provides the optimal state-action pairs, where \(s^{\prime}\) and \(a^{\prime}\) denote the next state and action. The parameter \(\gamma\in(0,1)\) denotes the discount factor that controls the importance of the future reward. The target values are compared with the outputs of the training network to minimize the loss function, \[L(\theta)=\mathbb{E}\Bigg{[}\bigg{(}\Big{[}r_{t}+\gamma\times \max_{a\in\mathcal{A}}\ Q(s_{t+1},a_{t+1};\theta^{\dagger})\Big{]}-\] \[\Big{[}Q(s_{t},a_{t};\theta)\Big{]}\bigg{)}^{2}\Bigg{]}, \tag{46}\] where the Q-value of the first term is obtained from the target network and the Q-value of the second term is obtained from the training network. Parameters \(\theta^{\dagger}\) and \(\theta\) denote the weights of the target network and the training network, respectively. The \(\theta^{\dagger}\) coefficients are updated every few time slots in order to ensure the stability of the target values and, hence, facilitate stable learning. As the UAV takes an action, the system generates a record of experience. 
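The arithmetic of the Bellman target (45) and the squared loss (46) can be illustrated with a tabular stand-in; in the actual DQN the Q-values come from the target and training DNNs, while here plain arrays keep the computation visible.

```python
import numpy as np

# Tabular illustration of the Bellman target (45) and the loss (46).
GAMMA = 0.9   # discount factor, the value used in Table II

def bellman_target(reward, q_next):
    """Target value r + gamma * max_a' Q_target(s', a')."""
    return reward + GAMMA * np.max(q_next)

def td_loss(q_pred, reward, q_next):
    """Squared error between the prediction and the target, as in (46)."""
    return (bellman_target(reward, q_next) - q_pred) ** 2

q_next = np.array([1.0, 3.0, 2.0])   # target-network Q-values at s_{t+1}
print(bellman_target(0.5, q_next))   # 0.5 + 0.9 * 3.0 = 3.2
print(td_loss(3.0, 0.5, q_next))
```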
At time step \(t\), the experience contains the current state \(s_{t}\), the action \(a_{t}\), the reward \(r_{t}\), and the next state \(s_{t+1}\), formed as a tuple \(e_{t}=(s_{t},a_{t},r_{t},s_{t+1})\). Each such experience is stored in a replay memory with a capacity of \(N\), such that \(\mathcal{M}=\{e_{1},...,e_{t},...,e_{N}\}\). The memory is a queue-like buffer that stores the latest \(N\) experience vectors. We use a mini-batch sampled from the replay memory to feed the input of the training network. The main reason for using mini-batch samples from the replay memory is to break possible correlations between sequential states of the environment, and thereby facilitate generalization. The UAV applies a gradient descent algorithm, \[\nabla_{\theta}\,L(\theta)=-\mathbb{E}\Bigg{[}2\ \nabla_{ \theta}Q(s_{t},a_{t};\theta)\bigg{(}\,r_{t}+\,\gamma\ \times\] \[\max_{a\in\mathcal{A}}\ Q(s_{t+1},a_{t+1};\theta^{\dagger})-Q(s_{t },a_{t};\theta)\bigg{)}\Bigg{]}, \tag{47}\] to update the weights \(\theta\) of the training network with the aim of minimizing the prediction error. Finally, we apply the \(\epsilon\)-greedy algorithm to select an action while balancing the exploration and the exploitation of the UAV in the environment. In this algorithm, the UAV explores the environment with probability \(\epsilon\) by choosing a random action. Otherwise, the UAV exploits the environment with probability \(1-\epsilon\) by choosing the action that maximizes the Q-value function, i.e., \(a^{*}=\text{argmax}_{a\in\mathcal{A}}\ Q(s,a;\theta)\). A high value of \(\epsilon\) is initially set so that the UAV spends more time on exploration. As the agent obtains more knowledge about the environment, the \(\epsilon\) value is gradually decreased to leverage the accumulated experience and choose the best actions for the UAV, rather than continuing with the exploration. 
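The replay memory and the \(\epsilon\)-greedy rule described above can be sketched as follows; the Q-values passed to the selection rule would come from the training network, while here they are plain numbers.

```python
import random
from collections import deque

# Sketch of the replay memory M and the epsilon-greedy action rule.
class ReplayMemory:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest experiences dropped first

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))  # e_t = (s_t, a_t, r_t, s_{t+1})

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

def epsilon_greedy(q_values, epsilon, rng=random):
    """Random action with probability epsilon, otherwise argmax_a Q(s, a)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

memory = ReplayMemory(capacity=5)
for t in range(8):                             # only the last 5 tuples survive
    memory.push(t, 0, 0.0, t + 1)
print(len(memory.buffer), epsilon_greedy([0.1, 0.7, 0.2], epsilon=0.0))
```

In a full training loop, \(\epsilon\) would be decayed over the episodes, as described above.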
Algorithm 2 details the DQN-based algorithm used by the UAV agent for optimizing the sum-rate of the UEs that are served via the AR. In summary, the proposed techniques accomplish the following: i) user-BS association employing K-means clustering (e.g., channel, rate, or distance based), ii) multi-user beamforming and power management for the GBS, and iii) UAV trajectory optimization in conjunction with multi-user beamforming and power management. The objective is to maximize the secrecy rate, which can be used to evaluate the effectiveness of the proposed security measure in protecting against eavesdropping attacks [46, 47], especially in the context of wireless communication [48, 49, 50, 51]. Since the CSI of the eavesdropping channel cannot be obtained for passive, receive-only eavesdroppers, our solution maximizes the user rate so that it remains generally applicable: it requires no collaboration with eavesdroppers and, because the eavesdropper locations are unknown, it does not waste power on artificial noise transmitted in random directions when that power can instead be used to increase the user rate. The secrecy capacity also provides a unified measurement framework for the numerical analyses presented in the following section. ## V Numerical Analysis and Discussion In this section, we present simulation results to evaluate the secrecy performance of the UAV-assisted communications system, where users are clustered and served by a fixed and a mobile access point. In the presence of an eavesdropping attack, our solution jointly optimizes the UAV trajectory, GBS beamforming, and AR beamforming coefficients. The numerical analysis quantifies the impact of different user clustering techniques, discount factor (Gamma) values, learning rates, and numbers of users on the achievable secrecy capacity of the system. The simulation scenario is illustrated in Fig. 
1 and consists of multiple single-antenna ground UEs, an AR, and a group of malicious nodes performing a passive eavesdropping attack on the downlink transmission. The terrestrial users and the eavesdroppers are randomly distributed in a 2D area. The AR is launched at a random location and height and is equipped with an antenna array to enable communications with the GBS and the UEs. Table II captures the simulation parameters. The simulations are performed with Python 3.6 and PyTorch 1.7. ### _Hyper-parameters_ The hyper-parameters of the learning algorithm need to be optimized for our specific problem and environment. Therefore, Fig. 4 and Fig. 5 numerically evaluate the secrecy capacity of the UEs served through the AR for different discount factors (Gamma) and learning rates (LRs). Additionally, Fig. 4 and Fig. 5 verify the convergence of the proposed solution across these different settings. The results presented in both figures are for the case of 16 ground users with 8 users in each cluster, as shown earlier in Fig. 3. These figures plot the total achieved secrecy capacity of the user cluster served by the AR over the training time for different hyper-parameter values. When the discount factor is very high, the agent weighs future and current rewards equally. Fig. 4 shows that this leads to low performance. The best result for our scenario is achieved by slightly discounting the future reward, corresponding to a Gamma of 0.9. With higher LRs, the agent becomes increasingly biased toward repeating the same actions, which makes the learned policy specific to a deterministic environment. On the other hand, for very low LRs, the DQL agent keeps exploring the environment in a completely random manner without learning. A moderate LR strikes a balance between a deterministic and a stochastic environment. Fig. 5 compares the learning outcome for three LRs, where a LR of \(10^{-4}\) provides the best result. 
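A hyper-parameter sweep like the one behind Figs. 4 and 5 can be sketched as a grid search over (Gamma, LR) pairs. Here `train_and_evaluate` is a placeholder scoring function rigged to peak at Gamma = 0.9 and LR = \(10^{-4}\), the best values found above; a real run would train the DQN for each pair and return the converged secrecy capacity.

```python
import itertools
import math

# Grid search sketch over discount factors and learning rates.
GAMMAS = (0.5, 0.9, 0.99)
LEARNING_RATES = (1e-3, 1e-4, 1e-5)

def train_and_evaluate(gamma, lr):
    # Placeholder score: higher is better, maximal at (0.9, 1e-4).
    return -abs(gamma - 0.9) - abs(math.log10(lr) + 4.0)

best = max(itertools.product(GAMMAS, LEARNING_RATES),
           key=lambda pair: train_and_evaluate(*pair))
print(best)   # (0.9, 0.0001)
```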
Note that one common reason for DQN failure is a poor choice of hyper-parameters. A number of search techniques can be used to tune them. In this analysis, we have employed the grid search technique, which involves specifying a range of values for each hyper-parameter and then training the DQN with all possible combinations of these values. The combination of hyper-parameters that produces the best performance is then selected. ### _Learning Performance Evaluation_ Figure 6 compares the learning and convergence performance of the proposed DQL scheme with Q-learning as a benchmark learning algorithm. It plots the total secrecy capacity of the user cluster served by the AR over the number of learning episodes for the DQL and Q-learning trajectory and power optimization. The curves show that the total secrecy capacity of the users served by the UAV tends to increase over the episodes until convergence. This validates our approach of defining the reward so as to maximize the user rates with unknown CSI of the eavesdropping channel. Initially, the 
\begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline Area length (\(L_{x}\)) & 20 m \\ \hline Area width (\(L_{y}\)) & 20 m \\ \hline UAV height (\(z_{u}\)) & 20-80 m \\ \hline UAV trajectory step along the \(x\) or \(y\) axis & 0.5 m \\ \hline UAV trajectory step along the \(z\) axis & 2 m \\ \hline GBS height (\(z_{s}\)) & 15 m \\ \hline Path loss at 1 m reference distance (\(\lambda_{0}\)) & -40 dB [55] \\ \hline Path loss exponent (\(\alpha\)) & 2 \\ \hline Rician factor (\(\beta\)) & 10 dB \\ \hline ABG distance-dependent exponents (\(\rho_{G}\)) & 2.1 \\ \hline ABG intercept (\(\beta\)) & 31.7 dB \\ \hline ABG frequency-dependent exponents (\(\gamma_{G}\)) & 2 \\ \hline Shadow fading (\(\chi^{G20}_{x}\)) & 3.9 dB \\ \hline Central frequency & 3.2 GHz \\ \hline Noise variance & \(10^{-2}\) \\ \hline Number of ground users (K) & 4-64 \\ \hline Number of ground eavesdroppers (E) & 2-10 \\ \hline Number of K-means clusters (R) & 2 \\ \hline Number of episodes & 2 \(\times\)\(10^{4}\) \\ \hline Number of time slots per episode & 200 \\ \hline Learning rate (LR) & \(10^{-4}\) \\ \hline Discount factor & 0.9 \\ \hline Replay memory size & \(10^{5}\) entries \\ \hline Mini-batch size & 64 \\ \hline Update rate of target network & 10 \\ \hline \end{tabular} \end{table} Table II: Simulation parameters. Figure 4: DQL performance for different discount factors Gamma for a learning rate of \(10^{-4}\). Figure 5: DQL performance for different learning rates (LRs) for Gamma = 0.9. Figure 6: Comparison of the learning performance of the DQL and Q-learning for the UAV trajectory optimization. secrecy capacities match for the two algorithms. This is because, at that stage, there has been too little interaction with the environment to provide enough data for training the learning agents. As the learning evolves, favorable actions become more easily discriminated from the unfavorable ones by exploring the environment. 
It is noticeable that the DQL performance substantially exceeds the Q-learning performance due to its ability to approximate the Q-values instead of relying on Q-learning's inefficient Q-table, which allows it to handle enormous state and action spaces in fewer episodes. In order to further analyze the effectiveness and convergence of the DQL design for optimizing the UAV trajectory, Fig. 7 plots the mean square error (MSE) and mean absolute error (MAE) of the DQL and Q-learning solutions over the number of episodes. The results show how the loss is minimized by adjusting the weights of the neural network used to approximate the Q-function. This indicates how well the proposed algorithm is performing and illustrates its convergence toward an optimal policy. The MSE and MAE are calculated by comparing the estimated Q-values of the learned DQL and Q-learning models against their actual computed values. These figures reveal that the quality of the UAV actions is rather poor during the early training phase. As the learning continues, more measurements are accumulated, which yields improved actions taken by the UAV agent for both algorithms. The DQL method meets an MSE target of 50 an order of magnitude faster than Q-learning (Fig. 7a). It converges faster and achieves a 35% higher secrecy capacity after 20000 episodes (Fig. 6). ### _Clustering Performance Evaluation_ Here we evaluate the proposed user clustering scheme and the impact of the employed metric on the total secrecy rate of the system. We consider three metrics for the clustering algorithm presented in Algorithm 1: _distance clustering_, where UEs are grouped based on their distances to the GBS, _rate clustering_, where UEs are grouped based on their downlink rates while being served by the GBS, and _channel clustering_, where UEs are grouped based on the normalized channel coefficients. 
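The metric-based grouping of Algorithm 1 can be sketched as a one-dimensional K-means with \(R=2\): each UE is reduced to a scalar metric (distance to the GBS, downlink rate, or normalized channel coefficient) and assigned to the nearest centroid. The metric values below are illustrative placeholders.

```python
import numpy as np

# One-dimensional K-means sketch for the user clustering of Algorithm 1.
def kmeans_1d(values, R=2, iters=50, seed=0):
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=R, replace=False)  # init from the data
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for r in range(R):
            if np.any(labels == r):                        # keep empty clusters fixed
                centroids[r] = values[labels == r].mean()
    return labels, centroids

# Normalized channel magnitudes: small values indicate a weak GBS channel,
# i.e., the cluster that is better served through the AR.
channels = np.array([0.9, 0.85, 0.8, 0.1, 0.15, 0.2])
labels, centroids = kmeans_1d(channels)
print(labels, np.round(centroids, 3))
```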
Figure 8 shows the total secrecy capacity of the system after clustering, beamforming, and UAV trajectory optimization for the three clustering metrics. We observe that the proposed channel clustering metric outperforms the rate and distance clustering metrics in terms of total secrecy capacity. ### _Overall Performance Evaluation_ We explain the overall performance evaluation of the proposed method in two parts. In the first part, the impact of the optimal beamforming and power control is studied. In particular, the proposed method is compared with three scenarios where our optimal beamforming and power control are only partially implemented. In the second part, the impact of the UAV trajectory on the secrecy capacity is studied. Specifically, the UAV's 3D movement is shown in a scenario in which two clusters of UEs are simultaneously served, one by the UAV, which relocates to best serve the UEs in its cluster, and the other by the GBS. The context that we study is unique compared to other studies, as captured in Table I. We develop a framework that involves user clustering, multi-user beamforming, power control, and reinforcement learning for solving the problem, and there are no existing studies that propose a comparable solution. Figure 8: Comparison of different clustering techniques. Figure 7: Comparison of the MSE (a) and MAE (b) losses of the DQL and Q-learning over the number of episodes. Therefore, we define our own benchmarks to evaluate the proposed framework and the importance of each component comprising it. The baseline techniques are: AR deployment without optimal GBS beamforming (UAV+NoBF), no AR deployment with optimal GBS beamforming (NoUAV+BF), and no AR deployment and no optimal GBS beamforming (NoUAV+NoBF). In all cases where the AR is deployed, the optimal beamforming and power control of the AR are activated. Figure 9 shows the achieved total secrecy capacity over the number of users. 
The secrecy capacity improves with the number of users for all schemes. The proposed solution clearly outperforms the other techniques. The UAV+NoBF scheme achieves a better secrecy capacity than the NoUAV+BF scheme. That is, deploying an AR is more useful for improving the secrecy capacity than employing optimal multi-user beamforming at the GBS. Nevertheless, the beamforming and power control schemes make a notable contribution to the secrecy performance of the system, as can be observed when comparing the performance of the NoUAV+NoBF scheme with the proposed solution and the other benchmark techniques. The optimal beamforming and power control increase the SNR and reduce the multi-user interference while minimizing the likelihood of eavesdropping and improving the overall secrecy capacity. The addition of the UAV as an AR makes it possible to effectively serve those users that have a worse channel to the GBS. This is accomplished by the proposed clustering method and the UAV trajectory optimization along with beamforming and power control. Figure 10 illustrates the dynamic 3D trajectory optimization process for the case of 16 users, with 8 users in each cluster, resulting from the channel-based clustering of Algorithm 1. We observe that the UAV moves toward the center of the area where the users are located. Its final position is near the minimum allowed height, as close as possible to the UEs served by the AR; this ensures good channels, which lowers the required transmission power and thus increases the secrecy capacity, while the UAV stays within its operational limits and avoids ground obstacles. Overall, the dynamic 3D trajectory optimization process, combined with optimal multi-user beamforming and power control, helps achieve high secrecy capacities (Fig. 9) in the presence of eavesdroppers. ### _Known vs. 
Unknown Information of Eavesdroppers_ In order to put our contribution in context and provide further justification for our optimization framework, we consider the case where the locations and the channel states of the eavesdroppers are known to the GBS and the UAV. Knowing the CSI of the eavesdropping channels allows employing the secrecy capacity (24) as the reward function. We simulate 16 users clustered into two groups, served by the GBS or the AR according to the channel-based clustering, with known eavesdropper locations and CSI. Figure 11 shows the resulting average secrecy capacity per user with and without eavesdropping information available to the network. For the case of unknown eavesdropping channels, we employ the proposed optimization solution and reward function based on the legitimate user rate. The UAV adjusts its power and trajectory according to the available information. As expected, having information about the malicious actors eavesdropping on the wireless links allows the network to adjust its parameters better and increase the secrecy capacity. Figure 11 also indicates that the performance gap caused by not knowing the channel characteristics of the eavesdroppers is not significant. In other words, blindly optimizing the secrecy capacity by focusing on the legitimate user rates produces an outcome that is very close to that of an optimization framework that has and leverages the full information about the eavesdroppers. The reason for this is that the proposed practical solution with unknown CSI implicitly accounts for the possible CSI between the base stations and the eavesdroppers. 
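The two reward choices compared in Fig. 11 can be sketched as follows. With known eavesdropper CSI, the standard secrecy rate \([C_{user}-C_{eve}]^{+}\) of (24) can be rewarded directly; with unknown CSI, the reward (44) falls back to the legitimate user rate alone. The SNR values below are illustrative.

```python
import math

# Sketch of the two reward functions compared in Fig. 11.
def rate(snr):
    return math.log2(1.0 + snr)

def reward_known_csi(snr_user, snr_eve):
    return max(0.0, rate(snr_user) - rate(snr_eve))   # secrecy rate [C_u - C_e]^+

def reward_unknown_csi(snr_user):
    return rate(snr_user)                             # legitimate user rate only

print(reward_known_csi(15.0, 3.0), reward_unknown_csi(15.0))   # 2.0 4.0
```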
We conclude that, despite the practical assumption of not knowing the CSI of the eavesdropping channels and relying solely on optimizing the user rates, the proposed communications and optimization framework can accomplish a performance that is very close to that of a network with access to the full information about the eavesdroppers. This motivates further research on improving the proposed technique, for example, by considering partially known information about eavesdroppers or other reasonable assumptions, or even exploring new physical layer security metrics. ### _User Mobility_ In this subsection, we examine the secrecy capacity of mobile users with a static eavesdropper. Without loss of generality, the mobility of the ground users is over the x-axis with a fixed y-position. We define the distance step parameter (\(dx\)), which corresponds to the granularity of movement. The mobility of the ground users is then modeled as \(UE_{X_{C}^{t+1}}=UE_{X_{C}^{t}}+dx\), where \(UE_{X_{C}^{t+1}}\) is the center x-position of all the ground users at the next time step; for each new center, the users are redistributed randomly around it. This process enables the users to simulate a realistic movement pattern. Figure 9: Comparison of different techniques vs. the number of users for the total secrecy capacity of the system. We consider 16 ground users to be served either directly by the GBS or through the AR. Additionally, with each movement of step \(dx\), the proposed solution of Fig. 2 re-clusters the users, re-performs beamforming and power control for the GBS and UAV transmissions, and re-optimizes the trajectory of the UAV given the new positions of the users. Figure 12 presents the obtained total secrecy capacity over the center position of the moving user cluster for the proposed solution and the benchmarks introduced in Fig. 9. The results of Fig. 
12 show that the proposed solution achieves a higher total secrecy capacity compared to the other schemes. By optimizing the trajectory of the UAV, the system ensures that it flies as close as possible to the users being served by the AR, enabling good channels and lower transmit powers. We observe that the secrecy capacity of the proposed solution drops to zero only for the case where the center of the user cluster matches the eavesdropper position. After the users pass the eavesdropper, the secrecy capacity rapidly recovers. Notice that the secrecy capacity after passing the eavesdropper is lower than before reaching it. This is because of the lower data rates achieved by the direct GBS links, which experience a higher path loss with increasing distance. On the other hand, the NoUAV+NoBF scheme reaches zero secrecy capacity much earlier and remains in this state even after leaving the eavesdropper behind. When comparing the performance of the UAV+NoBF and the NoUAV+BF schemes, we again see the effectiveness of deploying the AR for achieving a higher secrecy capacity. ## VI Conclusions This paper addressed the major eavesdropping problem in present-day wireless communications. We developed a practical framework against passive eavesdroppers in multi-user cellular networks without knowledge of the eavesdroppers' locations and channel CSI. Figure 11: Secrecy capacity of the proposed optimization framework for the cases of known and unknown CSI of eavesdroppers, employing the secrecy capacity and user capacity as the reward function, respectively. Figure 12: Comparing the performance of the proposed solution under a user mobility scenario with other benchmark techniques. Figure 10: Illustration of the channel-based user clustering of Algorithm 1 and the UAV trajectory optimization of Algorithm 2. Considering the unknowns, we optimized the user rates employing advanced wireless techniques at the physical layer to improve the sum-secrecy capacity among all users in a cell. 
In particular, we suggested employing multi-user beamforming and deploying a UAV that serves as an AR. We clustered the users into two groups, wherein users are served either by the GBS or by the AR, whose 3D position, multi-user beamforming matrix, and transmit powers are optimized by combining closed-form expressions with machine learning techniques. Specifically, we designed and analyzed a DQN for the UAV trajectory optimization subproblem. Numerical results showed that the proposed system achieves the highest secrecy capacities and scales well with the number of users to be served. Lessons learned from this work point to a number of research directions for solving open challenges. We will examine additional UAV-specific operational constraints, including energy consumption, flight time, and speed, in future work. These are especially important for the implementation and deployment of ARs with today's small UAVs. One can prototype and validate the presented techniques on the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) [53], which facilitates implementing the proposed communications system with software radios and conducting different types of mobility experiments by leveraging AERPAW's unmanned ground vehicles. 
### _Proof of Lemma IV.1_ Proof.: The substitution of \(\mathbf{H}_{1}\), \(\mathbf{H}_{2}\), \(\mathbf{W}_{br}\), and \(\mathbf{W}_{r}\), which are defined in (33), (34), (35), and (36), respectively, in \(\mathbf{y}_{2}\) defined in (32) yields \[\mathbf{y}_{2} =\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\underbrace{\mathbf{V}_{2}^{\dagger}\, \mathbf{V}_{2}}_{I}\,\hat{\mathbf{A}}_{r}\,\mathbf{U}_{1}^{\dagger}\,\mathbf{\Lambda}_{r}\, \mathbf{U}_{1}\,\mathbf{\Sigma}_{1}\,\underbrace{\mathbf{V}_{1}^{\dagger}\,\mathbf{V}_{1}}_{I }\,\mathbf{\Lambda}_{b}\,\mathbf{U}_{2}^{\dagger}\,\mathbf{s}_{K}\] \[+\,\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\underbrace{\mathbf{V}_{2}^{\dagger} \,\mathbf{V}_{2}}_{I}\,\hat{\mathbf{A}}_{r}\,\mathbf{U}_{1}^{\dagger}\,\mathbf{\Lambda}_{r }\,\mathbf{n}_{1}\,+\,\mathbf{n}_{2}. \tag{48}\] Therefore, we have \[\mathbf{y}_{2} =\underbrace{\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\hat{\mathbf{A}}_{r}\, \mathbf{U}_{1}^{\dagger}}_{I}\,\mathbf{\Lambda}_{r}\,\underbrace{\mathbf{U}_{1}\,\mathbf{\Sigma }_{1}\,\mathbf{\Lambda}_{b}\,\mathbf{U}_{2}^{\dagger}}_{I}\,\mathbf{s}_{K}\] \[+\,\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\hat{\mathbf{A}}_{r}\,\mathbf{U}_{1}^ {\dagger}\,\mathbf{\Lambda}_{r}\,\mathbf{n}_{1}\,+\,\mathbf{n}_{2}, \tag{49}\] where the matrices \(\mathbf{I}\) are obtained using the ZF criterion, i.e., \(\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\hat{\mathbf{A}}_{r}\,\mathbf{U}_{1}^{\dagger}=\mathbf{I}\) and \(\mathbf{U}_{1}\,\mathbf{\Sigma}_{1}\,\mathbf{\Lambda}_{b}\,\mathbf{U}_{2}^{\dagger}=\mathbf{I}\). 
As a result, the simplified equation is \[\mathbf{y}_{2}=\mathbf{\Lambda}_{r}\,\mathbf{s}_{K}\,+\,\mathbf{\Lambda}_{r}\,\mathbf{n}_{1}\,+\, \mathbf{n}_{2} \tag{50}\] in which \[\mathbf{\Lambda}_{r}=\begin{pmatrix}\lambda_{r,1}&0&\dots&0&0\\ \vdots&\vdots&\dots&\vdots&0\\ 0&\dots&\lambda_{r,k}&0&0\\ 0&\vdots&\dots&\vdots&0\\ 0&0&\dots&0&\lambda_{r,K}\end{pmatrix}_{K\times K},\mathbf{s}_{K}=\begin{pmatrix}s _{1,1}\\ \vdots\\ s_{k,1}\\ \vdots\\ s_{K,1}\end{pmatrix}_{K\times 1}\] \(\mathbf{\Lambda}_{r}\) is a diagonal matrix and \(\mathbf{n}_{1},\mathbf{n}_{2}\in\mathbb{C}^{K\times 1}\) are the noise vectors. The simplified SINR can then be written as \[SINR=\frac{\lambda_{r,k}^{2}}{\lambda_{r,k}^{2}+1}, \tag{51}\] which proves Lemma IV.1. ### _Proof of Lemma IV.2_ Proof.: The beamforming power at the relay can be simplified as follows \[P_{r} =Tr\Big{(}\mathbf{x}_{r}\mathbf{x}_{r}^{\dagger}\Big{)} \tag{52}\] \[=Tr\bigg{(}\mathbf{W}_{r}\Big{(}\underbrace{\mathbf{H}_{1}\mathbf{W}_{b}\mathbf{W }_{b}^{\dagger}\mathbf{H}_{1}^{\dagger}}_{term\,\,i}+\mathbf{I}\Big{)}\mathbf{W}_{r}^{ \dagger}\bigg{)},\] (53) \[=Tr\bigg{(}\mathbf{W}_{r}\Big{(}\underbrace{\mathbf{U}_{1}\mathbf{\Sigma}_{ 1}\mathbf{\Lambda}_{b}\mathbf{\Lambda}_{b}^{\dagger}\mathbf{\Sigma}_{1}^{\dagger}\mathbf{U}_ {1}^{\dagger}}_{I}+\mathbf{I}\Big{)}\mathbf{W}_{r}^{\dagger}\bigg{)},\] (54) \[=2\times Tr\Big{(}\mathbf{W}_{r}\,\mathbf{W}_{r}^{\dagger}\Big{)} \tag{55}\] where (53) is obtained by replacing (13) and (14) in (52), (54) is derived by substituting (33) and (35) in _term i_ of (53), and (55) follows from \(\mathbf{U}_{1}\,\mathbf{\Sigma}_{1}\,\mathbf{\Lambda}_{b}\,\mathbf{U}_{2}^{\dagger}=\mathbf{I}\). 
Subsequently, by substituting (36) into (55), we have \[P_{r} =2\times Tr\Big{(}\mathbf{W}_{r}\,\mathbf{W}_{r}^{\dagger}\Big{)} \tag{56}\] \[=2\times Tr\Big{(}\mathbf{V}_{2}\,\underbrace{\hat{\mathbf{A}}_{r}\, \mathbf{U}_{1}^{\dagger}}_{term\,\,ii}\,\mathbf{\Lambda}_{r}\,\mathbf{\Lambda}_{r}^{\dagger}\,\underbrace{ \mathbf{U}_{1}\,\hat{\mathbf{A}}_{r}^{\dagger}}_{term\,\,iii}\,\mathbf{V}_{2}^{\dagger}\Big{)}\] (57) \[=2\times Tr\Big{(}\mathbf{V}_{2}\mathbf{\Sigma}_{2}^{-1}\mathbf{U}_{2}^{ \dagger}\mathbf{\Lambda}_{r}\mathbf{\Lambda}_{r}^{\dagger}\mathbf{U}_{2}\mathbf{\Sigma}_{2}^{-1}\mathbf{V}_{2}^{ \dagger}\Big{)}\] (58) \[=2\,\sum_{m=1}^{K}\sum_{n=1}^{K}|\mathbf{U}_{2}(m,n)|^{2}\,\sigma_{2,n }^{-2}\,\lambda_{r,m}^{2},\] where considering \(\mathbf{U}_{2}\,\mathbf{\Sigma}_{2}\,\hat{\mathbf{A}}_{r}\,\mathbf{U}_{1}^{\dagger}=\mathbf{I}\) in _term ii_ and _term iii_ of (57) yields (58). ### _Proof of Theorem IV.3_ Proof.: We need to obtain the optimum beamforming power elements and Lagrange multipliers, i.e., \(\lambda_{r,l}^{*}\), \(\alpha_{1}^{*}\), \(\alpha_{2,l}^{*}\), and \(\alpha_{3,l}^{*}\), where \(l=\{1,\cdots,K\}\). To this end, we apply the Karush-Kuhn-Tucker (KKT) conditions to this problem, as has been done for similar ones [54, 55, 56]. From the gradient condition and the complementary slackness conditions, we have \[\nabla_{\lambda_{r,l}}\mathcal{L}(\lambda_{r,l}^{*},\alpha_{1}^{*}, \alpha_{2,l}^{*},\alpha_{3,l}^{*})=0, \tag{59}\] \[-\,\alpha_{1}^{*}\Big{(}2\,\sum_{l=1}^{K}\sum_{n=1}^{K}|\mathbf{U}_{2 }(l,n)|^{2}\,\sigma_{2,n}^{-2}\,\lambda_{r,l}^{*}\,{}^{2}-P_{r,max}\Big{)}=0,\] (60) \[-\,\alpha_{2,l}^{*}\Big{(}\lambda_{r,l}^{*}-\lambda_{r,max}\Big{)}=0,\] (61) \[-\,\alpha_{3,l}^{*}\Big{(}-\lambda_{r,l}^{*}\Big{)}=0. 
\tag{62}\] By simplifying (59), we obtain \[\frac{2\,\lambda_{r,l}^{*}}{\big{(}2\lambda_{r,l}^{*}+1\big{)}\, \big{(}\lambda_{r,l}^{*}+1\big{)}\,\ln 2}\,-4\,\alpha_{1}^{*}\,\lambda_{r,l}^{*}\] \[\quad\times\sum_{n=1}^{K}|\mathbf{U}_{2}(l,n)|^{2}\,\sigma_{2,n}^{-2} \,-\,\alpha_{2,l}^{*}\,+\,\alpha_{3,l}^{*}=0. \tag{63}\] Applying the KKT conditions yields the optimal beamforming power as follows \[\lambda_{r,l}^{\star}=\begin{cases}0&\lambda_{r,l}^{\dagger}\leq 0,(\alpha_{1}^{ \star}F\ln 2)>0.25\\ \lambda_{r,l}^{\dagger}&0<\lambda_{r,l}^{\dagger}<\lambda_{r,max}\\ \lambda_{r,max}&\lambda_{r,l}^{\dagger}\geq\lambda_{r,max}\end{cases}\] in which \[\lambda_{r,l}^{\dagger}=\sqrt{\frac{1}{4}\Big{(}\sqrt{1+\frac{2}{\alpha_{1}^{ \star}F\ln 2}}-3\Big{)}}, \tag{64}\] where \(F\) is a constant equal to \(\sum\limits_{n=1}^{K}|\mathbf{U}_{2}(l,n)|^{2}\,\sigma_{2,n}^{-2}\), and the Lagrangian multiplier \(\alpha_{1}^{\star}\) can be obtained by replacing (43) into the first constraint of (40) when the equality holds: \(\alpha_{1}^{\star}=f_{1}(P_{r,max},\mathbf{U}_{2},\Sigma_{2})\). Note that if the last constraint defined in the condition (62) is binding, i.e., if \(\lambda_{r,l}^{\star}=0\), then \(\alpha_{1}^{\star}=\alpha_{2,l}^{\star}=0\) due to the complementary slackness conditions. Substituting these multipliers in (59) results in \(\alpha_{3,l}^{\star}=0\). Also, replacing \(\lambda_{r,l}^{\star}=0\) in the objective function of (59) results in a zero capacity rate, which is not desired. In the same way as in [55, 56], it can be considered that \(\lambda_{r,l}^{\star}=\lambda_{r,max}\) for the values of beamforming powers above the maximum. In addition, the value of \(\lambda_{r,l}^{\dagger}\) in (64) can be numerically obtained for different values of channel coefficients and the UAV's power limitation, i.e., \(\mathbf{U}_{2}\), \(\sigma_{2,n}^{-2}\), and \(P_{r,max}\). 
However, applying a Taylor expansion to (64) in \(x=\frac{2}{\alpha_{1}^{\star}F\ln 2}\) and neglecting the \(O(x^{2})\) terms, one can further simplify the obtained \(\lambda_{r,l}^{\dagger}\) as \[\lambda_{r,l}^{\dagger\,2}=\frac{1}{4\,f_{1}\,\sum\limits_{n=1}^{K}|\mathbf{U}_{2}(l, n)|^{2}\,\sigma_{2,n}^{-2}\,\ln 2}-0.5, \tag{65}\] where \(\lambda_{r,l}^{\dagger}\) is a positive number and \(f_{1}\) is the above-defined function of the UAV power and channel coefficients. Finally, it is worth pointing out that if the UAV uses only one antenna for communicating with its users, then the beamforming power for that antenna is set to \(\lambda_{r,max}\).
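The thresholding rule of Theorem IV.3 is compact enough to state in code. A minimal numerical sketch (Python for illustration; \(\alpha_{1}^{\star}\), \(F\) and \(\lambda_{r,max}\) are treated here as given constants with illustrative values, since computing \(\alpha_{1}^{\star}=f_{1}(\cdot)\) itself requires the full channel data):

```python
import math

def lambda_opt(alpha1, F, lam_max):
    """Per-element beamforming power, following the cases of Theorem IV.3.

    alpha1  : Lagrange multiplier alpha_1^* (assumed already computed),
    F       : sum_n |U_2(l,n)|^2 * sigma_{2,n}^{-2} for the given l,
    lam_max : per-element cap lambda_{r,max}.
    """
    if alpha1 * F * math.log(2) > 0.25:
        return 0.0  # first case: the square root in (64) is not positive
    inner = 0.25 * (math.sqrt(1.0 + 2.0 / (alpha1 * F * math.log(2))) - 3.0)
    if inner <= 0.0:
        return 0.0  # lambda^dagger <= 0
    return min(math.sqrt(inner), lam_max)  # eq. (64), clipped at lambda_{r,max}
```

For instance, \(\alpha_{1}^{\star}F\ln 2=0.1\) gives \(\lambda_{r,l}^{\dagger}=\sqrt{\tfrac{1}{4}(\sqrt{21}-3)}\approx 0.63\), while any \(\alpha_{1}^{\star}F\ln 2>0.25\) switches the element off.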
2301.02447
Regret theory, Allais' Paradox, and Savage's omelet
We study a sufficiently general regret criterion for choosing between two probabilistic lotteries. For independent lotteries, the criterion is consistent with stochastic dominance and can be made transitive by a unique choice of the regret function. Together with an additional (and intuitively meaningful) super-additivity property, the regret criterion resolves the Allais' paradox, including the cases where the paradox disappears and the choices agree with the expected utility. This super-additivity property is also employed for establishing consistency between regret and stochastic dominance for dependent lotteries. Furthermore, we demonstrate how the regret criterion can be used in Savage's omelet, a classical decision problem in which the lottery outcomes are not fully resolved. The expected utility cannot be used in such situations, as it discards important aspects of lotteries.
Vardan G. Bardakhchyan, Armen E. Allahverdyan
2023-01-06T10:10:14Z
http://arxiv.org/abs/2301.02447v1
# Regret theory, Allais' Paradox, and Savage's omelet ###### Abstract We study a sufficiently general regret criterion for choosing between two probabilistic lotteries. For independent lotteries, the criterion is consistent with stochastic dominance and can be made transitive by a unique choice of the regret function. Together with an additional (and intuitively meaningful) super-additivity property, the regret criterion resolves the Allais' paradox, including the cases where the paradox disappears and the choices agree with the expected utility. This super-additivity property is also employed for establishing consistency between regret and stochastic dominance for dependent lotteries. Furthermore, we demonstrate how the regret criterion can be used in Savage's omelet, a classical decision problem in which the lottery outcomes are not fully resolved. The expected utility cannot be used in such situations, as it discards important aspects of lotteries. **Keywords:** Regret theory, Allais' paradox, stochastic dominance, transitive regret. **JEL Classification:** D81. ## I Introduction The history of expected utility theory (EUT) started with Bernoulli's work resolving the St. Petersburg paradox [1]. Several axiomatic schemes for EUT are known [2; 3]. Currently, EUT has applications in a wide range of fields, including economics [4], psychology [5], evolutionary game theory [6], and general artificial intelligence [7]. EUT shows how to choose between two lotteries [2; 3; 4]: \[(x,p)=\begin{pmatrix}x_{1}&x_{2}&...&x_{n}\\ p_{1}&p_{2}&...&p_{n}\end{pmatrix},\qquad(y,q)=\begin{pmatrix}y_{1}&y_{2}&...&y_ {n}\\ q_{1}&q_{2}&...&q_{n}\end{pmatrix}, \tag{1}\] \[\sum\nolimits_{k=1}^{n}p_{k}=\sum\nolimits_{k=1}^{n}q_{k}=1, \tag{2}\] where \((p_{1},...,p_{n})\) and \((q_{1},...,q_{n})\) are (resp.) the probabilities of monetary outcomes \((x_{1},...,x_{n})\) and \((y_{1},...,y_{n})\) within each lottery. 
EUT proposes the following functional for each lottery [2; 3; 4]: \[V(x,p)=\sum\nolimits_{i=1}^{n}u(x_{i})p_{i}, \tag{3}\] where \(u(x_{i})\) is the utility of the monetary value \(x_{i}\). EUT recommends choosing in (1) the first lottery, if \(V(x,p)>V(y,q)\). Experiments revealed problems with EUT and its axiomatic foundations. In particular, several classic experiments cannot be explained by EUT for any choice of the utility function \(u(.)\) in (3) [8; 9]. People generally choose in contradiction to EUT, violating the independence axiom, one of the four axioms of the von Neumann-Morgenstern formulation of EUT [2]. The most prominent example of this is Allais's paradox [8], where each human subject chooses between two lotteries. Prospect theory [10; 11] and rank-dependent utility theory [12; 13] discarded the independence axiom, and proposed functionals similar to \(V(x,p)\) in (3), where instead of probabilities \(p_{i}\) one employs weights \(\pi_{i}\) that generally depend both on \((p_{1},...,p_{n})\) and \((x_{1},...,x_{n})\). Refs. [4; 5; 9] discuss these and other alternatives to EUT. There are also other situations where EUT does not apply. EUT cannot be used directly when the lottery outcome remains uncertain even after the lottery choice has been made. A good example of this situation is the decision problem known as Savage's omelet [3]. To our knowledge, this problem has never been studied from the viewpoint of EUT's inapplicability. As we show below, both Allais' paradox and Savage's omelet can be resolved by regret theory (RT), which is one of the alternatives to EUT. The main difference of RT compared to EUT is that RT does not operate with a value functional for a single lottery. Instead it counter-factually compares two lotteries. RT has an intuitive emotional appeal, and it is also related to cognitive aspects of decision making [14]. 
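The EUT criterion (3) is a one-liner in code; a minimal sketch (the logarithmic utility and the two lotteries are illustrative, not taken from the text):

```python
import math

def expected_utility(outcomes, probs, u):
    """EUT value functional V(x,p) = sum_i u(x_i) * p_i, eq. (3)."""
    return sum(u(x) * p for x, p in zip(outcomes, probs))

u = lambda x: math.log(1.0 + x)                         # illustrative risk-averse utility
V_safe = expected_utility([4.0], [1.0], u)              # certain payoff of 4
V_risky = expected_utility([10.0, 0.0], [0.5, 0.5], u)  # fair coin: 10 or 0
# EUT recommends the lottery with the larger V; here the certain 4 wins
```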
RT was first proposed by Savage in minimax form [3] [see [15] for an update of this approach], and later brought to its current form in [16; 17]; see [9; 18] for a review. Ref. [16] extended the regret to independent lotteries and noted its potential in explaining Allais' paradox. Ref. [16] also analyzed transitivity, the common ratio effect, and preference reversals. Functional forms involving two lotteries were given axiomatic foundation in [19]. An axiomatic formulation of regret was attempted in [20]. This work has three purposes. First, we want to show how Allais' paradox is solved by a transitive and super-additive RT. People mentioned regret in the context of Allais' paradox [see e.g. [5; 14; 16]], but so far no systematic and complete solution of this paradox has been provided. Our solution is rather complete, because it also predicts conditions under which the paradox does not hold. Both transitivity and super-additivity have transparent meaning for regret theories in general. We do clarify their applicability range. This is especially important for transitivity, because generally regret theories do not lead to transitive predictions [21]. Second, we prove that the transitive and super-additive regret theory is consistent with the stochastic dominance criterion [2]. Stochastic dominance is a useful tool, but it does not apply to comparing any pair of lotteries. The previous literature in this direction is mostly negative, showing that regret-based approaches violate first order stochastic dominance [22; 23]1. Third, we demonstrate--using as an example Savage's omelet problem--that RT can recommend choosing between lotteries with unresolved outcomes, a task which cannot be consistently addressed by EUT. Footnote 1: Ref. [22] analyzed relations between RT and stochastic dominance for a specific case. This analysis is based on a more general formulation of first order stochastic dominance that compares cumulative distribution functions. 
Here we focus on the simplest version of stochastic dominance. The paper is organized as follows. Section II is devoted to the regret functional for independent lotteries and some of its properties related to the expected utility. In decision making theory the functional form is frequently derived from an axiomatic foundation. In contrast, here we first introduce the functional considered, then derive its properties. Section III is devoted to Allais' paradox and its relations to other concepts. The relation between regret and stochastic dominance is considered in section IV. Section V analyzes Savage's omelet problem, identifies an aspect that prevents the applicability of the expected utility theory, and solves this problem via the regret. We summarize in the last section. ## II Regret and its features ### Axioms of Expected Utility Theory (EUT) We recall the four axioms of EUT (3)--completeness, transitivity, continuity, independence--since they will motivate our further consideration. First of all one introduces a preference relation \(\succeq\), and an indifference relation \(\sim\) between the lotteries (1), where \(\sim\) means that both \(\succeq\) and \(\preceq\) hold. When comparing two lotteries in (1) we sometimes assume (without loss of generality) the same outcomes: \(\{x_{k}=y_{k}\}_{k=1}^{n}\). If they are initially different, we can introduce suitable zero-probability events and make them identical. **1.** The completeness axiom states that any pair of lotteries in (1) can be compared: \[(x,p)\succeq(x,q)\quad\text{or}\quad(x,q)\succeq(x,p)\quad\text{or}\quad(x, q)\sim(x,p), \tag{4}\] where \((x,p)\succeq(x,q)\) means that lottery \((x,q)\) is not preferred to \((x,p)\). **2.** The transitivity axiom states: \[(x,p)\succeq(x,q)\succeq(x,r)\quad\text{means}\quad(x,p)\succeq(x,r). \tag{5}\] **3.** The continuity axiom states for any three lotteries \[(x,p)\succeq(x,q)\succeq(x,r)\quad\text{implies}\quad(x,q)\sim(x,\alpha p+(1 -\alpha)r), \tag{6}\] for some \(\alpha\in[0,1]\). 
This axiom implies continuity of the value function to be deduced from the four axioms. **4.** The independence axiom--also known as independence of irrelevant alternatives or the sure-thing principle--claims that combining each of two lotteries with any fixed one will not alter the preferences [5; 24]: \[(x,p)\succeq(x,q)\quad\text{means}\quad(x,\alpha p+(1-\alpha)r)\succeq(x, \alpha q+(1-\alpha)r), \tag{7}\] where the irrelevant alternative is \((x,r)\). Eq. (7) is among the most controversial axioms in decision theory and has triggered many debates [5; 24]; see in this context also Appendix B, where we explain why specifically the meaning of (7) can be ambiguous. Ref. [25] briefly reviews its current status with counter-examples. Experimental studies showed violations of (7), with some concerns on whether these violations are systematic [5]. ### Definition of regret The regret defines a counterfactual outcome-wise comparison between the lotteries (1) using certain ideas of EUT. Hence for particular cases it would coincide with the decision criterion of EUT. The utility function \(u(x)\) is assumed to exist beforehand and to be known to the decision-maker [5]. Assume that \((y,q)\) is chosen and its outcome \(y_{j}\) is found. The decision-maker compares this outcome with what would be found if \((x,p)\) had been taken and defines: \[R(x,p;y_{j})\equiv{\sum}_{i=1}^{n}f(u(x_{i})-u(y_{j}))p_{i}, \tag{8}\] where \(u(x)\) is the utility function, and \(f(x)\) is a function holding \[f(x\geq 0)\geq 0,\qquad f(x\leq 0)\leq 0,\qquad f(0)=0. \tag{9}\] In particular, \(R(x,p;y_{j})>0\) (positive regret), if \(x_{i}>y_{j}\). Generally, \(f(x)\) accounts for both regret and appreciation. We get a pure regret (appreciation), if \(f(x\leq 0)=0\) (\(f(x\geq 0)=0\)). Since \((x,p)\) was not actually chosen, its outcomes are not known; hence the averaging in (8). 
Moreover, once the decision-maker keeps on choosing \((y,q)\) and explores all its outcomes according to their probabilities, the average of (8) reads: \[R(x,p;y,q)\equiv{\sum}_{j=1}^{n}q_{j}R(x,p;y_{j})={\sum}_{i,j=1}^{n}f(u(x_{i}) -u(y_{j}))p_{i}q_{j}, \tag{10}\] where (10) already assumes that the events \((y_{j},x_{i})\) are independent, i.e. their joint probability is \(q_{j}p_{i}\). This additional information is to be provided for an unambiguous definition of lotteries in (1). Note that (8, 10) are asymmetric with respect to the lotteries (1), because \((y,q)\) is actually chosen, while \((x,p)\) is reasoned counter-factually given this choice. The regret preference \(\succeq_{\rm reg}\) is defined as [9; 16; 17; 18] \[(x,p)\succeq_{\rm reg}(y,q)\quad\mbox{iff}\quad R(y,q;x,p)-R(x,p;y,q)={\sum}_{ i,j=1}^{n}g(u(y_{j})-u(x_{i}))p_{i}q_{j}\leq 0, \tag{11}\] where \[g(x)\equiv f(x)-f(-x), \tag{12}\] is anti-symmetric and monotonic: \[g(x) =-g(-x), \tag{13}\] \[g(x) \geq g(y)\quad\text{for}\quad x\geq y. \tag{14}\] The meaning of \(R(y,q;x,p)-R(x,p;y,q)\leq 0\) is that \((x,p)\) is preferred if it leads to a smaller average regret. For the particular case \[g(x)=ax,\qquad a>0, \tag{15}\] where \(a\) is a constant, we revert from (11) to the expected utility. Note that (15) is achieved for various functions \(f(x)\); e.g. \(f(x)=ax/2\) or \(f(x)=a\max[x,0]\). The above definition generalizes for a non-trivial joint probability \(P(x_{i},y_{j})\) of \((x_{i},y_{j})\) with \[\sum\nolimits_{i=1}^{n}P(x_{i},y_{j})=q_{j},\qquad\sum\nolimits_{j=1}^{n}P(x_{i},y_{j})=p_{i}. 
\tag{16}\] Now \(p_{i}\) in (8) should be replaced by the conditional probability \(P(x_{i}|y_{j})\), which is reasonable for counter-factual reasoning, and instead of (8-11) we have \[R(x,p;y_{j})\equiv\sum\nolimits_{i=1}^{n}f(u(x_{i})-u(y_{j}))P(x _{i}|y_{j}), \tag{17}\] \[R(x,p;y,q)\equiv\sum\nolimits_{j=1}^{n}q_{j}R(x,p;y_{j})=\sum \nolimits_{i,j=1}^{n}f(u(x_{i})-u(y_{j}))P(x_{i},y_{j}),\] (18) \[(x,p)\succeq_{\text{reg}}(y,q)\quad\text{iff}\quad\sum\nolimits_ {i,j=1}^{n}g(u(y_{j})-u(x_{i}))P(x_{i},y_{j})\leq 0. \tag{19}\] In particular, the outcomes in (16) can refer to the same states of nature [2; 20; 24]. This implies \[P(x_{i},y_{j})=p_{i}\delta_{ij},\quad i,j=1,...,n, \tag{20}\] where \(\delta_{ij}\) is the Kronecker delta, and where \(\{p_{i}=q_{i}\}_{i=1}^{n}\) are the probabilities for those unknown states of nature; see section V for details. ### Two propositions about regret Note that for the regret preference relation (11) we can take lotteries (1) to have the same outcomes, \(x_{k}=y_{k}\), using the same argument as before (4). Now the completeness axiom (4) obviously holds for \(\succeq_{\text{reg}}\). The continuity axiom is valid as well. **Proposition 1.** For the regret preference relation (11) \[(x,p)\succeq_{\rm reg}(x,q)\succeq_{\rm reg}(x,r)\quad\mbox{implies}\quad(x,q) \sim_{\rm reg}(x,\alpha p+(1-\alpha)r), \tag{21}\] for some \(\alpha\in[0,1]\). Working out the last relation in (21) we find \[\alpha=B/(A+B)\in[0,1], \tag{22}\] \[A=\sum\nolimits_{i,j=1}^{n}\!p_{i}q_{j}g(u(x_{i})-u(x_{j}))\geq 0,\qquad B=\sum\nolimits_{i,j=1}^{n}\!r_{i}q_{j}g(u(x_{j})-u(x_{i}))\geq 0, \tag{23}\] where (23) follows from the first and second relations in (21). It is known that \(\succeq_{\rm reg}\) violates transitivity for a general choice of \(f(x)\)[21]. In particular, the transitivity is violated under (20) [26]; e.g. for the same states of nature. 
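For independent lotteries, criterion (11) is easy to state in code; a minimal sketch, with \(g\) and \(u\) supplied by the caller (the concrete \(g\), \(u\) and lotteries below are illustrative):

```python
import math

def regret_gap(x, p, y, q, g, u):
    """Sum_{i,j} g(u(y_j) - u(x_i)) p_i q_j from (11), assuming the
    independent joint probability p_i * q_j; (x,p) is regret-preferred
    to (y,q) iff the returned value is <= 0."""
    return sum(g(u(yj) - u(xi)) * pi * qj
               for xi, pi in zip(x, p)
               for yj, qj in zip(y, q))

g = lambda z: z                  # the EUT case (15) with a = 1
u = lambda v: math.log(1.0 + v)  # illustrative increasing utility
gap = regret_gap([4.0], [1.0], [10.0, 0.0], [0.5, 0.5], g, u)
# with g(z) = z the gap equals V(y,q) - V(x,p), recovering expected utility
```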
Transitivity violation is not necessarily a drawback, since there are arguments for involving non-transitive choices even in normative choices [27]. Ref. [28] shows that for the most general form of regret there exist models not violating transitivity. Let us now provide a sufficiently complete solution for the transitivity of \(\succeq_{\rm reg}\). First, we show that \(\succeq_{\rm reg}\) will be transitive for a particular choice of \(f(x)\) in (11). Define \[f(x)=b(a^{x}-1), \tag{24}\] where \(a>0\) and \(b>0\). Eq. (9) holds. Now \((x,p)\succeq_{\rm reg}(x,q)\) amounts to \[v(p)w(q)\geq v(q)w(p), \tag{25}\] \[v(p)\equiv\sum\nolimits_{i=1}^{n}\!a^{u(x_{i})}p_{i}>0,\qquad w(q )\equiv\sum\nolimits_{i=1}^{n}\!a^{-u(x_{i})}q_{i}>0. \tag{26}\] Eqs. (25, 26) imply that with the choice (24), \(\succeq_{\rm reg}\) is transitive. Fishburn's theorem on transitivity [29] shows that (24) is also necessary for transitivity. **Proposition 2.** The regret preference relation \(\succeq_{\rm reg}\) given by (11) preserves transitivity iff (24) holds. Returning to (4-7) we see that only the independence axiom can be violated by \(\succeq_{\rm reg}\); see below for more details. ## III Solving Allais' paradox with regret There was a great deal of attention focused on Allais' paradox as one of the major systematic violations of EUT [5; 8; 9; 10; 11; 30]. Regret theory is mentioned in the context of Allais's paradox [5; 14; 16], but no systematic solution of the paradox via the regret theory has so far been provided. We show below that this solution can be achieved by respecting the transitivity and that it does provide an important constraint on the form of \(g(x)\) in (11, 12). Consider the standard formulation of the Allais' paradox [5; 8]. A decision maker chooses between the following two lotteries [cf. 
(1)]: \[\mathrm{I}\equiv\begin{pmatrix}1\\ 1\end{pmatrix},\qquad\mathrm{II}\equiv\begin{pmatrix}0&1&5\\ 0.01&0.89&0.1\end{pmatrix}, \tag{27}\] and then between \[\mathrm{III}\equiv\begin{pmatrix}0&1\\ 0.89&0.11\end{pmatrix},\qquad\mathrm{IV}\equiv\begin{pmatrix}0&5\\ 0.9&0.1\end{pmatrix}, \tag{28}\] where the monetary outcomes in (27, 28) are normally given in millions of \(\$\). There are 4 possible outcomes here: \((\mathrm{I},\mathrm{III})\), \((\mathrm{I},\mathrm{IV})\), \((\mathrm{II},\mathrm{III})\), \((\mathrm{II},\mathrm{IV})\), where \((\mathrm{I},\mathrm{III})\) means choosing \(\mathrm{I}\) in (27) and \(\mathrm{III}\) in (28). Choosing \((\mathrm{I},\mathrm{III})\) or \((\mathrm{II},\mathrm{IV})\) is consistent with the EUT; e.g. \((\mathrm{I},\mathrm{III})\) is achieved if \(u(1)<u(5)\) and \(u(1)\approx u(5)\). In contrast, most people take \((\mathrm{I},\mathrm{IV})\), thereby violating the expected utility theory (EUT) [5]. Applying preference relation (11) to the choice \((\mathrm{I},\mathrm{IV})\), we will find an important and intuitive condition for the function \(g(x)\). Now \(\mathrm{I}\succeq_{\mathrm{reg}}\mathrm{II}\) reads from (11): \[0.01\cdot g(u(0)-u(1))+0.1\cdot g(u(5)-u(1))<0. \tag{29}\] Since \(g(x)\) is an increasing function [cf. (14)], (29) implies \[u(5)-u(1)<u(1)-u(0). \tag{30}\] Thus (30)--which can be realized with a concave function \(u(x)\) and hence relates to risk-aversion--is a necessary condition for (11) to explain Allais' paradox. Likewise, demanding \(\mathrm{IV}\succeq_{\mathrm{reg}}\mathrm{III}\) in (28) we get \[0.089\cdot g(u(5)-u(0))-0.099\cdot g(u(1)-u(0))+0.011\cdot g(u(5)-u(1))>0. \tag{31}\] Taking the difference of (29) and (31) we get \[-0.089\cdot g(u(5)-u(0))+0.089\cdot g(u(1)-u(0))+0.089\cdot g(u(5)-u(1))<0,\] yielding \[g(u(5)-u(0))>g(u(1)-u(0))+g(u(5)-u(1)). \tag{32}\] Now (32) is the second necessary condition for solving Allais's paradox. 
Taking (32) and (29) together is necessary and sufficient for solving the paradox. It is intuitively clear what (32) means. The decision maker is more impressed (i.e. experiences more regret) with the difference \(u(5)-u(0)\), than with this difference \(u(5)-u(0)=u(1)-u(0)+u(5)-u(1)\) coming in two separate pieces: \(u(1)-u(0)\) and \(u(5)-u(1)\). We rewrite (32) as a more general condition: \[g(x+y)\geq g(x)+g(y),\quad x\geq 0,\quad y\geq 0, \tag{33}\] which is the super-additivity (in the positive domain) for \(g(x)\). Noting from (13) that \(g(0)=0\), we recall that any convex function \(g(x)\) with \(g(0)=0\) is super-additive 2. A simple example of a function that is easily shown to be super-additive, but is not convex is \(g(x)=x\,e^{-x^{-2}}\)[31]. Indeed, \(\frac{{\rm d}^{2}}{{\rm d}x^{2}}g(x)=2\,e^{-x^{-2}}\,x^{-5}(2-x^{2})\), i.e. \(g(x)\) is concave (convex) for \(x>\sqrt{2}\) (\(\sqrt{2}>x>0\)) 3. We formulate our results as follows. Footnote 2: This fact should be known, but let us present its short proof. First note that \(g(tx)\leq tg(x)\) for \(0<t<1\) due to \(g(tx+(1-t)\cdot 0)\leq tg(x)+(1-t)g(0)=tg(x)\). Next, \(g(x)+g(y)=g\left((x+y)\frac{x}{x+y}\right)+g\left((x+y)\frac{y}{x+y}\right) \leq\frac{x}{x+y}g(x+y)+\frac{y}{x+y}g(x+y)=g(x+y)\). Footnote 3: Ref. [20] mentioned the super-additivity condition in the context of regret. Ref. [16] employed convexity (concavity) features of the regret functional, but without any definite reason. **Proposition 3.** Allais's paradox can be explained by regret, if and only if the function \(g(x)\) in (11) is strongly super-additive for some values in the positive domain. **Example**. We take the transitive regret and logarithmic utility [cf. (13, 24)] \[g(x)=\sinh\left(\frac{x}{\beta}\right),\qquad u(x)=\ln\left(\frac{x}{\gamma} +1\right), \tag{34}\] where \(\beta>0\) and \(\gamma>0\) are positive parameters that characterize the decision maker. 
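With the pair (34), condition (29) becomes a one-line numerical check; a minimal sketch (\(\beta=1\) here; the two \(\gamma\) values are illustrative, taken on either side of the threshold):

```python
import math

def allais_lhs(beta, gamma):
    """Left-hand side of (29) for g(x)=sinh(x/beta), u(x)=ln(x/gamma+1);
    a negative value means I >=_reg II, i.e. the paradoxical choice."""
    g = lambda z: math.sinh(z / beta)
    u = lambda x: math.log(x / gamma + 1.0)
    return 0.01 * g(u(0.0) - u(1.0)) + 0.1 * g(u(5.0) - u(1.0))

paradox_poor = allais_lhs(1.0, 0.01) < 0.0  # small gamma: paradox holds
paradox_rich = allais_lhs(1.0, 0.03) < 0.0  # larger gamma: back to EUT
```

Since \(\sinh\) is convex on the positive half-line, the super-additivity condition (32) holds automatically, so (29) alone decides the paradox for this example.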
Here \(\gamma>0\) defines the threshold of the concave (risk-averse) utility \(u(x)\) (\(u(0)=0\)), because only for \(\frac{x}{\gamma}\ll 1\) do we have \(u(x)\simeq 0\). In a sense, \(\gamma\) defines the initial money, since only for \(\frac{x}{\gamma}\gtrsim 1\) does the decision maker care about money. Likewise, \(\beta\) has a similar meaning of threshold, but for the regret function: if \(\frac{x}{\beta}\ll 1\), then \(g(x)=\sinh(\frac{x}{\beta})\simeq\frac{x}{\beta}\) is effectively in the EUT regime. Now \(g(x)\) in (34) satisfies the super-additivity condition (33), since \(\sinh(0)=0\) and \(\frac{{\rm d}^{2}}{{\rm d}x^{2}}\sinh(x)=\sinh(x)\geq 0\) for \(x\geq 0\); hence (32) holds. For solving Allais' paradox we need to look at condition (29), which from (34) amounts to \[\gamma<\zeta(\beta), \tag{35}\] \[\zeta(\beta\to\infty)=5^{-10},\quad\zeta(1)=0.021,\quad\zeta( \beta\to 0)=1/3. \tag{36}\] Hence \(\zeta(\beta)\) changes from \(5^{-10}\) to \(1/3\), when \(\beta\) moves from \(\infty\) to \(0\). Let us focus on \(\gamma<0.021\) in (36). We know that (27, 28) are to be given in millions of \(\$\). Hence we multiply both \(x\) and \(\gamma\) in \(u(x)=\ln(\frac{x}{\gamma}+1)\) by \(10^{6}\), and reach the following conclusion: starting from the initial money \(\geq 21000\)\(\$\) the decision maker will behave according to the expected utility and choose lotteries \((\mathrm{II},\mathrm{IV})\) in (27, 28). The interpretation of the other two values of \(\zeta(\beta)\) in (36) is similar. Note in this context that \(5^{-10}\) is equivalent to \(5^{-10}\times 10^{8}\simeq 10\) cents. It is reported that with smaller outcomes--not millions of \(\$\) in (27, 28)--Allais' paradox need not hold [32; 33; 34]. Other authors note that when shifting all outcomes in (27, 28) by the same substantial positive amount, Allais' paradox will not hold (aversion of the "0" outcome) [35]. The scheme given by (34) handles both experimental results. **Remark 1**. 
The super-additivity (33) of \(g(x)\) (and its ensuing relations with convexity) does not relate to risk-aversion and risk-seeking, as defined via utility \(u(x)\). To understand this, compare the following two lotteries: \[\begin{pmatrix}x\\ 1\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}x-\epsilon&x+\epsilon\\ 0.5&0.5\end{pmatrix},\qquad\epsilon>0. \tag{37}\] Now the first (certain) lottery is regret-preferable compared with the second (uncertain) lottery if \(g(u(x)-u(x-\epsilon))>-g(u(x)-u(x+\epsilon))\), which is achieved due to a monotonically increasing \(g(x)\), and concavity of \(u(x)\); i.e. the risk-aversion at the level of the utility. Likewise, the convexity of \(u(x)\) (risk-seeking utility) will lead to preferring the uncertain lottery. **Remark 2.** Note that the regret is invariant with respect to \(u(x)\to u(x)+a\), where \(a\) is arbitrary, but it is not invariant with respect to \(u(x)\to bu(x)\), where \(b>0\); see e.g. the very example (34). After transformation \(u(x)\to bu(x)\), one can redefine \(g_{b}(x)=g(bx)\) such that the regret stays invariant. This redefinition respects transitivity and super-additivity of \(g(x)\). **Remark 3.** Recall that the independence axiom (7) (or the axiom of irrelevant alternatives) is the main axiom of EUT violated by the regret theory. Allais' paradox can be reformulated in such a way that the presence of this axiom is made obvious. To this end one writes (27, 28) as \[\mathrm{I} =\begin{pmatrix}1&1&1\\ 0.01&0.1&0.89\end{pmatrix}, \mathrm{III} =\begin{pmatrix}1&1&0\\ 0.01&0.1&0.89\end{pmatrix}, \tag{38}\] \[\mathrm{II} =\begin{pmatrix}0&5&1\\ 0.01&0.1&0.89\end{pmatrix}, \mathrm{IV} =\begin{pmatrix}0&5&0\\ 0.01&0.1&0.89\end{pmatrix}. \tag{39}\] We emphasize that \(\mathrm{I}\) and \(\mathrm{II}\) in (38, 39) (as well as \(\mathrm{III}\) and \(\mathrm{IV}\)) refer to independent events. 
It is seen that \(\mathrm{I}\) and \(\mathrm{II}\) have the common last column \(\left(\begin{smallmatrix}1\\ 0.89\end{smallmatrix}\right)\), while for \(\mathrm{III}\) and \(\mathrm{IV}\) the common last column is \(\left(\begin{smallmatrix}0\\ 0.89\end{smallmatrix}\right)\). These last columns (i.e. the corresponding outcomes with their probabilities) play the role of irrelevant alternatives. If they are deemed to be irrelevant, e.g. \(\left(\begin{smallmatrix}1\\ 0.89\end{smallmatrix}\right)\) is irrelevant when deciding between \(\mathrm{I}\) and \(\mathrm{II}\), then \(\mathrm{I}\) becomes equivalent to \(\mathrm{III}\), and \(\mathrm{II}\) is equivalent to \(\mathrm{IV}\). Hence one takes either \(\mathrm{(I,III)}\) or \(\mathrm{(II,IV)}\). Note that this reasoning is more general than appealing directly to the axiom (7), since this mathematical axiom does not specify the interpretation of the mixture model \(\alpha p+(1-\alpha)r\); see Appendix B for details. If experimental subjects are presented Allais' lotteries in the form (38, 39), then a larger majority of them behaves according to EUT than for (27, 28) [5]. Naturally, for the regret (11) the difference between (38, 39) and (27, 28) is absent. Hence these subjects did not use the regret theory in their decision making. ## IV Regret and Stochastic Dominance For lotteries (1) with independent probabilities, a clear-cut definition of superiority is provided by the stochastic dominance \(\succeq_{\mathrm{sto}}\)[2]. Recall its definition: we assume 4 that \(x_{k}=y_{k}\) in (1) hold with Footnote 4: This assumption of identical outcomes is not necessary, since the stochastic dominance can be formulated more generally. We do not focus on this general definition, since it is equivalent to the situation, when the outcomes are made the same by increasing their number via adding zero-probability events; cf. the discussion before (4). \[x_{i}<x_{j}\quad\text{for}\quad i<j. 
\tag{40}\] Now define [2] \[(x,p)\succeq_{\mathrm{sto}}(x,q)\quad\text{iff}\quad\sum\nolimits_{i=1}^{k} p_{i}\leq\sum\nolimits_{i=1}^{k}q_{i}\quad\text{for}\quad k=1,..,n. \tag{41}\] Recall that the utility \(u(x)\) in (11) is an increasing function of \(x\). Stochastic dominance does not depend on a specific form of the utility \(u(x)\) in (11) provided that it is an increasing function of \(x\), as we assume. This is an advantage of stochastic dominance. Its weakness is that it clearly does not apply to all lotteries, i.e. the completeness axiom (4) is violated. Indeed, it is sufficient to violate (41) for one value of \(k\), and this will make \(\succeq_{\rm sto}\) inapplicable. A related weakness is that its applicability is not stable with respect to small variations of outcomes. To see this, assume that (40, 41) hold and perturb \(y_{1}=x_{1}\to y_{1}^{\prime}<x_{1}\). Even a small variation of this type violates condition (41) for \(k=1\). Regret and stochastic dominance do not contradict each other, as the following proposition shows. **Proposition 4.**\((x,p)\succeq_{\rm sto}(x,q)\) implies \((x,p)\succeq_{\rm reg}(x,q)\) defined from (11). The proof is given in Appendix A. Note that Proposition 4 does not require any specific feature of \(g(x)\) apart from (13, 14). However, it does require independent probabilities for the lotteries, as implied by (11). Lotteries with independent probabilities have a vast but still limited range of applications. Even within the framework of initially independent lotteries, one can envisage new dependent lotteries for which the regret is given via (19). For dependent lotteries the relation between regret and stochastic dominance is partially explained by the following proposition. 
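Proposition 4 lends itself to a quick numerical sanity check by random search; a minimal sketch (the anti-symmetric \(g\) and the utility \(u\) are illustrative choices satisfying (13, 14)):

```python
import math
import random

def sto_dominates(p, q):
    """First-order stochastic dominance (41); outcomes sorted as in (40)."""
    cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        if cp > cq + 1e-12:
            return False
    return True

def regret_gap_common(x, p, q, g, u):
    """Sum_{i,j} g(u(x_j) - u(x_i)) p_i q_j from (11) for common outcomes x."""
    return sum(g(u(xj) - u(xi)) * pi * qj
               for xi, pi in zip(x, p) for xj, qj in zip(x, q))

g = lambda z: math.sinh(z)       # anti-symmetric and increasing, cf. (13, 14)
u = lambda v: math.log(1.0 + v)  # increasing utility
x = [0.0, 1.0, 2.0, 3.0]
random.seed(0)
for _ in range(200):
    p = [random.random() for _ in x]
    q = [random.random() for _ in x]
    p, q = [v / sum(p) for v in p], [v / sum(q) for v in q]
    if sto_dominates(p, q):  # whenever (x,p) >=_sto (x,q) ...
        assert regret_gap_common(x, p, q, g, u) <= 1e-9  # ... (x,p) >=_reg (x,q)
```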
**Proposition 5.** For the joint probability \(P(x_{i},x_{j})\) given by (16), let us define the marginal probabilities \(\{p_{i}\}_{i=1}^{n}\) and \(\{q_{j}\}_{j=1}^{n}\), as well as the deviation of \(P(x_{i},x_{j})\) from \(p_{i}q_{j}\): \[p_{i}:=\sum\nolimits_{j=1}^{n}\!P(x_{i},x_{j}),\quad q_{j}:=\sum \nolimits_{i=1}^{n}\!P(x_{i},x_{j}), \tag{42}\] \[\theta_{i,j}:=P(x_{i},x_{j})-p_{i}q_{j},\] (43) \[\sum\nolimits_{i=1}^{n}\!\theta_{i,j}=\sum\nolimits_{j=1}^{n}\! \theta_{i,j}=0,\qquad|\theta_{i,j}|\leq p_{i}q_{j}. \tag{44}\] Then if \(g(x)\) is super-additive on the positive domain [see (33)] and if \[\theta_{i,j}\geq\theta_{j,i},\quad\mbox{for}\quad i>j, \tag{45}\] one has that \((x,p)\succeq_{\rm sto}(x,q)\) defined via (40, 41) leads to \((x,p)\succeq_{\rm reg}(x,q)\) in the sense of (19). Thus the super-additivity of \(g(x)\) plus condition (45) make the regret consistent with the stochastic dominance. The proof of Proposition 5 is given in Appendix C. ## V Savage's Omelet is solved via the regret theory Eq. (1) with \(\{p_{k}=q_{k}\}_{k=1}^{n}\) can refer to the decision model which assumes that at the moment of action-taking there is an uncertain state of nature (environment) \(\mathcal{S}_{k}\) to be realized from \(\{\mathcal{S}_{k}\}_{k=1}^{n}\) with probabilities \(\{p_{k}\}_{k=1}^{n}\), which are known to the decision maker [2; 24]. \(\mathcal{S}_{k}\) are called states of nature, since their future realization is independent of the action taken, but an action \(A\) (\(B\)) in a state \(\mathcal{S}_{k}\) leads to consequences with monetary outcome \(x_{k}\) (\(y_{k}\)) and utilities \(u(x_{k})\) (\(u(y_{k})\)) [2; 24]; cf. (1, 20). The following classic decision problem is described in [3]: A decision maker has to finish making an omelet begun by his wife, who has already broken into a bowl five good eggs. A sixth unbroken egg is lying on the table, and it must be either used in making the omelet, or discarded. 
There are two states of nature: good (the sixth egg is good) and rotten (the sixth egg is rotten), which do not depend on the actions \(A_{1}\), \(A_{2}\) and \(A_{3}\) of the decision maker. \(A_{1}\): break the sixth egg into the bowl. \(A_{2}\): discard the sixth egg. \(A_{3}\): break the sixth egg into a saucer; add it to the five eggs if it is good, discard it if it is rotten. The consequences of the acts can be written as lotteries: \[A_{1}=\begin{pmatrix}u_{-5}&u_{6}\\ p&1-p\end{pmatrix},\qquad A_{2}=\begin{pmatrix}u_{5}&u_{5}+z\\ p&1-p\end{pmatrix},\qquad A_{3}=\begin{pmatrix}u_{5}+w&u_{6}+w\\ p&1-p\end{pmatrix}, \tag{46}\] where \(p\) (\(1-p\)) is the objective probability for the sixth egg to be rotten (good), \(u_{6}\) (\(u_{5}\)) is the utility of the six-egg (five-egg) omelet, \(u_{-5}<0\) is the utility of five spoiled eggs and no omelet whatsoever, \(w<0\) is the utility of washing the saucer, and \(z<0\) is the utility of the good egg being lost. 5 Footnote 5: The concrete utilities of washing the saucer may differ depending on the state of the sixth egg. We, however, neglect this difference. Also, for simplicity \(w\) was simply added to \(u_{5}\) and \(u_{6}\). Now looking at the consequences of \(A_{2}\), we see that--in contrast to \(A_{1}\) and \(A_{3}\)--acting \(A_{2}\) does not resolve the uncertain state of nature: once the egg is discarded, the decision maker will not know (without additional actions) whether it was rotten or good. Put differently, utility \(z\) is not obtained after acting \(A_{2}\), and cannot be obtained without additional actions. Calculating the expected utility of \(A_{2}\) in the usual way as \(pu_{5}+(1-p)(u_{5}+z)\) does not apply, because it disregards this aspect of \(A_{2}\). It is natural to take the expected utility as \(pu_{5}+(1-p)u_{5}=u_{5}\) (i.e. 
once \(z\) is not obtained, it is not included), but then comparing with the expected utilities of \(A_{1}\) and \(A_{3}\), we see that the parameter \(z\) appears nowhere. Hence, we suggest that the expected utility does not apply to comparing \(A_{2}\) with the other two actions. Employing in (46) the reasoning of regret [cf. (8, 18, 20)] does take into account the difference between \(A_{2}\) and the other two actions. Let us for example calculate the regret about not taking \(A_{1}\) once \(A_{2}\) has been taken: \[R(A_{1},A_{2})=pf(u_{-5}-u_{5})+(1-p)f(u_{6}-u_{5}). \tag{47}\] This expression does not contain \(z\), since the uncertain state of nature was not resolved after acting \(A_{2}\), i.e. after acting \(A_{2}\) the obtained utility is \(u_{5}\). On the other hand, acting \(A_{1}\) resolves the uncertainty about the state of nature. Hence the regret of not taking \(A_{2}\), once \(A_{1}\) was taken, reads: \[R(A_{2},A_{1})=pf(u_{5}-u_{-5})+(1-p)f(u_{5}+z-u_{6}), \tag{48}\] i.e. once \(A_{1}\) is taken and the egg is rotten (good), the decision maker already knows that had \(A_{2}\) been taken, the egg would have turned out rotten (good). It is seen that (48) contains \(z\) (the utility of discarding a good egg), while (47) does not. Now \[A_{1}\succeq_{\rm reg}A_{2}\quad{\rm iff}\quad R(A_{2},A_{1})\leq R(A_{1},A_{2 }), \tag{49}\] where \(R(A_{2},A_{1})-R(A_{1},A_{2})\) does depend on the parameter \(z\). As an example of (49) consider \(f(x)=x\) [cf. the discussion after (15)]: \[p(u_{5}-u_{-5})<(1-p)(u_{6}-u_{5}-\frac{z}{2}), \tag{50}\] where we recall that \(u_{6}>u_{5}>u_{-5}\) and \(z<0\). We can naturally assume \(u_{5}-u_{-5}>u_{6}-u_{5}>0\), under which (50) is non-trivial even for \(p=1/2\). Note that the formal application of the expected utility will claim that \(A_{1}\) is preferred over \(A_{2}\) for \(p(u_{5}-u_{-5})<(1-p)(u_{6}-u_{5}-z)\), which is clearly different from (50).
This is not just a different outcome; rather, the expected utility does not apply.

## Summary

This paper studied regret functionals over the utility differences of two probabilistic lotteries; see section II. There are various types of lotteries, from independent ones to fully dependent ones that refer to the same state of nature. The regret functional compares the lotteries counterfactually, taking notice of their probabilities. For particular cases, the regret reverts to the expected utility. More generally, it does not satisfy the independence (from irrelevant alternatives) axiom of the expected utility, also known as the sure thing principle. This violation is by itself non-trivial and is explored in Appendix B. In contrast to the expected utility, the regret is also generally not invariant with respect to multiplying the utility by a positive number. It is also due to these two differences compared to the expected utility that the regret is efficient for explaining and resolving Allais's paradox; see section III. The resolution demands a non-trivial feature of the regret functional, _viz._ its super-additivity, which does make intuitive sense. We show that the regret functional can be chosen such that the regret ordering is transitive. In particular, Allais's paradox can be resolved via a transitive regret, and this resolution provides a consistent account of changes in monetary outcomes. We devoted special attention to the relations between (the first-order) stochastic dominance and the regret-preference; see section IV. The former ordering is normatively appealing, but it is incomplete, since not every pair of lotteries can be compared with each other. We show that for independent lotteries the stochastic dominance implies the regret-preference. For dependent lotteries the relations between the two are more complex.
Here we proposed a sufficient condition for the implication stochastic dominance \(\rightarrow\) regret-preference, which, interestingly, is also based on the super-additivity of the regret; see Proposition 5. Finally, we show in section V how the considered regret theory can be useful in those situations where actions of the decision maker do not resolve the uncertainty. The expected utility theory does not apply to such situations in the sense that there is important information about the lotteries that it simply discards. In the regret approach, this information is employed, since the regret compares the unresolved uncertainty with the resolved uncertainty. Our results show that though the concept of regret was initially deduced from certain emotional features of decision makers, it does have many features one intuitively expects from rationality. Hence we envisage its further applications in e.g. reinforcement learning.

## Acknowledgements

This work was supported by the State Science Committee of Armenia, grant No. 21AG-1C038. We thank Andranik Khachatryan for useful remarks and for participating in initial stages of this work.
2302.03209
Delving into the $ B_s \to \ell \ell^{\prime}$, $B_{(s)} \to (K^{(*)}, φ, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ processes
To shed light on the indirect search for new physics beyond the standard model, the long standing discrepancies between the theory and experiment mediated by FCNC $b\to s \ell \ell$ quark level transitions set an ideal testing ground. Though the very recent measurements of $R_K$ and $R_{K^*}$ are consistent with the standard model, still the excitements remain on the measurements of LHCb experiment with the observables $\mathcal{B} (B_s \to \phi \mu ^+ \mu ^-)$ which has deviations at the level of $3.6 \sigma$. Additionally, standard deviation of $\sim 3.3 \sigma$ and $1.2 \sigma$, respectively for $P_5^{\prime}$ in $B \to K^* \mu ^+ \mu ^-$ and the branching ratio in $B_s \to \mu^+ \mu^-$ processes are observed. Inspired by these discrepancies, we work out the constraints on the new physics coupling parameters in the presence of a non-universal $Z'$ model. We then probe the exclusive leptonic decay channels $ B_s \to \ell \ell^{\prime}$, $B_{(s)} \to (K^{(*)}, \phi, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ induced by the neutral current transition $b\to s \ell \ell^{\prime}$. We find that the $q^2$ variation of the observables, such as, branching ratio, forward-backward asymmetry, lepton polarization asymmetry, and the very sensitive, so-called non-universality observables for LFV decays display the sensitivity of new physics. In this analysis, we estimate above mentioned observables that could shed light on the window of new physics in the near future.
Manas K. Mohapatra, Lopamudra Nayak, Rashmi Dhamija, Anjan Giri
2023-02-07T02:35:49Z
http://arxiv.org/abs/2302.03209v2
# Scrutinizing new physics in exclusive \(b\to s\ell\ell^{\prime}\) processes ###### Abstract To shed light on the indirect search for new physics beyond the standard model, the long-standing discrepancies between theory and experiment in FCNC \(b\to s\ell\ell\) quark level transitions set an ideal testing ground. Though the very recent measurements of \(R_{K}\) and \(R_{K^{*}}\) are consistent with the standard model, excitement still remains over the LHCb measurement of \({\cal B}(B_{s}\to\phi\mu^{+}\mu^{-})\), which deviates at the level of \(3.6\sigma\). Additionally, deviations of \(\sim 3.3\sigma\) and \(1.2\sigma\) are observed for \(P_{5}^{\prime}\) in \(B\to K^{*}\mu^{+}\mu^{-}\) and for the branching ratio of \(B_{s}\to\mu^{+}\mu^{-}\), respectively. Inspired by these discrepancies, we work out the constraints on the new physics coupling parameters in the presence of a non-universal \(Z^{\prime}\) model. We then probe the exclusive (semi)leptonic decay channels \(B_{s}\to\ell\ell^{\prime}\), \(B_{(s)}\to(K^{(*)},\phi,f_{2}^{\prime},K_{2}^{*})\ell\ell^{\prime}\) induced by the neutral current transition \(b\to s\ell\ell^{\prime}\). We find that the \(q^{2}\) variation of the observables, such as the branching ratio, forward-backward asymmetry, lepton polarization asymmetry, and the very sensitive, so-called non-universality observables for LFV decays, displays the sensitivity to new physics. In this analysis, we estimate the above mentioned observables, which could shed light on the window of new physics in the near future.

## I Introduction

Our best understanding of how particles communicate through three of the fundamental forces, namely the strong, electromagnetic and weak interactions, is encapsulated in the Standard Model (SM) of particle physics. Over time and through many experiments, the SM has proven to be a well-established physical theory.
Despite its spectacular success at explaining the data, the SM is believed to be incomplete, as it fails to address a few challenging shortcomings such as the matter dominance over anti-matter in the present universe, neutrino masses, the hierarchy problem, dark matter and dark energy, the unification of gravity with the other three fundamental forces, etc. To solve these open puzzles, the quest for physics beyond the SM (BSM) is of prime importance. In this context, the B factories have been an excellent testing ground for exploring new physics (NP) beyond the SM through low-energy experiments. They have witnessed hints of the breaking of lepton flavor universality in the charged (neutral) current decays mediated by \(b\to c\) (\(b\to s\)) quark level transitions. Experimentally, the \(b\to c\tau\nu\) quark level transitions show lepton non-universality anomalies in the exclusive \(B\to D^{(*)}\tau\nu\) and \(B\to J/\psi\tau\nu\) decays, with tensions of \(1.4\sigma\) (\(2.5\sigma\)) and \(1.8\sigma\), respectively, obtained from the HFLAV group [1]. The \(\tau\) polarization fraction and the longitudinal polarization fraction of the \(D^{*}\) meson in \(B\to D^{*}\tau\nu\) have \(1.6\sigma\)[2; 3] and \(1.6\sigma\)[4] deviations, respectively. Similarly, several measurements in \(b\to s\ell^{+}\ell^{-}\) transitions, such as the angular observable \(P_{5}^{\prime}\) in \(B\to K^{*}\mu^{+}\mu^{-}\) in the bins \(q^{2}\in[4.0,6.0]\), [\(4.3\), \(6.0\)] and [\(4.0\), \(8.0\)] from ATLAS [5], LHCb [6; 7], CMS [8], and Belle [9], respectively, deviate at \(3.3\sigma\), \(1\sigma\) and \(2.1\sigma\) from the SM expectations [10; 11; 12]. The updated measurements of the branching fraction of \(B_{s}\to\phi\mu^{+}\mu^{-}\)[13; 14; 15] in the \(q^{2}\in[1.1,6.0]\) region have a discrepancy at the level of \(3.6\sigma\) from the SM expectations [16; 17].
However, the recent updates by the LHCb Collaboration [18; 19] in the measurements of \(R_{K}=\mathcal{B}(B\to K\mu^{+}\mu^{-})/\mathcal{B}(B\to Ke^{+}e^{-})\) and \(R_{K^{*}}=\mathcal{B}(B\to K^{*}\,\mu^{+}\mu^{-})/\mathcal{B}(B\to K^{*}e^{+} e^{-})\), in the bin ranges \(q^{2}\in[0.1,1.1]\) and \([1.1,6.0]\), are consistent with the SM predictions. On the other hand, lepton flavor violating (LFV) decays are forbidden at tree level in the SM and can in principle occur via neutrino mixing through loop and box diagrams. Such mixing between different generations of leptons gives rise to flavor-changing neutral current (FCNC) transitions; because of its smallness, however, the rate is considerably below current or future experimental sensitivities. Keeping this in mind, leptonic LFV decays such as \(\tau\to\mu\mu\mu\), \(\tau\to eee\), \(\mu\to eee\), etc. have been analysed in various NP models, though only experimental upper limits exist [20]. However, the principle behind the FCNC transitions of the quark sector for LFV decays could be similar to that of the lepton sector. In this regard, we explore the exclusive LFV decays in the quark sector, such as \(B_{s}\to\ell\ell^{\prime}\) and \(B_{(s)}\to(K^{(*)},\phi,f_{2}^{\prime},K_{2}^{*})\ell\ell^{\prime}\) decays, which occur via the \(b\to s\ell\ell^{\prime}\) quark level transition. Experimentally, these decay channels are not yet observed, but upper limits on a few observables exist. The leptonic \(B_{s}\to\mu e\) and \(B_{s}\to\tau\mu\) processes have upper limits of \(5.4\times 10^{-9}\) and \(4.2\times 10^{-5}\), respectively, from the LHCb Collaboration [21; 22]. Similarly, the upper bounds measured by the LHCb and BaBar collaborations on the branching ratios of \(B\to K\ell\ell^{\prime}\) processes are \(\mathcal{B}(B\to Ke\mu)<7.0\times 10^{-9}\)[23], \(\mathcal{B}(B\to K\tau\mu)<1.5\times 10^{-5}\)[24], and \(\mathcal{B}(B\to Ke\tau)<4.5\times 10^{-5}\)[24], respectively.
The upper limit on the branching ratio of the \(B^{0}\to K^{*}e\mu\) channel is observed as \(1.8\times 10^{-7}\) by the Belle Collaboration [25]. We analyse the \(B\to K_{2}^{*}\ell\ell^{\prime}\) process because a better understanding of the \(B\to K_{2}^{*}\gamma\) channel has been given by the BaBar Collaboration in ref. [26]. On the other hand, the \(B\to f_{2}^{\prime}\ell\ell^{\prime}\) process has received very little attention both in theory and experiment. Thus, it can be studied similarly to the \(B\to K_{2}^{*}\ell\ell^{\prime}\) process. It is interesting to see if the associated observables could be enhanced in some new physics models that could simultaneously explain the observed \(b\to s\ell\ell\) data. In this analysis, we consider a simplified non-universal \(Z^{\prime}\) model in which the NP effects originate from a \(U(1)^{\prime}\) abelian group extension of the SM gauge symmetry. Consequently, it provides a heavy new gauge boson \(Z^{\prime}\) of mass \(m_{Z^{\prime}}\) with generic couplings to quarks and leptons, and induces FCNC transitions at tree level. Inspired by the available upper limits, we study the above discussed LFV decays in the presence of the non-universal \(Z^{\prime}\) model. The outline of the paper is as follows. In section II, we discuss the theoretical toolkit that includes the most general effective weak Hamiltonian for \(b\to s\ell\ell^{\prime}\) NP operators. We also report the relevant formulae for all the decay observables pertaining to the \(B_{s}\to\ell\ell^{\prime}\), \(B_{(s)}\to(K^{(*)},\phi,f_{2}^{\prime},K_{2}^{*})\ell\ell^{\prime}\) decay channels. In section III, we discuss the new physics analysis in the presence of the non-universal \(Z^{\prime}\) model by using the updated experimental limits on the \(b\to s\ell\ell\) data. In section IV, we discuss the numerical analysis of the aforementioned observables of rare (semi)leptonic LFV decays. We conclude with a summary of our results in section V.
## II Theoretical Toolkit

### Effective Hamiltonian

In this section, we focus on the exclusive lepton flavor violating \(b\to s\ell\ell^{\prime}\)\((\ell,\ell^{\prime}=e,\mu,\tau)\) transition processes. In the SM, the leptons \(\ell\) and \(\ell^{\prime}\) are considered to have the same flavor, whereas the non-universal \(Z^{\prime}\) boson couples to them differently in NP models. The most general weak effective Hamiltonian describing the \(b\to s\ell\ell^{\prime}\) processes can be represented as [27; 28; 29], \[\mathcal{H}^{eff}=-\frac{G_{F}\alpha}{2\sqrt{2}\pi}V_{tb}V_{ts}^{ *}\sum_{m=9,10}C_{m}^{NP}O_{m}+h.c., \tag{1}\] where \(G_{F}\)\((\alpha)\) represents the Fermi (electromagnetic) coupling constant and \(V_{tb}V_{ts}^{*}\) is the CKM matrix element. The primed counterparts of the operators can be obtained by replacing \(P_{L}\rightleftharpoons P_{R}\). The Hamiltonian is very sensitive to the semileptonic operators \(O_{9}\) and \(O_{10}\), which are given by \[O_{9}=[\bar{s}\gamma_{\mu}(1-\gamma_{5})b][\bar{\ell}\gamma^{\mu }\ell^{\prime}],\hskip 28.452756ptO_{10}=[\bar{s}\gamma_{\mu}(1-\gamma_{5})b][ \bar{\ell}\gamma^{\mu}\gamma_{5}\ell^{\prime}]. \tag{2}\] The standard decomposition of the hadronic matrix elements is given as \[\langle 0|\bar{b}\gamma_{\mu}P_{L(R)}s|B_{s}(p)\rangle=\pm\frac{i}{2}p_{ \mu}f_{B_{s}},\] \[\langle 0|\bar{b}P_{L(R)}s|B_{s}(p)\rangle=\pm\frac{i}{2}\frac{M_{B_ {s}}^{2}f_{B_{s}}}{m_{b}+m_{s}},\] where \(f_{B_{s}}\) and \(p_{\mu}\) are the decay constant and momentum of the \(B_{s}\) meson, respectively.
### Decay observables of (semi)leptonic LFV \(b\to s\ell\ell^{\prime}\) processes

#### ii.2.1 \(B_{s}\to\ell\ell^{\prime}\)

From the effective Hamiltonian (1) one can obtain the amplitude of the \(B_{s}\to\ell\ell^{\prime}\) process; the associated branching ratio is given as [28] \[\mathcal{B}(B_{s} \to\ell\ell^{\prime})=\frac{\tau_{B_{s}}}{64\pi^{3}}\frac{\alpha^{2 }G_{F}^{2}}{m_{B_{s}}^{3}}f_{B_{s}}^{2}|V_{tb}V_{ts}^{*}|^{2}\lambda^{1/2}(m_{B_ {s}},m_{\ell},m_{\ell^{\prime}})\] \[\times\Bigg{\{}[m_{B_{s}}^{2}-(m_{\ell}+m_{\ell^{\prime}})^{2}] \cdot\left|(C_{9}^{NP}-C_{9}^{\prime})(m_{\ell}-m_{\ell^{\prime}})\right|^{2}\] \[\quad+[m_{B_{s}}^{2}-(m_{\ell}-m_{\ell^{\prime}})^{2}]\cdot\left| (C_{10}^{NP}-C_{10}^{\prime})(m_{\ell}+m_{\ell^{\prime}})\right|^{2}\Bigg{\}}, \tag{4}\] where \(\lambda(a,b,c)=[a^{2}-(b-c)^{2}][a^{2}-(b+c)^{2}]\). #### ii.2.2 \(B\to K\ell\ell^{\prime}\) The semileptonic \(B\to K\ell\ell^{\prime}\) decay mode involves the \(b\to s\) quark level transition mediated by the effective Hamiltonian (1). The kinematic variables follow Ref. [30]: the main decay axis, denoted by \(z\), is defined in the rest frame of the \(B\) meson, along which the \(K\) meson and the lepton pair \(\ell\ell^{\prime}\) travel in opposite directions. The polar angle \(\theta_{\ell}\) is the angle between the meson \(K\) and the lepton \(\ell\) in the \(\ell-\ell^{\prime}\) rest frame. The standard parametrizations for the hadronic matrix elements are provided by \[\langle\bar{K}(k)|\bar{s}\gamma_{\mu}b|\bar{B}(p)\rangle =\Big{[}(p+k)_{\mu}-\frac{m_{B}^{2}-m_{K}^{2}}{q^{2}}q_{\mu}\Big{]} f_{+}(q^{2})+\frac{m_{B}^{2}-m_{K}^{2}}{q^{2}}q_{\mu}f_{0}(q^{2}), \tag{5}\] \[\langle\bar{K}(k)|\bar{s}\sigma_{\mu\nu}b|\bar{B}(p)\rangle =-i(p_{\mu}k_{\nu}-p_{\nu}k_{\mu})\frac{2f_{T}(q^{2},\mu)}{m_{B}+m _{K}}.
\tag{6}\] The hadronic form factors (FFs) \(f_{+}(q^{2})\), \(f_{0}(q^{2})\) and \(f_{T}(q^{2})\) are functions of \(q^{2}\), which lies between \((m_{1}+m_{2})^{2}\) and \((m_{B}-m_{K})^{2}\), with \(m_{1,2}\) the lepton masses. By employing the above definitions, the differential decay rate can be written as [28], \[\frac{\mathrm{d}\mathcal{B}}{\mathrm{d}q^{2}}(\bar{B}\to\bar{K} \ell_{1}^{-}\ell_{2}^{+}) =|\mathcal{M}_{K}(q^{2})|^{2}\times\Big{\{}\varphi_{7}(q^{2})|C_{7} +C_{7}^{\prime}|^{2}+\varphi_{9}(q^{2})|C_{9}+C_{9}^{\prime}|^{2}+\varphi_{10 }(q^{2})|C_{10}+C_{10}^{\prime}|^{2}\] \[+\varphi_{S}(q^{2})|C_{S}+C_{S}^{\prime}|^{2}+\varphi_{P}(q^{2}) |C_{P}+C_{P}^{\prime}|^{2}+\varphi_{79}(q^{2})\mathrm{Re}[(C_{7}+C_{7}^{\prime })(C_{9}+C_{9}^{\prime})^{*}]\] \[+\varphi_{9S}(q^{2})\mathrm{Re}[(C_{9}+C_{9}^{\prime})(C_{S}+C_{ S}^{\prime})^{*}]+\varphi_{10P}(q^{2})\mathrm{Re}[(C_{10}+C_{10}^{\prime})(C_{P}+C_{ P}^{\prime})^{*}]\Big{\}}, \tag{7}\] where the \(\varphi_{i}(q^{2})\) depend on kinematical quantities and on the form factors, and are shown in Appendix A. The normalization factor given in eq. (7) reads \[|\mathcal{M}_{K}(q^{2})|^{2}=\tau_{B_{d}}\frac{\alpha^{2}G_{F}^{2}|V_{tb}V_{ts }^{*}|^{2}}{512\pi^{5}m_{B}^{3}}\frac{\lambda^{1/2}(\sqrt{q^{2}},m_{1},m_{2})} {q^{2}}\lambda^{1/2}(\sqrt{q^{2}},m_{B},m_{K}), \tag{8}\] whereas the kinematic factor is given as \[\lambda=m_{B}^{4}+m_{K}^{4}+q^{4}-2(m_{B}^{2}m_{K}^{2}+m_{K}^{2}q^{2}+m_{B}^{2}q^{2}). \tag{9}\] #### ii.2.3 \(B\to K^{*}\ell\ell^{\prime}\) and \(B_{s}\to\phi\ell\ell^{\prime}\) Here we focus on \(B\to V\ell\ell^{\prime}\) (\(V=K^{*},\phi\)) decays proceeding via \(b\to s\ell\ell^{\prime}\) processes, where the vector mesons further decay as \(K^{*}\to K\pi\) and \(\phi\to K\bar{K}\), respectively. We also express the angular distributions of the \(B\to K^{*}(\to K\pi)\ell\ell^{\prime}\) process.
Similarly, the distributions associated with the \(B_{s}\to\phi\) transition can be obtained by trivial replacement of the form factors and the masses of the particles involved. We adopt the details concerning the kinematics from Ref. [30]. In the angular conventions of [28], \(\theta_{\ell}\) is the angle between the lepton \(\ell\) and the decay axis in the lepton pair rest frame, while \(\theta_{K}\) is the angle made by the decay axis with the direction of flight of the \(K\) meson in the rest frame of the \(K^{*}\) vector meson. The angle \(\phi\) is spanned between the \(K\pi\) and \(\ell\ell^{\prime}\) decay planes, as shown in Fig. 1. The transition amplitude of the exclusive \(B\to K^{*}\ell\ell^{\prime}\) decay mode involves hadronic matrix elements, which are parametrized in terms of form factors as [28] \[\langle\bar{K}^{*}(k)|\bar{s}\gamma^{\mu}(1-\gamma_{5})b|\bar{B}( p)\rangle =\varepsilon_{\mu\nu\rho\sigma}\varepsilon^{*\nu}p^{\rho}k^{\sigma }\frac{2V(q^{2})}{m_{B}+m_{K^{*}}}-i\varepsilon_{\mu}^{*}(m_{B}+m_{K^{*}})A_{ 1}(q^{2}) \tag{10}\] \[+i(p+k)_{\mu}(\varepsilon^{*}\cdot q)\frac{A_{2}(q^{2})}{m_{B}+m _{K^{*}}}+iq_{\mu}(\varepsilon^{*}\cdot q)\frac{2m_{K^{*}}}{q^{2}}[A_{3}(q^{2} )-A_{0}(q^{2})],\] \[\langle\bar{K}^{*}(k)|\bar{s}\sigma_{\mu\nu}q^{\nu}(1-\gamma_{5})b|\bar{B}( p)\rangle =2i\varepsilon_{\mu\nu\rho\sigma}\varepsilon^{*\nu}p^{\rho}k^{ \sigma}T_{1}(q^{2})+[\varepsilon_{\mu}^{*}(m_{B}^{2}-m_{K^{*}}^{2})-( \varepsilon^{*}\cdot q)(2p-q)_{\mu}]T_{2}(q^{2})\] \[+(\varepsilon^{*}\cdot q)\Big{[}q_{\mu}-\frac{q^{2}}{m_{B}^{2}-m_ {K^{*}}^{2}}(p+k)_{\mu}\Big{]}T_{3}(q^{2}), \tag{11}\] where \(\varepsilon_{\mu}\) is the polarization vector of the \(K^{*}\) meson. The form factor \(A_{3}(q^{2})\) is a linear combination of the \(A_{1}(q^{2})\) and \(A_{2}(q^{2})\) form factors: \[2m_{K^{*}}A_{3}(q^{2})=(m_{B}+m_{K^{*}})A_{1}(q^{2})-(m_{B}-m_{K^{*}})A_{2}(q^{2}).
\tag{12}\] The full angular distribution of the \(B\to K^{*}\ell\ell^{\prime}\) decay mode can be read as \[\frac{\mathrm{d}^{4}\mathcal{B}(B\to K^{*}(\to K\pi)\ell\ell^{\prime})}{ \mathrm{d}q^{2}\mathrm{d}\cos\theta_{\ell}\mathrm{d}\cos\theta_{K}\mathrm{d} \phi}=\frac{9}{32\pi}I(q^{2},\theta_{\ell},\theta_{K},\phi), \tag{13}\] where \[I(q^{2},\theta_{\ell},\theta_{K},\phi)= I_{1}^{s}(q^{2})\sin^{2}\theta_{K}+I_{1}^{c}(q^{2})\cos^{2}\theta_{K} +[I_{2}^{s}(q^{2})\sin^{2}\theta_{K}+I_{2}^{c}(q^{2})\cos^{2}\theta_{K}]\cos 2 \theta_{\ell}\] \[+I_{3}(q^{2})\sin^{2}\theta_{K}\sin^{2}\theta_{\ell}\cos 2\phi+I_{ 4}(q^{2})\sin 2\theta_{K}\sin 2\theta_{\ell}\cos\phi\] \[+I_{5}(q^{2})\sin 2\theta_{K}\sin\theta_{\ell}\cos\phi+[I_{6}^{s}(q^{ 2})\sin^{2}\theta_{K}+I_{6}^{c}(q^{2})\cos^{2}\theta_{K}]\cos\theta_{\ell}\] \[+I_{7}(q^{2})\sin 2\theta_{K}\sin\theta_{\ell}\sin\phi+I_{8}(q^{2}) \sin 2\theta_{K}\sin 2\theta_{\ell}\sin\phi\] \[+I_{9}(q^{2})\sin^{2}\theta_{K}\sin^{2}\theta_{\ell}\sin 2\phi. \tag{14}\] The \(q^{2}\)-dependent differential branching fraction, after integrating over the physical region of the angular phase space in \(\theta_{K}\), \(\theta_{\ell}\) and \(\phi\), is simply given as \[\frac{\mathrm{d}\mathcal{B}}{\mathrm{d}q^{2}}=\frac{1}{4}\left[3I_{1}^{c}(q^{ 2})+6I_{1}^{s}(q^{2})-I_{2}^{c}(q^{2})-2I_{2}^{s}(q^{2})\right] \tag{15}\] with \[(m_{i}+m_{j})^{2}\leq q^{2}\leq(M_{B}-M_{K^{*}})^{2},\ \ -1\leq\cos\theta_{l}\leq 1,\ \ -1\leq\cos\theta_{K}\leq 1,\ \ 0\leq\phi\leq 2\pi. \tag{16}\] Here the angular coefficients \(I_{j}^{i}(q^{2})\) (\(i=c,s\); \(j=1,2,\ldots,9\)) are defined in terms of the transversity amplitudes and given in Appendix B. #### ii.2.4 \(B\to T\,(K_{2}^{*},f_{2}^{\prime})\,\ell\ell^{\prime}\) processes In contrast to the previous sub-sections, we study the exclusive \(B\to T\) (\(T=K_{2}^{*},f_{2}^{\prime}\)) \(\ell\ell^{\prime}\) decay modes mediated via the \(b\to s\) quark level transition.
The long-distance contributions, in terms of the hadronic matrix elements of the \(B\to K_{2}^{*}\) transition, are given as [31; 32] \[\langle K_{2}^{*}(k,\epsilon^{*})|\bar{s}\gamma^{\mu}b|\overline{B}( p)\rangle = -\frac{2V(q^{2})}{m_{B}+m_{K_{2}^{*}}}\epsilon^{\mu\nu\rho\sigma} \epsilon_{T\nu}^{*}p_{\rho}k_{\sigma},\] \[\langle K_{2}^{*}(k,\epsilon^{*})|\bar{s}\gamma^{\mu}\gamma_{5}b| \overline{B}(p)\rangle = 2im_{K_{2}^{*}}A_{0}(q^{2})\frac{\epsilon_{T}^{*}\cdot q}{q^{2}}q ^{\mu}+i(m_{B}+m_{K_{2}^{*}})A_{1}(q^{2})\left[\epsilon_{T}^{*\mu}-\frac{ \epsilon_{T}^{*}\cdot q}{q^{2}}q^{\mu}\right] \tag{17}\] \[-iA_{2}(q^{2})\frac{\epsilon_{T}^{*}\cdot q}{m_{B}+m_{K_{2}^{*}} }\left[(p+k)^{\mu}-\frac{m_{B}^{2}-m_{K_{2}^{*}}^{2}}{q^{2}}q^{\mu}\right],\] where \(p\) (\(k\)) is the four momentum of the \(B\) (\(K_{2}^{*}\)) meson. In our analysis we use the relevant form factors for the \(B_{(s)}\) transition to a light \(J^{PC}=2^{++}\) tensor meson (\(T\)), derived from the light-cone sum rule (LCSR) approach. Within this technique, the \(q^{2}\)-dependent form factors are parameterized as [32]: \[F^{B_{(s)}T}(q^{2})=\frac{F^{B_{(s)}T}(0)}{1-a_{T}(q^{2}/m_{B_{q}}^{2})+b_{T}( q^{2}/m_{B_{q}}^{2})^{2}}, \tag{18}\] where \(F=V,A_{0,1,2}\) and \(T_{1,2,3}\) are the transition form factors. The \(B\to K_{2}^{*}\ell\ell^{\prime}\) decay mode, which undergoes the \(b\to s\ell\ell^{\prime}\) quark level transition, can be expressed in terms of the leptonic polar angle \(\theta_{\ell}\) and the dilepton invariant mass squared \(q^{2}\). The angle \(\theta_{\ell}\) is the angle made by the lepton \(\ell\) with respect to the di-lepton momentum in the rest frame of the \(B\) meson.
The two-fold differential decay distribution in terms of the variables \(\theta_{\ell}\) and \(q^{2}\) is given as follows [33] \[\frac{d^{2}\Gamma}{dq^{2}d\cos\theta_{\ell}}=A(q^{2})+B(q^{2})\cos\theta_{ \ell}+C(q^{2})\cos^{2}\theta_{\ell}, \tag{19}\] where the \(q^{2}\)-dependent functions \(A(q^{2})\), \(B(q^{2})\) and \(C(q^{2})\) include form factors and Wilson coefficients. The detailed expressions are given in Appendix C. Now, after integrating Eq. (19) over \(\theta_{\ell}\), we obtain the differential branching ratio as \[\frac{d\mathcal{B}}{dq^{2}}=2\tau_{B}\left(A+\frac{C}{3}\right), \tag{20}\] and the lepton forward-backward asymmetry is represented as \[A_{\rm FB}(q^{2})=\frac{1}{d\Gamma/dq^{2}}\left(\int_{0}^{1}d\cos\theta_{\ell }\frac{d\Gamma}{d\cos\theta_{\ell}dq^{2}}-\int_{-1}^{0}d\cos\theta_{\ell} \frac{d\Gamma}{d\cos\theta_{\ell}dq^{2}}\right)=\frac{B}{2\left(A+\frac{C}{3} \right)}. \tag{21}\] Similarly, one can carry out the analysis for the \(B\to f_{2}^{\prime}\ell\ell^{\prime}\) process in analogy with the \(B\to K_{2}^{*}\) transition, with the form factors obtained from Ref. [32]. Analogously, we would like to see whether it is possible to observe non-universality in the LFV decays. Hence, we define the ratios of branching ratios of various LFV \(b\to s\ell\ell^{\prime}\) decays as \[R_{K}^{\ell\ell^{\prime}} =\frac{\mathcal{B}\left(\bar{B}\to\bar{K}\ell\ell^{\prime}\right)}{ \mathcal{B}\left(\bar{B}\to\bar{K}\ell\ell\right)}, \tag{22}\] \[R_{V}^{\ell\ell^{\prime}} =\frac{\mathcal{B}\left(\bar{B}\to\bar{V}\ell\ell^{\prime}\right) }{\mathcal{B}\left(\bar{B}\to\bar{V}\ell\ell\right)},\quad(V=K^{*},\phi),\] (23) \[R_{T}^{\ell\ell^{\prime}} =\frac{\mathcal{B}\left(\bar{B}\to\bar{T}\ell\ell^{\prime}\right) }{\mathcal{B}\left(\bar{B}\to\bar{T}\ell\ell\right)},\quad(T=K_{2}^{*},f_{2}^{ \prime}), \tag{24}\] with \(\ell=e,\mu\).
## III New physics analysis in the non-universal \(Z^{\prime}\) model

Among the various beyond-the-SM scenarios, an extra abelian \(U(1)^{\prime}\) gauge group extending the SM is rather ubiquitous and provides a neutral massive vector (spin-1) boson \(Z^{\prime}\)[34]. At tree level, this heavy \(Z^{\prime}\) boson mediates flavor changing neutral current transitions at the \(b\to s(d)\ell\ell^{\prime}\) quark level. It is the most obvious candidate to enter the weak effective Hamiltonian of the \(b\to s(d)\) quark level transition. It is also responsible for appreciable deviations from the SM results and can explain the collider data. In this work, we will formalise its application to the case of the \(b\to s\) transition (the generalization to the \(b\to d\) quark level transition is straightforward). Different kinds of new physics models, such as the \(Z^{\prime}\), leptoquark (LQ), and FCNC-mediated \(Z\) boson models, have been analysed in Refs. [35; 36; 37; 38; 39; 40; 41]. With the tree level exchange, the new physics effects at the parton level \(b\to s\ell\ell^{\prime}\) can be explained in two scenarios: \(\mathcal{S}\)-I: \(C_{9}^{NP}\neq 0\) and \(\mathcal{S}\)-II: \(C_{9}^{NP}=-C_{10}^{NP}\)[29; 42]. Both scenarios are possible in the \(Z^{\prime}\) model, whereas the LQ model allows only scenario II. In this analysis, we probe the \(b\to s\ell\ell^{\prime}\) exclusive decays in the presence of the non-universal \(Z^{\prime}\) model. The Feynman diagram in the presence of the \(Z^{\prime}\) boson for the exclusive \(b\to s\ell\ell^{\prime}\) decays is given in Fig. 2. The new physics couplings associated with the \(Z^{\prime}\) model can be read as [29] \[C_{9,10}^{NP}=-\frac{\pi}{\sqrt{2}M_{Z^{\prime}}^{2}}\frac{1}{ \alpha G_{F}V_{tb}V_{ts}^{*}}\Gamma_{bs}^{L}(\Gamma_{\ell\ell^{\prime}}^{R}\pm \Gamma_{\ell\ell^{\prime}}^{L}).
\tag{25}\] Scenario I is obtained with \(\Gamma_{\ell\ell^{\prime}}^{L}=\Gamma_{\ell\ell^{\prime}}^{R}\), whereas scenario II comes with the condition \(\Gamma_{\ell\ell^{\prime}}^{R}=0\). Here, for simplicity, we consider \(\Gamma_{bs}^{R}=0\), and the non-zero \(Z^{\prime}-b-s\) coupling (\(\Gamma_{bs}^{L}\)) is taken to be real in our analysis.

### Constraints on \(Z^{\prime}\) couplings from leptonic decays

Among lepton flavor violating \(\tau\) decays, the \(\tau\to\ell\ell\ell\) (\(\ell=\mu,e\)) channels provide a very sensitive probe of the coupling \(\Gamma_{\ell\ell^{\prime}}\) in the \(Z^{\prime}\) model. Experimentally, the upper limits on the branching ratios of the \(\tau\to\mu\mu\mu\) and \(\tau\to eee\) processes are \(2.1\times 10^{-8}\) and \(2.7\times 10^{-8}\) at 90% CL, respectively [20]. Additionally, the upper bound on the branching fraction of the \(\mu\to eee\) process is \(1.0\times 10^{-12}\)[20]. The observation of such distinct processes in collider experiments would provide significant sensitivity to physics beyond the SM. The \(Z^{\prime}\) boson contributes to the LFV 3-body leptonic decay \(\tau\to\mu\mu\mu\) at tree level with a branching ratio given by [43; 44] \[\mathcal{B}(\tau\to\mu\mu\mu)=\frac{m_{\tau}}{1536\pi^{3}\Gamma_{ \tau}M_{Z^{\prime}}^{4}}[2(|\Gamma_{\mu\tau}^{L}\Gamma_{\mu\mu}^{L}|^{2}+| \Gamma_{\mu\tau}^{R}\Gamma_{\mu\mu}^{R}|^{2})+|\Gamma_{\mu\tau}^{L}\Gamma_{\mu \mu}^{R}|^{2}+|\Gamma_{\mu\tau}^{R}\Gamma_{\mu\mu}^{L}|^{2}], \tag{26}\] where \(m_{\tau}\) (\(\Gamma_{\tau}\)) is the mass (total decay width) of the \(\tau\) lepton. Similarly, the branching ratio of the \(\tau\to eee\) decay mode can be obtained by replacing \(\mu\) by \(e\).
The LFV branching fraction of the \(\mu\to eee\) decay mode is given as [45] \[\mathcal{B}(\mu\to eee)=\frac{g_{V\mu e}^{2}g_{Vee}^{2}}{m_{Z^{ \prime}}^{4}}\frac{2}{4G_{F}^{2}}, \tag{27}\] where \(g_{V}^{2}=|g_{\mu e}^{V}|^{2}+|g_{\mu e}^{A}|^{2}\), and \(G_{F}\) is the Fermi coupling constant. Using the upper limits on the above discussed LFV leptonic decays, we obtain the values of the lepton flavor violating NP couplings as given in Table 1, where the coupling of the \(Z^{\prime}\) to \(\ell\ell\) is considered SM-like in this analysis.

### Fit Results

In this sub-section, we consider the \(q^{2}\)-binned SM predictions and experimental measurements of various observables, which include the angular observable \(P_{5}^{\prime}\), \(\mathcal{B}(B_{s}\to\phi\mu\mu)\) and \(\mathcal{B}(B_{s}\to\mu\mu)\). The form factor independent observable \(P_{5}^{\prime}\) in the \(B\to K^{*}\mu\mu\) process is defined as \[P_{5}^{\prime}=\frac{J_{5}}{2\sqrt{-J_{2}^{c}J_{2}^{s}}}, \tag{28}\] where the auxiliary functions \(J_{i}^{p}\) (\(i=2,5\); \(p=c,s\)) include the relevant form factors of the \(B\to K^{*}\) transition and the Wilson coefficients. For the numerical calculation of \(B\to K^{*}\ell\ell\), we employ the FFs from the LQCD method [46]. Similarly, for the \(B_{s}\to\phi\mu\mu\) process induced by the FCNC \(b\to s\mu\mu\) transition, we consider the FFs from the combined analysis of the LCSR and LQCD fit results [17]. The branching ratio of the \(B_{s}\to\mu\mu\) leptonic decay mode has also been studied. Now, using these three observables, we perform a naive \(\chi^{2}\) analysis to obtain the NP coupling parameters for the non-universal \(Z^{\prime}\) model.
We define the \(\chi^{2}\) function as follows: \[\chi^{2}(C_{i}^{\rm NP})=\sum_{i}\frac{\left(\mathcal{O}_{i}^{\rm th }(C_{9,10}^{\rm NP})-\mathcal{O}_{i}^{\rm exp}\right)^{2}}{(\Delta\mathcal{O} _{i}^{\rm exp})^{2}+(\Delta\mathcal{O}_{i}^{\rm sm})^{2}}, \tag{29}\] where \(\mathcal{O}_{i}^{\rm th}\) are the theoretical predictions including the NP contributions and \(\mathcal{O}_{i}^{\rm exp}\) are the experimental values. The denominator contains the \(1\sigma\) uncertainties of the theoretical and experimental results. As the new vector boson \(Z^{\prime}\) has not yet been observed in collider experiments, its mass scale has been constrained in different Grand Unified Theories, as discussed in Refs. [47; 48; 49]. Bandopadhyay et al. have obtained the constraint \(m_{Z^{\prime}}>4.4\) TeV using recent Drell-Yan data from the LHC [49]. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Couplings & \(m_{Z^{\prime}}=4.5\) TeV & \(m_{Z^{\prime}}=6.0\) TeV & \(m_{Z^{\prime}}=7.0\) TeV \\ \hline \hline \(\Gamma_{\tau\mu}(\mathcal{S}-{\rm I})\) & 0.128 & 0.227 & 0.310 \\ \(\Gamma_{\tau e}(\mathcal{S}-{\rm I})\) & 0.145 & 0.258 & 0.351 \\ \(\Gamma_{\mu e}(\mathcal{S}-{\rm I})\) & \(3.230\times 10^{-4}\) & \(5.742\times 10^{-4}\) & \(7.815\times 10^{-4}\) \\ \hline \hline \(\Gamma_{\tau\mu}(\mathcal{S}-{\rm II})\) & 0.221 & 0.394 & 0.537 \\ \(\Gamma_{\tau e}(\mathcal{S}-{\rm II})\) & 0.251 & 0.447 & 0.609 \\ \(\Gamma_{\mu e}(\mathcal{S}-{\rm II})\) & \(4.568\times 10^{-4}\) & \(8.120\times 10^{-4}\) & \(1.105\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: The NP couplings for LFV leptonic decays
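The \(\chi^{2}\) fit of Eq. (29) can be sketched as a simple grid scan over a single NP coupling. The observables, central values, and uncertainties below are invented for illustration only; they are not the measurements used in the actual fit.

```python
# Toy sketch of the naive chi^2 fit: one NP coupling is scanned, and each
# pseudo-observable is modeled as a linear function of it.
def chi2(c_np, data):
    """data: list of (theory_fn, experiment, sigma_exp, sigma_sm) tuples."""
    total = 0.0
    for theory_fn, exp, sig_exp, sig_sm in data:
        total += (theory_fn(c_np) - exp) ** 2 / (sig_exp ** 2 + sig_sm ** 2)
    return total

# Pseudo-measurements constructed so that the best fit is c_np = -1.0.
pseudo = [
    (lambda c: 0.5 + 0.3 * c, 0.2, 0.05, 0.02),   # a P5'-like bin (toy)
    (lambda c: 1.0 + 0.4 * c, 0.6, 0.10, 0.05),   # a branching-ratio bin (toy)
]
grid = [-3.0 + 0.001 * i for i in range(6001)]
best = min(grid, key=lambda c: chi2(c, pseudo))
```

In the actual analysis, the theory functions are the full expressions for \(P_{5}^{\prime}\), \(\mathcal{B}(B_{s}\to\phi\mu\mu)\), and \(\mathcal{B}(B_{s}\to\mu\mu)\) evaluated with the form factors cited above.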
Using all the input parameters, we obtain the following values of the NP parameters: \[\Gamma_{bs}^{L}|_{\mathcal{S}-\mathrm{I}} = 0.060~{}(m_{Z^{\prime}}=4.5~{}\mathrm{TeV}),\ 0.108~{}(m_{Z^{\prime}}=6.0~{}\mathrm{TeV}),\ 0.147~{}(m_{Z^{\prime}}=7.0~{}\mathrm{TeV}),\] \[\Gamma_{bs}^{L}|_{\mathcal{S}-\mathrm{II}} = 0.062~{}(m_{Z^{\prime}}=4.5~{}\mathrm{TeV}),\ 0.110~{}(m_{Z^{\prime}}=6.0~{}\mathrm{TeV}),\ 0.150~{}(m_{Z^{\prime}}=7.0~{}\mathrm{TeV}), \tag{30}\] where three representative \(m_{Z^{\prime}}\) values are used in this analysis. ### Input parameters In this sub-section, we list the input parameters used in our analysis. From Ref. [50], we take all the necessary parameters, such as the CKM matrix elements, the lifetimes of the \(B_{(s)}\) mesons, the Fermi coupling constant, the fine-structure constant, and the quark and lepton masses. We take the Wilson coefficients at the scale \(\mu=m_{b}\) from Ref. [51]. ## IV Numerical analysis and discussions ### Analysis of \(B_{s}\to\ell\ell^{\prime}\) process We estimate the numerical values of the branching ratios of the \(B_{s}\to\ell\ell^{\prime}\) processes and present them in Table 2. One observes that the branching fraction of the \(B_{s}\to\mu e\) mode is highly suppressed compared to the \(\tau\mu\) and \(\tau e\) channels, which are of \(\mathcal{O}(10^{-9})\). We show results for three \(m_{Z^{\prime}}\) values and find that the contributions increase in the presence of the NP couplings.
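Since the low-energy amplitude depends on the combination \(\Gamma_{bs}^{L}\Gamma_{\ell\ell}/m_{Z^{\prime}}^{2}\) with the leptonic coupling held SM-like, the fitted coupling should grow roughly as \(m_{Z^{\prime}}^{2}\). A quick consistency check of the Eq. (30) scenario-I values (a sketch, not part of the fit itself):

```python
# Scenario-I values of Gamma_bs^L from Eq. (30), keyed by m_Z' in TeV.
fit_s1 = {4.5: 0.060, 6.0: 0.108, 7.0: 0.147}

# Rescale the 4.5 TeV result by (m / 4.5)^2 and compare with the fitted values.
ref_m = 4.5
scaled = {m: fit_s1[ref_m] * (m / ref_m) ** 2 for m in fit_s1}
```

The naive \(m_{Z^{\prime}}^{2}\) rescaling reproduces the fitted couplings to within about 2%, confirming that the fit is effectively constraining the ratio \(\Gamma_{bs}^{L}/m_{Z^{\prime}}^{2}\).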
\begin{tabular}{|l|c|c|c|c|} \hline Observable & & \(m_{Z^{\prime}}\)=4.5 TeV & \(m_{Z^{\prime}}\)=6.0 TeV & \(m_{Z^{\prime}}\)=7.0 TeV \\ \hline \hline \multicolumn{5}{|c|}{\(B\to K\ell\ell^{\prime}\) (\(Z^{\prime}\) contribution)} \\ \hline \hline \({\cal B}_{\mu e}\times 10^{-13}\) & \({\cal S}-\rm I\) & 0.221 & 0.697 & 1.291 \\ \cline{2-5} & \({\cal S}-\rm II\) & 0.919 & 2.892 & 5.376 \\ \hline \({\cal B}_{\tau\mu}\times 10^{-8}\) & \({\cal S}-\rm I\) & 0.221 & 0.693 & 1.292 \\ \cline{2-5} & \({\cal S}-\rm II\) & 0.354 & 1.126 & 2.096 \\ \hline \({\cal B}_{\tau e}\times 10^{-8}\) & \({\cal S}-\rm I\) & 0.292 & 0.921 & 1.705 \\ \cline{2-5} & \({\cal S}-\rm II\) & 0.458 & 1.453 & 2.702 \\ \hline \hline \end{tabular} **TABLE III:** Estimated upper-limit values of the \(B\to K\ell\ell^{\prime}\) processes in the \(Z^{\prime}\) model. **FIG. 3:** Variation of the branching ratios of the \(B\to K\tau\mu\) (top), \(B\to K\tau e\) (middle), and \(B\to K\mu e\) (bottom) processes in the \(Z^{\prime}\) model. The left (right) panel corresponds to \({\cal S}-\rm I\) (II). ### Analysis of \(B\to K\ell\ell^{\prime}\) process Having determined the NP couplings in detail, we now proceed to analyse the prominent observables of the lepton flavor violating \(B\to K\ell\ell^{\prime}\) process, mediated by the \(b\to s\ell\ell^{\prime}\) transition, in the \(Z^{\prime}\) model. In Fig. 3, we show the variation of the differential branching ratio of the \(B\to K\tau\mu\) (top-left), \(B\to K\tau e\) (middle-left), and \(B\to K\mu e\) (bottom) processes with respect to \(q^{2}\) in the \(Z^{\prime}\) model. Here the magenta, blue, and green lines represent the contributions in the \(Z^{\prime}\) model for three different values of \(m_{Z^{\prime}}\). We observe that the observables have a higher contribution in the mid-\(q^{2}\) region for the \(B\to K\tau\mu\) and \(B\to K\tau e\) processes, and in the low-\(q^{2}\) region for the \(B\to K\mu e\) transition.
This behavior arises due to the lighter lepton masses involved in the latter mode. Moreover, the NP contribution is enhanced for increasing values of \(m_{Z^{\prime}}\). The predicted branching fractions are shown in Table 3, which indicates that the branching fraction of the \(B\to K\mu e\) channel is strongly suppressed compared to the \(\tau\mu\) and \(\tau e\) final states. For scenario II, we display the \(q^{2}\) behavior of the branching ratios of the \(B\to K\tau\mu\) and \(B\to K\tau e\) processes in the top-right and middle-right panels of Fig. 3, respectively. For these observables, we use the central values of all input parameters and form factors. The colour scheme of the figures is the same as for \({\cal S}-{\rm I}\). Over the whole \(q^{2}\) region, the contributions in the presence of the NP couplings show that the observable takes higher values in the mid-\(q^{2}\) region. The NP coupling increases the central values, which remain of the same order, \({\cal O}(10^{-8})\). Table III summarizes the estimated branching fractions over the whole kinematic region. ### Analysis of \(B\to V(K^{*},\phi)\ell\ell^{\prime}\) process In this sub-section, we provide a detailed study of the \(B\to V\ell\ell^{\prime}\) processes mediated by the \(b\to s\ell\ell^{\prime}\) quark-level transition, where the vector meson \(V=K^{*},\ \phi\). We probe the NP effects on the associated observables, namely the differential branching ratio (\(d{\cal B}/dq^{2}\)), the forward-backward asymmetry (\(A_{FB}\)), and the longitudinal polarisation fraction (\(F_{L}\)). In Figs. 4 and 5, we analyse the \(q^{2}\) variation of these observables for the \(B\to K^{*}\tau\mu\) and \(B\to K^{*}\tau e\) processes, respectively. The colour scheme of the plots is the same as before. The \(q^{2}\)-dependent branching ratios \(d{\cal B}/dq^{2}\) show clearly distinguishable contributions in the presence of the NP couplings.
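The \(q^{2}\) windows over which these distributions are plotted are fixed purely by kinematics, which also explains why the \(\mu e\) mode has support down to very low \(q^{2}\). A small sketch with PDG-like masses (illustrative precision):

```python
import math

# Kinematic q^2 window for B -> K l l': from (m_l + m_l')^2 up to (m_B - m_K)^2.
# Masses in GeV.
M_B, M_K = 5.27934, 0.493677
M_TAU, M_MU, M_E = 1.77686, 0.1056584, 0.000511

def q2_window(m1, m2, m_parent=M_B, m_daughter=M_K):
    """Return (q2_min, q2_max) in GeV^2 for the given final-state leptons."""
    return (m1 + m2) ** 2, (m_parent - m_daughter) ** 2

lo_taumu, hi_taumu = q2_window(M_TAU, M_MU)   # roughly (3.5, 22.9) GeV^2
lo_mue, hi_mue = q2_window(M_MU, M_E)         # opens up already near q^2 ~ 0.01 GeV^2
```

The heavy \(\tau\) pushes the lower endpoint of the \(\tau\mu\) and \(\tau e\) windows above \(3\) GeV\(^{2}\), whereas the \(\mu e\) window extends almost to \(q^{2}=0\).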
Larger values of \(m_{Z^{\prime}}\) induce larger contributions to the observable. In the middle (left and right) panels, however, the variation of the sensitive observable \(F_{L}(q^{2})\) in the presence of the NP couplings is indistinguishable and coincides for all \(m_{Z^{\prime}}\) values. For the observable \(A_{FB}(q^{2})\), shown in the bottom-left panel, there is no NP contribution in scenario I, whereas a definite contribution arises in scenario II. We also show the \(q^{2}\) variation of the branching ratio of the \(B\to K^{*}\mu e\) process in Fig. 6. In the low-\(q^{2}\) region, the NP contribution in the presence of the \(Z^{\prime}\) couplings becomes larger and varies noticeably with \(q^{2}\) for the three values of \(m_{Z^{\prime}}\). Over the whole kinematic \(q^{2}\) region, the lepton polarisation asymmetry observable decreases but does not reach zero. On the other hand, the observable \(A_{FB}(q^{2})\) varies over the whole \(q^{2}\) region in scenario I, whereas it provides a significant contribution in scenario II. The associated plots are depicted in Fig. 6. We also present the values of the branching ratios over the whole kinematic region in Table 4. The branching ratios of the \(B\to K^{*}\tau\mu\) and \(B\to K^{*}\tau e\) processes are of the same order, i.e., \({\cal O}(10^{-9})\), and differ only in their central values, whereas the branching fraction of the \(B\to K^{*}\mu e\) process is suppressed, at \({\cal O}(10^{-13})\). The forward-backward asymmetry and the polarisation asymmetry observables differ among all the above processes. Similar to the \(B\to K^{*}\ell\ell^{\prime}\) process, we investigate another decay channel, \(B_{s}\to\phi\ell\ell^{\prime}\).
We depict the \(q^{2}\)-dependent physical observables, namely the branching ratio, the forward-backward asymmetry, and the polarisation fraction, for \(B_{s}\to\phi\tau\mu\) and \(B_{s}\to\phi\tau e\) in Figs. 7 and 8, respectively, whereas Fig. 9 shows the variation of the same observables for the \(B_{s}\to\phi\mu e\) decay channel. The top-left, top-middle, and top-right panels show the branching ratio, the polarisation fraction, and the forward-backward asymmetry for scenario I, respectively; similarly, the bottom-left, bottom-middle, and bottom-right panels depict the same observables for scenario II. In the presence of the NP couplings of the \(Z^{\prime}\) model, we obtain results similar to the previous channel, with differences in the variation due to the meson masses and the transition form factors involved in this analysis. **FIG. 7: Variation of BR (top-left), \(F_{L}\) (top-middle), \(A_{FB}\) (top-right) of the \(B_{s}\to\phi\tau\mu\) process in \(\mathcal{S}-\mathrm{I}\); the bottom panels (left, middle, and right) depict \(\mathcal{S}-\mathrm{II}\).** **FIG. 8: The \(q^{2}\) variation of BR (top-left), \(F_{L}\) (top-middle), \(A_{FB}\) (top-right) of the \(B_{s}\to\phi\tau e\) process in \(\mathcal{S}-\mathrm{I}\); the bottom panels (left, middle, and right) depict \(\mathcal{S}-\mathrm{II}\).** ### Analysis of \(B\to T(K_{2}^{*},f_{2}^{\prime})\ell\ell^{\prime}\) process Here, we study the exclusive semileptonic lepton flavor violating \(B\to T(K_{2}^{*},f_{2}^{\prime})\ell\ell^{\prime}\) channels in detail in the framework of the non-universal \(Z^{\prime}\) model. As for the previous processes, we analyse scenarios I and II. In Figs. 10 and 11, we show the variation of the branching ratio and the forward-backward asymmetry of the \(B\to K_{2}^{*}\tau\mu\) and \(B\to K_{2}^{*}\tau e\) processes with respect to \(q^{2}\), respectively.
In both figures, the left panel corresponds to scenario I, whereas the right one corresponds to scenario II. The former observable, \(d{\cal B}/dq^{2}\), shows distinguishable contributions, with higher values in the mid-\(q^{2}\) regime, for the different \(m_{Z^{\prime}}\) values. The latter, in scenario I (left panel), shows a significant but indistinguishable contribution with no zero-crossing point, while in scenario II it exhibits a zero crossing at \(q^{2}\simeq 10.7\) GeV\({}^{2}\) for the \(B\to K_{2}^{*}\tau\mu\) and \(B\to K_{2}^{*}\tau e\) processes of the \(Z^{\prime}\) model. There is no change in this contribution for the different \(m_{Z^{\prime}}\) values; these results are shown in the bottom-left panels of Figs. 10 and 11, respectively. Figure 11: The variation of the \({\cal B}\) and \(A_{FB}\) of the \(B\to K_{2}^{*}\tau e\) process in the \(Z^{\prime}\) model: the left and right panels are for scenarios I and II, respectively. Figure 12: The variation of the differential branching ratio and forward-backward asymmetry of the \(B\to K_{2}^{*}\mu e\) process with respect to \(q^{2}\); the left (right) panel indicates \({\cal S}-{\rm I(II)}\). Similarly, the analysis of the \(B\to K_{2}^{*}\mu e\) process is shown in Fig. 12. The branching fraction starts from higher values at \(q^{2}=0\) and then falls to zero for the \(\mu e\) final state. In the forward-backward asymmetry, however, the NP contribution remains constant near \(q^{2}\simeq 0\) in scenario I, whereas scenario II shows significant variation with respect to \(q^{2}\). In both scenarios, no zero-crossing point is observed. Similar to the \(B\to K_{2}^{*}\ell\ell^{\prime}\) process, we also probe another \(B\to T\ell\ell^{\prime}\) process, with \(T=f_{2}^{\prime}\) and \(\ell,\ell^{\prime}=e,\mu,\tau\).
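Numerically, a zero crossing such as the one near \(q^{2}\simeq 10.7\) GeV\({}^{2}\) can be located by bisection once \(A_{FB}(q^{2})\) changes sign over the kinematic range; the curve below is a toy stand-in, not the actual \(A_{FB}\) of this analysis.

```python
def bisect_zero(f, lo, hi, tol=1e-8):
    """Locate a sign change of f on [lo, hi] by bisection."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

afb_toy = lambda q2: 0.1 * (q2 - 10.7)   # toy A_FB with a root at q^2 = 10.7
q2_zero = bisect_zero(afb_toy, 3.5, 20.0)
```

The same root-finding step applied to the tabulated scenario-II curves yields the quoted crossing point.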
With the non-universal \(Z^{\prime}\) NP couplings, one obtains significant contributions to the branching ratio, which are higher than for the \(B\to K_{2}^{*}\) channel. We also investigate the \(A_{FB}(q^{2})\) observable and obtain results quite similar to the previous \(B\to K_{2}^{*}\tau\mu\) and \(B\to K_{2}^{*}\tau e\) channels. The corresponding plots are shown in Figs. 13 and 14 for the \(B\to f_{2}^{\prime}\tau\mu\) and \(B\to f_{2}^{\prime}\tau e\) processes, respectively. We obtain similar results, but with different contributions, since the masses and form factors change accordingly for the \(B\to f_{2}^{\prime}\ell\ell^{\prime}\) process. Here also we investigate three values, \(m_{Z^{\prime}}=4.5\), \(6.0\), and \(7.0\) TeV. Similarly, in Fig. 15 we study the branching ratio and the forward-backward asymmetry of the \(B\to f_{2}^{\prime}\mu e\) channel and obtain similar results. The top-left and top-right panels show the branching ratio, whereas the bottom-left and bottom-right panels depict the \(A_{FB}(q^{2})\) observable, in scenarios I and II, respectively. We also report the theoretical estimates of these observables for both the \(B\to K_{2}^{*}\ell\ell^{\prime}\) and \(B\to f_{2}^{\prime}\ell\ell^{\prime}\) processes in Table 5. For scenarios I and II, the numerical values of the observables of these LFV decays, evaluated over the allowed \(q^{2}\) region, differ in the presence of the non-universal \(Z^{\prime}\) model. **FIG. 14:** The \(q^{2}\) dependence of the branching ratio and forward-backward asymmetry of the \(B\to f_{2}^{\prime}\tau e\) process; \({\cal S}-{\rm I}\) and \({\cal S}-{\rm II}\) are shown in the left and right panels, respectively. ### Lepton non-universality observables Analogous to the clean observables \(R_{K}\) and \(R_{K^{*}}\), we present the behavior of the LNU observables for the exclusive LFV decays defined in Eq. (II.2.4). In the left panel of Fig.
16, we depict the \(q^{2}\) variations of the LNU observables \({\cal R}^{\mu e}_{K}\), \({\cal R}^{\mu e}_{K^{*}}\), \({\cal R}^{\mu e}_{\phi}\), \({\cal R}^{\mu e}_{K^{*}_{2}}\) and \({\cal R}^{\mu e}_{f^{\prime}_{2}}\) in scenario I, whereas the right panel displays scenario II, in the range \(q^{2}\in[1.0,6.0]\) GeV\({}^{2}\), compatible with the LHCb measurements. One can see that the LNU observables remain constant for the different \(m_{Z^{\prime}}\) values. All the LNU observables \({\cal R}^{\mu e}_{(K,K^{*},\phi,K^{*}_{2},f^{\prime}_{2})}\) shown in the figure are of \({\cal O}(10^{-6})\). Here the magenta, blue, and green lines correspond to \(m_{Z^{\prime}}=4.5\), \(6.0\), and \(7.0\) TeV, respectively. In the region \(1.0\leq q^{2}\leq 6.0\) GeV\({}^{2}\), all the discussed observables show significant contributions with an almost constant value below 1. The other LNU observables, corresponding to the \(\tau(e,\mu)\) channels, do not display constant values in this regime; we therefore have not considered them in our analysis. The numerical estimates of all the \({\cal R}\) observables are given in Table 6. ## V Conclusion In this work, we have investigated the flavor violating (semi)leptonic \(B_{s}\to\ell\ell^{\prime}\), \(B_{(s)}\to(K^{(*)},\phi,f^{\prime}_{2}\), \(K^{*}_{2})\ell\ell^{\prime}\) channels induced by the \(b\to s\ell\ell^{\prime}\) neutral-current transition in the presence of the non-universal \(Z^{\prime}\) model. These decays are extremely rare in the SM, as they arise only at the loop level through the tiny neutrino masses. However, the extension of the SM by an Abelian \(U(1)^{\prime}\) gauge group induces tree-level contributions through the non-universal \(Z^{\prime}\) vector boson. We extract the NP couplings from the branching fractions of \(B_{s}\to\ell\ell\), \(B_{s}\to\phi\ell\ell\) and the angular observable \(P^{\prime}_{5}\) of the \(B\to K^{*}\ell\ell\) process through a naive \(\chi^{2}\) analysis.
Using these couplings, we have analysed the variation of the branching fractions, forward-backward asymmetries, and polarisation asymmetries of all the associated (semi)leptonic \(B_{s}\to\ell\ell^{\prime}\), \(B_{(s)}\to(K^{(*)},\phi,f^{\prime}_{2},\,K^{*}_{2})\ell\ell^{\prime}\) decay channels in the presence of the non-universal \(Z^{\prime}\) boson. We have also computed the theoretical values of all the observables. To inspect the presence of lepton non-universality, we construct and analyse the observables \({\cal R}^{\mu e}_{K,K^{*},\phi,K^{*}_{2},f^{\prime}_{2}}\) in the region \(q^{2}\in[1.0,6.0]\) GeV\({}^{2}\), compatible with the LHCb measurement. We find that the \(q^{2}\) variations of the observables show distinct contributions in the presence of the NP couplings for the three different \(m_{Z^{\prime}}\) values. Additionally, the theoretically estimated values are sizeable and have definite contributions. These decay channels could be further analysed at the upcoming LHCb and B-factory experiments with larger event samples, which could lead to an unambiguous signal of new physics. ###### Acknowledgements. LN and RD would like to acknowledge the DST INSPIRE fellowship programme for financial support.
## Appendix A The \(\phi(q^{2})\) parameters in \(B\to K\ell\ell^{\prime}\) process The \(\phi(q^{2})\) parameters used in the \(B\to K\ell\ell^{\prime}\) process are given below: \[\varphi_{7}(q^{2}) = \frac{2m_{b}^{2}|f_{T}(q^{2})|^{2}}{(m_{B}+m_{K})^{2}}\lambda(m_{B},m_{K},\sqrt{q^{2}})\left[1-\frac{(m_{1}-m_{2})^{2}}{q^{2}}-\frac{\lambda(\sqrt{q^{2}},m_{1},m_{2})}{3q^{4}}\right],\] \[\varphi_{9(10)}(q^{2}) = \frac{1}{2}|f_{0}(q^{2})|^{2}(m_{1}\mp m_{2})^{2}\frac{(m_{B}^{2}-m_{K}^{2})^{2}}{q^{2}}\left[1-\frac{(m_{1}\pm m_{2})^{2}}{q^{2}}\right]\] \[+ \frac{1}{2}|f_{+}(q^{2})|^{2}\lambda(m_{B},m_{K},\sqrt{q^{2}})\left[1-\frac{(m_{1}\mp m_{2})^{2}}{q^{2}}-\frac{\lambda(\sqrt{q^{2}},m_{1},m_{2})}{3q^{4}}\right],\] \[\varphi_{79}(q^{2}) = \frac{2m_{b}f_{+}(q^{2})f_{T}(q^{2})}{m_{B}+m_{K}}\lambda(m_{B},m_{K},\sqrt{q^{2}})\left[1-\frac{(m_{1}-m_{2})^{2}}{q^{2}}-\frac{\lambda(\sqrt{q^{2}},m_{1},m_{2})}{3q^{4}}\right],\] \[\varphi_{S(P)}(q^{2}) = \frac{q^{2}|f_{0}(q^{2})|^{2}}{2(m_{b}-m_{s})^{2}}(m_{B}^{2}-m_{K}^{2})^{2}\left[1-\frac{(m_{1}\pm m_{2})^{2}}{q^{2}}\right],\] \[\varphi_{10P(9S)}(q^{2}) = \frac{|f_{0}(q^{2})|^{2}}{m_{b}-m_{s}}(m_{1}\pm m_{2})(m_{B}^{2}-m_{K}^{2})^{2}\left[1-\frac{(m_{1}\mp m_{2})^{2}}{q^{2}}\right]. \tag{20}\] In the functions \(\varphi_{a(b)}(q^{2})\), the upper sign corresponds to \(\varphi_{a}(q^{2})\) and the lower one to \(\varphi_{b}(q^{2})\). ## Appendix B The angular coefficient parameters in \(B\to(K^{*},\phi)\ell\ell^{\prime}\) processes The parameters \(I_{i}^{j}(q^{2})\) (\(i=1,2;\ j=c,s\)) are the \(q^{2}\)-dependent angular coefficients.
These include the transversity amplitudes \(A_{\perp,\parallel,0,t}^{L(R)}(q^{2})\) and are given as follows: \[A_{\perp}^{L(R)} = \mathcal{N}_{K^{*}}\sqrt{2}\lambda_{B}^{1/2}\left[[(C_{9}+C_{9}^ {\prime})\mp(C_{10}+C_{10}^{\prime})]\frac{V(q^{2})}{m_{B}+m_{K^{*}}}+\frac{2 m_{b}}{q^{2}}(C_{7}+C_{7}^{\prime})T_{1}(q^{2})\right],\] \[A_{\parallel}^{L(R)} = -\mathcal{N}_{K^{*}}\sqrt{2}(m_{B}^{2}-m_{K^{*}}^{2})\left[[(C_{9 }-C_{9}^{\prime})\mp(C_{10}-C_{10}^{\prime})]\frac{A_{1}(q^{2})}{m_{B}-m_{K^{* }}}+\frac{2m_{b}}{q^{2}}(C_{7}-C_{7}^{\prime})T_{2}(q^{2})\right],\] \[A_{0}^{L(R)} = -\frac{\mathcal{N}_{K^{*}}}{2m_{K^{*}}\sqrt{q^{2}}}\Big{\{}2m_{b} (C_{7}-C_{7}^{\prime})\left[(m_{B}^{2}+3m_{K^{*}}^{2}-q^{2})T_{2}(q^{2})- \frac{\lambda_{B}T_{3}(q^{2})}{m_{B}^{2}-m_{K^{*}}^{2}}\right]\] \[+ [(C_{9}-C_{9}^{\prime})\mp(C_{10}-C_{10}^{\prime})]\cdot\left[(m_{ B}^{2}-m_{K^{*}}^{2}-q^{2})(m_{B}+m_{K^{*}})A_{1}(q^{2})-\frac{\lambda_{B}A_{2}(q^{2})}{m_{B }+m_{K^{*}}}\right]\Big{\}}\] \[A_{t}^{L(R)} = -\mathcal{N}_{K^{*}}\frac{\lambda_{B}^{1/2}}{\sqrt{q^{2}}}\left[( C_{9}-C_{9}^{\prime})\mp(C_{10}-C_{10}^{\prime})+\frac{q^{2}}{m_{b}+m_{s}} \left(\frac{C_{S}-C_{S}^{\prime}}{m_{1}-m_{2}}\mp\frac{C_{P}-C_{P}^{\prime}}{m _{1}+m_{2}}\right)\right]A_{0}(q^{2})\] with \[\mathcal{N}_{K^{*}}=V_{tb}V_{ts}^{*}\left[\frac{\tau_{B_{d}}G_{F}^{2}\alpha^{2} }{3\times 2^{10}\pi^{5}m_{B}^{3}}\lambda_{B}^{1/2}\lambda_{q}^{1/2}\right]^{1/2}. \tag{21}\] The kinematic factors are \(\lambda_{B}=\lambda(m_{B},m_{K^{*}},\sqrt{q^{2}})\) and \(\lambda_{q}=\lambda(m_{1},m_{2},\sqrt{q^{2}})\) where the corresponding formula are given in Eq. (9). 
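The Kallen function appearing in the kinematic factors \(\lambda_B\) and \(\lambda_q\) above (here with mass, rather than mass-squared, arguments) can be implemented and cross-checked against its factorized form; the numerical inputs below are illustrative.

```python
import math

# Kallen (triangle) function with mass arguments, as in lambda(m_B, m_K*, sqrt(q^2)):
#   lambda(a, b, c) = a^4 + b^4 + c^4 - 2(a^2 b^2 + a^2 c^2 + b^2 c^2)
def kallen(a, b, c):
    return a**4 + b**4 + c**4 - 2.0 * (a*a*b*b + a*a*c*c + b*b*c*c)

# Equivalent factorized form, which makes the physical zeros explicit:
#   lambda(a, b, c) = (a^2 - (b + c)^2) * (a^2 - (b - c)^2)
def kallen_factored(a, b, c):
    return (a*a - (b + c)**2) * (a*a - (b - c)**2)

# Example evaluation for B -> K* at q^2 = 6 GeV^2 (illustrative masses in GeV).
M_B, M_KST = 5.27934, 0.89555
lam_B = kallen(M_B, M_KST, math.sqrt(6.0))
```

The function is totally symmetric in its three arguments and vanishes at the kinematic endpoints, where \(a=b+c\) or \(a=|b-c|\).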
The angular coefficients \(I_{1-9}(q^{2})\) in terms of the transversity amplitudes (14) are given as \[I_{1}^{s}(q^{2}) =\left[|A_{\perp}^{L}|^{2}+|A_{\parallel}^{L}|^{2}+(L\to R) \right]\frac{\lambda_{q}+2[q^{4}-(m_{1}^{2}-m_{2}^{2})^{2}]}{4q^{4}}+\frac{4m_{ 1}m_{2}}{q^{2}}\text{Re}\left(A_{\parallel}^{L}A_{\parallel}^{R*}+A_{\perp}^{ L}A_{\perp}^{R*}\right),\] \[I_{1}^{c}(q^{2}) =\left[|A_{0}^{L}|^{2}+|A_{0}^{R}|^{2}\right]\frac{q^{4}-(m_{1}^{ 2}-m_{2}^{2})^{2}}{q^{4}}+\frac{8m_{1}m_{2}}{q^{2}}\text{Re}(A_{0}^{L}A_{0}^{ R*}-A_{t}^{L}A_{t}^{R*})\] \[\qquad\qquad\qquad-2\frac{(m_{1}^{2}-m_{2}^{2})^{2}-q^{2}(m_{1}^{ 2}+m_{2}^{2})}{q^{4}}\big{(}|A_{t}^{L}|^{2}+|A_{t}^{R}|^{2}\big{)},\] \[I_{2}^{s}(q^{2}) =\frac{\lambda_{q}}{4q^{4}}[|A_{\perp}^{L}|^{2}+|A_{\parallel}^{L }|^{2}+(L\to R)],\] \[I_{2}^{c}(q^{2}) =-\frac{\lambda_{q}}{q^{4}}(|A_{0}^{L}|^{2}+|A_{0}^{R}|^{2}),\] \[I_{3}(q^{2}) =\frac{\lambda_{q}}{2q^{4}}[|A_{\perp}^{L}|^{2}-|A_{\parallel}^{L }|^{2}+(L\to R)],\] \[I_{4}(q^{2}) =-\frac{\lambda_{q}}{\sqrt{2}q^{4}}\text{Re}(A_{\parallel}^{L}A_ {0}^{L*}+(L\to R)],\] \[I_{5}(q^{2}) =\frac{\sqrt{2}\lambda_{q}^{1/2}}{q^{2}}\left[\text{Re}(A_{0}^{L} A_{\perp}^{L*}-(L\to R))-\frac{m_{1}^{2}-m_{2}^{2}}{q^{2}}\text{Re}(A_{t}^{L}A_{ \parallel}^{L*}+(L\to R))\right],\] \[I_{6}^{s}(q^{2}) =-\frac{2\lambda_{q}^{1/2}}{q^{2}}[\text{Re}(A_{\parallel}^{L}A_ {\perp}^{L*}-(L\to R))],\] \[I_{6}^{c}(q^{2}) =-\frac{4\lambda_{q}^{1/2}}{q^{2}}\frac{m_{1}^{2}-m_{2}^{2}}{q^{2} }\text{Re}(A_{0}^{L}A_{t}^{L*}+(L\to R)),\] \[I_{7}(q^{2}) =-\frac{\sqrt{2}\lambda_{q}^{1/2}}{q^{2}}\left[\text{Im}(A_{0}^{L }A_{\parallel}^{L*}-(L\to R))+\frac{m_{1}^{2}-m_{2}^{2}}{q^{2}}\text{Im}(A_{ \perp}^{L}A_{t}^{L*}+(L\to R))\right],\] \[I_{8}(q^{2}) =\frac{\lambda_{q}}{\sqrt{2}q^{4}}\text{Im}(A_{0}^{L}A_{\perp}^{ L*}+(L\to R)),\] \[I_{9}(q^{2}) =-\frac{\lambda_{q}}{q^{4}}\text{Im}(A_{\perp}^{L}A_{\parallel}^{ L*}+A_{\perp}^{R}A_{\parallel}^{R*}),\] (B3) ## Appendix C Required parameters of \(B\to 
T(K_{2}^{*},f_{2}^{\prime})\ell\ell^{\prime}\) The \(q^{2}\) parameters of \(B\to T\ell\ell^{\prime}\) are given as follows \[A(q^{2}) = \frac{3}{4}\left\{\frac{1}{4}\left[\left(1+\frac{m_{+}^{2}}{q^{2}} \right)\beta_{-}^{2}+\left(1+\frac{m_{-}^{2}}{q^{2}}\right)\beta_{+}^{2}\right] \left(|A_{L}^{\parallel}|^{2}+|A_{L}^{\perp}|^{2}+(L\to R)\right)\right. \tag{124}\] \[\left.+\frac{1}{2}\left(\beta_{+}^{2}+\beta_{-}^{2}\right)\left( |A_{L}^{0}|^{2}+|A_{R}^{0}|^{2}\right)\right.\] \[\left.+\frac{4m_{1}m_{2}}{q^{2}}\text{Re}\left[A_{R}^{0}A_{L}^{0* }+A_{R}^{\parallel}A_{L}^{\parallel*}+A_{R}^{\perp}A_{L}^{\perp*}-A_{L}^{t}A_{ R}^{t*}\right]\right.\] \[\left.+\frac{1}{2}\left(\beta_{-}^{2}+\beta_{+}^{2}-2\beta_{-}^ {2}\beta_{+}^{2}\right)\left(|A_{L}^{t}|^{2}+|A_{R}^{t}|^{2}\right)+\frac{1}{ 2}\left(|A_{SP}|^{2}\beta_{-}^{2}+|A_{S}|^{2}\beta_{+}^{2}\right)\right.\] \[\left.+\frac{2m_{-}}{\sqrt{q^{2}}}\beta_{+}^{2}\text{Re}\left[A_ {S}(A_{L}^{t}+A_{R}^{t})^{*}\right]-\frac{2m_{+}}{\sqrt{q^{2}}}\beta_{-}^{2} \text{Re}\left[A_{SP}(A_{L}^{t}-A_{R}^{t})^{*}\right]\right\},\] \[B(q^{2}) = \frac{3}{2}\beta_{-}\beta_{+}\left\{\text{Re}\left[A_{L}^{\perp* }A_{L}^{\parallel}-(L\to R)\right]+\frac{m_{+}m_{-}}{q^{2}}\text{Re}\left[A_{ L}^{0*}A_{L}^{t}+(L\to R)\right]\right.\] (125) \[\left.+\frac{m_{+}}{\sqrt{q^{2}}}\text{Re}\left[A_{S}^{*}(A_{L}^ {0}+A_{R}^{0})\right]-\frac{m_{-}}{\sqrt{q^{2}}}\text{Re}\left[A_{SP}^{*}(A_{ L}^{0}-A_{R}^{0})\right]\right\},\] \[C(q^{2}) = \frac{3}{8}\beta_{+}^{2}\beta_{-}^{2}\left\{\left(|A_{L}^{ \parallel}|^{2}+|A_{L}^{\perp}|^{2}-2|A_{L}^{0}|^{2}\right)+(L\to R)\right\} \tag{126}\] Here \(m_{\pm}=(m_{1}\pm m_{2})\), \(\beta_{\pm}=\sqrt{1-\frac{(m_{\ell}\pm m_{\ell^{\prime}})^{2}}{q^{2}}}\) and the expressions of transversity amplitudes \(A\)'s and the lepton helicity amplitudes are given in Appendix C.1 and 1 respectively. 
The polarization \(\epsilon^{\mu\nu}(n)\) of the tensor meson \(K_{2}^{*}\), which has four-momentum \((k_{0},0,0,k_{z})\), can be written in terms of the spin-1 polarization vectors [52] \[\epsilon_{\mu\nu}(\pm 2) = \epsilon_{\mu}(\pm 1)\epsilon_{\nu}(\pm 1),\] \[\epsilon_{\mu\nu}(\pm 1) = \frac{1}{\sqrt{2}}\left[\epsilon_{\mu}(\pm)\epsilon_{\nu}(0)+\epsilon_{\nu}(\pm)\epsilon_{\mu}(0)\right],\] \[\epsilon_{\mu\nu}(0) = \frac{1}{\sqrt{6}}\left[\epsilon_{\mu}(+)\epsilon_{\nu}(-)+\epsilon_{\nu}(+)\epsilon_{\mu}(-)\right]+\sqrt{\frac{2}{3}}\epsilon_{\mu}(0)\epsilon_{\nu}(0), \tag{127}\] where the spin-1 polarization vectors are defined as \[\epsilon_{\mu}(0)=\frac{1}{m_{K_{2}^{*}}}\left(k_{z},0,0,k_{0}\right)\,,\quad\epsilon_{\mu}(\pm)=\frac{1}{\sqrt{2}}\left(0,1,\pm i,0\right). \tag{128}\] Since the \(B\to T(K_{2}^{*},f_{2}^{\prime})\ell_{1}\ell_{2}\) decay has only two leptons in the final state, the \(n=\pm 2\) helicity states of the \(K_{2}^{*}\) are not realized, and a new polarization vector is introduced [53], \[\epsilon_{T\mu}(h)=\frac{\epsilon_{\mu\nu}p^{\nu}}{m_{B}}. \tag{129}\] The explicit expressions of the polarization vectors are \[\epsilon_{T\mu}(\pm 1) = \frac{1}{m_{B}}\frac{1}{\sqrt{2}}\epsilon(0)\cdot p\,\epsilon_{\mu}(\pm)=\frac{\sqrt{\lambda}}{\sqrt{8}m_{B}m_{K_{2}^{*}}}\epsilon_{\mu}(\pm), \tag{100}\] \[\epsilon_{T\mu}(0) = \frac{1}{m_{B}}\sqrt{\frac{2}{3}}\epsilon(0)\cdot p\,\epsilon_{\mu}(0)=\frac{\sqrt{\lambda}}{\sqrt{6}m_{B}m_{K_{2}^{*}}}\epsilon_{\mu}(0), \tag{101}\] where \(\lambda(m_{B}^{2},m_{K_{2}^{*}}^{2},q^{2})=m_{B}^{4}+m_{K_{2}^{*}}^{4}+q^{4}-2(m_{B}^{2}m_{K_{2}^{*}}^{2}+m_{B}^{2}q^{2}+m_{K_{2}^{*}}^{2}q^{2})\) is the usual Kallen function.
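As a quick sanity check of this polarization construction, a pure-Python sketch (arbitrary illustrative mass and momentum values; the \(\pm 1\) tensor uses the standard symmetrized combination \(\epsilon_{\mu}(\pm)\epsilon_{\nu}(0)+\epsilon_{\nu}(\pm)\epsilon_{\mu}(0)\)) can verify that the spin-2 tensors are symmetric and transverse, \(k^{\mu}\epsilon_{\mu\nu}(n)=0\):

```python
# Metric convention diag(+,-,-,-); momentum k = (k0, 0, 0, kz).
g = [1.0, -1.0, -1.0, -1.0]
m_k2, kz = 1.4273, 2.0                 # illustrative mass/momentum in GeV
k0 = (m_k2 ** 2 + kz ** 2) ** 0.5
k = [k0, 0.0, 0.0, kz]

eps0 = [kz / m_k2, 0.0, 0.0, k0 / m_k2]          # longitudinal spin-1 vector
epsP = [0.0, 1 / 2 ** 0.5, 1j / 2 ** 0.5, 0.0]   # helicity +1
epsM = [0.0, 1 / 2 ** 0.5, -1j / 2 ** 0.5, 0.0]  # helicity -1

def sym(a, b):
    """Symmetrized outer product a_mu b_nu + a_nu b_mu."""
    return [[a[i] * b[j] + a[j] * b[i] for j in range(4)] for i in range(4)]

s_pm, s_p0 = sym(epsP, epsM), sym(epsP, eps0)
# eps_{mu nu}(0) = (1/sqrt6)[e+ e- + e- e+]_{sym} + sqrt(2/3) e0_mu e0_nu
eps_t0 = [[s_pm[i][j] / 6 ** 0.5 + (2.0 / 3.0) ** 0.5 * eps0[i] * eps0[j]
           for j in range(4)] for i in range(4)]
# eps_{mu nu}(+1) = (1/sqrt2)[e+_mu e0_nu + e+_nu e0_mu]
eps_t1 = [[s_p0[i][j] / 2 ** 0.5 for j in range(4)] for i in range(4)]

def contract_k(t):
    """k^mu eps_{mu nu} with the metric; should vanish for every nu."""
    return [sum(g[i] * k[i] * t[i][j] for i in range(4)) for j in range(4)]
```

Transversality follows because \(k\cdot\epsilon(0)=k\cdot\epsilon(\pm)=0\) for an on-shell momentum, and symmetry holds by construction.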
On the other hand, the virtual gauge boson can have three types of polarization states, longitudinal, transverse and time-like, which have following components \[\epsilon_{V}^{\mu}(0)=\frac{1}{\sqrt{q^{2}}}(-|\vec{q}_{z}|,0,0,-q_{0})\,, \quad\epsilon_{V}^{\mu}(\pm)=\frac{1}{\sqrt{2}}(0,1,\pm i,0)\,\quad\epsilon_{V}^{\mu}(t)=\frac{1}{\sqrt{q^{2}}}(q_{0},0,0,q_{z}) \tag{102}\] where \(q^{\mu}=(q_{0},0,0,q_{z})\) is four momentum of gauge boson. ### Transversity Amplitudes The vector and axial-vector transversity amplitudes can be expressed as \[A_{0L,R} = N\frac{\sqrt{\lambda}}{\sqrt{6}m_{B}m_{K_{2}^{*}}}\frac{1}{2m_{ K_{2}^{*}}\sqrt{q^{2}}}\left[(C_{V-}\mp C_{A-})\left[(m_{B}^{2}-m_{K_{2}^{*}}^{2}-q ^{2})(m_{B}+m_{K_{2}^{*}})A_{1}-\frac{\lambda}{m_{B}+m_{K_{2}^{*}}}A_{2} \right]\right],\] \[A_{\perp L,R} = -\sqrt{2}N\frac{\sqrt{\lambda}}{\sqrt{8}m_{B}m_{K_{2}^{*}}}\left[ (C_{V+}\mp C_{A+})\frac{\sqrt{\lambda}V}{m_{B}+m_{K_{2}^{*}}}\right],\] \[A_{\parallel L,R} = \sqrt{2}N\frac{\sqrt{\lambda}}{\sqrt{8}m_{B}m_{K_{2}^{*}}}\left[ (C_{V-}\mp C_{A-})(m_{B}+m_{K_{2}^{*}})A_{1}\right],\] \[A_{Lt} = N\frac{\sqrt{\lambda}}{\sqrt{q^{2}}\sqrt{6}m_{B}m_{K_{2}^{*}}} \left[\sqrt{\lambda}(C_{V-}-C_{A-})A_{0}\right],\] \[A_{Rt} = N\frac{\sqrt{\lambda}}{\sqrt{q^{2}}\sqrt{6}m_{B}m_{K_{2}^{*}}} \left[\sqrt{\lambda}(C_{V-}+C_{A-})A_{0}\right], \tag{103}\] where \(C_{V\pm}=(C_{V}\pm C_{V}^{\prime})\), and \(C_{A\pm}=(C_{A}\pm C_{A}^{\prime})\). The transversity amplitudes for scalar, pseudoscalar interactions can be written as \[A_{S} = 2N\frac{\sqrt{\lambda}}{\sqrt{6}m_{B}m_{K_{2}^{*}}}\left[\sqrt{ \lambda}(C_{S}-C_{S^{\prime}})A_{0}\right],\] \[A_{SP} = 2N\frac{\sqrt{\lambda}}{\sqrt{6}m_{B}m_{K_{2}^{*}}}\left[\sqrt{ \lambda}(C_{P}-C_{P^{\prime}})A_{0}\right]. 
\tag{104}\] The normalization constant \(N\) is given by \[N=\left[\frac{G_{F}^{2}\alpha_{e}^{2}}{3\cdot 2^{10}\pi^{5}m_{B}^{3}}|V_{tb}V_{ts}^{ *}|^{2}q^{2}\beta_{+}\beta_{-}\lambda(m_{B}^{2},m_{K_{2}^{*}}^{2},q^{2})^{1/2} \mathcal{B}(K_{2}^{*}\to K\pi)\right]^{1\over 2}. \tag{12}\]
2308.16139
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
Jianning Li, Zongwei Zhou, Jiancheng Yang, Antonio Pepe, Christina Gsaxner, Gijs Luijten, Chongyu Qu, Tiezheng Zhang, Xiaoxi Chen, Wenxuan Li, Marek Wodzinski, Paul Friedrich, Kangxian Xie, Yuan Jin, Narmada Ambigapathy, Enrico Nasca, Naida Solak, Gian Marco Melito, Viet Duc Vu, Afaque R. Memon, Christopher Schlachta, Sandrine De Ribaupierre, Rajnikant Patel, Roy Eagleson, Xiaojun Chen, Heinrich Mächler, Jan Stefan Kirschke, Ezequiel de la Rosa, Patrick Ferdinand Christ, Hongwei Bran Li, David G. Ellis, Michele R. Aizenberg, Sergios Gatidis, Thomas Küstner, Nadya Shusharina, Nicholas Heller, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Anjany Sekuboyina, Maximilian Löffler, Hans Liebl, Reuben Dorent, Tom Vercauteren, Jonathan Shapey, Aaron Kujawa, Stefan Cornelissen, Patrick Langenhuizen, Achraf Ben-Hamadou, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Federico Bolelli, Costantino Grana, Luca Lumetti, Hamidreza Salehi, Jun Ma, Yao Zhang, Ramtin Gharleghi, Susann Beier, Arcot Sowmya, Eduardo A. Garza-Villarreal, Thania Balducci, Diego Angeles-Valdez, Roberto Souza, Leticia Rittner, Richard Frayne, Yuanfeng Ji, Vincenzo Ferrari, Soumick Chatterjee, Florian Dubost, Stefanie Schreiber, Hendrik Mattern, Oliver Speck, Daniel Haehn, Christoph John, Andreas Nürnberger, João Pedrosa, Carlos Ferreira, Guilherme Aresta, António Cunha, Aurélio Campilho, Yannick Suter, Jose Garcia, Alain Lalande, Vicky Vandenbossche, Aline Van Oevelen, Kate Duquesne, Hamza Mekhzoum, Jef Vandemeulebroucke, Emmanuel Audenaert, Claudia Krebs, Timo van Leeuwen, Evie Vereecke, Hauke Heidemeyer, Rainer Röhrig, Frank Hölzle, Vahid Badeli, Kathrin Krieger, Matthias Gunzer, Jianxu Chen, Timo van Meegdenburg, Amin Dada, Miriam Balzer, Jana Fragemann, Frederic Jonske, Moritz Rempe, Stanislav Malorodov, Fin H. Bahnsen, Constantin Seibold, Alexander Jaus, Zdravko Marinov, Paul F. 
Jaeger, Rainer Stiefelhagen, Ana Sofia Santos, Mariana Lindo, André Ferreira, Victor Alves, Michael Kamp, Amr Abourayya, Felix Nensa, Fabian Hörst, Alexander Brehmer, Lukas Heine, Yannik Hanusrichter, Martin Weßling, Marcel Dudda, Lars E. Podleska, Matthias A. Fink, Julius Keyl, Konstantinos Tserpes, Moon-Sung Kim, Shireen Elhabian, Hans Lamecker, Dženan Zukić, Beatriz Paniagua, Christian Wachinger, Martin Urschler, Luc Duong, Jakob Wasserthal, Peter F. Hoyer, Oliver Basu, Thomas Maal, Max J. H. Witjes, Gregor Schiele, Ti-chiun Chang, Seyed-Ahmad Ahmadi, Ping Luo, Bjoern Menze, Mauricio Reyes, Thomas M. Deserno, Christos Davatzikos, Behrus Puladi, Pascal Fua, Alan L. Yuille, Jens Kleesiek, Jan Egger
2023-08-30T16:52:20Z
http://arxiv.org/abs/2308.16139v5
# MedShapeNet - A Large-Scale Dataset of 3D Medical Shapes for Computer Vision ###### Abstract We present _MedShapeNet_, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D surgical instrument models. Prior to the deep learning era, the broad application of statistical shape models (SSMs) in medical image analysis is evidence that _shapes_ have been commonly used to describe medical data. Nowadays, however, state-of-the-art (SOTA) deep learning algorithms in medical imaging are predominantly voxel-based. In computer vision, on the contrary, _shapes_ (including voxel occupancy grids, meshes, point clouds and implicit surface models) are preferred data representations in 3D, as seen from the numerous shape-related publications in premier vision conferences, such as _the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, as well as the increasing popularity of _ShapeNet_ (about 51,300 models) and _Princeton ModelNet_ (127,915 models) in computer vision research. _MedShapeNet_ is created as an alternative to these commonly used shape benchmarks to facilitate the translation of data-driven vision algorithms to medical applications, and it extends the opportunities to adapt SOTA vision algorithms to solve critical medical problems. Moreover, the majority of the medical shapes in _MedShapeNet_ are modeled directly on the imaging data of real patients, and therefore it complements well the existing shape benchmarks consisting of computer-aided design (CAD) models. _MedShapeNet_ currently includes more than 100,000 medical shapes, and provides annotations in the form of paired data. It is therefore also a freely available repository of 3D models for extended reality (virtual reality - VR, augmented reality - AR, mixed reality - MR) and medical 3D printing. 
This white paper describes in detail the motivations behind _MedShapeNet_, the shape acquisition procedures, the use cases, as well as the usage of the online shape search portal: [https://medshapenet.ikim.nrw/](https://medshapenet.ikim.nrw/). 3D Medical Shapes, ShapeNet, Benchmark, Anatomy Education, Shapeomics, Deep learning, Augmented Reality, Virtual Reality, Mixed Reality, Extended Reality, Diminished Reality, Medical Visualization, 3D Printing, Stereolithography, Face Reconstruction, Medical Data Sharing, Data Privacy ## 1 Introduction The success of deep learning in so many fields of application [1, 2, 3] is in no small part due to the availability of large, high-quality datasets [4], such as _ImageNet_[5], _CIFAR_[6], and _a2d2_[7]. In computer vision, _Princeton ModelNet_ [8], _ShapeNet_ [9], etc., are the de facto benchmarks for numerous fundamental vision problems, such as 3D shape classification/retrieval, completion, reconstruction and segmentation [10, 11, 12, 13, 14, 15, 16]. Shape describes the geometries of 3D objects and is one of the most basic concepts in computer vision. Common 3D shape representations include point clouds, voxel grids, meshes and implicit surface models (signed distance functions), which follow different data structures, cater for different algorithms but are convertible to each other [17]. * _J. Li, G. Luijten, N. Ambigapathy, E. Nasca, A. Dada, M. Balzer, J. Fragemann, F. Jonske, M. Rempe, A. Abourayya, S. Malorodov, F. H. Bahnsen, C. Seibold, A. S. Santos, M. Lindo, A. Ferreira, F. Nensa, F. Hörst, A. Brehmer, L. Heine, J. Keyl, M.-S. Kim, M. Kamp, J. Kleesiek and J. Egger are with the Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Girardetstrasse 2, 45131 Essen, Germany. E-mails: [email protected]_, [email protected]_
* _Maximilian Löffler is with the Universitätsklinikum Freiburg, Hugstetter Strasse 55, 79106 Freiburg, Germany._ * _Hans Liebl is with the Department of Neuroradiology, Klinikum rechts der Isar, Ismaninger Str. 22, 81675 Munich, Germany._ * _P. F. Christ, H. B. Li and B. Menze are with the Department of Quantitative Biomedicine, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland._ * _F. Bolelli, C. Grana and L. Lumetti are with the University of Modena and Reggio Emilia, Department of Engineering "Enzo Ferrari", Via Vivarelli 10, 41125, Modena, Italy._ * _J. Ma is with the Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON M5S 1A8, Canada; Peter Munk Cardiac Centre, University Health Network, 585 University Ave, Toronto, Canada; Vector Institute, 661 University Ave Suite 710, Toronto, ON M5G 1M1, Canada._ * _Y. Zhang is with the Shanghai AI Laboratory, Yunjin Road, Shanghai, 20032, People's Republic of China._ * _R. Gharleghi and S. Beier are with the School of Mechanical and Manufacturing Engineering, UNSW, Sydney, 2052, NSW, Australia._ * _A. Sowmya is with the School of Computer Science and Engineering, UNSW, Sydney, 2052, NSW, Australia._ * _R. Souza is with the Advanced Imaging and Artificial Intelligence Lab, Electrical and Software Engineering Department, and the Hotchkiss Brain Institute, University of Calgary, Calgary, Canada._ * _L. Rittner is with the Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Campinas, Brazil._ * _R. Frayne is with the Radiology and Clinical Neurosciences Departments, the Hotchkiss Brain Institute, University of Calgary, Calgary, Canada, and the Seaman Family MR Research Centre, Foothills Medical Center, Calgary, Canada._ * _T.-C. Chang is with Merck, Rahway, NJ 07065, USA._ * Blue Tower, Einsteinstrasse 172, 81675 Munich, Germany._ * _Y. Ji and P. 
Luo are with the University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China._ * _H. Salehi is with the Department of Artificial Intelligence in Medical Sciences, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran._ * _J. Pedrosa, C. Ferreira, A. Cunha and A. Campilho are with the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; J. Pedrosa, C. Ferreira and A. Campilho are also with the Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal. A. Cunha is also with the Universidade de Trás-os-Montes e Alto Douro (UTAD), Vila Real, Portugal._ * _G. Aresta is with the Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria._ * _Y. Suter and M. Reyes are with the ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland. M. Reyes is also with the Department of Radiation Oncology, University Hospital Bern, University of Bern, Switzerland._ * _J. Garcia is with the Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania._ * University Hospital of Dijon, 1 Boulevard Jeanne d'Arc, BP 77908, 21079 Dijon Cedex, France._ * _E. Audenaert is with the Department of Human Structure and Repair, Ghent University, Corneel Heymanslaan 10, 9000 Ghent, Belgium._ * 2350 Health Sciences Mall, University of British Columbia, Vancouver, British Columbia, V6T 1Z3 Canada._ * _E. Vereecke and T. van Leeuwen are with the Department of Development & Regeneration, KU Leuven Campus Kulak, Etienne Sabbelaan 53, 8500 Kortrijk, Belgium._ * _M.-S. Kim and F. Nensa are with the Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Hufelandstrasse 55, 45147 Essen, Germany._ * _J. Kleesiek is with the German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstrasse 55, 45147 Essen, Germany, and the Department of Physics, TU Dortmund University, August-Schmidt-Str. 4, 44227 Dortmund, Germany._ * _M. Kamp, F. Hörst, M.-S. Kim, J. Kleesiek and J. Egger are with the Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Hufelandstrasse 55, 45147 Essen, Germany._ * _M. Kamp and A. Abourayya are with the Institute for Neuroinformatics, Ruhr University Bochum, Germany. M. Kamp is also with the Department of Data Science & AI, Monash University, Australia._

Fig. 1: Example shapes in _MedShapeNet_, including various bones (e.g., skulls, ribs and vertebrae), organs (e.g., brain, lung, heart, liver), vessels (e.g., aortic vessel tree and pulmonary artery) and muscles.

These shape representations differ substantially from the medical imaging data (computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, X-ray) commonly used in clinical research. As a result, the transferability of state-of-the-art (SOTA) vision algorithms to medical/clinical problems is limited, since vision methods developed on general 3D shapes are not directly transferable to volumetric, gray-scale medical data. Therefore, the community needs a large, high-quality shape database for medical imaging. With _MedShapeNet_, we provide a large-scale dataset of 3D medical shapes, i.e., voxel occupancy grid, mesh and point representations of human anatomies (e.g., liver, heart, lung, kidney, vertebrae, rib) - formats that advanced vision algorithms are compatible with [18] but are under-represented in current medical imaging research. While _ShapeNet_ is comprised of 3D computer-aided design (CAD) models of real-world objects (e.g., _plane_, _car_, _chair_, _desk_), the medical shapes from _MedShapeNet_ are directly extracted from the imaging data of real patients (e.g., Figure 1). 
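The mutual convertibility of these formats can be sketched in a few lines. The snippet below (our own illustration, not code from the paper; the helper names and the NumPy/SciPy choice are assumptions) turns a voxel occupancy grid into a point cloud and into an implicit signed-distance representation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def occupancy_to_point_cloud(occ: np.ndarray) -> np.ndarray:
    """Return an (N, 3) point cloud made of the centers of occupied voxels."""
    return np.argwhere(occ).astype(float) + 0.5

def occupancy_to_sdf(occ: np.ndarray) -> np.ndarray:
    """Signed distance field: negative inside the shape, positive outside."""
    outside = distance_transform_edt(~occ)  # distance to the shape, outside it
    inside = distance_transform_edt(occ)    # distance to the background, inside it
    return outside - inside

# toy example: a small ball inside a 9^3 grid
grid = np.zeros((9, 9, 9), dtype=bool)
z, y, x = np.ogrid[:9, :9, :9]
grid[(z - 4) ** 2 + (y - 4) ** 2 + (x - 4) ** 2 <= 4] = True

pts = occupancy_to_point_cloud(grid)  # one 3D point per occupied voxel
sdf = occupancy_to_sdf(grid)          # negative at the center, positive at corners
```

Meshes can be obtained from the same occupancy grid via isosurface extraction (e.g., Marching Cubes), closing the loop between the four representations.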
* _V. Badeli is with the Institute of Fundamentals and Theory in Electrical Engineering, Graz University of Technology, 8010 Graz, Austria._ * _K. Krieger, M. Gunzer and J. Chen are with the Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., 44139 Dortmund, Germany._ * _M. Gunzer is with the Institute for Experimental Immunology and Imaging, University Hospital Essen, Essen, Germany._

_MedShapeNet_ by itself is therefore not only a unique dataset for medical imaging but also an ideal alternative and complement to the common shape benchmarks, like _ShapeNet_ [9], for computer vision research, such as domain adaptation (CAD \(\rightarrow\) real-world) [19]. _MedShapeNet_ makes an effort to bridge the gap between the medical imaging and computer vision communities, and to promote the translation of vision algorithms to medical applications. The benefits are reciprocal: it makes it easier for vision researchers to work on medical applications and encourages medical researchers to revisit and adopt shape-based methods from computer vision for medical problems. The _MICCAI_ society, a leading professional organization in medical image computing and computer assisted intervention, has initiated a special interest group in _Shape in Medical Imaging_ (_ShapeMI_, [https://shapemi.github.io/](https://shapemi.github.io/)), suggesting the significance of the role shape-based methods play in this field. Table I provides a non-inclusive list of organizations/events that focus on promoting shape methods for medical applications. _MedShapeNet_ includes diverse anatomical shapes and can facilitate the development and evaluation of data-driven, shape-based methods for a variety of medical as well as vision problems. On the one hand, numerous existing medical problems can be solved using shape-based methods. A typical example is cranial implant design [20, 21, 22, 23, 24, 25, 26], which is commonly formulated as a shape completion problem and solved using well established completion methods from computer vision [27, 28, 29, 30, 31]. 
The same concept can be conveniently extended to the design of other bone grafts (e.g., ribs, spine) and even artificial organs (e.g., liver, heart, kidney) for 3D bio-printing. Another representative example is statistical shape modeling (SSM), which has long been employed for medical image segmentation [32, 33] and anatomy modeling [34, 35, 36, 37, 38, 39, 40, 41] by the community. Shape priors and/or geometric constraints of various anatomies (e.g., aorta, skull) can also be derived from _MedShapeNet_ for downstream segmentation and reconstructive tasks [42, 43, 44, 45, 46, 47]. Last but not least, _MedShapeNet_ offers opportunities to explore shape-based methods for problems that are traditionally solved based on gray-scale medical images, such as disease diagnosis. Switching to medical shapes allows one to exploit more computationally efficient and geometry-oriented methods, such as sparse convolutional neural networks [48], for medical diagnostic problems. On the other hand, anatomical shapes are also commonly used for general computer vision research aimed at (primarily) non-medical applications, such as facial modeling [49, 50] and internal anatomy (e.g., skeleton, organs) inference [51, 52]. _MedShapeNet_ also contains pathological anatomies, such as tumorous brains, kidneys and livers (Figure 3), as well as brains from patients with cognitive impairment (e.g., Alzheimer's disease) or substance use disorder (e.g., alcohol use disorder - AUD, cocaine use disorder - CUD). Machine learning models can be trained for automatic abnormality detection using such shape data. Through statistical analysis and comparison, geometric differences between normal and pathological anatomies can be quantified, which facilitates automatic diagnostics and the discovery of geometric biomarkers [53, 54]. 
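The shape-completion formulation of implant design can be illustrated with a toy voxel example (our own sketch, not one of the referenced methods; the function name and the slab geometry are hypothetical): the implant is exactly the set difference between the completed and the defective skull.

```python
import numpy as np

def implant_from_completion(defective: np.ndarray, completed: np.ndarray) -> np.ndarray:
    """The implant fills the voxels present in the completed skull
    but missing from the defective one."""
    return completed & ~defective

# toy 1-voxel-thick "skull" slab with a rectangular defect
completed = np.zeros((8, 8, 8), dtype=bool)
completed[4, :, :] = True            # intact slab
defective = completed.copy()
defective[4, 2:5, 2:5] = False       # knock out a 3x3 hole

implant = implant_from_completion(defective, completed)
# implant occupies exactly the 3x3 hole; its union with the
# defective skull restores the completed one
```

In practice, a completion network predicts `completed` from `defective`, and the same subtraction yields the printable implant.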
_MedShapeNet_ can also be used for anatomy education, as it provides the 3D models of a variety of human anatomies, both normal and pathological, that can be 3D printed or used digitally in an extended reality (e.g., augmented reality - AR) environment [55, 56]. _MedShapeNet_ also benefits researchers who want to study the shape variations of a certain anatomy, but do not have access to the 3D scans and lack the resources to create the segmentations manually. The manuscript is organized as follows. Section 2 discusses the shape and voxel features in medical imaging, and the motivation of this project. Section 3 introduces the different sources from which the shape data in _MedShapeNet_ are derived. Section 4 presents several interesting use cases of _MedShapeNet_, and demonstrates how _MedShapeNet_ can be used in real-world applications in computer vision, medical imaging and augmented reality. Section 5 introduces the online interface of _MedShapeNet_ and how to use it. Section 6 concludes the manuscript and discusses the future work. ## 2 Shape and Voxel Features Shapes describe objects' geometries, provide a foundation for computer vision, and serve as a computationally efficient way to represent images, despite not capturing voxel features like gray-scale medical images.

Fig. 2: The _predictive maps_ overlaid onto patients' MRI scans. The _predictive maps_ are color-coded to indicate high or low probability of tumor infiltration.

Even though the main motivation behind _MedShapeNet_ is to emphasize the importance of shape characteristics, such as jaggedness, volume, elongation, etc., over voxel features, and to show that voxel features are redundant for certain tasks, learning algorithms might require additional (voxel) information to construct a decision boundary in some situations. 
For example, liver and brain tumors can have a noticeable impact on the morphology and/or volume of the corresponding organ (Figure 3), so that learning algorithms can easily distinguish between healthy and tumorous organs based on these shape features alone. However, for pathologies that do not induce (obvious) morphological changes, such as neurodegenerative diseases (e.g., mild cognitive impairment or Alzheimer's disease), shape-related features might not be discriminative enough for learning algorithms to converge during training. In the latter case, adding additional voxel features is beneficial. Refer to Section 4.2.4 for preliminary experimental evidence of these assumptions. Another example where voxel features are essential is when accurate spatial location is necessary, such as during precision tumor therapy. In [57], the authors show that spatial _predictive maps_ that indicate areas of early tumor (glioblastoma) recurrence and infiltration can be derived from preoperative MRIs, and used for targeted radiotherapy [58]. The _predictive maps_ are generated via a voxel-wise classification of the gray-scale tumor voxels. As shown in Figure 2, the _predictive map_ shows the spatial pseudo-probability of tumor infiltration. Areas with high probability have higher risks of tumor recurrence after resection. How to optimally combine voxel features with shapes is an interesting topic requiring further investigation. 
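Shape characteristics such as volume, jaggedness and elongation can be computed directly from a binary voxel mask. The sketch below is our own illustration (the helper name, the exposed-face count as a jaggedness proxy, and the PCA-based elongation are our choices, not descriptors prescribed by the paper):

```python
import numpy as np

def shape_features(occ: np.ndarray) -> dict:
    """Simple geometric descriptors of a binary voxel mask."""
    volume = int(occ.sum())
    # count exposed voxel faces: a crude surface-area / jaggedness proxy
    padded = np.pad(occ, 1)
    faces = 0
    for axis in range(3):
        for shift in (1, -1):
            neighbor = np.roll(padded, shift, axis=axis)
            faces += int((padded & ~neighbor).sum())
    # elongation: ratio of largest to smallest eigenvalue of the
    # covariance of occupied voxel coordinates (PCA axes)
    coords = np.argwhere(occ).astype(float)
    eig = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))
    elongation = float(eig[-1] / max(eig[0], 1e-9))
    return {"volume": volume, "surface_faces": faces, "elongation": elongation}

# a thin rod is far more elongated than a cube of comparable volume
rod = np.zeros((12, 3, 3), dtype=bool); rod[1:11, 1, 1] = True
cube = np.zeros((5, 5, 5), dtype=bool); cube[1:4, 1:4, 1:4] = True
```

Feature vectors of this kind can be fed to a classifier to probe how much of a pathology is captured by geometry alone, before resorting to voxel intensities.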
With _MedShapeNet_, one can investigate (1) to what degree a pathological condition, such as tumor, Alzheimer's disease (AD) and substance use disorder (SUD), can be captured by the shape features of the organs affected (e.g., the brain), determined by the convergence of a classifier when trained on shape features alone; (2) what shape features are the most discriminative of a pathology and how to calculate them [59]; (3) how to effectively integrate voxel features into shapes when shape features alone are not sufficient; and (4) whether there are associations between voxel and shape features. In the example of [57], one can ask whether the high-infiltration voxels induce morphological changes to the tumor (boundaries) correspondingly. To answer these questions and support future research on this endeavor, _MedShapeNet_ links the 'source of shapes', i.e., the original medical images, with its shape collections, so that the voxel information of a specific shape can be retrieved whenever needed. The following section describes the 'source of shapes' in detail. ## 3 Sources of Shapes The anatomical shapes in _MedShapeNet_ are converted from binary segmentation masks (voxel occupancy grids) of organs, bones, vessels, muscles, etc., using _Marching Cubes_ [105]. We collect the segmentation masks from different sources, where the segmentation masks are either generated automatically by a segmentation network (e.g., in the case of _TotalSegmentator_) or manually, as those of the ground truth in the training set of a public medical image segmentation challenge [106, 107, 108]. Some of the masks are from our own datasets. Table II summarizes the data sources, such as _TotalSegmentator_ [60], _MUG500+_ [61], the _Human Connectome Projects_ (HCP) [62] and the aortic vessel tree (AVT) dataset [64]. Miscellaneous sources include the _Skull-stripped MRI Glioblastoma Multiforme (GBM) Dataset_ [65] and the _Medical Augmented Reality Facial Data Collection_ [63], as shown in Figure 1. 
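For intuition about the mask-to-mesh step, the sketch below emits one quad per exposed voxel face. This is a deliberately simplified "cuberille"-style mesher of our own making, producing axis-aligned quads rather than the smooth triangle meshes of the Marching Cubes algorithm [105] actually used for _MedShapeNet_:

```python
import numpy as np

# for each of the six axis directions: neighbor offset -> four face-corner offsets
FACES = {
    (1, 0, 0): [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
    (-1, 0, 0): [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
    (0, 1, 0): [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
    (0, -1, 0): [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
    (0, 0, 1): [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
    (0, 0, -1): [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
}

def mask_to_quads(occ):
    """Emit one unit quad (four corner points) per exposed voxel face."""
    occ = np.asarray(occ, dtype=bool)
    quads = []
    for z, y, x in np.argwhere(occ):
        for (dz, dy, dx), corners in FACES.items():
            nz, ny, nx = z + dz, y + dy, x + dx
            inside = (0 <= nz < occ.shape[0] and 0 <= ny < occ.shape[1]
                      and 0 <= nx < occ.shape[2])
            if not (inside and occ[nz, ny, nx]):  # neighbor empty -> exposed face
                quads.append([(z + cz, y + cy, x + cx) for cz, cy, cx in corners])
    return quads

# a single voxel yields the 6 faces of a unit cube
quads = mask_to_quads(np.ones((1, 1, 1), dtype=bool))
```

Marching Cubes additionally interpolates the surface within each cell, which is what yields the watertight, smooth meshes distributed in the dataset.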
Note that different sources could contain the same anatomy. For example, both the _TotalSegmentator_ and VerSe [67] datasets include vertebrae. The anatomical shapes in _MedShapeNet_ are provided as meshes (.stl), point clouds and voxel occupancy grids to cater for different vision algorithms.

**Privacy and ethics considerations**: The _MedShapeNet_ database is created exclusively for research and educational purposes. The majority of the source datasets are Creative Commons (CC)- or CC BY 4.0-licensed (refer to Table II for data licenses). Publicly sharing medical data is encouraged but at the same time regulated due to potential privacy concerns [109, 110]. _MedShapeNet_ does not include gray-scale medical images, which contain patient-specific information, such as racial identity, that can be inferred using an identity recognition network [111, 112]. Training on shape data encourages a machine learning model to focus on learning discriminative geometric features rather than irrelevant patient identities, which may undermine the robustness and trustworthiness of the machine learning model and lead to identity-driven bias. Publicly sharing gray-scale head CT and MRI scans bears the risk of exposing the facial profiles of the patients [113]. _MedShapeNet_ removes the gray values in head CTs and MRIs and shares only the skulls, making the reidentification of the patients more difficult. Existing facial models (Figure 1) are CC BY 4.0-licensed [63]. The original study was approved by the ethics committee, and participants provided their informed consent. The _HCP_ database is not CC-licensed, but its use terms permit the redistribution of the original and derived data. _MedShapeNet_ only shares the binarized version of the brains extracted from the original HCP MRIs, as seen in Figure 1.

\begin{table} \begin{tabular}{l l l} \hline \hline Sources (link) & Description & Category \\ \hline _Zuse Institute Berlin (ZIB)_ & shape-informed medical image segmentation and shape priors in medical imaging & research group \\ _ShapeMI_ & shape processing/analysis/learning in medical imaging & MICCAI workshop \\ _SIG_ & shape modeling and analysis in medical imaging & MICCAI interest group (SIG) \\ _AutoImplant I, II_ & skull shape reconstruction and completion & MICCAI challenge \\ _WiSh_ & Women in Shape Analysis, shape modeling & professional organization \\ _STACOM_ & statistical atlases and computational models of the heart & MICCAI workshop \\ _SAMIA_ & shape analysis in medical image analysis & book \\ _CIBC_ & image and geometric analysis & research group \\ _GeoMedIA_ & geometric deep learning in medical image analysis & MICCAI-endorsed workshop \\ _IEEE TMI_ & geometric deep learning in medical imaging & journal special issue \\ _PMLR_ & geometric deep learning in medical image analysis & proceedings \\ _Elsevier_ & Riemannian geometric statistics in medical image analysis & book \\ _Springer_ & geometric methods in bio-medical image processing & proceedings \\ \hline \hline \end{tabular} \end{table} TABLE I: A Non-inclusive List of Organizations/Events Featuring Shape Methods for Medical Applications

### _TotalSegmentator_

The _TotalSegmentator_ dataset from Wasserthal et al. [60] includes over 1000 CT scans and the corresponding segmentations of 104 anatomical structures covering the whole body. The segmentations are generated automatically by an nnU-Net-based segmentation network [114] and have been used, for example, to improve disease diagnosis by correlating organ volumes with disease occurrences in humans [115].
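Multi-label volumes such as the _TotalSegmentator_ segmentations store all structures in a single integer array; the per-structure binary occupancy grids that get meshed are obtained by a voxel-wise equality test. A minimal sketch follows (the label ids and structure names below are hypothetical; the real dataset defines 104 structures):

```python
import numpy as np

# Hypothetical structure ids, for illustration only.
LABELS = {"liver": 1, "spleen": 2, "left_kidney": 3}

def binary_masks(seg, labels=LABELS):
    """Split a multi-label segmentation volume into one boolean
    occupancy grid per anatomical structure."""
    return {name: seg == idx for name, idx in labels.items()}

# Toy multi-label volume standing in for a whole-body segmentation.
seg = np.zeros((8, 8, 8), dtype=np.uint8)
seg[1:3] = 1  # "liver" voxels
seg[4:6] = 2  # "spleen" voxels
masks = binary_masks(seg)
```

Each resulting boolean grid can then be meshed independently, which is how one multi-label scan yields many shapes.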
### _Human Connectome Project (HCP)_

The _1200 Subjects Data Release_ of the _Human Connectome Project_ (HCP) includes 1113 structural 3T head MRI scans of healthy young adults. From each MRI scan, the segmentation masks of the skull and the brain are extracted using the _Cortical Surface Extraction_ script provided by _BrainSuite_ ([http://brainsuite.org/](http://brainsuite.org/)). Due to the highly complex brain geometries, the size of a brain mesh converted directly from a segmentation mask exceeds one gigabyte. Considering the limited space for storing the shape data, we downsized the brain masks by a factor of 1.6 before converting them to meshes. This reduces the size of each brain shape to 200 _MB_ - 500 _MB_ at the cost of reduced shape quality. An example of such a brain shape is shown in Figure 1.

### _MUG500+_

The _MUG500+_ dataset contains the binary segmentation masks and meshes of 500 healthy human skulls and 29 craniectomy skulls with surgical defects [61]. The skull masks are segmented from head CT scans by thresholding.

### _SkullBreak/SkullFix_

The _SkullBreak/SkullFix_ dataset includes the binary segmentation masks of healthy human skulls and the corresponding skulls with artificial defects. The binary skull masks are segmented, using thresholding similar to _MUG500+_ [61], from head CT scans of the _CQ500_ dataset ([http://headctstudy.qure.ai/dataset](http://headctstudy.qure.ai/dataset)).

### _AVT_

The aortic vessel tree (_AVT_) dataset [64] contains 56 computed tomography angiography (CTA) scans of healthy aortas as well as the segmentation masks of the corresponding aortic vessel trees, including the aorta, aortic arch, branch and iliac arteries, as shown in Figure 1.
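The factor-1.6 downsizing of the HCP brain masks described above is a non-integer resampling. The paper does not state the interpolation scheme, so the sketch below uses simple nearest-neighbor index selection in NumPy as one plausible realization:

```python
import numpy as np

def downsize_mask(mask, factor=1.6):
    """Nearest-neighbor downsampling of a binary mask by a
    (possibly non-integer) factor along every axis."""
    idx = [np.minimum(np.round(np.arange(round(n / factor)) * factor).astype(int), n - 1)
           for n in mask.shape]
    return mask[np.ix_(*idx)]

brain = np.ones((160, 192, 160), dtype=bool)  # toy mask at original resolution
small = downsize_mask(brain)                  # roughly 1/1.6 of each dimension
```

Downsampling the mask before meshing shrinks the triangle count of the resulting mesh roughly cubically, which is the storage trade-off mentioned above.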
\begin{table} \begin{tabular}{l l l l} \hline \hline Sources & Description & URLs & Dataset License \\ \hline _TotalSegmentator_ [60] & various anatomical structures & [https://doi.org/10.5281/zenodo.6802613](https://doi.org/10.5281/zenodo.6802613) & **CC BY 4.0** \\ _MUG500+_ [61] & healthy and craniectomy CT skulls & [https://doi.org/10.6084/m9.figshare.9616319](https://doi.org/10.6084/m9.figshare.9616319) & **CC BY 4.0** \\ _HCP_ [62] & paired brain-skull extracted from MRIs & [https://humanconnectome.org/](https://humanconnectome.org/) & Data Use Terms \\ Facial Models [63] & facial models for augmented reality & [https://doi.org/10.6084/m9.figshare.885007.v2](https://doi.org/10.6084/m9.figshare.885007.v2) & **CC BY 4.0** \\ AVT [64] & aortic vessel trees & [https://doi.org/10.6084/m9.figshare.14805362](https://doi.org/10.6084/m9.figshare.14805362) & **CC BY 4.0** \\ MRI [65] & brain and GBM extracted from MRIs & [https://doi.org/10.6084/m9.figshare.74385.v2](https://doi.org/10.6084/m9.figshare.74385.v2) & **CC BY 4.0** \\ SkullFix [66] & complete and artificially defected skulls & [https://autoupmlml2021.grand-challenge.org/Dataset/](https://autoupmlml2021.grand-challenge.org/Dataset/) & **CC BY 4.0** \\ SkullBreak [66] & complete and artificially defected skulls & [https://doi.org/10.6084/m9.figshare.1461307.v1](https://doi.org/10.6084/m9.figshare.1461307.v1) & **CC BY 4.0** \\ VerSe [67] & large-scale vertebrae segmentation & [https://github.com/anjany/vese](https://github.com/anjany/vese) & **CC BY 4.0** \\ KiTS21 [68] & kidney and kidney tumor segmentation & [https://github.com/healther/kiki21](https://github.com/healther/kiki21) & **MIT** \\ BraTS [69, 70, 71] & brain tumor segmentation & [https://www.synapse.org/89/synapse.yn2704644/wiki/](https://www.synapse.org/89/synapse.yn2704644/wiki/) & - \\ 3DTeethSeg [72, 73] & 3D Teeth Scan Segmentation &
[https://github.com/abethamadom/3DfTechSeg22_challenge](https://github.com/abethamadom/3DfTechSeg22_challenge) & **CC BY NC ND 4.0** \\ HECKTOR [74, 75] & head and neck tumor segmentation & [https://redeethg.grand-challenge.org/](https://redeethg.grand-challenge.org/) & - \\ CrossMoDA [76, 77] & brain tumor and cochlea segmentation & [https://medeethg.org/record/60/20/2](https://medeethg.org/record/60/20/2) & **CC BY 4.0** \\ LiTS [78] & liver tumor segmentation & [https://compeeths.coalda.org/competitions/17094](https://compeeths.coalda.org/competitions/17094) & - \\ ISLES22 [79] & ischemic stroke lesion segmentation & [https://kisse22.grand-challenge.org/](https://kisse22.grand-challenge.org/) & **CC BY 4.0** \\ GLIS-RT [80, 81, 82] & brain structures & [https://doi.org/10.7937/TCIA.7905-2Q20](https://doi.org/10.7937/TCIA.7905-2Q20) & **TCIA Restricted** \\ autoPET [83, 84, 85] & whole-body segmentations & [https://autoupef.grand-challenge.org/](https://autoupef.grand-challenge.org/) & **CC BY 4.0** \\ AbdomenCT-1K [87, 89] & abdomen organs & [https://github.com/JunJnl/AllophoneCT-1K](https://github.com/JunJnl/AllophoneCT-1K) & - \\ FLARE [87, 88, 90] & 13 abdominal organs & [https://flare22.grand-challenge.org/](https://flare22.grand-challenge.org/) & - \\ ToothFairy [91, 92] & inferior alveolar canal & [https://tootharifyablengles.github.io/](https://tootharifyablengles.github.io/) & **CC BY SA** \\ ASOCA [93, 94] & normal and diseased coronary arteries & - & - \\ Calgary-Campinas [95] & brain structure segmentations & - & - \\ SUDMEX CONN [96] & healthy and cocaine use disorder (CUD) brains & [https://openneuro.org/datasets/6003546/versions/1.12](https://openneuro.org/datasets/6003546/versions/1.12).
& - \\ AMOS [97] & abdominal multi organs in CT and MRI & [https://zenodo.org/record/71557256.VOOCCOnSzdM](https://zenodo.org/record/71557256.VOOCCOnSzdM) & - \\ LNDb [98, 99] & lung nodules & [https://indb.grand-challenge.org/](https://indb.grand-challenge.org/) & **CC BY NC ND 4.0** \\ PROMS [100] & prostate MRI segmentation & [https://zenodo.org/record/8014021/17/](https://zenodo.org/record/8014021/17/) & - \\ TCGA-GBM [71] & glioblastoma & [https://www.nature.com/articles/sdata2017117](https://www.nature.com/articles/sdata2017117) & - \\ EMIDEC [101, 102] & normal and pathological (infarction) myocardium & [https://enide.com/something-content](https://enide.com/something-content) & **CC BY NC SA 4.0** \\ CD-HC [103] & multiple organ segmentation & [https://doi.org/10.6084/m9.figshare.13055663](https://doi.org/10.6084/m9.figshare.13055663) & **CC 1.0** \\ LUMIERE [104] & longitudinal glioblastoma & [https://doi.org/10.6084/m9.figshare.5904905.v1](https://doi.org/10.6084/m9.figshare.5904905.v1) & **CC BY NC** \\ \hline \hline \end{tabular} \end{table} TABLE II: Summary of the Sources of the Anatomical Shapes in _MedShapeNet_

### _VerSe_

The _large-scale vertebrae segmentation (VerSe)_ challenge [67, 116] provides the segmentation masks of vertebrae from around 210 subjects [117, 118]. 2745 vertebra shapes are generated from the challenge dataset.

### _ASOCA_

The _automated segmentation of coronary arteries_ (ASOCA) challenge provides the manual segmentations of 20 normal and 20 diseased coronary arteries [94].

### _3DTeethSeg_

Automated teeth localization, segmentation, and labeling from intra-oral 3D scans are crucial tasks in modern dentistry, significantly improving dental diagnostics, treatment planning, and population-based studies on oral health. Before initiating any orthodontic or restorative treatment planning, it is essential for a CAD system to accurately segment and label each instance of teeth in a 3D dental scan.
This eliminates the need for time-consuming manual adjustments by the dentist. To address this need, the _3D Teeth Scan Segmentation and Labeling Challenge (3DTeethSeg)_ [72, 73] was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022. The challenge provides the upper and lower intra-oral 3D scans of 900 subjects, along with the corresponding manual annotations for the teeth segmentation and labeling tasks. The data annotation was performed in collaboration with clinical evaluators with more than 10 years of expertise in orthodontics, dental surgery, and endodontics. A preliminary benchmark of state-of-the-art methods for the challenge can be found in [72].

### _LNDb_

The data from the _automatic lung cancer patient management_ (LNDb) challenge [98, 99] comprise lung nodule segmentations performed by five radiologists on low-dose computed tomography images within the scope of lung cancer screening. A total of 861 lung nodule segmentation masks are publicly available, corresponding to 625 individual nodules segmented on 204 CTs. The radiologists were asked to independently screen each CT, identify all pulmonary nodules and segment those with an in-plane dimension larger than or equal to 3 mm. No consensus or review between radiologists was performed, meaning that there is a variable number of segmentations per nodule (between 1 and 3).

### _Emidec_

The Emidec (automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI) dataset is composed of 150 exams with delayed-enhancement MRI (DE-MRI) images in short-axis orientation covering the left ventricle, from normal cases or patients with myocardial infarction, with the contouring of the myocardium and diseased areas (if present) by experts in the domain [101, 102]. The database is composed of the imaging exam and the associated clinical information. The targeted cohort is any patient admitted to a cardiac emergency department with symptoms of a heart attack. Indeed, DE-MRI is a method of choice to evaluate the extent of myocardial infarction and, by extension, to assess viable tissues after an injury. The images are acquired roughly 10 minutes after the injection of a gadolinium-based contrast agent; the fibrotic area then appears bright in T1-weighted DE-MRI, whereas normal tissue appears dark. There is an unbalanced distribution between normal (1/3) and pathological (2/3) cases, corresponding roughly to real life in an MRI department. This dataset was made available as part of the Emidec challenge organized in conjunction with the STACOM workshop during the MICCAI conference in 2020 [57]. Although the data are freely available for research purposes, the owner remains the University Hospital of Dijon (France).

Fig. 3: Example pathological shapes in _MedShapeNet_, including tumorous kidneys (paired), brains (with real and synthetic tumors), livers and head & neck, as well as diseased coronary arteries. For illustration purposes, the opacity of some shapes is reduced to reveal the underlying tumors. We can study the effects of tumors on the morphological changes of an anatomy (e.g., the brain) using such pathological data.

### _ToothFairy_

Dental implant placement within the jawbone is a routinely executed surgical procedure, which can become complex due to the local presence of the Inferior Alveolar Nerve (IAN) crossing the homonymous osseous structure (the Inferior Alveolar Canal, IAC in short). In particular, the nerve is in close relation to the roots of the molars, and its position must thus be carefully detailed before surgical removal. As avoiding contact with the IAN is a primary concern during these operations, segmentation plays a key role in surgical preparations.
With the goal of pushing the development of deep learning frameworks to automatically segment the IAC, the _ToothFairy_ dataset has been released by "ToothFairy: A Cone-beam Computed Tomography Segmentation Challenge" [92], organized within MICCAI 2023. ToothFairy extends the previously released Maxillo dataset [91, 119, 120] and comprises \(443\) dental scans, captured using the NewTom/NTVGiMK4 CBCT scanner, operating at \(3\,\)mA and \(110\,\)kV, with a voxel size of \(0.3\,\)mm\({}^{3}\). The scans have been acquired with an intra-slice distance of \(0.3\,\)mm, yielding volumes with shapes ranging from \((148,265,312)\) to \((169,342,370)\) across the Z, Y, and X axes, respectively. The voxel values, represented in Hounsfield Units (HU), span from \(-1000\) to \(5264\). The dataset includes 2D sparse annotations for all \(443\) volumes, while only a subset of \(153\) volumes features detailed 3D voxel-level annotations of the IAC. The ground-truth annotations of the IAC have been produced by a team of five experienced maxillofacial surgeons using an ad-hoc developed tool that leverages different computer vision techniques to assist the user during annotation [121, 122]. An additional test set of 50 CBCT volumes has been acquired using a standard CBCT scanning protocol (i-CAT, 3D Imaging System, Imaging Sciences International Inc, Hatfield, PA, USA) in "Extended Field" mode (FOV: 16 cm diameter / 22 cm height; scan time: \(2\times 20\,\)s; voxel size: 0.4 mm). These data constitute the ToothFairy challenge evaluation dataset and, in this case, only the ground-truth annotations are made available.

### _Hecktor_

The training set of the _HEad and neCK TumOR segmentation and outcome prediction_ (HECKTOR) challenge [74, 75] comprises 524 3D FDG-PET/CT images from seven hospitals with manual primary tumor and metastatic lymph node contours.
The data originate from FDG-PET and low-dose non-contrast-enhanced CT images (acquired with combined PET/CT scanners) of the H&N region of patients with oropharyngeal H&N cancer. The training set of the HECKTOR challenge is used for _MedShapeNet_.

### _AutoPET_

Similar to _TotalSegmentator_, whole-body segmentations are extracted from the PET/CT dataset provided by the autoPET challenge [84, 86], using a semi-supervised segmentation network [123, 83, 124]. The autoPET dataset itself comes from cancer patients and also includes the manual segmentations of whole-body tumor lesions. It should be noted that the morphologies of some of the anatomies might be affected by the presence of tumors.

### _Calgary-Campinas_

The Calgary-Campinas (CC) dataset [95] consists of T1-weighted magnetic resonance imaging (MRI) volumes acquired from 359 presumably healthy subjects on scanners from three different vendors (GE, Philips, and Siemens) and at two magnetic field strengths (1.5 T and 3 T). Data were obtained using T1-weighted 3D imaging sequences (3D MP-RAGE (Philips, Siemens), and a comparable T1-weighted spoiled gradient echo sequence (GE)) designed to produce high-quality anatomical data with \(1\,\)mm\({}^{3}\) voxels. Age and gender were known for all subjects (176 M : 183 F; 53.5 +/- 7.8 years; min: 18 years, max: 80 years); however, information about subject ethnicity was not available. Probabilistic brain masks were obtained by combining the outputs of eight automated brain segmentation algorithms [125, 126, 127, 128, 130, 131, 132] using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm [133]. The quality of the brain masks was validated against 12 manual brain segmentations obtained in a stratified manner across vendor, magnetic field and subject sex combinations. The CC dataset has been used to investigate brain extraction models [134, 135], domain shift and adaptation in brain MRI [136, 137], as well as MRI reconstruction [138, 139].
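The label-fusion step described above can be illustrated without the EM machinery of STAPLE: averaging the binary masks of several raters gives a voxel-wise probabilistic mask, and thresholding that gives a consensus segmentation. A minimal majority-voting sketch, with simulated toy masks (STAPLE itself additionally estimates per-rater performance weights, which this sketch omits):

```python
import numpy as np

def fuse_masks(masks, threshold=0.5):
    """Voxel-wise agreement of several binary masks (a probabilistic
    mask), plus a majority-vote consensus mask."""
    prob = np.stack(masks).astype(float).mean(axis=0)
    return prob, prob > threshold

# Simulate eight "algorithms": the true mask plus random voxel noise.
rng = np.random.default_rng(0)
truth = np.zeros((16, 16, 16), dtype=bool)
truth[4:12, 4:12, 4:12] = True
masks = [truth ^ (rng.random(truth.shape) < 0.05) for _ in range(8)]

prob, consensus = fuse_masks(masks)  # consensus beats any single noisy mask
```

The averaged `prob` volume corresponds to the probabilistic brain masks mentioned above; thresholding it yields the final binary mask.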
### _AMOS_

The AMOS dataset [97], both diverse and robust, includes 500 CT and 100 MRI images gathered from a variety of scanners and locations. It covers 15 distinct categories of abdominal organs: the spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, and prostate/uterus. The images were predominantly collected from patient examinations involving abdominal tumors or other abnormalities.

### _AbdomenCT-1K and FLARE_

The AbdomenCT-1K dataset includes the manual segmentations of the liver, kidney, spleen, and pancreas from over 1000 CT scans [87]. A subset of the dataset was used in the _fast and low-resource semi-supervised abdominal organ segmentation_ (FLARE) challenge, which provides the manual segmentation of 13 abdominal organs, including the liver, spleen, pancreas, right kidney, left kidney, stomach, gallbladder, esophagus, aorta, inferior vena cava, right adrenal gland, left adrenal gland, and duodenum [88, 89]. Note that some of the CT scans were acquired from cancer patients; tumors can affect the morphologies of these organs.

### _ISLES_

The _ischemic stroke lesion segmentation_ (ISLES) challenge [79] provides a dataset comprising 250 brain MRIs along with binary masks depicting stroke infarctions. The dataset encompasses diverse brain lesions in terms of volume, location, and stroke pattern. The manually delineated segmentation masks are derived by refining pre-segmentations obtained using a 3D U-Net [140].

### _Synthetic Anatomical Shapes_

Generative adversarial networks (GANs) are capable of generating realistic 3D data [141]. Besides real anatomical shapes, _MedShapeNet_ also includes synthetic shapes generated by GANs, which can be used for augmenting the dataset in deep learning-based tasks. In _MedShapeNet_, we use GANs to generate synthetic tumors for 27390 real brains, as shown in Figure 3.
These synthetic brain masks can be used in combination with the original tumor labels to train a tumor segmentation network.

### _Medical Instrument_

Besides anatomical shapes, _MedShapeNet_ also contains the 3D models of medical instruments used primarily in oral and cranio-maxillofacial surgeries, such as drill bits, scalpels and chisels, as shown in Figure 4. The 3D instrument models are obtained by manually scanning the corresponding instruments using two structured-light-based 3D scanners, namely Autoscan Inspec (Shining 3D Corporation, Hangzhou, Zhejiang, China) and Artec Leo (Artec3D, Senningerberg, Canton Luxembourg, Luxembourg). The initial scans are post-processed (e.g., noise removal) using the proprietary software _Ultrascan version 2.0.0.7_ and _Artec Studio 17 Professional_ before they are incorporated into the database. These instrument models can be used for surgical tool tracking (detection, classification) in augmented reality (AR) and mixed reality (MR) [55] for medical education and research. They can also be used in virtual reality (VR) applications.

Fig. 4: Illustration of 3D models of medical instruments used in oral and cranio-maxillofacial surgeries. The 3D models are obtained using structured-light 3D scanners (Artec Leo from Artec3D and AutoScan Inspec from Shining 3D). Instrument models can be retrieved by the search query _instrument_ via the _MedShapeNet_ web interface. Image taken from [https://xtlab.kiim.nw/](https://xtlab.kiim.nw/).

### _Pathological Shapes_

To increase the variability of the shape collections, _MedShapeNet_ contains not only normal/healthy anatomical shapes, such as the kidneys from _TotalSegmentator_ and the brains from _HCP_, but also pathological ones, which are derived from patients diagnosed with a specific pathological condition, such as tumors (liver, kidney, etc.) and CUD (SUDMEX CONN, Table II). Figure 3 shows tumorous kidneys, brains, livers and head & neck, as well as diseased coronary arteries, from different sources. We also use generative adversarial networks (GANs) to generate synthetic brain tumors, as shown in Figure 3.

## 4 Annotation and Use Cases

_MedShapeNet_ provides annotations in the form of paired data. Large, high-quality paired data are valuable assets in computer vision research [51, 142], as they facilitate supervised training of machine learning models and promise SOTA results. For example, Yu, J. et al. [142] curated a dataset, _CelebV-Text_, containing facial text-video pairs, which can be used for the text-driven generation of face-centric videos. Similarly, Xing, J. et al. [143] used BIWI [144] and VOCASET [145], datasets containing paired audio (e.g., speech)-visual (e.g., facial expressions/motions) sequences, for speech-driven 3D facial animation. Keller, M., et al. [51] constructed a dataset containing body surface-skeleton pairs extracted from 2000 dual-energy X-ray absorptiometry (DXA) scans. A regressor was trained to infer the inside skeleton given the outside body surface of humans in various shapes and poses. In these examples, the inputs are the texts, audios and body surfaces, while the ground truths, a.k.a. annotations, are the corresponding videos, 3D facial models and skeletons. In [52], the authors constructed a paired pose-organ dataset and trained a deep model on it to infer the deformation of internal organs from patients' poses. The pose parameters were derived from the whole-body skin segmentations of the CT dataset, while the organ deformations were calculated from the 3D models of the corresponding internal organs. In _MedShapeNet_, _pairedness_ is defined as having two composites (anatomical shapes and/or meta information) coming from the same subject, where one of them is used as the input and the other as the ground truth.
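The pairedness notion above amounts to simple data wrangling: group composites by subject and emit (input, ground truth) tuples. A minimal sketch, in which the subject ids, file names and helper function are all hypothetical:

```python
# Each subject contributes several composites; any two of them can form an
# (input, ground truth) pair. Ids and file names are made up for illustration.
subjects = {
    "sub-001": {"skull": "sub-001_skull.stl", "face": "sub-001_face.stl"},
    "sub-002": {"skull": "sub-002_skull.stl", "face": "sub-002_face.stl"},
    "sub-003": {"skull": "sub-003_skull.stl"},  # face missing -> no pair
}

def make_pairs(subjects, input_key, target_key):
    """Build (input, ground-truth) pairs, e.g. skull -> face, keeping only
    subjects for which both composites exist."""
    return [(rec[input_key], rec[target_key])
            for rec in subjects.values()
            if input_key in rec and target_key in rec]

pairs = make_pairs(subjects, "skull", "face")
```

Swapping `input_key` and `target_key` reverses the learning direction (e.g., face-to-skull instead of skull-to-face), which is why the same paired data can serve several benchmarks.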
The most basic paired data in _MedShapeNet_ consist of the shapes and their corresponding anatomical categories, such as 'liver', 'heart', 'kidney', and 'lung', which can be used to train a classifier for anatomical shape categorization. Synthetic shapes are marked with '_synthetic' to distinguish them from shapes obtained from real imaging data.

### _Benchmarks Derived from MedShapeNet_

Benchmark datasets for various interesting shape-based applications can be derived from _MedShapeNet_ in the form of paired data, which facilitate the supervised learning of a mapping relationship, i.e., the paired data can be used as input and ground truth for training a deep neural network. Based on their direct applications, we roughly group all potential benchmarks into three categories: _discriminative_, _reconstructive_ and _variational_. The following discusses the three categories of benchmarks in detail. Table III shows a non-inclusive list of benchmarks (paired data) that can be derived from _MedShapeNet_. In Section 4.2, we present four of the benchmarks and their corresponding use cases in detail.

#### 4.1.1 Discriminative Benchmarks

The paired data comprise the patients' meta information, such as pathologies and medical histories, and the corresponding anatomical shapes. An example of such paired data would be the liver shapes from healthy subjects and from patients diagnosed with liver cancer. The health status (i.e., healthy, cancerous) is extracted from the patients' meta information, while the liver shapes are derived from the corresponding segmentation masks. These benchmarks are mainly used for diagnostic tasks, in which a classifier is trained to discriminate cancerous livers from healthy ones based on liver shapes. Diagnosis (screening) of a pathological condition, such as cancer, is usually based on gray-scale medical images.
Nevertheless, with the _Discriminative_ benchmarks, one can investigate the possibility of discriminating between pathological and healthy subjects using only the shape of the affected organ(s). Furthermore, analogous to 3D shape classification for shape retrieval, a classifier can be trained to classify the shapes into different anatomical categories.

Fig. 5: Examples of paired anatomical shapes in _MedShapeNet_. (A) Paired skins, muscles, fat, different tissues, organs and bones derived from whole-body PET-CT segmentations. (B) Paired abdominal anatomies (from the FLARE and AMOS challenges, respectively), including the liver, spleen, pancreas, right kidney, left kidney, stomach, gallbladder, esophagus, aorta, inferior vena cava, right adrenal gland, left adrenal gland, and duodenum. (C) Paired internal anatomies and body surfaces derived from the _TotalSegmentator_ dataset. Note: different anatomies have different labels in the segmentations; however, for illustration consistency, we use the same color (gray) for the different anatomical shapes.

#### 4.1.2 Reconstructive Benchmarks

The paired data comprise different anatomical shapes derived from the whole-body segmentations of a patient. These benchmarks are usually used in reconstructive tasks, where the 3D shapes of an anatomy need to be reconstructed under the geometric constraints of existing ones. Numerous novel applications can be developed using such paired data.
For example, given paired skull-face shapes (Figure 5), we can train a regressor to reconstruct human faces from skeletal remains, specifically the skulls, to automate forensic facial reconstruction [146], which is considered a tedious, expensive and highly subjective procedure in archaeological research and criminal investigation. Given paired skin-fat shapes derived from whole-body segmentations (Figure 5), a machine learning model can be trained to predict the spatial distribution of body fat, an important health risk indicator, from body surfaces (i.e., skins) [147]. Similarly, we can also infer other internal body compositions (e.g., skeletons, organs) from a person's body surface and vice versa, or infer the 3D shape of a missing internal organ given its surrounding anatomies. New reconstructions are expected to be naturally aligned with the given anatomies (i.e., the input). Such a naturalness criterion is automatically enforced by training on paired data derived from the same subject. Therefore, these benchmarks are also potentially useful for applications where _realism_ is desired, e.g., animation.

#### 4.1.3 Variational Benchmarks

_Variational benchmarks_ are usually used for the conditional reconstruction of 3D anatomical shapes. Besides the geometric constraints and the _naturalness_ criterion mentioned above, new reconstructions are expected to have an additional attribute, such as age, gender or pathology, which can be extracted from the patients' meta information as in the _Discriminative Benchmarks_. For example, it is possible to reconstruct multiple faces of different ages from the same skull by including the meta information _age_ as a supervising factor during training. Similarly, it is also possible to impose a pathological condition, such as a tumor, on healthy anatomies, or to model the morphological changes of an anatomy during disease progression [148].
Variational auto-encoders (VAEs) [149] and GANs are commonly used for such conditional reconstructive tasks.

### _Use Cases of MedShapeNet_

In this section, we describe five real-world use cases of _MedShapeNet_, including (1) a forensic facial reconstructor, which reconstructs soft facial structures from the underlying skull; (2) an anatomy completor [150], which reconstructs the 3D shapes of anatomies that are missing in the input; (3) a skull reconstructor, which reconstructs the full skull structure when the skull is damaged, e.g., when (part of) the cranium or facial bones are missing; (4) a brain shape classifier that detects tumorous brains; and (5) anatomy education in AR/MR. We show that problems (1-3) can be solved under a shape completion/inpainting framework, an active area of research in computer vision [151, 152, 153, 154, 155], where the 3D head models, the complete sets of anatomies and the full skulls are regarded as the ground truth, while the skulls, the incomplete anatomy sets (in which one or several anatomies are missing) and the damaged skulls are the input, respectively. Convolutional neural networks are trained to learn the respective mappings. We derived such paired skull-face and anatomy datasets from whole-body segmentations as described in Section 4. Damaged skulls can be generated by removing part of the bone structures from full skulls [24, 26]. Note that this section only aims at demonstrating how _MedShapeNet_ can be used to solve vision/medical problems, rather than presenting SOTA results for each problem. To build upon the preliminary investigation, please refer to the code and pretrained models that are publicly released at [https://github.com/Jianningli/medshapenet-feedback](https://github.com/Jianningli/medshapenet-feedback).

#### 4.2.1 Forensic Facial Reconstruction

Forensic facial reconstruction refers to the process of restoring a person's facial features from the underlying skull.
It is a common practice in archaeological research and criminal investigation, where the identity of an ancient person or victim needs to be determined from the remains [146]. Forensic facial reconstruction is usually carried out manually by a designer or sculptor, which is highly time-consuming and subjective. To automate this process, a facial reconstructor can be trained using the paired skull-face data derived from the whole-body PET-CT segmentations in _MedShapeNet_, as seen in Figure 5. Figure 6 (A) shows how the paired skull-face data can be extracted from the whole-body segmentations in _MedShapeNet_. An input skull that is not included in training, the prediction from the facial reconstructor and the ground truth are also illustrated. We can see that the prediction and the ground truth bear sufficient resemblance for identification purposes.

\begin{table} \begin{tabular}{l l|l l|l l} \hline \hline \multicolumn{2}{c|}{_Discriminative Benchmarks_} & \multicolumn{2}{c|}{_Reconstructive Benchmarks_} & \multicolumn{2}{c}{_Variational Benchmarks_} \\ \hline input (shape) & ground truth (meta) & input (shape) & ground truth (shape) & input (shape) & ground truth (shape+meta) \\ \hline liver & tumor & skull & face & face & face + age \\ kidney & tumor & ribs+sips & torso organs & face & face + AUD \\ brain & tumor & skin & body fat & face & face + CID \\ brain & Alzheimer's disease (AD) & full skeleton & skin & brain & brain + AD \\ brain & AUD & & & & \\ brain & CVD & & & & \\ face & AUD & & & & \\ face & age & & & & \\ brain & age & & & & \\ coronary artery & coronary artery disease (CAD) & & & & \\ myocardium & infarction & & & & \\ shapes & anatomical categories & & & & \\ \hline \hline \end{tabular} \end{table} TABLE III: A Non-inclusive List of Benchmark Datasets That Can Be Derived from _MedShapeNet_

#### 4.2.2 Multi-class Anatomy Completion

An anatomy completor learns the spatial and geometric relationships among the different anatomies of the same person.
Given a set of anatomies, the anatomy completor detects and then reconstructs the ones that are missing. Twelve organs are derived from the whole-body segmentations of _TotalSegmentator_, including the lung, heart, spleen, stomach, pancreas, spine, rib cage, liver, kidney, aorta, a pair of autochthonous muscles, and the pulmonary artery. Random anatomies are removed from them to create multiple incomplete anatomy sets, as shown in Figure 6 (B). A convolutional denoising auto-encoder is trained to learn a _many-to-one_ mapping between the incomplete sets and the 12 anatomies. Figure 6 (B) also illustrates an input and the corresponding prediction in 3D and 2D coronal views. The completor reconstructs the 3D shapes of the missing anatomies in different classes, which geometrically and spatially fit the existing ones. The multi-class anatomy completor is potentially helpful in creating pseudo-labels for whole-body segmentation, where it generates initial segmentation masks for the anatomies that have not been annotated in a whole-body CT scan. Refer to [150] for the implementation details of the anatomy completor.

#### 4.2.3 Skull Reconstruction

The task aims to reconstruct a full skull when the skull is damaged in the facial area, as seen in Figure 6 (C). Damaged skulls can be generated by erasing (part of) the facial voxels from full skulls, and a machine learning model can be trained on such paired skulls, i.e., a damaged skull and the corresponding full skull, to restore the erased voxels. Refer to [156] for the implementation details of the skull reconstruction model. Damaged skulls can also be generated by erasing voxels around the cranium, and the same model can be trained for automatic cranial implant design [24, 26].

#### 4.2.4 Screening and Classification of Brain Tumors

Conventional data-driven methods for the screening and classification of brain tumors are usually based on gray-scale MRIs [157, 158, 159].
The input of the classifier can be either the whole or skull-stripped MRI scans [160]. In this use case, we train a convolutional neural network (CNN)-based classifier using instead only the brain shapes represented as binary voxel grids, to discriminate between tumorous and healthy brains. The classifier has shown good convergence and generalizability, achieving over 80% accuracy on both the training and the test set. The experiment demonstrates that the existence of tumors is reflected in the brain morphologies that can be captured by a standard CNN-based classifier, and that voxel features from gray-scale MRIs are redundant for the tumor detection task. Similar results are observed when the classifier is trained to distinguish between male and female brain shapes. It is shown that the volume differences between tumorous _versus_ non-tumorous, and male _versus_ female brains are statistically significant (t-test), a shape-related feature that could have been learnt by the classifier to make decisions. It remains to be investigated whether the conclusion holds true for the stratification of different tumor subtypes. Nevertheless, the classifier cannot converge properly when trained to discriminate brain shapes extracted from healthy subjects and CUD or AD patients, indicating that these brain pathologies are not well reflected in shape features. As discussed in Section 2, how to extract more discriminative shape features, or incorporate voxel features into the training process when shape features alone are insufficient, requires future investigation. Fig. 6: Benchmarks for various vision applications can be derived from _MedShapeNet_, such as (A) forensic facial reconstruction, (B) anatomical shape reconstruction, and (C) skull reconstruction. #### 4.2.5 Anatomy Education in Extended Reality (XR) & 3D Printing Besides data-driven research, _MedShapeNet_ can also benefit a variety of AR/MR/VR applications that require 3D anatomical models [161]. 
A typical use case is AR-based anatomy education, which, different from conventional teaching methods, relies on virtual anatomical models [162]. In _MedShapeNet_, these 3D models are freely available to users and can be conveniently obtained using the online interface of _MedShapeNet_ (to be discussed in Section 5). In Figure 7 (A), a whole-body model is displayed using the _Microsoft HoloLens_ AR glasses. The whole-body model can be disassembled into individual anatomies, which can be moved, zoomed in/out, and rotated in the virtual environment, allowing students to learn the shape and relative position of an anatomical structure. Figure 7 (B) and Figure 7 (C) show the manipulation of the heart and the kidney in the first-person and third-person views, from the perspective of a teacher. In this regard, the models may also be interesting for the upcoming Apple Vision Pro [163]. The shapes could even be used for Diminished Reality (DR) [164], e.g., for anatomy education [165]. Wherever necessary, these virtual models can also be converted into physical models via 3D printing. Figure 7 (D) and Figure 7 (E) show a 3D-printed facial phantom and a virtual skull model registered to the phantom. The virtual tumor models are also displayed on top of the registered models to show their relative position inside the skull of the patient. ### _Potential Negative Impact_ To avoid potentially harmful societal impact, computer vision research involving human-derived data should be conducted with care. Since _MedShapeNet_ is designed specifically for research at the junction of computer vision and medicine, proper ethics guidelines should be followed throughout methodology development and experimental design. For example, publicly sharing neuroimaging data bears high privacy risks and should be regulated, since such data contain patients' facial profiles [166, 167]. 
A study shows that participants who are anonymously involved in a clinical trial can be identified by matching the faces reconstructed from their head MRI scans with photographs on social media, with the help of a face recognition software [113]. Therefore, besides removing patients' meta information before releasing neuroimaging data, defacing is also commonly practiced [168, 169]. Nevertheless, as demonstrated by the forensic facial reconstruction example described in Section 4.2.1, the facial profiles of the patients can still be reconstructed from skulls, when the entire facial structures are absent. Further removing the facial bones from skulls cannot completely resolve the issue either, as we have shown in our previous study that a machine learning model can reconstruct the original skulls even when the skulls are damaged (e.g., part of the bones on a skull are missing) [156], as seen in Figure 6 (C) and Section 4.2.3. Facial profiles can still be restored by first repairing the damaged skull using a skull reconstruction model discussed in Section 4.2.3 and then applying the facial reconstructor to the reconstructed skull, according to Figure 6 (C, A). _MedShapeNet_ facilitates the training of face/skull reconstruction models for anyone with a basic command of machine learning, but at the same time makes it more difficult to protect patients' privacy when it comes to sharing neuroimaging data. Another double-edged use case of _MedShapeNet_ is to train a machine learning model to identify drug or alcohol consumption/addiction based on facial features. Users can easily retrieve the facial models of SUD and normal cohorts from _MedShapeNet_ and train a binary classifier on them. The application benefits early detection and intervention of SUD, but may be abused for discrimination in unauthorized situations. 
Furthermore, since _MedShapeNet_ preserves the correspondence between the shapes and the source datasets, patients' meta information, such as age, race, gender, medical history, etc., if available in the source datasets, can be mapped to each shape model, which facilitates the learning of some controversial mapping relationships. For example, the ethnic identity or medical history could potentially be predicted based on a person's skull or facial profiles by training a classifier. It is therefore the responsibility of the researchers to weigh the social benefits against the potential negative societal impacts while developing models using _MedShapeNet_. Fig. 7: A use case of _MedShapeNet_ in AR-based anatomy education. (A) a whole-body model from _MedShapeNet_ disassembled into individual anatomies. (B, C) anatomy manipulation in first- and third-person views. (D, E) a 3D-printed facial phantom and the corresponding skull and tumors. ## 5 A Web Interface for _MedShapeNet_ A user-friendly, easy-to-use web API facilitates convenient access to the shape data within _MedShapeNet_, and makes it easier for researchers to use the database in their research. Inspired by the web API of the well-known _ShapeNet_ ([https://shapenet.org/](https://shapenet.org/)), we developed a web-based interface for _MedShapeNet_, which can be visited at [https://medshapenet-ikim.streamlit.app/](https://medshapenet-ikim.streamlit.app/). Users can search for, download, and inspect an individual shape in a 3D viewer, or batch-download an entire category of anatomies. Desired shapes can be retrieved by the corresponding anatomy classes, such as 'heart', 'brain', 'hip', 'liver', as shown in Figure 8 (A, C). The names of the shapes matching the search query will be displayed in a drop-down menu. The corresponding shape will be displayed in a 3D viewer underneath the search box after clicking on one of the search results (Figure 8 (B)). 
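As a rough illustration of how such class-based retrieval could work (the matching rule and file names below are assumptions for illustration, not the actual server code), a query can be matched against the anatomy labels embedded in the shape file names:

```python
def match_shapes(query, filenames):
    """Return shape file names whose anatomy label matches the query.

    Illustrative sketch only: multi-word anatomy queries are joined with
    underscores (e.g. 'atrium left' -> 'atrium_left') and matched as
    substrings of the file names.
    """
    key = "_".join(query.lower().split())
    return [name for name in filenames if key in name.lower()]

# hypothetical file names, for illustration only
files = ["0001_liver.stl", "0002_atrium_left.stl", "0003_kidney_tumor.stl"]
print(match_shapes("liver", files))        # ['0001_liver.stl']
print(match_shapes("atrium left", files))  # ['0002_atrium_left.stl']
```

A real interface would likely also normalize plural forms and rank results, but substring matching against underscore-joined class names already reproduces the drop-down behavior described here.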
An overview of currently available medical shapes, their categories and download links is also shown on the main page of the interface (Figure 8 (E)). The size of the overall database amounts to several terabytes (TB), which substantially exceeds the free space quota of most server providers, including _Streamlit_. We solve the problem by separating the shape storage (_sciebo_) from the _Streamlit_ server running the web interface, to reduce the cost of storing large quantities of data on servers. ### _Search Queries_ The _MedShapeNet_ web interface returns shapes of choice by matching users' queries with the anatomy classes provided in the names of the shape files. Table IV shows a list of possible queries that will return at least one result in the _MedShapeNet_ web interface. The search query for anatomies whose names contain multiple words is composed of the individual words and underscores that connect the words (e.g., atrium_left, gluteus_medius_left, inferior_vena_cava, lung_upper_lobe). No results will be displayed if the search query does not match any existing file names. Users can search by anatomy (e.g., liver) or pathology (e.g., tumor). Fig. 8: Main panels of the _MedShapeNet_ web interface. A, C: choosing an anatomy category 'liver'. B: selecting an anatomy instance '12273_liver_mlig_1.stl' and displaying it in an interactive 3D viewer. D: downloading the entire _MedShapeNet_ database. E: an overview of currently available medical shapes, their categories and download links. ### _User Feedback_ We use GitHub to manage the communication among users, developers, and contributors of _MedShapeNet_. It provides a mechanism for researchers to contribute shapes, provide feedback (e.g., report corrupted shapes, suggest improvements) and showcase their own research/applications utilizing _MedShapeNet_. 
As an incentive, shape contributors can be credited as collaborators of the _MedShapeNet_ project and their research can be featured on the GitHub page upon request. Detailed contribution guidelines are available at [https://github.com/Jianningli/medshapenet-feedback](https://github.com/Jianningli/medshapenet-feedback). A quality check will be performed before incorporating new shapes into _MedShapeNet_, to avoid introducing corrupted data and discrepancies. Since the shape data come from different sources, a consistency check will also be conducted to ensure that shape data with the same class annotations correspond to exactly the same anatomical part. ## 6 Discussion and Conclusion High-quality, annotated datasets are valuable assets for data-driven research. We created _MedShapeNet_ with the firm belief that, in the near future, it will become a commonly referenced resource in the computer vision and medical imaging community. The construction of _MedShapeNet_ is an ongoing effort and requires continuous contributions from the community, since the majority of its shape collections are acquired from data sources not owned by us. _MedShapeNet_ also relies on the community to refine its shape collection and define more interesting use cases at the junction of computer vision and medical imaging (refer to Section 5.2). 
In this white paper, we have introduced the initial efforts we have taken to construct _MedShapeNet_, most importantly by (1) bringing together the community for data contribution (most of the co-authors have contributed a source dataset for the shape collection); (2) deriving benchmark datasets for several interesting applications (Section 4.2), and open-sourcing them to support future research on the respective directions; (3) constructing an online interface to facilitate searching and downloading shapes of choice (Section 5); and (4) bringing up several interesting shape-related research topics that are worthy of future investigation (Section 2) and discussing the precautions that should be taken to comply with the ethics guidelines (Section 4.3). Furthermore, compared to vision datasets, large medical datasets are much more difficult to curate due to the sensitive, distributed and scarce nature of medical images. As a result, the medical imaging community has only recently started catching up with the development of vision algorithms that can exploit large datasets, with more and more medical researchers becoming open to data-sharing in recent years. Thus, _MedShapeNet_ has the potential to bridge the gap between the vision and medical imaging community, by providing a versatile dataset that both vision and medical researchers are accustomed to. Last but not least, _MedShapeNet_ is a freely available 3D repository for extended reality research and applications. For future development of _MedShapeNet_, we will primarily focus on the following aspects: * **Increase the size and diversity of the shape collection:** we will collect more shapes, especially pathological ones (e.g., glioblastoma, aorta with aneurysm) to further enrich _MedShapeNet_, and engage more researchers from the community to join the initiative. 
* **Promote _MedShapeNet_:** we will disseminate _MedShapeNet_ more actively in the research community of computer vision and medical imaging, by presenting it in conferences, symposia, seminars and classrooms (teaching), and organizing hackweeks/workshops/challenges. * **Define new benchmarks and establish more use cases:** we, together with the community, will derive more benchmark datasets from _MedShapeNet_ and explore interesting use cases based on them. * **Improve the shape search portal:** we will improve the online portal of _MedShapeNet_ by iteratively refining the shape search functionality and improving the user interface for a better user experience. * **Provide more shape annotations:** we will extract more meta information from the source datasets and incorporate them into the corresponding shape data as annotations. * **Redesign the naming convention of the shapes:** we will design a more inclusive and compact naming convention for the shapes, from which essential information, such as anatomy categories, source datasets, pathologies, etc., can be deduced. ## Acknowledgments This work was supported by the REACT-EU project KITE (Plattform für KI-Translation Essen, EFRE-0801977, [https://kite.ikim.nrw/](https://kite.ikim.nrw/)), FWF enFaced 2.0 (KLI 1044, [https://enfaced2.ikim.nrw/](https://enfaced2.ikim.nrw/)), AutoImplant ([https://autoimplant.ikim.nrw/](https://autoimplant.ikim.nrw/)) and _NUM 2.0_ (FKZ: 01KX2121). 
Behrus Puladi was funded by the Medical Faculty of RWTH Aachen University as part of the Clinician Scientist Program. \begin{table} \begin{tabular}{l l l l l l l} \hline CT & mri & brain & skull & brain & vertebrae & stomach \\ bladder & bowel & rib & sacrum & bowel & scapula & lung \\ heart & ventricle & atrium & kidney & iliopsoas & iliac & artery \\ gland & gluteus & femur & esophagus & autochthonous & colon & aorta \\ trachea & hip & pancreas & vein & bowel & clavicula & myocardium \\ humerus & vena\_cava & duodenum & face & vessel\_tree & glioblastoma & cranial\_defect \\ \hline \end{tabular} \end{table} TABLE IV: Valid Search Queries for the _MedShapeNet_ Web Interface In addition, we acknowledge the National Natural Science Foundation of China (81971709; M-0019; 82011530141). The work of J. Chen was supported by the Bundesministerium für Bildung und Forschung (BMBF, Ref. 161L0272). The work of ISAS was supported by the "Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen" and "Der Regierende Bürgermeister von Berlin, Senatskanzlei Wissenschaft und Forschung". Furthermore, we acknowledge the _Center for Virtual and Extended Reality in Medicine_ (ZvRM, [https://zvrm.ume.de/](https://zvrm.ume.de/)) of the University Hospital Essen. The CT-ORG dataset was obtained from the Cancer Imaging Archive (TCIA). CT-ORG was supported in part by grants from the National Cancer Institute, 1U01CA190214 and 1U01CA187947. We thank all those who have contributed to the _MedShapeNet_ collection (directly or indirectly).
2303.03646
Correlated Impact Dynamics in Science
Science progresses by building upon previous discoveries. It is commonly believed that the impact of scientific papers, as measured by citations, is positively correlated with the impact of past discoveries built upon. However, analyzing over 30 million papers and nearly a billion citations across multiple disciplines, we find that there is a long-term positive citation correlation, but a negative short-term correlation. We demonstrate that the key to resolving this paradox lies in a new concept, called "capacity", which captures the amount of originality remaining for a paper. We find there is an intimate link between capacity and impact dynamics that appears universal across the diverse fields we studied. The uncovered capacity measure not only explains the correlated impact dynamics across the sciences but also improves our understanding and predictions of high-impact discoveries.
Jiazhen Liu, Tamang Kunal, Dashun Wang, Chaoming Song
2023-03-07T04:26:45Z
http://arxiv.org/abs/2303.03646v1
# Correlated impact dynamics in science ###### Abstract Science progresses by building upon previous discoveries. It is commonly believed that the impact of scientific papers, as measured by citations, is positively correlated with the impact of past discoveries built upon. However, analyzing over 30 million papers and nearly a billion citations across multiple disciplines, we find that there is a long-term positive citation correlation, but a negative short-term correlation. We demonstrate that the key to resolving this paradox lies in a new concept, called "capacity", which captures the amount of originality remaining for a paper. We find there is an intimate link between capacity and impact dynamics that appears universal across the diverse fields we studied. The uncovered capacity measure not only explains the correlated impact dynamics across the sciences but also improves our understanding and predictions of high-impact discoveries. Isaac Newton's famous phrase, "If I have seen further than others, it is by standing on the shoulders of giants" [1], highlights the strong connection between a new discovery and the past advances that it builds upon. The correlation between scientific impact and previous knowledge was initially studied by Price [2] through the construction of citation networks. Subsequently, qualitative research established that the impact of new scientific innovations is closely tied to the impact of prior knowledge [3, 4]. Recent large-scale empirical data analysis has statistically shown that papers with highly-cited references are more likely to receive high long-term citations [5, 6]. In these studies, one of the most commonly used measures of impact is the number of citations a scientific work receives years after publication [7, 8, 9]. On the other hand, recent development in network science provides a complementary insight into the impact of scientific works through the observation of the rich-club phenomenon in citation networks [10]. 
These studies suggest that new scientific papers connected to highly-cited older papers will have a greater impact themselves [11, 12]. This phenomenon, known as degree assortativity, has been widely documented [13]. Evidence supports the idea that the impact of prior literature plays a significant role in determining the impact of a paper, with those referencing highly-cited works more likely to receive high citations themselves [5]. Overall, the evidence supports Newton's hypothesis, suggesting that scientific works influenced by past "giants" are more likely to have a lasting impact. Nonetheless, it has long been recognized that the long-term impact of references is not the sole determining factor for a paper's impact. Merton suggested that pioneering ideas are often inspired by recent information [14]. This notion is supported by the observation that researchers only consider the immediate impact of inspiring works when pursuing new ideas, reflecting the current response of the community. Predicting the long-term impact of work requires foresight into the future development of the field. Many breakthrough discoveries are not immediately well received, a phenomenon referred to as the "Sleeping Beauty" effect [15, 16, 17]. A recent study showed that scientific impact is strongly influenced by the promptness of innovation, consistent with Merton's theory [5]. However, existing studies verified a positive impact relationship between a paper and its references based on long-term citations. To the best of our knowledge, a direct test of Newton's hypothesis while conducting new research is still lacking in the literature. This raises an intriguing question: when a new innovation arises, what is the short-term impact correlation between the new innovation and old studies? 
## Results To answer this fundamental question, we collected data from the Web of Science (WOS) Thomson Reuters, which contains over 30 million papers and 830,259,176 citations from 1970 to 2014 (see Supplementary Material (SM) Data section). We perform two complementary tests. The first test was designed to investigate the relevance of a lagged citation-based metric used in previous studies, that is, the cumulative citations a paper receives on a long-term basis. To quantify the long-term impact, we measure the number of citations to a paper and its references 30 years after publication. This metric captures the established scientific impact of existing works from a retrospective perspective. Fig. 1a-c plots the long-term impact of the paper \(c^{\infty}\) and its references \(c^{\infty}_{\text{ref}}\) in different fields of science, showing that the long-term impacts of papers and their references are positively correlated. The second test was designed to explore the correlation of the impact when the new research was conducted. While our data do not indicate the exact implementation time, we use the publication time as a conservative measure. To quantify the immediate impact of a paper's references when the paper was just published, we measure its references' average citations \(c^{*}_{\text{ref}}\), reflecting the degree of innovation this paper draws from the related existing studies. To investigate the 'short-term' citation correlation, fixing the long-term impact \(c^{\infty}_{\text{ref}}\) of references, we measure the relationship between the long-term impact \(c^{\infty}\) of a paper and the immediate impact \(c^{*}_{\text{ref}}\) of its references (Fig. 1d-f). If Newton's hypothesis were also valid in this case, one would expect \(c^{*}_{\text{ref}}\) to be positively correlated with \(c^{\infty}\). Counterintuitively, we observe that \(c^{\infty}\) decreases with \(c^{*}_{\text{ref}}\) (Fig. 1d-f) across different domains and \(c^{\infty}_{\text{ref}}\). 
In particular, for highly-cited papers (\(c^{\infty}\geq 60\)), \(c^{*}_{\text{ref}}\) is notably small (\(c^{*}_{\text{ref}}\leq 4\)). This finding implies a rather surprising fact: most impactful innovations with large \(c^{\infty}\) were built upon ideas with a relatively small immediate impact \(c^{*}_{\text{ref}}\). The above two tests lead to seemingly contradictory results, creating a paradox of a long-term positive citation correlation but a simultaneous short-term negative correlation. To understand this paradox better, we examine a well-known example in condensed matter physics [16]. In 1955, Goodenough developed a fundamental theory to predict magnetism in transition-metal oxides [18]. This work did not attract much attention from the condensed matter physics field until the 1990s. Due to the development of experimental techniques, several studies [19; 20; 21] began to focus on Goodenough's extraordinary work. At the time of this experimental work (the mid-1990s), Goodenough's work had fewer than 15 citations, i.e., \(c^{*}_{\text{ref}}<15\). Both Goodenough's papers and those of his followers ended up being highly cited and had a huge impact on physics and materials science. This example explains the observed paradox: these experimental works, compared with the thousands of other follow-up papers that cite Goodenough's work, recognized the importance of Goodenough's predictions at a time when the quest for high-temperature superconductivity was still in its very early stages. This observation is consistent with the finding in Fig. 1a-f: the breakthrough is rooted in its insight of foreseeing a fruitful innovation inspired by a future-impactful (large \(c^{\infty}_{\text{ref}}\)) but yet-to-be-recognized discovery (small \(c^{*}_{\text{ref}}\)), i.e., future giants. This suggests a new paradigm of scientific impact correlation that differs from the original Newton's hypothesis. 
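To make the two reference-impact measures concrete, the sketch below (pure Python, with made-up citation years for a single hypothetical reference) counts a reference's citations accrued by the citing paper's publication year (\(c^{*}_{\text{ref}}\)) and within 30 years of the reference's own publication (\(c^{\infty}_{\text{ref}}\)):

```python
def ref_impacts(citation_years, citing_pub_year, ref_pub_year, window=30):
    """Immediate impact c*_ref: citations the reference has accrued by the
    time the citing paper appears; long-term impact c^inf_ref: citations
    accrued within `window` years of the reference's own publication."""
    c_star = sum(1 for y in citation_years if y <= citing_pub_year)
    c_inf = sum(1 for y in citation_years if y <= ref_pub_year + window)
    return c_star, c_inf

# toy citation history of a reference published in 1990 (illustrative)
years = [1991, 1992, 1995, 1998, 2002, 2005, 2010]
c_star, c_inf = ref_impacts(years, citing_pub_year=1994, ref_pub_year=1990)
print(c_star, c_inf)  # 2 7
```

A paper citing this reference early (1994) sees only 2 of its eventual 7 long-term citations, i.e., a future-impactful but not-yet-recognized reference.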
To investigate this new paradigm quantitatively, we introduce a novel metric that captures both positive long-term and negative short-term correlations simultaneously. We define a paper's _capacity_ \[\phi=\frac{\langle\Delta c_{\text{ref}}\rangle}{\langle c^{\infty}_{\text{ref}}\rangle}=1-\frac{\langle c^{*}_{\text{ref}}\rangle}{\langle c^{\infty}_{\text{ref}}\rangle}, \tag{1}\] where \(\Delta c_{\text{ref}}=c^{\infty}_{\text{ref}}-c^{*}_{\text{ref}}\) and the averages are taken over all references. Intuitively, the capacity \(\phi\) quantifies the remaining fraction of originality for the paper. By definition, \(\phi\) increases with \(c^{\infty}_{\text{ref}}\) whereas it decreases with \(c^{*}_{\text{ref}}\), in line with the observation in Fig. 1. It is worth pointing out that the capacity \(\phi\) is exclusive. When a paper acquires a large capacity \(\phi\) from an existing idea, the subsequent followers of this prior work can only achieve smaller ones, implying underlying competition among new papers that are inspired by the same existing ideas. We will show below that the capacity \(\phi\) encodes all information about correlated impact dynamics. To explore the relationship between a paper's long-term impact and the capacity received from its references, we study "hit" papers whose citations are in the top 5 percentile in the dataset [6]. Figure 2a-c presents the probability of finding 'hit' papers with capacity \(\phi\) across four different subjects. We discover that the probability of finding a 'hit' paper increases rapidly with capacity \(\phi\) across all the subjects. Papers with a large capacity, i.e., \(0.9\leq\phi\leq 1\), display a hit rate of around \(15\) out of \(100\) papers, which is about triple the background rate of \(5\) out of \(100\). On the contrary, when the capacity \(\phi\) of a paper is relatively small, we find significantly lower hit rates across all the subjects. 
Papers with capacity \(\phi<0.6\) show hit rates of around \(4\) out of \(100\) papers, lower than the background rate. Thus, our findings indicate that a paper with a large capacity is more likely to be impactful in the future. Indeed, based on the definition of capacity, a paper with higher capacity acquires more originality and novelty from the prior ideas, leading to greater importance and higher future impact. We further consider different definitions of impactful scientific works, i.e., "hit" papers (see SM S2 Empirical Results), finding the same patterns we observe in Fig. 2a-c. We further measure the long-term impact of papers as a function of capacity. Fig. 2d shows that the average long-term impact increases double-exponentially with capacity across three different subjects, satisfying \[\ln c^{\infty}=Ae^{\alpha\phi}+B, \tag{2}\] where the parameters \(A\), \(B\), and \(\alpha\) are constants depending only on the fields. In particular, the scaling factor \(\alpha\) captures the slope of the solid line in Fig. 2d, providing a measure of the strength of the correlation between long-term impact and capacity. Comparing the \(\alpha\) of three different subjects, we find that physics has the strongest correlation, \(\alpha=1.6\), indicating that the impact of physics papers relies strongly on existing works, whereas biology and chemistry have a relatively weaker correlation with \(\alpha\approx 1\) (Fig. 2d). The discoveries of Fig. 2 suggest the universality of the correlation between scientific impact and capacity, implying that the long-term impact of new papers can be predicted by capacity \(\phi\). We next show that a model based on the discovered correlation naturally leads to the empirically observed scientific impact. 
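Before turning to the model, note that Eqs. (1) and (2) are straightforward to evaluate; the sketch below is a minimal pure-Python illustration, where the reference impact pairs and the parameter values \(A\), \(B\), \(\alpha\) are made up for illustration, not the fitted values:

```python
import math

def capacity(ref_impacts):
    """Eq. (1): phi = 1 - <c*_ref>/<c^inf_ref>, averaging the immediate and
    long-term citation counts over a paper's references."""
    c_star_mean = sum(s for s, _ in ref_impacts) / len(ref_impacts)
    c_inf_mean = sum(c for _, c in ref_impacts) / len(ref_impacts)
    return 1.0 - c_star_mean / c_inf_mean

def log_longterm_impact(phi, A=1.0, B=0.5, alpha=1.6):
    """Eq. (2): ln c^inf = A * exp(alpha * phi) + B (illustrative parameters)."""
    return A * math.exp(alpha * phi) + B

# toy (c*_ref, c^inf_ref) pairs for one paper's references
refs = [(2, 40), (5, 50), (1, 30)]
phi = capacity(refs)
print(round(phi, 3))  # 0.933
```

A paper citing references that are barely cited yet (small \(c^{*}_{\text{ref}}\)) but ultimately highly cited (large \(c^{\infty}_{\text{ref}}\)) obtains a capacity close to 1, and Eq. (2) then predicts a correspondingly large \(\ln c^{\infty}\).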
A recent study [7] discovered that the time evolution of the cumulative citation count \(c^{t}\), the number of citations a paper acquires \(t\) years after publication, follows a unique function \(c^{t}=m[e^{\lambda\Phi(\frac{\ln t-\mu}{\sigma})}-1]\), where \(\Phi(x)=(2\pi)^{-1/2}\int_{-\infty}^{x}e^{-y^{2}/2}\mathrm{d}y\). While \(\mu\) and \(\sigma\) control the shape of \(c^{t}\), the long-term impacts of papers also depend on the fitness parameter \(\lambda\) and the average citation count \(m\). Incorporating Eqs. (1) and (2) with \(c^{t}\) allows us to predict papers' long-term impacts \(c^{\infty}\) from their references' long-term impacts \(c^{\infty}_{\mathrm{ref}}\) and immediate impacts \(c^{*}_{\mathrm{ref}}\). Indeed, as Fig. 1a-f demonstrates, our theoretical predictions (solid lines) precisely agree with the empirical observations (scatters), indicating that the correlated impact dynamics are fully captured by the capacity \(\phi\). The capacity also provides a comprehensive explanation of the previous observation on the correlation between scientific impacts and publication immediacy \(\tau=t-t_{\mathrm{ref}}\), the published time difference between a paper and its references [5]. We plot \(c^{\infty}\) as a function of publication immediacy \(\tau\) (Fig. 1h-i). With increasing publication immediacy, we observe a downward trend in the long-term impact of papers, similar to Fig. 1d-f. Indeed, as the number of citations received by a paper grows with time, the immediate impact \(c^{*}_{\mathrm{ref}}\) also increases with publication immediacy \(\tau\). In other words, \(c^{*}_{\mathrm{ref}}\) not only measures the immediate impact of references but also accounts for the fading novelty of the past scientific literature. The theoretical prediction further confirms the above statement, finding excellent matches between the prediction and empirical data. 
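The citation-dynamics model above is easy to evaluate numerically, since \(\Phi\) is the standard normal CDF and can be expressed with the error function; the parameter values below are illustrative only, not fitted values from the paper:

```python
import math

def Phi(x):
    """Standard normal CDF: Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cumulative_citations(t, m, lam, mu, sigma):
    """c^t = m * (exp(lambda * Phi((ln t - mu) / sigma)) - 1), t in years."""
    return m * (math.exp(lam * Phi((math.log(t) - mu) / sigma)) - 1.0)

# illustrative parameters; as t -> infinity, Phi -> 1, so c^t -> m*(e^lam - 1)
m, lam, mu, sigma = 30.0, 2.0, 1.5, 1.0
c_inf = m * (math.exp(lam) - 1.0)
```

Because \(\Phi\to 1\) as \(t\to\infty\), the long-term impact saturates at \(c^{\infty}=m(e^{\lambda}-1)\), which is the quantity Eq. (2) links to the capacity \(\phi\).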
Generally, the capacity \(\phi\) accounts for the correlated impact dynamics by correlating a paper's long-term impact with the fading novelty and ultimate impact of its references. The excellent agreement between theoretical predictions and empirical observations implies that the capacity \(\phi\) defined in Eq. (1) plays an important role in predicting the impact correlation between papers and their references. To further validate the proposed model, we investigate the correlation by plotting the papers' impact \(c^{\infty}\) as a function of their references' long-term impact \(c^{\infty}_{\mathrm{ref}}\) and publication immediacy \(\tau\) for empirical data (Fig. 3a-c). Moreover, our model allows us to predict the correlation of impact between papers and their references (Fig. 3d-f), finding surprising agreement between the empirical measurements and modeling predictions. These strikingly similar patterns suggest that a paper's long-term impact is strongly correlated with prior works through the capacity \(\phi\) (Eq. (1)). Inspired by this success in capturing the correlated impacts of papers and their references, one may wonder whether the capacity can be used to discover breakthrough papers, i.e., giants. Breakthrough papers, characterized by groundbreaking achievements and unparalleled profound impacts, are considered to be different from ordinary impactful papers [22]. A long-standing problem of the science of science is to understand and discover breakthrough papers, e.g., Nobel-Prize-winning papers, at an early stage [23, 24]. Our findings offer a potential solution. In Fig. 3, both empirical observations and theoretical predictions suggest that the most influential papers appear in the same region, characterized by large reference long-term impact \(c^{\infty}_{\mathrm{ref}}\) and small publication immediacy \(\tau\), which results in a large capacity \(\phi\) based on Eq. (1). 
Considering the extremely far-reaching impacts of breakthrough papers, this finding suggests that the most groundbreaking scientific works are probably characterized by significantly large capacity. To confirm this, we collect \(74\), \(73\), and \(64\) Nobel-Prize-winning papers published between 1970-2014 as representatives of breakthrough papers [22] for biology, chemistry, and physics, respectively. Figure 4a-c plots the complementary cumulative distribution function (CCDF) of capacity \(\phi\), \(C_{>}(\phi)\), which measures the proportion of papers with capacity larger than \(\phi\). We find that the Nobel-Prize-winning papers have a significantly larger capacity \(\phi\) than normal papers. These findings imply that breakthrough papers with groundbreaking impact are determined mainly by their capacity \(\phi\). Indeed, breakthrough research not only has a broad impact but also often breaks new ground, i.e., it is pioneering work. Hence, it requires recognizing an area of great potential for innovation at a very early stage, leading to an extremely high capacity. ## Discussion Our study challenges the common belief that there is a positive relationship between new discoveries and the impact of the existing works on which they are based. In contrast, we found a negative correlation with immediate impact, that is, the impact of references when a paper is published, despite a positive correlation between long-term impacts. This is because an innovation has a limited amount of capacity, which fades as more new works recognize and further develop the new ideas. This implies that both the significance of the underlying ideas and the promptness of recognizing their importance matter for subsequent innovations. 
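The \(C_{>}(\phi)\) curves of Fig. 4 reduce to a simple empirical computation; in the sketch below, the two capacity samples are made up for illustration:

```python
def ccdf(capacities, phi):
    """C_>(phi): fraction of papers whose capacity exceeds phi."""
    return sum(1 for c in capacities if c > phi) / len(capacities)

# illustrative capacity samples for an ordinary vs. a breakthrough cohort
ordinary = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8]
breakthrough = [0.85, 0.9, 0.92, 0.95]
print(ccdf(ordinary, 0.8), ccdf(breakthrough, 0.8))  # 0.0 1.0
```

A breakthrough cohort dominating the ordinary one at every threshold, as in this toy comparison, is exactly the separation reported for the Nobel-Prize-winning papers.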
We show that "capacity", measuring the proportion of remaining originality in existing ideas, encapsulates all information about correlated impact dynamics and captures both the positive long-term and negative short-term citation correlations. We discovered a universal relationship between a paper's impact and its references' capacity, and developed a theory that accurately predicts a paper's impact. Our model provides a generic mechanism for understanding the emergence of degree correlations in complex networks and sheds new light on complex network dynamics. Furthermore, our findings have implications for identifying scientific breakthroughs and discovering potentially groundbreaking papers at an early stage. It has been challenging to differentiate breakthrough papers from ordinary impactful papers using only citations. For instance, a highly-cited paper might receive a large number of citations, but it might not be a pioneering work in its field. Hence, citations alone are insufficient for characterizing breakthrough papers. Our proposed metric, \(capacity\), may serve as a potential solution to differentiate between breakthrough papers and regular papers. Our results show that Nobel-Prize-winning papers, as examples of breakthrough works, have significantly higher capacity compared to regular papers. This indicates that groundbreaking innovations are characterized by extremely high \(capacity\). On the other hand, our study only considers the correlated impact dynamics on paper citation networks. Previous research has shown that the size of a research team has an influence on the likelihood of a breakthrough discovery [25]. Additionally, the impact and long-term citation of a paper have been found to be related to the author's reputation and productivity [26, 27]. There is evidence of author "hot streaks" where a high concentration of impactful papers is produced around a specific topic [28].
Further research is needed to explore the combined effect of our proposed metric \(capacity\) and these relevant factors on breakthrough innovations.

## Method

### Data Description And Processing

#### Web Of Science Data

We use the Web of Science (WOS) dataset from 1970-2014. It consists of 43,661,391 publications and 800M citations among them. Figure S1 plots the number of papers over time, showing exponential growth in line with previous findings [2] (see Supplementary Material). To classify these papers into Biology, Chemistry, Math, and Physics subjects, we start with four sets of papers based on the WOS journal categories containing the keywords 'bio-', 'chem-', 'math', and 'physics', each forming a core of the corresponding subject. For each subject, we further include all papers that either cite or are cited by any paper in the core. In the end, we obtain four citation networks with 13M, 9M, 3M, and 7M nodes for biology, chemistry, math, and physics, respectively. Table S1 summarizes the basic statistics of these four networks, each being a sub-graph of the whole WOS citation network (see Supplementary Material). Note that there are overlaps among these sub-graphs because of multidisciplinary papers. Figure S2 plots the Venn diagram of the four networks, annotating interdisciplinary overlaps (see Supplementary Material). We find large overlaps between Biology & Chemistry and Chemistry & Physics, whereas Math has the smallest proportion of papers that belong to multiple disciplines.

#### Nobel-Prize-winning Papers

We use the Nobel-Prize-winning dataset collected by Li et al. [22]. The dataset contains 230, 277, and 236 papers awarded the Nobel Prize between 1901-2016 in Chemistry, Medicine, and Physics, respectively. During 1970-2014, the dataset contains 77, 77, and 64 papers for each category. The WOS dataset covers the vast majority of these papers, containing 74, 73, and 64 papers, respectively.
Table S2 summarizes a breakdown of these Nobel-Prize-winning papers into four subjects and multidisciplinary subjects created in Section S1 (see Supplementary Material). We find that a large overlap exists between Biology and Chemistry.
2308.01323
Evaluation of network-guided random forest for disease gene discovery
Gene network information is believed to be beneficial for disease module and pathway identification, but has not been explicitly utilized in the standard random forest (RF) algorithm for gene expression data analysis. We investigate the performance of a network-guided RF where the network information is summarized into a sampling probability of predictor variables which is further used in the construction of the RF. Our results suggest that network-guided RF does not provide better disease prediction than the standard RF. In terms of disease gene discovery, if disease genes form module(s), network-guided RF identifies them more accurately. In addition, when disease status is independent from genes in the given network, spurious gene selection results can occur when using network information, especially on hub genes. Our empirical analysis on two balanced microarray and RNA-Seq breast cancer datasets from The Cancer Genome Atlas (TCGA) for classification of progesterone receptor (PR) status also demonstrates that network-guided RF can identify genes from PGR-related pathways, which leads to a better connected module of identified genes.
Jianchang Hu, Silke Szymczak
2023-08-02T09:34:49Z
http://arxiv.org/abs/2308.01323v1
# Evaluation of network-guided random forest for disease gene discovery

###### Abstract

**Motivation:** Gene network information is believed to be beneficial for disease module and pathway identification, but has not been explicitly utilized in the standard random forest (RF) algorithm for gene expression data analysis.

**Results:** We investigate the performance of a network-guided RF where the network information is summarized into a sampling probability of predictor variables which is further used in the construction of the RF. Our results suggest that network-guided RF does not provide better disease prediction than the standard RF. In terms of disease gene discovery, if disease genes form module(s), network-guided RF identifies them more accurately. In addition, when disease status is independent from genes in the given network, spurious gene selection results can occur when using network information, especially on hub genes. Our empirical analysis on two balanced microarray and RNA-Seq breast cancer datasets from The Cancer Genome Atlas (TCGA) for classification of progesterone receptor (PR) status also demonstrates that network-guided RF can identify genes from PGR-related pathways, which leads to a better connected module of identified genes.

**Availability:** [https://github.com/imbs-hl/networkRF](https://github.com/imbs-hl/networkRF)

**Contact:** [email protected] (corresponding author)

**Keywords** Gene expression, Protein-protein interaction, RNA-Seq, Weighted random forest

## 1 Introduction

Gene expression analysis can quantify dynamic expression patterns under different biological conditions and thus help identify genes associated with complex diseases [1]. These biomarkers might improve patient risk prediction and can foster understanding of underlying molecular pathomechanisms. However, in the classic approach genes are often analyzed individually.
Given the functional interdependencies between the molecular components, a complex disease such as cancer is rarely a consequence of an abnormality in a single gene [2, 3]. Therefore, it would be beneficial to incorporate molecular interactions into analysis, where the interactions can be summarized in the form of molecular networks [4]. Examples include protein-protein interaction (PPI) networks and gene regulatory networks where each node of these networks represents a gene and the edges between nodes reflect their interactions (see [3] for a review on different types of biological networks). In this regard, gene network information is believed to be beneficial for disease module and pathway identification where a disease module or pathway is expected to be a local cluster of highly connected genes within the network. To identify disease associated genes, many statistical and machine learning approaches have been developed [1]. One popular machine learning method is the random forest (RF) algorithm [5]. It is a nonparametric approach that can accommodate different types of phenotypes including categorical or quantitative phenotypes and survival times [6]. Moreover, it can work with predictors of various scales or distributions and is suited for applications in high-dimensional settings such as transcriptomics data where the number of predictors can be larger than the number of observations [7, 8]. With the so-called variable importance measures, the algorithm can also highlight the relevance of each predictor to the prediction of phenotype [5]. This is especially handy for disease gene discovery where genes associated with disease phenotypes can be identified. However, as with many other methods for transcriptome analysis, the standard RF algorithm only uses gene expression data for constructing the prediction model and identifying disease genes. The gene network information has not been explicitly utilized. 
Therefore, in this paper, we investigate a network-guided RF where the network information is summarized into a sampling probability of predictors which is further used in the construction of RF. This sampling probability can be considered as prior knowledge on the importance of each predictor for model construction. In the standard RF, this sampling probability of predictors is a uniform probability to reflect that we do not impose any prior knowledge on the importance of predictors. However, it is possible that we have certain prior knowledge or belief on the importance of predictors from external sources such as a gene network. For instance, hub genes in the gene network tend to play more important roles in disease progression [3]. Therefore, we may want to increase the usage frequency of these genes in the construction of RF. This can be achieved by modifying the sampling probability of predictors. The strategy of modifying the sampling probability of predictor variables in RF to prioritize certain predictors is not new in the bioinformatics literature. The Enriched RF [9] uses the false discovery rate (FDR) adjusted p-values from marginal association tests of each predictor to construct such sampling probabilities. The variable importance-weighted RF [10] first constructs a standard RF and obtains the variable importance measures of all predictors. The estimated variable importance measures are then normalized to be the sampling probabilities of predictors in the second-round RF construction. The network-guided RF also adopts this strategy. Different from the Enriched RF and variable importance-weighted RF, it always considers network information as part of the prior knowledge to prioritize predictors. One example of network-guided RF is given by Wang and Liu (2018) [11] where they construct a variant of the random survival forest to build a better prediction model by selecting genes that show a great ability to predict the survival endpoint.
In their RF construction, the sampling probability of predictors is based on p-values of marginal association tests and gene network information. Furthermore, the aforementioned approaches are constructed for a better predictive model instead of more accurate variable selection. In particular, the RF in Wang and Liu (2018) is only applied to experimental data; no simulation studies have been designed to investigate its prediction and variable selection performance. Therefore, it is necessary to evaluate the network-guided RF from the disease gene identification perspective. Specifically, we would like to understand _when the network information is beneficial for identifying disease genes and modules_. We conduct simulation studies to evaluate the performance of network-guided RF in terms of disease gene and module identification as well as prediction accuracy. We analyze the effect of incorporating information from marginal association tests or networks separately and together. A module-based synthetic gene network is first simulated and RNA-Seq data are then simulated based on the synthetic network. We further apply the method utilizing a protein-protein interaction network to identify genes relevant for classifying progesterone receptor (PR) status in breast cancer patients. We consider PR status of breast cancer because it has been extensively studied in the genomic literature and several molecular pathways are well known [12]. We use two independent breast cancer datasets from The Cancer Genome Atlas (TCGA) that were generated using microarray and RNA-Seq, respectively. We consider the concordance between these results as a validation for the discovery.

## 2 Materials and methods

### 2.1 Network-guided RF

In this paper, we work with binary phenotypes such as disease status, but the network-guided RF can be adopted for other types of responses as well.
For a binary classification problem, the standard RF consists of an ensemble of binary classification and regression trees (CARTs) [13] where each tree is built from a bootstrapped version of the training data. Each tree is grown via the principle of recursive partitioning: starting from the root node, the same node splitting procedure is applied recursively until certain stopping rules are met. The node splitting procedure consists of selecting a splitting variable and determining the splitting rule. To select the splitting variable, a pre-determined number of candidate splitting variables are randomly selected from all predictors. Each of these randomly sampled predictors is then investigated to search for the best splitting variable and to determine the splitting rule. The guiding principle for node splitting is to minimize the impurity of response values in each node, which is often measured by the Gini index for classification problems. The network-guided RF is a variant of the standard RF that incorporates gene network information into RF construction for gene expression analysis. It achieves this by modifying the sampling probability of predictor variables during the node splitting procedure. In the standard RF, the node splitting procedure starts with randomly sampling a subset of predictor variables to be investigated for node splitting at a given node. The sampling probability is uniform, meaning that all predictors are equally likely to be selected as candidate splitting variables. However, with a given gene network, we have information on the genes' topological importance. For instance, hub genes, which are connected to many other genes in the network, may have a higher importance. Hence, we can modify this sampling probability to reflect such network-based importance information and to prioritize the use of these genes in the construction of RF, in the hope that it can help us identify disease modules and genes more efficiently.
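The splitting criterion described above, minimizing the weighted Gini impurity of the child nodes, can be sketched generically. This is an illustration of the CART rule only, not the paper's implementation (which relies on the R package ranger); the function names are ours:

```python
import numpy as np

def gini_impurity(y):
    """Gini index of a vector of class labels: 1 - sum_k p_k^2."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Scan the candidate thresholds of one predictor and return the split
    minimizing the weighted impurity of the two child nodes."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best_thr, best_imp = None, np.inf
    for i in range(1, len(x_sorted)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue  # no valid threshold between equal values
        left, right = y_sorted[:i], y_sorted[i:]
        imp = (len(left) * gini_impurity(left)
               + len(right) * gini_impurity(right)) / len(y_sorted)
        if imp < best_imp:
            best_imp, best_thr = imp, 0.5 * (x_sorted[i] + x_sorted[i - 1])
    return best_thr, best_imp
```

In the standard RF this scan runs over mtry uniformly sampled predictors; the network-guided variant only changes how those mtry predictors are drawn.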
The network-guided RF takes this approach to incorporate the network information into the construction of RF. In the network-guided RF, the creation of this sampling probability from a given gene network is based on the directed random walk (DRW) algorithm [14]. The core idea of the algorithm is to simulate a random walker on the given network which starts at a source node and, at each step, with probability \(1-r\) moves from the current node to a randomly selected neighboring node, or with restart probability \(r\) goes back to the source node. After a number of steps, the probability distribution of the random walker being at each node in the network will reach an equilibrium, and this stabilized distribution thus reflects the topological importance of genes with respect to the initial source node in the given gene network. Mathematically, let \(A\) be the row-normalized adjacency matrix of the given gene network, obtained by dividing each row of the binary adjacency matrix by its sum, where the binary adjacency matrix is a square matrix whose size equals the number of nodes in the network and whose elements are \[\mathcal{I}(\text{there exists an edge between node $i$ and $j$}),\ \forall i\neq j,\] where \(\mathcal{I}(\cdot)\) is the indicator function. The DRW iterations are given by \[\pi_{t+1}=(1-r)A^{\top}\pi_{t}+r\pi_{0}, \tag{1}\] where \(\pi_{t}\) is a vector whose \(i\)th element is the probability of the random walker being at node \(i\) at iteration \(t\), \(\pi_{0}\) is the initial distribution over the network, and \(r\) is the pre-determined restart probability. After a number of iterations, \(\pi_{t}\) becomes stable and can be considered to have converged to an equilibrium distribution \(\pi^{*}\).
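Equation (1) is a standard random walk with restart and is easy to prototype. The following NumPy sketch (illustrative only; the paper's analyses use R) row-normalizes a binary adjacency matrix and iterates Eq. (1) until the change in \(\pi_t\) is negligible:

```python
import numpy as np

def directed_random_walk(adj, pi0, r=0.3, tol=1e-10, max_iter=10000):
    """Iterate pi_{t+1} = (1 - r) A^T pi_t + r pi0 (Eq. (1)), where A is the
    row-normalized adjacency matrix, until the L1 change falls below tol."""
    adj = np.asarray(adj, dtype=float)
    pi0 = np.asarray(pi0, dtype=float)
    deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize; rows of isolated nodes (degree 0) stay all-zero.
    A = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
    pi = pi0
    for _ in range(max_iter):
        pi_next = (1 - r) * (A.T @ pi) + r * pi0
        if np.abs(pi_next - pi).sum() < tol:
            return pi_next
        pi = pi_next
    return pi
```

With \(r=0\) on a connected, aperiodic undirected graph this recovers the degree-proportional stationary distribution mentioned below.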
If \(r=0\), then there is no restart and this algorithm reduces to the standard random walk over the network; the resulting equilibrium probability of each node is independent of the initial distribution \(\pi_0\) and is often proportional to the degree of the node, i.e., the more connections a node has to other nodes, the higher the chance for the random walker to stay at that node. When a non-zero restart probability \(r\) is adopted, the resulting equilibrium deviates from the aforementioned equilibrium distribution of the standard random walk and is influenced by the initial distribution \(\pi_0\). In particular, as in [11], if \(\pi_0\) is constructed by assigning \(-\log(p_{i})\) as its \(i\)th element, where \(p_{i}\) is the \(p\)-value of the marginal association test of gene \(i\), and normalizing it to a unit vector, then in the equilibrium distribution, genes that have large degrees in the network, that have significant \(p\)-values, or that are close to such genes will have higher probabilities. In short, the construction of the network-guided RF can be summarized as follows.

1. For a given gene network and any other external information, use the external information to construct an initial distribution \(\pi_0\) over all genes and run the DRW algorithm on the given network according to equation (1) to obtain the (approximate) equilibrium distribution \(\pi^{*}\).
2. Draw ntree bootstrap samples, i.e., samples drawn with replacement, from the gene expression dataset used for training.
3. For each bootstrap sample, grow a CART-based decision tree. During the tree construction process, at each node, mtry genes are randomly selected according to the distribution \(\pi^{*}\) as candidate splitting variables. The default value of mtry is \(\sqrt{p}\), where \(p\) is the total number of genes in the training set.
4. Grow each tree to full size until pre-determined stopping rules are met (e.g., minimum node size or complete purity of the node).
5.
The nodes in the final layer of a tree are used for prediction of new observations. To make a prediction with the RF, an observation goes through every decision tree in the forest and the final prediction for the observation is made by aggregating results from all decision trees in the forest.

### 2.2 Selection of important genes

The so-called variable importance measure can be obtained from the network-guided RF. Specifically, we consider the permutation-based variable importance measure [5] to evaluate the importance of genes. Based on the variable importance, the most important genes are selected using an approach similar to the one in [11]. Starting with all genes in the dataset, at each step, the network-guided RF is constructed and the 10% lowest ranking genes are discarded. The remaining 90% of genes are used to construct the network-guided RF at the next step, until a pre-determined minimum number of genes is retained. This selection procedure shares a similar idea with recursive feature elimination (RFE) [15]; the only difference is that the number of genes to be retained is pre-determined in the current approach, while in RFE this number is based on the prediction performance of the model. These selected genes are then considered as the most important and relevant genes for the disease phenotype.

### 2.3 Simulation study

Here we describe our simulation study to evaluate the network-guided RFs. The description is structured according to the ADEMP scheme [16].

#### 2.3.1 Aim

The simulation study is conducted to systematically evaluate the network-guided RFs on disease classification and disease gene identification accuracy under various scenarios.

#### 2.3.2 Data generation

We generate synthetic gene expression data along with its underlying network structure using the R package SeqNet Version 1.1.3 [17]. The network with a given number of genes is first randomly generated and then kept fixed for all scenarios.
The network follows a module-based construction where several small modules are first generated. Then these modules are connected in a way that the degrees of all genes follow a power law, which is observed in many biological networks [18]. With a synthetic network, the simulator then generates data from a Gaussian graphical model and converts those values into RNA-seq gene expression data. The marginal distribution of expression for each gene is calibrated from a reference TCGA breast cancer RNA-seq dataset. More details of the data generation procedure can be found in [17]. The binary disease status is generated from a logistic regression model. In particular, the phenotype follows a Bernoulli distribution and the log-odds is modeled by the following equation. \[\text{log-odds}=\beta_{0}+\sum_{i\in D}\beta_{i}X_{i}, \tag{2}\] where the set \(D\) denotes the indices of disease genes, \(X_{i}\) is the standardized gene expression of disease gene \(i\) with mean 0 and standard deviation 1, \(\beta_{0}\) is the intercept, and the \(\beta_{i}\)'s are regression coefficients reflecting the effect sizes of disease genes. In our simulation, we set \(\beta_{0}=0\). We consider various scenarios in the simulation and they are summarized in Table 1. We have the Null case where no gene in the network is relevant for the disease status. This scenario gives us the opportunity to investigate the performance of network-guided RFs in terms of false selection of genes. In scenario RanEqu, disease genes are randomly distributed in the network and they have the same effect size, i.e., \(\beta_{i}=\beta,\forall i\in D\). In contrast, scenarios ModEqu and ModTopo consider a disease module where all disease genes come from one topological module in the network. The difference between these two scenarios is the way to assign effect sizes to these disease genes.
In ModEqu all disease genes have the same effect size, while in ModTopo there is a main disease gene randomly selected within the module and the effect sizes of all other disease genes are then based on their topological closeness to the main disease gene. Figure 1 gives an illustration of the disease modules and the assignment of effect sizes in these two scenarios. Scenarios TwoModEqu and TwoModTopo further extend these module-based scenarios to allow two different non-overlapping disease modules. The number of disease genes in RanEqu and in each disease module is approximately the first quartile of the sizes of all modules in the network generated by SeqNet, which in our simulations is around 12-20 genes, while the total number of genes is \(p\in\{1000,3000\}\). We consider three different average effect sizes \(\beta\in\{0.5,1,2\}\). For scenarios with equal effect sizes, \(\beta_{i}=\beta\) for all \(i\in D\). For disease modules with a main disease gene, the DRW algorithm with the initial distribution concentrated only on the main disease gene (i.e., the distribution has probability one on the main disease gene and zero elsewhere) is first run to obtain an equilibrium distribution over the module. Then the effect sizes are calculated as \(\beta_{i}=\pi_{i}^{*}\cdot\beta\cdot|M|\), where \(\pi_{i}^{*}\) is the equilibrium probability of gene \(i\) in the disease module from the DRW algorithm and \(|M|\) is the number of genes in the disease module \(M\). In this way, the coefficients in equation (2) always add up to \(\beta\cdot|D|\), and thus \(\beta\) is the average effect size. For each scenario and combination of \((p,\beta)\), we simulate 100 independent replications with 2000 independent samples. Because we set \(\beta_{0}=0\) in the logistic model in equation (2), the case-to-control ratio is on average 1. In each replication, the whole dataset is equally split into training and testing sets, each with 1000 samples.
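The data-generating steps above can be sketched in a few lines. This is an illustrative NumPy translation, not the paper's R/SeqNet code, and the function names are ours: `module_effect_sizes` implements \(\beta_i=\pi_i^{*}\cdot\beta\cdot|M|\), and `simulate_phenotype` draws binary labels from the logistic model of equation (2):

```python
import numpy as np

rng = np.random.default_rng(0)

def module_effect_sizes(pi_star, beta):
    """Effect sizes beta_i = pi_i^* * beta * |M| for a module with a main
    disease gene; the coefficients sum to beta * |M| by construction."""
    pi_star = np.asarray(pi_star)
    return pi_star * beta * len(pi_star)

def simulate_phenotype(X, disease_idx, betas, beta0=0.0):
    """Draw binary disease status from equation (2):
    log-odds = beta0 + sum_{i in D} beta_i X_i, X standardized per gene."""
    Xd = X[:, disease_idx]
    Xd = (Xd - Xd.mean(axis=0)) / Xd.std(axis=0)  # mean 0, sd 1 per gene
    log_odds = beta0 + Xd @ np.asarray(betas)
    p = 1.0 / (1.0 + np.exp(-log_odds))
    return rng.binomial(1, p)
```

With \(\beta_0=0\) and symmetric expression values, the expected case-to-control ratio is 1, matching the setup described above.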
\begin{table} \begin{tabular}{l l l} \hline Scenario & Number of disease & Distribution of effect sizes of genes \\ & modules & within disease modules \\ \hline Null & 0 & None \\ RanEqu & 0 & Uniform \\ ModEqu & 1 & Uniform \\ ModTopo & 1 & Connectivity-based \\ TwoModEqu & 2 & Uniform \\ TwoModTopo & 2 & Connectivity-based \\ \hline \end{tabular} \end{table} Table 1: Summary of different scenarios in simulation studies. Scenarios are ordered by the number of disease modules. Information on how the effect sizes of disease genes distribute within a disease module is also given.

#### 2.3.3 Estimand

On the training set, each method estimates the importance of all genes and thus gives a ranking of genes. We calculate the permutation importance measure in all RF constructions. Based on the ranking, the top \(|D|\) most important genes are identified and further used to build a RF model for prediction. In addition, we also record the number of times each gene is selected as one of the top genes to assess the disease gene identification performance.

#### 2.3.4 Methods to be evaluated

We compare the following methods.

1. Oracle: standard RF with only disease genes. This serves as the benchmark for all other methods.
2. Standard RF: standard RF with all genes in the dataset.
3. Marginal-P: sampling probability of genes in RF construction is based on the \(p\)-values of marginal two-sample \(t\)-tests per gene, i.e., the initial distribution \(\pi_{0}\) used in [11] as discussed in the previous section.
4. Network-guided RFs:
   * Topology: sampling probability of genes is based on the network structure only, to reflect the gene network topology, i.e., \(r=0\) in the DRW algorithm.
   * Network-P/Network-Q: sampling probability of genes is based on marginal association information and network topology. If the \(p\)-values of marginal two-sample \(t\)-tests are used, the method is denoted as Network-P; if the FDR-adjusted \(p\)-values of marginal two-sample \(t\)-tests are used, it is denoted as Network-Q.

Figure 1: Illustration of disease modules with and without a main disease gene. The size of the bubble reflects the effect size of the gene. When there is no main disease gene (as shown on the left), all disease genes have the same effect size. When there is a main disease gene (as shown on the right), the effect size of each disease gene is proportional to its closeness to the main disease gene within the module; the closer to the main disease gene, the larger the effect size.

We fix the restart probability in the DRW algorithm to be \(r=0.3\) as in [11]. All RF implementations are based on the R package ranger Version 0.14.1 [19]. In the simulation, we set the number of trees to 1000 and mtry to the default value of ranger (i.e., at each node, we randomly sample \(\sqrt{p}\) genes as candidate splitting variables, where \(p\) is the total number of genes in the training dataset). An R package networkRF has been prepared to include code for all methods and simulation scenarios considered in our study. More information on the R package can be found in the section Data availability.

#### 2.3.5 Performance measure

The performance of all methods is evaluated in the following three aspects:

* Disease prediction: In each replication, each method builds a RF predictive model. This model is applied to the testing set to obtain the misclassification rate. The average misclassification rate is calculated based on all 100 replications.
* False selection: In the Null case, the frequency of each gene being selected as an important gene over all 100 replications is recorded to assess the false selection performance of all methods.
* Sensitivity to select disease genes: In scenarios where there are disease genes, the average proportion of disease genes being selected by each method is reported as the performance measure for disease gene selection.

### 2.4 Experimental datasets

For the experimental data, we use two preprocessed and curated TCGA breast cancer gene expression datasets generated with two different technologies (microarray and RNA-sequencing) for prediction of progesterone receptor (PR) status, obtained from the Bioconductor package curatedTCGAData [20]. Only data from primary tumors are used. We remove replicates, keeping only the first observation, and remove genes with more than 50% zero counts. We retain only white female patients and create two non-overlapping and balanced datasets. We have 283 and 284 patients for the microarray and RNA-Seq datasets, respectively. Both datasets include 193 PR positive patients. Gene expression values are standardized to have mean 0 and standard deviation 1 per gene in each data set separately (same as in [21]). Besides the gene expression data, we also use PPI network information from the STRING database [22] via the R package STRINGdb [23]. We use STRING version 11.5 for the human species and adopt the package's default quality control on the interactions. We further remove proteins with duplicated STRING ID and duplicated mapped gene ID. In total, we retain 14167 common genes in both datasets along with the corresponding PPI network consisting of 1,290,904 interactions. Please refer to the Data availability section for access to the experimental datasets. In the experimental data analysis, we do not include the method Oracle because the disease genes are unknown. We also do not include the method Network-Q for comparison because it considers similar information sources as Network-P.
Therefore, we have in total four methods for comparison, namely standard RF, Marginal-P, Topology and Network-P, where the first two methods do not use the PPI information while the last two incorporate network information into RF construction. On both microarray and RNA-Seq datasets, each method constructs a RF with 10,000 trees (the same number of trees used in [21] and [24] on similar datasets), retains the top 200 most important genes, and uses them to build a RF predictive model. Given the results in [24], we think this number is a reasonable choice for our datasets. The models built with microarray data are then tested on RNA-Seq data and vice versa to give the prediction accuracy.

## 3 Results

### 3.1 Simulation results

#### 3.1.1 Prediction accuracy

We first look at the prediction accuracy of all methods under the different simulated scenarios. As shown in Figure 2, in the Null case, all methods are essentially performing random guessing, so the misclassification rates are 0.5 as expected. In all other scenarios, Oracle has the best accuracy because it uses only the disease genes to build the predictive model; thus it serves as the benchmark for prediction accuracy. When there exist disease genes, all methods have higher prediction accuracy as the average effect size increases, as expected. In addition, the prediction accuracy of all methods is not significantly affected by the different total number of genes, showing the advantage of RF-based methods in high-dimensional settings. When disease genes are not in any module, as in the RanEqu scenario, network-guided RFs are clearly outperformed by methods without network information. In fact, standard RF has the best accuracy in this scenario. When disease genes are from disease module(s), the network-guided RFs show competitive prediction accuracies. This is especially true for Network-P, where network information combined with \(p\)-values of marginal tests is used.
The misclassification error of Network-P is among the lowest in most module-based scenarios, but the improvement over standard RF and Marginal-P is not substantial. The results for Network-Q, which also considers both network and marginal association information, are unstable; the method has comparable accuracy when there is only one disease module, but performs poorly when there are two disease modules. The Topology method, where only network topology information is used, performs consistently poorly, with the largest error in most scenarios. Overall, from the prediction perspective, network information does not add much value. This is not completely surprising because genes within a module are usually correlated with each other. Using all disease genes in module-based scenarios to construct the predictive model may not be necessary. This is also manifested by the comparison with Oracle. Combined with the variable selection results in the next subsection, we can see that several methods can achieve near-optimal prediction accuracy in these scenarios without using all disease genes.

Figure 2: Prediction performance of all methods in all simulation scenarios. The performance is measured by the average misclassification rate calculated on the testing set over 100 repetitions. In each scenario, three average effect sizes are considered to represent the cases of weak, medium and strong signals. The upper panel gives the results for \(p=1000\) total genes, which is the same as the number of training samples, and the lower panel gives the results for \(p=3000\) total genes to represent the high-dimensional setting.

#### 3.1.2 Disease gene identification

Now we look at disease gene identification and start with the Null case because this provides a clue on the number of falsely selected genes. In Figure 3, we record the number of genes that have been consistently selected as important genes by each method in 100 repetitions.
In the Null case, no gene should be relevant to the phenotype, so we expect that no gene should be consistently ranked at the top. However, as we can clearly see in Figure 3, network-guided RFs have a tendency to consistently rank a subset of genes as the top important genes. A closer look at these genes reveals that this subset concentrates on hub genes, i.e., highly connected genes in the network. This is largely because the sampling probability prioritizes the use of hub genes in the RF construction. This increased usage frequency artificially creates spurious importance when no gene is relevant to the phenotype. Figure 3: Number of genes consistently selected as important genes by each method in the null case. We demonstrate this consistency by counting the number of repetitions, out of 100, in which a given gene is selected as an important gene. The plot shows counts of false selection at several consistency threshold levels. However, from our simulation, the good news is that when the total number of genes in the dataset increases, this phenomenon of consistent false selection becomes less severe. Next we look at the sensitivity of all methods to select disease genes. Because the Oracle by design only uses disease genes, its sensitivity is always 1. Figure 4 shows that when the average effect size increases, all methods improve in sensitivity in general. When the disease genes are not in modules, as in the scenario RanEqu, methods without using network information outperform network-guided RFs by a great margin. This is because network-guided RFs assign weights based on the network topology, which in this case is irrelevant. Furthermore, in this scenario, Topology is always the worst performer, as it puts weights on hub genes which are not necessarily disease genes. Figure 4: Sensitivity to select disease genes of all methods in all simulation scenarios. The performance is measured by the average proportion of disease genes selected as important genes by each method over 100 repetitions. In each scenario, three average effect sizes are considered to represent the cases for weak, median and strong signals. The upper panel gives the results for \(p=1000\) total number of genes, which is the same as the number of training samples, and the lower panel gives the results for \(p=3000\) total number of genes to represent the case of a high-dimensional setting. When disease genes form module(s), the selection accuracy improves substantially, especially for Network-P. In most scenarios with disease modules, Network-P is one of the top performers. Network-Q again shows unstable performance. This could be due to the adjustment it makes to the \(p\)-values, which leads the method to put too much weight on only a small number of genes. The consequence is that, for instance, when there are two disease modules, the method has a higher chance to miss one disease module or can only capture the main disease genes within a disease module. It is also interesting to observe that even though the difference between standard RF and Marginal-P is not huge, incorporating marginal association information does help to achieve slightly better sensitivity in most cases. ### Breast cancer datasets results Now we look at the results of the experimental data analysis. We start with the prediction performance of all methods, summarized in Table 2. The numbers reported are the misclassification errors obtained on the corresponding data with the model trained on the dataset from the other technology. For instance, the standard RF trained on RNA-Seq data achieves a 13.03% misclassification rate when it is applied to microarray data for testing. Recall that the proportion of positive PR status is 0.682 and 0.680 in the microarray and RNA-Seq datasets respectively, meaning that all methods are substantially better than random guessing.
We also find that models trained with RNA-Seq data enjoy better prediction performance on microarray data than the other way around. While all methods have similar prediction accuracy, network-guided RFs do provide slightly better misclassification rates on both datasets. Next we look at the identified genes, and the results here are only descriptive. We first look at the common genes that are ranked at the top by all four methods on both datasets. The UpSet diagram in Figure 5 shows the common genes that are ranked at the top by all four methods on both TCGA breast cancer microarray and RNA-Seq datasets in the experimental data analysis. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Microarray & RNA-Seq \\ \hline Standard RF & 13.03\% & 14.13\% \\ Marginal-P & 12.68\% & 13.43\% \\ Topology & 11.97\% & 13.07\% \\ Network-P & 12.32\% & 12.37\% \\ \hline \hline \end{tabular} \end{table} Table 2: Prediction performance of all methods on the experimental datasets. The reported numbers are the misclassification errors on the corresponding data in the header with the model trained on the dataset with the other technology. We can see that in total there are 21 genes that are considered by all methods on both datasets as important genes. The induced network of these 21 genes is shown in Figure 6. The network consists of a connected module of 13 genes and 8 isolated genes. As shown in the figure, the module contains the core PR status related progesterone receptor (_PGR_) gene. In addition, gene _PGR_ is known to be related to estrogen receptor-mediated (ESR-mediated) signaling and Gene expression (Transcription) pathways. In this induced network, we can find genes Estrogen Receptor 1 (_ESR1_), GATA Binding Protein 3 (_GATA3_) and Growth Regulating Estrogen Receptor Binding 1 (_GREB1_) from ESR-mediated signaling pathway and Androgen Receptor (_AR_) gene from Gene expression (Transcription) pathway. 
Gene _SCUBE2_, short for Signal Peptide, CUB Domain And EGF Like Domain Containing 2, has also been shown to play an important role in breast cancer progression, especially for the triple-negative breast cancer subtype [25]. Figure 5: Common top genes selected by each method on both TCGA breast cancer microarray and RNA-Seq datasets. Next we investigate the common top genes selected by network-guided RFs and by methods not using network information separately, to see the differences that network information may bring. The selected genes (including the aforementioned 21 genes) and their induced networks are shown in Figure 7. We highlight these identified genes by two different colors. Genes in blue are the 21 genes shown in Figure 6. Genes in red are those identified only by each approach on both datasets. When we look at the network of top genes identified by network-guided RFs, we observe that the genes Epidermal Growth Factor Receptor (_EGFR_) and Insulin Like Growth Factor 1 Receptor (_IGF1R_) are added to the core module shown in Figure 6. Gene _EGFR_ belongs to the Gene expression (Transcription) pathway and gene _IGF1R_ is in the ESR-mediated signaling pathway. Furthermore, the inclusion of _EGFR_ also builds a link between the core module and the Microtubule Associated Protein Tau (_MAPT_) gene, which is among the top important genes selected by all methods on both datasets. In Figure 6, gene _MAPT_ is an isolated node, but network-guided RFs include _EGFR_ in the set of important genes, which further links to gene _MAPT_ and strengthens the module. Figure 6: Induced network of 21 common top genes selected by all 4 methods on both microarray and RNA-Seq datasets. Figure 7: (a) Induced networks of common top genes selected by network-guided RFs on both datasets. (b) Induced networks of common top genes selected by methods not using network information on both datasets. The genes in blue are those shown in Figure 6 and genes in red are additional genes selected by each approach. In contrast, the methods without using network information do not identify any other genes from the two PGR-related pathways. They select the Gastrin Releasing Peptide Receptor (_GRPR_) gene, which has recently been found to be associated with estrogen receptor (ER) positivity [26], and gene Serpin Family A Member 6 (_SERPINA6_), which has been identified as a marker of resistance to neoadjuvant chemotherapy in HER2-negative breast cancer [27] (HER2 is short for Human epidermal growth factor receptor 2). More interestingly, the inclusion of gene Sushi Domain Containing 3 (_SUSD3_) builds links between gene FYVE, RhoGEF And PH Domain Containing 3 (_FGD3_) and the core module. Gene _SUSD3_ was shown to be highly expressed in ER\(\alpha\)-positive breast cancer tumors [28]. On the contrary, even though the inclusion of gene Dynein Axonemal Assembly Factor 4 (_DYX1C1_, also referred to as _DNAAF4_) does create links between existing genes, resulting in a more connected module, this gene is better known for its association with deficits in reading and spelling ability [29]. Apart from these genes, many isolated genes are included and the strengthening of the core module is very limited. From this comparison, we can see that network information can be used in disease gene discovery and is likely to lead to a small number of additional associated genes that can strengthen the core disease module. If network information is not used, in contrast, more isolated genes may be selected, which could lead to challenges in understanding their contributions to the disease phenotype. ## 4 Discussion In this paper, we systematically evaluate the performance of network-guided RF in terms of variable selection and prediction.
The network-guided RF incorporates external network information of predictors into RF construction, which is believed to be beneficial to disease gene and module identification. Our results suggest that for disease prediction, network information does not add much value. In terms of disease gene discovery, when disease genes are randomly distributed within the network, network information only deteriorates the gene selection, but if they form disease module(s), network-guided RF can identify disease genes and module(s) more accurately. We also find that when disease status is independent of the genes in the given network, spurious gene selection results can occur when using network information, especially on hub genes. This phenomenon needs serious attention because hub genes often connect to many pathways. False selection of these genes may lead to intuitive but false conclusions. In the experimental data analysis, our results indicate that network-guided RF may lead to relevant gene identification, which can further strengthen the core disease module. This demonstrates the potential gain in disease gene and module discovery. However, a major limitation of the network-guided RF approach is that the gene selection procedure depends on a manually selected threshold for defining the top most important genes. It would be important to develop an automated variable selection procedure for RFs where the sampling probability of predictor variables is not uniform. Unlike in the commonly adopted procedures such as Boruta [30] or Vita [31], with a non-uniform sampling probability, the (approximate) null distribution of variable importance measures is currently unknown and needs further study in order to automate the variable selection procedures.
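The non-uniform sampling of candidate split variables that distinguishes network-guided RF from standard RF can be sketched as below. Weighting genes by network degree is an illustrative assumption here; the exact weighting schemes of Topology, Marginal-P and Network-P are those implemented in the networkRF package.

```python
import numpy as np

def sample_split_candidates(weights, mtry, rng):
    """Draw mtry candidate features without replacement, with probability
    proportional to `weights`; uniform weights recover the standard RF."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=mtry, replace=False, p=p)

# Illustrative: weight genes by (degree + 1), so hub genes are proposed
# as split candidates far more often than peripheral genes.
rng = np.random.default_rng(0)
degrees = np.array([20, 1, 1, 1, 1, 1, 1, 1, 1, 1])  # gene 0 is a hub
hits = sum(0 in sample_split_candidates(degrees + 1, mtry=3, rng=rng)
           for _ in range(1000))
```

Counting how often the hub appears among the candidates (`hits`) makes the mechanism behind the spurious hub-gene importance in the Null case concrete.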
We would also like to point out that it is even harder to consider shadow variables as in the AIR [32], because introducing more predictor variables will inevitably change the sampling probability, and thus may affect not only the null distribution construction but also the estimation of the importance measure itself. In the construction of RF, the sampling probability of predictors can be considered as prior knowledge on the importance of predictors. From a Bayesian perspective, the sampling probability used in the standard RF is a non-informative prior while the one based on network information is informative. Our simulation studies, if viewed from a Bayesian perspective, demonstrate the result of potential prior-data conflict, especially in the scenario where disease genes are randomly distributed in the network. Even though the practical relevance of this particular scenario may be limited, from a methodology development perspective, it would be interesting to consider a more robust prior such that when there is any disease module, the network information can provide enough information to benefit the detection, and when there is no disease module, such information should not hinder the data-driven detection, especially not lead to consistent false selection. Our approach of using microarray and RNA-Seq datasets for validation in the experimental analysis may not be ideal. As pointed out in [33] and [34], the concordance between datasets coming from these two technologies may not be perfect and could be affected by a range of chemical treatment conditions. Therefore, it would be beneficial to apply network-guided RF on a large RNA-Seq dataset with carefully constructed networks. We are aware that other approaches to incorporate network information have been published. Guan et al. (2018) [35] provide a general framework for incorporating prior knowledge into RF construction for biomarker discovery where network information may be considered. Zhao et al.
(2020) [36] use network information in the feature engineering step and construct a standard RF with these created features. Similarly, Adnan et al. (2019) [37] propose to use network edges as features in RF construction to obtain a better predictive model. Thus, it would be interesting to conduct a comparative study of these different approaches in terms of disease gene discovery. We will leave this work for a future project. ## Acknowledgements This work was supported by the German Federal Ministry of Education and Research (BMBF) funded e:Med Programme on systems medicine [grant 01ZX1510 (ComorbSysMed) to SSzy]. The results published here are in part based on data generated by the TCGA Research Network: [http://cancergenome.nih.gov/](http://cancergenome.nih.gov/). ## Conflict of interests The authors declare no conflict of interests. ## Data availability An R package networkRF implementing the network-guided RF approach and providing functions for simulation is available at GitHub ([https://github.com/imbsh/networkRF](https://github.com/imbsh/networkRF)). The processed and quality-controlled experimental datasets are also included in the R package.
2303.04129
Foundation Models for Decision Making: Problems, Methods, and Opportunities
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks. When such models are deployed in real world environments, they inevitably interface with other entities and agents. For example, language models are often used to interact with human beings through dialogue, and visual perception models are used to autonomously navigate neighborhood streets. In response to these developments, new paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning. These paradigms leverage the existence of ever-larger datasets curated for multimodal, multitask, and generalist interaction. Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems that can interact effectively across a diverse range of applications such as dialogue, autonomous driving, healthcare, education, and robotics. In this manuscript, we examine the scope of foundation models for decision making, and provide conceptual tools and technical background for understanding the problem space and exploring new research directions. We review recent approaches that ground foundation models in practical decision making applications through a variety of methods such as prompting, conditional generative modeling, planning, optimal control, and reinforcement learning, and discuss common challenges and open problems in the field.
Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, Dale Schuurmans
2023-03-07T18:44:07Z
http://arxiv.org/abs/2303.04129v1
# Foundation Models for Decision Making: Problems, Methods, and Opportunities ###### Abstract Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks. When such models are deployed in real world environments, they inevitably interface with other entities and agents. For example, language models are often used to interact with human beings through dialogue, and visual perception models are used to autonomously navigate neighborhood streets. In response to these developments, new paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning. These paradigms leverage the existence of ever-larger datasets curated for multimodal, multitask, and generalist interaction. Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems that can interact effectively across a diverse range of applications such as dialogue, autonomous driving, healthcare, education, and robotics. In this manuscript, we examine the scope of foundation models for decision making, and provide conceptual tools and technical background for understanding the problem space and exploring new research directions. We review recent approaches that ground foundation models in practical decision making applications through a variety of methods such as prompting, conditional generative modeling, planning, optimal control, and reinforcement learning, and discuss common challenges and open problems in the field. Figure 1: Overview of foundation models for decision making. Foundation models pretrained on broad data are adapted to accomplish specific tasks by interacting with external entities and receiving feedback. 
###### Contents * 1 Introduction * 1.1 Structure of This Report * 2 Preliminaries * 2.1 Sequential Decision Making Preliminaries * 2.2 Example Scenarios ## 1. Introduction Foundation models pretrained on broad datasets via self-supervised learning have demonstrated exceptional abilities in knowledge transfer to diverse downstream tasks (Bommasani et al., 2021). As such models continue to be applied to more complex problems that involve long-term reasoning (Wei et al., 2022), control (Brohan et al., 2022), search (Strohman et al., 2005), and planning (Huang et al., 2022), or are deployed in applications such as dialogue, autonomous driving, healthcare, and robotics, they are expected to interface with external entities and agents. For example, in dialogue a language model converses with a human over multiple turns; in robotics a perception-control model executes actions in a real-world environment. These scenarios present new challenges for foundation models, including (1) how to learn from feedback given by an external entity (e.g., human rating of conversation quality), (2) how to adapt to modalities not commonly covered by large language or vision datasets (e.g., robot actions), and (3) how to perform long-term reasoning and planning over the future. Such questions have traditionally been at the core of sequential decision making (Sutton and Barto, 2018), encompassing areas such as reinforcement learning, imitation learning, planning, search, and optimal control. Contrary to the paradigm of foundation models, where broad datasets with billions of images and text tokens are used during pretraining, prior work on sequential decision making has largely focused on task-specific or _tabula rasa_ settings with limited prior knowledge (Silver et al., 2017).
Despite a seemingly disadvantageous setup, research in sequential decision making has achieved significant progress in surpassing human performance on tasks such as playing board games (Tesauro, 1994) and Atari video games (Mnih et al., 2013), as well as operating robots to complete navigation (Pomerleau, 1988) and manipulation tasks (Kalashnikov et al., 2018; Akkaya et al., 2019). Nevertheless, since these methods learn to solve a task from scratch without broad knowledge from vision, language, or other datasets, they generally struggle with generalization and sample efficiency, e.g., requiring 7 GPU days of interactive game-play to solve a single Atari game (Agarwal et al., 2022). Intuitively, broad datasets similar to those used for foundation models should also be beneficial for sequential decision making models. For example, there are countless articles and videos on the Internet about how to play Atari games. Similarly, there is a wealth of knowledge about properties of objects and scenes that would be useful to a robot, or about human wants and emotions that could improve a dialogue model. While research on foundation models and sequential decision making has largely been disjoint due to distinct applications and foci, there is increasing activity at the intersection of these communities. On the foundation models side, with the discovery of emergent properties of large language models, target applications have graduated from simple zero or few-shot vision and language tasks to problems that now involve long-term reasoning (Srivastava et al., 2022; Wei et al., 2022; Lewkowycz et al., 2022) or multiple interactions (OpenAI, 2022). 
Conversely, in the sequential decision making communities, researchers inspired by the success of large scale vision and language models have begun to curate ever-larger datasets for learning multimodal, multitask, and generalist interactive agents (Agarwal et al., 2020; Szot et al., 2021; Fan et al., 2022; Brohan et al., 2022; Reed et al., 2022; Lee et al., 2022). Further blurring the lines between the two fields, some recent work has investigated the use of pretrained foundation models such as CLIP (Radford et al., 2021) and ViT (Dosovitskiy et al., 2020) to bootstrap the training of interactive agents for visual environments (Khandelwal et al., 2022; Tao et al., 2022), while other work has investigated foundation models as dialogue agents optimized by reinforcement learning with human feedback (Ouyang et al., 2022), and other work has adapted large language models to interact with external tools such as search engines (Komeili et al., 2021; Thoppilan et al., 2022; Lazaridou et al., 2022; Shuster et al., 2022; Yao et al., 2022), calculators (Cobbe et al., 2021; Thoppilan et al., 2022), translators (Thoppilan et al., 2022), MuJoCo simulators (Liu et al., 2022), and program interpreters (Gao et al., 2022). Our premise in this report is that research on foundation models and interactive decision making can be mutually beneficial if considered jointly. On one hand, adaptation of foundation models to tasks that involve external entities can benefit from incorporating feedback interactively and performing long-term planning. On the other hand, sequential decision making can leverage world knowledge from foundation models to solve tasks faster and generalize better. With the aim of spurring further research at the intersection of these two fields, we scope the problem space of _foundation models for decision making_.
We provide technical tools for understanding current research in the space, review remaining challenges and open problems, and speculate on potential solutions and promising approaches to overcome these challenges. ### Structure of This Report This report is divided into 5 major sections. In Section 2, we review the relevant background and notations of sequential decision making, and present a few example scenarios where foundation models and decision making are better considered jointly. The subsequent three sections are organized around how foundation models can characterize different components of a decision making system. In Section 3, we discuss how foundation models can serve as generative models of _behavior_ (e.g., skill discovery) and generative models of the _environment_ (e.g., for conducting model-based rollouts). In Section 4, we discuss how foundation models can serve as representation learners of states, actions, rewards, and transition dynamics (e.g., plug-and-play vision-language models, model-based representation learning). In Section 5, we discuss how language foundation models can serve as interactive agents and environments, enabling new problems and applications to be considered under a sequential decision making framework (language model reasoning, dialogue, tool use). Finally in Section 6, we outline open problems and challenges, and propose potential solutions (e.g., how to leverage broad data, how to structure environments, and what aspects of foundation models and decision making can be improved). ## 2. Preliminaries In this section, we review relevant background on sequential decision making, and present example scenarios to illustrate when and why it is better to consider foundation models and decision making jointly. 
### Sequential Decision Making Preliminaries Unlike vision and language domains, where a foundation model is usually trained (and adapted) only once, sequential decision making focuses on learning from _interactive_ experience. We outline this formalism and introduce common algorithms for sequential decision making. #### 2.1.1. Interacting with an Environment Sequential decision making problems are most often formalized in terms of a Markov decision process (MDP) (Puterman, 1994), which is defined as a tuple \(\mathcal{M}\coloneqq\langle S,A,\mathcal{R},\mathcal{T},\mu,\gamma\rangle\) consisting of a state space \(S\), an action space \(A\), a reward function \(\mathcal{R}:S\times A\rightarrow\Delta(\mathbb{R})\), a transition function \(\mathcal{T}:S\times A\rightarrow\Delta(S)\), an initial state distribution \(\mu\in\Delta(S)\), and a discount factor \(\gamma\in[0,1)\). A policy \(\pi:S\rightarrow\Delta(A)\) interacts with the environment starting at an initial state \(s_{0}\sim\mu\). At each timestep \(t\geq 0\), an action \(a_{t}\sim\pi(s_{t})\) is sampled and applied to the environment, after which the environment transitions into the next state \(s_{t+1}\sim\mathcal{T}(s_{t},a_{t})\) while producing a scalar reward \(r_{t}\sim\mathcal{R}(s_{t},a_{t})\).3 Footnote 3: We will focus on fully observable MDPs in this article, though an MDP can be extended to a partially observable MDP (POMDP) by introducing an observation space \(\mathcal{O}\), an emission function \(\mathcal{E}:S\to\mathcal{O}\), and the restriction that policies can only depend on observations and previous actions. After \(\pi\) interacts with \(\mathcal{M}\) for \(H\) timesteps (\(H\) can be infinite), an episode (trajectory) is produced \(\tau\coloneqq\{(s_{0},a_{0},r_{0}),(s_{1},a_{1},r_{1}),\ldots,(s_{H},a_{H},r_{H })\}\).
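The interaction loop described above can be sketched in a few lines of Python; the deterministic two-state MDP at the bottom is a toy stand-in, purely illustrative.

```python
def rollout(policy, transition, reward, s0, horizon, gamma=0.99):
    """MDP interaction loop: sample an action, collect the reward,
    step the environment, and accumulate the discounted return
    R(tau) = sum_t gamma^t * r_t."""
    s, tau, ret = s0, [], 0.0
    for t in range(horizon):
        a = policy(s)
        r = reward(s, a)
        tau.append((s, a, r))
        ret += (gamma ** t) * r
        s = transition(s, a)
    return tau, ret

# Toy deterministic two-state MDP: "go" moves state 0 to the absorbing
# state 1, and a reward of 1 is collected only in state 1.
tau, ret = rollout(policy=lambda s: "go",
                   transition=lambda s, a: 1,
                   reward=lambda s, a: float(s == 1),
                   s0=0, horizon=3, gamma=0.5)
```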
We use \(\tau_{t}\) to denote the tuple \((s_{t},a_{t},r_{t})\), \(\tau_{<t}\) to denote a sub-episode up to timestep \(t\), \(\tau_{\geq t}\) to denote a sub-episode starting from timestep \(t\) and ending at \(H\), \(\tau_{t:t+h}\) to denote a sub-episode from timestep \(t\) to \(t+h\), and \(\tau_{s}\) or \(\tau_{a}\) to denote only the state or action portion of a trajectory. The return associated with episode \(\tau\) is defined as the total discounted sum of rewards \(R(\tau)\coloneqq\sum_{t=0}^{H}\gamma^{t}r_{t}\). The _trajectory distribution_ of a policy \(p_{\pi}(\tau)\) is determined by \[p_{\pi}(\tau)=\mu(s_{0})\Pi_{t=0}^{H}\pi(a_{t}|s_{t})\mathcal{R}(r_{t}|s_{t},a_{t}) \mathcal{T}(s_{t+1}|s_{t},a_{t}). \tag{1}\] Trajectories generated by one or multiple policies can be collected in an offline dataset \(\mathcal{D}_{\text{RL}}=\{\tau\}\). We distinguish \(\mathcal{D}_{\text{RL}}\) from a typical vision or language dataset \(\mathcal{D}\); \(\tau\sim\mathcal{D}_{\text{RL}}\) is an _interactive_ trajectory involving actions and rewards whereas \(x\sim\mathcal{D}\) is a _static_ image or a text sequence. Nevertheless, foundation model techniques developed for \(\mathcal{D}\) can also be applied to \(\mathcal{D}_{\text{RL}}\). #### 2.1.2. Imitation Learning. In standard imitation learning, \(\mathcal{R}\), \(\mathcal{T}\), and \(\mu\) are unknown to the agent. Learning solely takes place from a fixed dataset of demonstrations \(\mathcal{D}_{\text{RL}}^{*}=\{(s,a)\}\) previously collected by an expert policy \(\pi^{*}\) interacting with \(\mathcal{M}\) through \(a\sim\pi^{*}(s)\). The goal of imitation learning is to train \(\pi\) on \(\mathcal{D}_{\text{RL}}^{*}\) so that \(\pi\) closely approximates \(\pi^{*}\) according to some metric, such as the Kullback-Leibler (KL) divergence between the trajectory distributions \(D_{\text{KL}}(p_{\pi^{*}}(\tau)\|p_{\pi}(\tau))\).
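In practice this objective is commonly reduced to supervised learning of state-to-action mappings (behavioral cloning). A minimal sketch with a linear softmax policy trained by plain gradient descent follows; the "expert" data and the policy class are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def bc_loss_grad(W, S, A):
    """Mean negative log-likelihood of expert actions under a linear
    softmax policy pi(a|s) proportional to exp(w_a . s), and its
    gradient with respect to W (one row of W per action)."""
    logits = S @ W.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    n = len(S)
    loss = -np.log(P[np.arange(n), A]).mean()
    D = P.copy()
    D[np.arange(n), A] -= 1.0                    # softmax-CE gradient
    return loss, D.T @ S / n

# Synthetic "expert": always picks the largest state coordinate.
rng = np.random.default_rng(0)
S = rng.normal(size=(256, 4))
A = S.argmax(axis=1)
W = np.zeros((4, 4))
loss_before, _ = bc_loss_grad(W, S, A)
for _ in range(200):                             # gradient descent on the BC loss
    _, g = bc_loss_grad(W, S, A)
    W -= 0.5 * g
loss_after, _ = bc_loss_grad(W, S, A)
```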
**Behavioral cloning (BC).** Learning from expert demonstrations leads to the common framing of imitation learning as supervised learning of state to action mappings. Under this framing, behavioral cloning (BC) (Pomerleau, 1989) proposes to learn \(\pi\) by minimizing \[\mathcal{L}_{\text{BC}}(\pi)\coloneqq\mathbb{E}_{(s,a)\sim\mathcal{D}_{\text{ RL}}^{*}}[-\log\pi(a|s)]. \tag{2}\] Equation 2 can be viewed as the classification loss (discrete actions) or regression loss (continuous actions) of state to action mappings, connecting BC to supervised learning in vision and language. #### 2.1.3. Reinforcement Learning. Standard reinforcement learning (Sutton and Barto, 2018) aims to maximize the expected returns of a policy through trial-and-error interaction with the environment: \[J(\pi)\coloneqq\mathbb{E}\left[\ \sum_{t=0}^{H}\gamma^{t}r_{t}\big{|}\pi, \mathcal{M}\right]. \tag{3}\] **Policy-based methods.** One conceptually straightforward way to optimize Equation 3 is through policy gradient, which estimates the gradient of Equation 3 with respect to the policy \(\pi\), and maximizes \(J(\pi)\) directly via gradient ascent. The most commonly used gradient estimator has the form \[\nabla_{\theta}J(\pi_{\theta})=\mathbb{E}_{\tau\sim p_{\pi_{\theta}}(\tau)} \ \big{[}\sum_{t=0}^{H}\gamma^{t}\nabla_{\theta}\log\pi_{\theta}(a_{t}|s_{t})\hat {A}(s_{t},a_{t})\big{]}\,, \tag{4}\] where \(\hat{A}\) is some advantage function that can be separately estimated via Monte-Carlo returns from \(p_{\pi}(\tau)\)(Williams, 1992). The biggest drawback of policy gradient is sample inefficiency: since policy gradients are estimated from rollouts, the variance of the gradient estimate is often extreme. To mitigate high variance, various works such as PPO (Schulman et al., 2017) have proposed to improve policy updates through the use of appropriate geometry (Kakade, 2001; Peters et al., 2010; Schulman et al. 
2015a] or through training a separate critic network to estimate \(\hat{A}\) to further reduce variance at the cost of introducing bias [Sutton et al. 1999; Silver et al. 2014; Schulman et al. 2015b]. **Value-based methods.** Another family of reinforcement learning methods for optimizing Equation 3, such as Q-learning [Watkins and Dayan 1992], involves learning the optimal value function \(Q^{*}(s_{t},a_{t})\) by satisfying a set of Bellman _optimality_ constraints: \[Q^{*}(s_{t},a_{t})=r_{t}+\gamma\mathbb{E}_{s_{t+1}\sim\mathcal{T}(s_{t+1}|s_{ t},a_{t})}\left[\max_{a_{t+1}}Q^{*}(s_{t+1},a_{t+1})\right], \tag{5}\] after which an optimal policy can be extracted via \(\pi^{*}(\cdot|s_{t})=\arg\max_{a}Q^{*}(s_{t},a)\). Value-based methods are typically more sample efficient than policy-based methods [Gu et al. 2016], but tend to be unstable under function approximation [Sutton and Barto 2018]. At the intersection of policy and value based methods, Actor-Critic methods [Sutton et al. 1999] first learn \(Q^{\pi}(s_{t},a_{t})\) by satisfying the set of Bellman _expectation_ constraints: \[Q^{\pi}(s_{t},a_{t})=r_{t}+\gamma\mathbb{E}_{s_{t+1}\sim\mathcal{T}(s_{t+1}|s_ {t},a_{t}),a_{t+1}\sim\pi(s_{t+1})}\left[Q^{\pi}(s_{t+1},a_{t+1})\right], \tag{6}\] then plug \(\hat{A}(s_{t},a_{t})=Q^{\pi}(s_{t},a_{t})\) into the policy gradient objective, Equation 4, to update the policy. The intuition is that the resulting policy learning will be both stable and sample efficient. **Off-policy and offline RL.** To further improve the sample efficiency of on-policy methods, a set of off-policy approaches have been proposed for both policy and value based RL [Lillicrap et al. 2015; Mnih et al. 2016; Nachum et al. 2017], where data from sources other than the current policy can be utilized for learning in conjunction with environment interaction. Offline RL [Levine et al.
2020] further considers the setting where an agent only has access to a fixed dataset of previous interactions \(\mathcal{D}_{\text{RL}}\), and no further environment access to \(\mathcal{T}\) or \(\mathcal{R}\) is available. To ensure the learned policy avoids out-of-distribution states and actions, offline RL methods often impose regularization via a divergence between the learned policy and the offline dataset [Wu et al. 2019] or on the learned value function [Kumar et al. 2020]. More recently, some works have explored using additional online access as a finetuning step after offline RL to improve sample efficiency [Nair et al. 2020; Xie et al. 2021; Ball et al. 2023]. Using foundation models for decision making differs from traditional offline RL (with or without online finetuning) in that the latter focuses on learning RL algorithms from task-specific RL datasets \(\mathcal{D}_{\text{RL}}\) (i.e., datasets with task-specific states, actions, and rewards), whereas the former focuses on self-supervised learning on diverse data (e.g., data from vision and language domains) followed by task-specific adaptation.

#### 2.1.4. Planning, Search, and Optimal Control.

Unlike the model-free RL algorithms outlined above, a broader set of approaches to sequential decision making (e.g., planning, search, optimization-based control, model-based RL) leverage explicit models of the environment. When the true environment dynamics are known (e.g., the rules of a chess game) and simulation is cheap, planning and search algorithms that leverage an accurate simulator, such as MCTS [Kocsis et al. 2006], can be highly effective [Silver et al. 2016].
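As a minimal illustration of solving the Bellman optimality constraints (Equation 5) when the dynamics are known, the sketch below runs value iteration on a toy three-state chain MDP; the MDP, its reward, and the discount factor are illustrative assumptions, not an environment from the text:

```python
import numpy as np

# Toy 3-state chain MDP with known deterministic dynamics (illustrative):
# action 1 moves right, action 0 moves left; landing on the last state
# yields reward 1, all other transitions yield 0.
n_states, n_actions, gamma = 3, 2, 0.9

def step(s, a):  # known dynamics f(s, a) and reward r(s, a)
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s_next, (1.0 if s_next == n_states - 1 else 0.0)

# Value iteration: repeatedly apply the Bellman optimality backup of
# Equation 5, Q(s, a) <- r(s, a) + gamma * max_a' Q(s', a').
Q = np.zeros((n_states, n_actions))
for _ in range(500):
    Q = np.array([[step(s, a)[1] + gamma * Q[step(s, a)[0]].max()
                   for a in range(n_actions)]
                  for s in range(n_states)])

policy = Q.argmax(axis=1)  # pi*(s) = argmax_a Q*(s, a)
print(policy)              # every state moves right: [1 1 1]
```

With the discount factor 0.9, the backup is a contraction, so 500 sweeps are more than enough for the toy problem to converge to the fixed point \(Q^{*}\).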
When the environment can be characterized by precise dynamics, such as the constrained movements of a robot arm, approaches in optimal control--such as trajectory optimization [Von Stryk and Bulirsch 1992], shooting [Bock and Plitt 1984], collocation [Von Stryk 1993], and model predictive control [Camacho and Alba 2013]--have long been studied prior to the recent advances in deep learning. In deterministic scenarios, given an environment governed by a known dynamics function \(s_{t+1}=f(s_{t},a_{t})\), optimizing a sequence of actions \(a_{0:T}\) to execute in the environment corresponds to \[a_{0:T}=\arg\max_{a_{0:T}}J(s_{0},a_{0:T})=\arg\max_{a_{0:T}}\sum_{t=0}^{T}r(s_{t},a_{t})\ \ \text{subject to}\ s_{t+1}=f(s_{t},a_{t}). \tag{7}\] Model-based RL (Doya et al., 2002) considers the setting where the environment dynamics are unknown and have to be estimated from samples, after which techniques from search, planning, and optimal control (Doya et al., 2002; Deisenroth and Rasmussen, 2011; Tassa et al., 2012; Nagabandi et al., 2018; Kaiser et al., 2019) can be effectively applied given the learned dynamics model.

### Example Scenarios

Before diving into the details of foundation models for decision making, we first discuss a few example scenarios where joint consideration of foundation models and decision making can be highly beneficial. Figure 2 illustrates additional examples where foundation models can interact with external entities (e.g., humans, tools, and simulated and physical worlds). **Learning dialogue agents with human feedback.** There has been an increasing demand for large language models to produce likable, factual, and grounded responses to human inquiries. With a moderate amount of human feedback, via prompting or reward-based finetuning, language models have been able to perform increasingly complex reasoning and dialogue tasks. Such feedback can be seen as the result of language model agents interacting with the external world (i.e., humans).
Learning from interaction lies at the center of decision making, and reinforcement learning techniques such as the policy gradient introduced in Section 2.1.3 have contributed significantly to the advances of dialogue systems (Ouyang et al., 2022). **The Internet as an environment.** While RL with human feedback has demonstrated tremendous empirical success in dialogue (Thoppilan et al., 2022; OpenAI, 2022), humans are by no means the only external entity that can provide feedback to improve foundation models through repeated interaction. For instance, the Internet can be viewed as an unbounded environment where an ideal policy should be able to identify the best queries and navigation steps to retrieve optimal answers in a minimal number of interactive steps. Since the Internet is both rich in information and cheap to interact with, it provides a compelling environment to explore decision making techniques. Foundation models are necessary for Internet-scale decision making, as interaction needs to be initiated in a reasonable way to ensure meaningful feedback is obtained for further learning.

Figure 2. Example scenarios of adapting foundation models to perform decision making tasks such as interacting with humans, tools, and the simulated and physical world. Actions generated by foundation models and feedback provided by the external entities often recur repeatedly in a loop.

**Video generation as a universal policy.** A central difficulty in learning general-purpose robot agents is the incongruity between the state and action spaces of different environments. This implies that, for example, data collected by different robots cutting an apple or videos of a human cutting an apple cannot be easily combined to train a generalist robot policy, despite the fact that the notions of "cutting" and "apple" are common between these scenarios.
With ever-larger text-to-video foundation models being trained on Internet-scale data (Ho et al., 2022; Villegas et al., 2022), it is now possible to recast the problem of policy learning as a text-conditioned video generation problem, where the generation process encompasses both environment modeling and planning. Such a policy-as-video formulation allows a unified interface (i.e., images) for learning and generalization from broad data sources, environments, and tasks.

## 3. Foundation Models as Conditional Generative Models

We now examine the first concrete use case of foundation models in decision making: probabilistic modeling of the trajectory distribution \(p(\tau)\) from an interactive dataset \(\tau\sim\mathcal{D}_{\text{RL}}\). Depending on what part of \(\tau\) is being modeled, foundation models can serve as conditional generative models of behaviors (i.e., actions) or of the underlying world models (i.e., environment dynamics). Below, we first review different generative models and then discuss how they can be used to represent behaviors and models of the environment.

### Generative Model Preliminaries

Many foundation models can be characterized as modeling a (conditional) density \(p(x)\) on a large dataset of images or texts \(x\sim\mathcal{D}\). For example, \(x\) could be an image, a sequence of images, or a sequence of text tokens. Different foundation models differ in their factorizations of \(p(x)\). Below, we provide a brief overview of several generative models and their factorizations of \(p(x)\).

#### 3.1.1. Latent Variable Models

Latent variable models factorize the unknown data distribution of interest \(p(x)\) into a latent variable distribution and a conditional distribution: \[p(x)=\int p(z)p(x|z)dz, \tag{8}\] where the latent variable \(z\) can be either discrete or continuous.
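As a minimal illustration of Equation 8 with a discrete latent, the sketch below evaluates the marginal likelihood of a two-component Gaussian mixture exactly; the mixture weights, means, and standard deviations are illustrative assumptions:

```python
import numpy as np

# Equation 8 with a discrete latent z in {0, 1}: p(x) = sum_z p(z) p(x|z),
# a two-component Gaussian mixture evaluated in closed form (illustrative
# weights, means, and standard deviations).
p_z = np.array([0.3, 0.7])                      # prior p(z)
means = np.array([-2.0, 2.0])
stds = np.array([1.0, 1.0])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_x(x):
    # Marginalize the latent exactly: the sum over z is tractable here.
    return sum(p_z[z] * gaussian_pdf(x, means[z], stds[z]) for z in range(2))

# Sanity check: p(x) is a valid density (integrates to ~1 on a wide grid).
xs = np.linspace(-10.0, 10.0, 2001)
mass = np.sum([p_x(x) for x in xs]) * (xs[1] - xs[0])
print(round(float(mass), 3))  # ~1.0
```

When the latent is continuous or the sum has too many terms, this exact marginalization is no longer available, which is what motivates the variational bound that follows.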
For the special cases when \(z\) is discrete and the sum is tractable, or \(z\) is continuous and the integral is tractable, one can simply calculate \(p(x)\) in closed form to support efficient maximum likelihood estimation on a given dataset. However, for the more general cases when the requisite sum or integral is intractable, techniques like VAEs (Kingma and Welling, 2013) are applied to optimize the evidence lower bound (ELBO) of \(p(x)\) using a variational posterior \(q(z|x)\): \[\mathcal{L}_{\text{VAE}}(p,q)=\mathbb{E}_{x\sim\mathcal{D},z\sim q(z|x)}\left[-\log p(x|z)\right]+\mathbb{E}_{x\sim\mathcal{D}}\left[D_{\text{KL}}\left(q(z|x)\|p(z)\right)\right]. \tag{9}\] As an extension of the VAE, the VQ-VAE (Van Den Oord et al., 2017) uses a codebook to discretize the continuous latent representation, yielding a more compact, discrete representation of the data.

#### 3.1.2. Autoregressive Sequence Models

Autoregressive sequence models have been popularized by transformer-based language models (Vaswani et al., 2017; Brown et al., 2020). At their core, autoregressive models factorize any joint distribution over a sequence \(x=(x_{1},\ldots,x_{L})\) in an autoregressive manner: \[p(x)=\Pi_{\ell=1}^{L}p(x_{\ell}|x_{<\ell}). \tag{10}\] Under this factorization, estimating the density \(p(x)\) reduces to learning each conditional factor \(p(x_{\ell}|x_{<\ell})\), which can be parametrized by a transformer and trained by maximum likelihood: \[\mathcal{L}_{\mathrm{LM}}(p)=\mathbb{E}_{x\sim\mathcal{D}}\left[\sum_{\ell=1}^{L}-\log p(x_{\ell}|x_{<\ell})\right]. \tag{11}\]

#### 3.1.3. Diffusion Models.
Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Kingma et al., 2021) are a class of latent variable models that factorize the data distribution \(p(x)\) as a Markov chain of Gaussian transitions from a noise distribution of the same dimension: \[p(x)=\int p(x_{K})\Pi_{k=1}^{K}p(x_{k-1}|x_{k})dx_{1:K}, \tag{12}\] where \(p(x_{K})=\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(p(x_{k-1}|x_{k})\coloneqq\mathcal{N}(\mu(x_{k},k),\sigma(x_{k},k))\). The forward diffusion process corrupts \(x\) by iteratively adding Gaussian noise with a fixed variance schedule. The reverse process then achieves data generation by approximating the noise that corrupted \(x\) during the forward process.

#### 3.1.4. Energy-Based Models.

Energy-based models (LeCun et al., 2006; Du and Mordatch, 2019) are a class of models that represent data distributions \(p(x)\) by an unnormalized distribution parameterized by a learned energy function: \[p(x)=\frac{e^{-E(x)}}{Z}, \tag{13}\] where \(E\) is the energy function and \(Z=\int e^{-E(x)}dx\) is the partition function. To sample from the underlying distribution \(p(x)\), one typically runs an MCMC procedure such as Langevin dynamics.

### Generative Models of Behavior

The generative models introduced above have mostly been applied to text or image data \(x\sim\mathcal{D}\). Decision making, on the other hand, is concerned with task-specific _interactive_ data \(\tau\sim\mathcal{D}_{\mathrm{RL}}\) that distinguishes state, action, and reward labels. We will see how different generative models can be adopted to model agent behaviors (this subsection) and environment dynamics (next subsection), as illustrated in Figure 3.

#### 3.2.1. Foundation Models as Behavioral Priors.
When the interactive data \(\mathcal{D}_{\mathrm{RL}}\) contains diverse behaviors such as "pick up objects", "move objects horizontally", or "place objects", these behaviors can be composed to complete tasks that were not present in \(\mathcal{D}_{\mathrm{RL}}\). Foundation models can be used to model such "behavioral priors" (also known as "skills" or "options"). In this approach, pretraining generally involves maximum likelihood estimation of actions conditioned on some trajectory level information. Different tractable approximations can be leveraged to optimize this underlying training objective. For instance, the VAE objective from Equation 9 can be directly instantiated, where the encoder \(q\) takes a trajectory \(\tau\) or some future goal as input and the decoder \(\pi\) produces the sequence of actions as outputs (Ajay et al., 2020; Lynch et al., 2020): \[\mathcal{L}_{\mathrm{VAE}}(\pi,q)=\mathbb{E}_{\tau\sim\mathcal{D}_{\mathrm{RL} },z\sim q(z|\tau)}\left[\sum_{t=0}^{H}-\log\pi(a_{t}|s_{t},z)\right]+\mathbb{ E}_{\tau\sim\mathcal{D}_{\mathrm{RL}}}\left[D_{\mathrm{KL}}(q(z|\tau)\|p(z|s_{0})) \right]. \tag{14}\] The posterior distribution \(q(z|\tau)\) can represent a diverse set of behavioral priors when \(\tau\) is drawn from a wide set of related tasks. Since the posterior depends on future information, the prior \(p(z|s_{0})\) is usually constrained to only depend on the past so that behaviors can be correctly sampled at test time. Similarly, the autoregressive sequence modeling objective from Equation 11 can also be instantiated to model behavioral priors (Shafiullah et al., 2022), resulting in a policy that can depend on the history of interaction \(\pi(a_{t}|s_{t},\tau_{<t})\). Such dependence is less common in Markovian environments, but has shown empirical benefits (Brohan et al., 2022). 
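The two terms of Equation 14, an action reconstruction term and a KL regularizer between the posterior \(q(z|\tau)\) and the prior \(p(z|s_{0})\), can be sketched numerically with diagonal Gaussians standing in for the learned encoder, prior, and policy; every distribution and number below is an illustrative assumption:

```python
import numpy as np

# Sketch of the two terms in Equation 14 with diagonal Gaussians standing in
# for the encoder q(z|tau), the prior p(z|s_0), and a unit-variance Gaussian
# policy pi(a|s, z) centered at z. All numbers are illustrative.
rng = np.random.default_rng(0)

def diag_gauss_kl(mu_q, var_q, mu_p, var_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ), the regularizer
    # D_KL( q(z|tau) || p(z|s_0) ).
    return 0.5 * np.sum(var_q / var_p + (mu_p - mu_q) ** 2 / var_p - 1.0
                        + np.log(var_p) - np.log(var_q))

mu_q, var_q = np.array([0.5, -0.3]), np.array([0.4, 0.9])  # "encoder" output
mu_p, var_p = np.zeros(2), np.ones(2)                      # prior p(z|s_0)

# Monte Carlo estimate of the reconstruction term -log pi(a|s, z), z ~ q.
z = mu_q + np.sqrt(var_q) * rng.standard_normal((256, 2))
expert_action = np.array([0.4, -0.2])
recon_nll = np.mean(0.5 * np.sum((expert_action - z) ** 2, axis=1)
                    + np.log(2.0 * np.pi))

vae_loss = recon_nll + diag_gauss_kl(mu_q, var_q, mu_p, var_p)
print(vae_loss > 0.0, diag_gauss_kl(mu_p, var_p, mu_p, var_p) == 0.0)
```

In an actual behavioral-prior model, `mu_q`, `var_q`, and the policy mean would be produced by trained networks rather than fixed vectors; the loss structure is unchanged.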
When the dataset consists of expert data \(\mathcal{D}_{\text{RL}}^{*}\), one can learn transformer-based BC policies by optimizing the sequence modeling objective, where an autoregressive transformer encodes the history \((\tau_{<t},s_{t})\) and decodes the next action \(a_{t}\): \[\mathcal{L}_{\text{LM}}(\pi)=\mathbb{E}_{\tau\sim\mathcal{D}_{\text{RL}}^{*}}\left[\sum_{t=0}^{H}-\log\pi(a_{t}|\tau_{<t},s_{t})\right]. \tag{15}\] An additional conditioning variable \(z\) that captures trajectory-level information such as the goal or return \(z(\tau)=R(\tau)\) has been introduced in goal- or return-conditioned supervised learning (Schmidhuber, 2019; Kumar et al., 2019; Brandfonbrener et al., 2022; Paster et al., 2022; Yang et al., 2022): \[\mathcal{L}_{\text{LM}}(\pi)=\mathbb{E}_{\tau\sim\mathcal{D}_{\text{RL}}}\left[\sum_{t=0}^{H}-\log\pi(a_{t}|\tau_{<t},s_{t},z(\tau))\right]. \tag{16}\] When behavior generation is conditioned on high returns, intuitively, desirable behavior is encouraged (Chen et al., 2021). One can also utilize a diffusion model to model the conditional distribution of behaviors (Ajay et al., 2022) by maximizing the likelihood in Equation 12: \[\mathcal{L}_{\text{Diffusion}}(\pi)=\mathbb{E}_{\tau\sim\mathcal{D}_{\text{RL}},k\sim[K]}\left[\sum_{t=0}^{H}-\log\pi(a_{t}^{k-1}|a_{t}^{k},s_{t},z(\tau))\right]. \tag{17}\] To extract desirable behavior from a diffusion model conditioned on high reward, one can sample trajectories with high likelihood by using the reward as classifier-free guidance (Ho and Salimans, 2022). Other conditional generative models that use normalizing flows (Singh et al., 2020), generative adversarial networks (Ho and Ermon, 2016), and energy-based models (Florence et al., 2022) have also been proposed for modeling behavioral priors from \(\mathcal{D}_{\text{RL}}\).
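A toy instance of the return-conditioned objective in Equation 16: with single-step trajectories and \(z(\tau)=R(\tau)\), a counting-based "policy" stands in for the learned sequence model, and conditioning on a high return recovers the rewarding action. The dataset and the state and action spaces below are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

# Return-conditioned BC (Equation 16) on a toy dataset of single-step
# trajectories (s, a, r) with z(tau) = R(tau) = r. All data is illustrative:
# from state 0, action 1 earns return 1.0 and action 0 earns return 0.0.
dataset = [(0, 1, 1.0), (0, 0, 0.0), (0, 1, 1.0), (0, 0, 0.0), (0, 1, 1.0)]

# "Training": estimate pi(a | s, z) by counting, a stand-in for maximum
# likelihood training of a sequence model.
counts = defaultdict(lambda: np.zeros(2))
for s, a, r in dataset:
    counts[(s, r)][a] += 1

def policy(s, z):
    c = counts[(s, z)]
    return c / c.sum()

# Conditioning on a high return selects the rewarding action.
print(policy(0, 1.0))  # [0. 1.]
print(policy(0, 0.0))  # [1. 0.]
```

The same conditioning mechanism is what a transformer policy learns implicitly when trained on \((z(\tau),s_{t},a_{t})\) sequences.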
Figure 3: Illustrations of how conditional generative models can model behaviors, improvements, environments, and long-term futures given a trajectory \(\tau\sim\mathcal{D}_{\text{RL}}\). Dark blue indicates transitions with higher rewards. Models of behavior (Decision Transformers (Lee et al., 2022)) and self-improvement (Algorithm Distillation (Laskin et al., 2022)) require near-expert data. Models of the world (Trajectory Transformer (Janner et al., 2021)) and long-term future (UniPi (Du et al., 2023)) generally require data with good coverage.

#### 3.2.2. Generalist Agents Trained on Massive Behavior Datasets

A key advantage of generative modeling of behaviors lies in scaling up; despite different tasks possessing different observations and rewards, there are often meaningful behaviors shared across tasks (e.g., "moving left" has similar meaning in navigation, game playing, and robot manipulation tasks). Inspired by the scaling success of transformers, generalist agents modeling sequences of diverse behaviors have been developed for simulated tasks (Shafiullah et al., 2022), over 40 Atari games (Lee et al., 2022), over 700 real-world robot tasks (Brohan et al., 2022), and over 600 distinct tasks with varying modalities, observations, and action specifications (Reed et al., 2022). This has led to generalist agents that are able to play video games, caption images, chat, and perform robot tasks, sometimes significantly better than specialist agents trained on single tasks. Such works have also demonstrated the benefit of scaling model parameters and the number of training tasks. While combining multiple task-specific datasets \(\mathcal{D}_{\text{RL}}\) into a large multi-task dataset as described above is one way to scale up behavior modeling, exploiting Internet-scale collections of text and video data \(\mathcal{D}\) is another viable approach to scaling effectively.
Internet-scale text and video data is abundant in quantity but typically has limited action annotations compared to \(\mathcal{D}_{\text{RL}}\). Nevertheless, previous work has still incorporated such datasets. For instance, Gato (Reed et al., 2022) approaches this issue with universal tokenization, so that data with and without actions can be jointly trained on using large sequence models. UniPi (Du et al., 2023) directly learns to predict robotic videos and trains a separate inverse model to infer actions from generated videos. Applying inverse dynamics models to label large video datasets (e.g., from YouTube) is also applicable to other domains such as self-driving cars (Zhang et al., 2022) and video game playing (Baker et al., 2022; Venuto et al., 2022).

#### 3.2.3. Large Scale Online Learning

As an alternative to assuming access to large-scale behavior datasets, online access to massive game simulators has enabled "large-scale" online RL models to be trained in games such as DoTA (Berner et al., 2019) and StarCraft (Vinyals et al., 2019) using policy gradient or actor-critic algorithms. Similarly, domain randomization (Tobin et al., 2017) has been proposed to leverage online access to diverse generated environments to help bridge the sim-to-real gap in robotics. These large-scale online training schemes, however, have not been able to leverage foundation models. An important direction for future work is to explore how one can utilize and learn generative models similarly in massive online settings.

#### 3.2.4. Generative Models of Exploration and Self-Improvement

Generative models of behavior can also be extended to model meta-level processes, such as exploration and self-improvement, whenever the dataset itself \(\mathcal{D}_{\text{RL}}\) embodies exploratory and self-improving behavior (e.g., the replay buffer of a policy gradient agent trained from scratch) (Laskin et al., 2022).
Algorithm Distillation (Laskin et al., 2022) exemplifies this approach: unlike other meta-RL methods, which usually train in online settings by maximizing multi-episodic value functions (Wang et al., 2016; Duan et al., 2016), algorithm distillation imitates the action sequence of a multi-episodic improvement process from \(\mathcal{D}_{\text{RL}}\) using a transformer-based sequence model, inspired by the zero-shot ability of language models, and adapts to downstream tasks purely in-context without updating any network parameters. Similar to algorithm distillation, which prompts an agent with its prior learning experience, corrective re-prompting also treats long-horizon planning as an in-context learning problem, but uses corrective error information as prompts, essentially incorporating feedback from the environment as an auxiliary input to improve the executability of a derived plan (Raman et al., 2022).

### Generative Models of the World

In addition to learning models of behaviors, generative models can also learn models of the world--i.e., the transition dynamics \(\mathcal{T}\) and the reward function \(\mathcal{R}\)--from the offline dataset \(\mathcal{D}_{\mathrm{RL}}\). Conditional generation from a world model is analogous to model-based rollouts, which can be used to improve a policy.

#### 3.3.1. One-Step Prediction of Reward and Dynamics for Model-based Planning.

One can view learning models of \(\mathcal{T}\) and \(\mathcal{R}\) as a generative modeling problem given trajectories from an offline dataset \(\tau\sim\mathcal{D}_{\mathrm{RL}}\). Since \(\mathcal{D}_{\mathrm{RL}}\) also contains actions from a behavior policy \(\pi\), the policy \(\pi\), the dynamics \(\mathcal{T}\), and the reward \(\mathcal{R}\) can be jointly modeled with a single generative procedure.
Specifically, the joint distribution of a trajectory \(p(\tau)\) can be factored autoregressively into an environment component and a policy component, \[p(\tau)=\Pi_{t=0}^{H}p(s_{t},r_{t},a_{t}|\tau_{<t})=\Pi_{t=0}^{H}\mathcal{T}(s_ {t}|\tau_{<t})\cdot\pi(a_{t}|\tau_{<t},s_{t})\cdot\mathcal{R}(r_{t}|\tau_{<t}, s_{t},a_{t}), \tag{18}\] so that maximum likelihood estimation of \(p(\tau)\) using \(\mathcal{D}_{\mathrm{RL}}\) under this factorization naturally decomposes into learning the environment dynamics \(\mathcal{T},\mathcal{R}\) and the policy \(\pi\) that produced the dataset \(\mathcal{D}_{\mathrm{RL}}\). Unlike language models where words exist in a common discrete space, here the states, actions and rewards in \(\tau\) can all be expressed in different modalities, which poses challenges to sequentially modeling \(\tau\). As a workaround, the Trajectory Transformer (Janner et al., 2021) discretizes each dimension of states, actions, and rewards in a continuous control task before applying a GPT-style autoregressive model on the discretized tokens. Discretization is more challenging in image-based domains, where learning a latent representation of an image space and latent dynamics model is more common. Here one can introduce a per-step latent variable \(z_{t}\) into the sequence modeling objective in Equation 18: \[p(\tau)=\Pi_{t=0}^{H}\int_{z_{t}}\mathcal{T}_{\mathrm{enc}}(z_{t}|\tau_{<t}) \cdot\mathcal{T}_{\mathrm{dec}}(s_{t}|\tau_{<t},z_{t})\cdot\pi(a_{t}|\tau_{<t},z_{t})\cdot\mathcal{R}(r_{t}|\tau_{<t},z_{t},a_{t})dz_{t}, \tag{19}\] where \(\mathcal{T}_{\mathrm{enc}}(z_{t}|\tau_{<t})\) encodes the history into the next step's latent state, \(\mathcal{T}_{\mathrm{dec}}(s_{t}|\tau_{<t},z_{t})\) decodes the next step's observation, and the policy \(\pi\) and reward \(\mathcal{R}\) can take latent state \(z_{t}\) as input. Along this line, both Hafner et al. (2020) and Chen et al. 
(2022) apply a sequential VAE (Zhu et al., 2020) to optimize the ELBO of Equation 19, and parametrize the latent dynamics model using an RNN-based and a transformer-based state space model, respectively. Similarly, related works (Micheli et al., 2022; Ozair et al., 2021; Seo et al., 2022) used VQ-VAEs or masked autoencoders (MAE) to map image-based observations into discrete tokens before learning a transformer or latent state space dynamics model on the discretized observations. The various ways a learned world model can be used to infer a high-quality policy have been method and task specific. For example, heuristic decoding schemes such as return-guided beam search and MCTS have been applied to policy optimization (Janner et al., 2021; Sun et al., 2022; Ozair et al., 2021). Separate actor and critic pairs have also been trained using rollouts from a latent world model (also referred to as "imagination") without requiring the generation of image-based observations (Racaniere et al., 2017; Hafner et al., 2019). A world model, when trained to predict observations and actions in the original input space, can also be used to generate additional training data for model-free RL (Sutton, 1990; Feinberg et al., 2018; Kaiser et al., 2019; Agarwal et al., 2020) under the Dyna framework (Sutton and Barto, 2018) or to generate additional input context to a policy (Du and Narasimhan, 2019).

#### 3.3.2. Planning with Generative Models of Long-term Future

Instead of autoregressively factoring \(\tau\) by time step as in Equation 18, one can also directly model the joint distribution of \(\tau\) across all time steps at once using a diffusion model (Du et al., 2019; Janner et al., 2022): \[p(\tau)=p(s_{0},a_{0},r_{0},\ldots,s_{H},a_{H},r_{H})=\int p(\tau_{K})\Pi_{k=1}^{K}p(\tau_{k-1}|\tau_{k})d\tau_{1:K}.
\tag{20}\] By learning a trajectory-level generative model, planning can be more easily integrated with dynamics modeling by sampling from the composed distribution \[\tilde{p}(\tau)\propto p(\tau)z(\tau), \tag{21}\] where \(z(\tau)\) specifies the trajectory-level properties that one wishes to control. For instance, Janner et al. (2022) use trajectory returns as \(z(\tau)\) to guide a reverse diffusion process towards sampling high-return trajectories. Ajay et al. (2022) further demonstrate that \(z(\tau)\) can represent different trajectory-level properties such as goals, skills, and dynamics constraints, where classifier-free guidance can be applied to conditionally sample trajectories that satisfy the desired properties. Going beyond low-dimensional state-action spaces, Du et al. (2023) show that diffusion models of long-term futures can also be applied to high-dimensional video data \(\tau\), using text descriptions as \(z(\tau)\), effectively improving decision making with large pretrained text-video foundation models. In addition to the benefit of flexible conditioning (e.g., on returns, goals, constraints, skills, texts), sampling from the composed distribution in Equation 21 holds the promise of accurate long-horizon planning, since sampling an entire trajectory does not suffer from the compounding error of rolling out single-step dynamics. Beyond diffusion models, EBMs can also be used to model the joint trajectory distribution \(p(\tau)\), including conditioning on latent trajectory properties \(z(\tau)\), which might provide a natural approach to satisfying multiple desirable properties, such as high return and safety (Du et al., 2020; Liu et al., 2022).

## 4. Foundation Models as Representation Learners

In this section, we discuss foundation models for decision making that leverage representation learning for knowledge compression.
On one hand, foundation models can extract representations from broad image and text data, \(\mathcal{D}\), resulting in a plug-and-play style of knowledge transfer to vision- and language-based decision making tasks. On the other hand, foundation models can also be used to support task-specific representation learning via task-specific objectives and interactive data, \(\mathcal{D}_{\text{RL}}\).

### Plug-and-Play

Off-the-shelf foundation models pretrained on Internet-scale text and image data can be used as preprocessors or initializers for various perceptual components of decision making agents. For instance, when an agent's perception is based on images, contrastive learning (Chen et al., 2020) and masked autoencoding (He et al., 2022) can be directly applied to the agent's image observations, providing state representations that can be further finetuned by BC or RL objectives (Sermanet et al., 2018; Kostrikov et al., 2020; Laskin et al., 2020; Xiao et al., 2022). When agent actions can be characterized by natural language (e.g., "move to the left then pick up the cup"), pretrained language models can be used to generate higher-level plans for longer-horizon tasks, with the hope that language-based descriptions of actions generalize better than low-level motor controls (Huang et al., 2022; Ahn et al., 2022; Wang et al., 2023; Driess et al., 2023). When agent observations consist of both images and text descriptions, vision-language captioning models can further enrich agent observations with language descriptions (Tam et al., 2022; Du et al., 2023; Driess et al., 2023). Vision-language models such as CLIP and PaLI (Chen et al., 2022) are further able to provide task feedback and reward information by aligning image and language modalities in the agent's observation and goal space (Huang et al., 2022; Mahmoudieh et al., 2022; Fan et al., 2022).
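The plug-and-play recipe above can be caricatured in a few lines: a frozen "pretrained" encoder (here a fixed random projection standing in for a real vision backbone) preprocesses observations, and only a lightweight linear head is fit with a BC-style regression. All shapes and data below are illustrative assumptions:

```python
import numpy as np

# A frozen "pretrained" encoder (a fixed random projection standing in for a
# real vision backbone) preprocesses observations; only a linear head is fit
# on top via least-squares behavioral cloning. Shapes/data are illustrative.
rng = np.random.default_rng(0)
obs_dim, feat_dim, n = 32, 8, 500

W_frozen = rng.standard_normal((obs_dim, feat_dim))  # pretrained, never updated
encode = lambda o: np.tanh(o @ W_frozen)

obs = rng.standard_normal((n, obs_dim))
w_true = rng.standard_normal(feat_dim)               # hidden "expert" mapping
expert_actions = encode(obs) @ w_true + 0.05 * rng.standard_normal(n)

feats = encode(obs)                                  # plug-and-play features
head, *_ = np.linalg.lstsq(feats, expert_actions, rcond=None)
mse = np.mean((feats @ head - expert_actions) ** 2)
print(mse < 0.01)  # the cheap head suffices on top of frozen features: True
```

The design point is that all gradient-based adaptation is confined to the small head; in practice the head may also be finetuned jointly with the encoder when enough task data is available.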
Even in the case where an agent's states, actions, and rewards do not consist of images or text, pretrained language models, perhaps surprisingly, have still been found useful as policy initializers for offline RL (Reid et al., 2022), online RL (Li et al., 2022), and structured prediction tasks (Lu et al., 2021). Plug-and-play foundation models are generally more natural when the decision making task concerns real-world images or texts. Plug-and-play is less applicable to decision making tasks with idiosyncratic, domain-specific state-action spaces, which we will discuss in Section 4.3. We will further discuss the challenges of bridging general image and text data with task-specific decision making data in Section 6.1.

### Vision and Language as Task Specifiers

An important special case of plug-and-play foundation models is to use text commands or visual inputs as task specifiers to learn more robust, general, and multi-task policies (Ahn et al., 2022; Huang et al., 2022; Brohan et al., 2022; Liu et al., 2022). For instance, a text description of "close the cabinet door" or a goal image with the cabinet door closed can serve as policy input to augment the current robot state. There are a few motivations behind this approach. First, using language and a goal image to specify a task provides richer information about the intended task than merely providing a scalar reward. Second, pretrained language models (equipped with prompting methods such as chain-of-thought) can decompose high-level tasks into lower-level instructions that are easier to execute (Ahn et al., 2022; Huang et al., 2022; Jiang et al., 2022; Team et al., 2021).
Furthermore, pretrained vision-language models can enable language-conditioned agents to generalize to new instructions, scenes, and objects in navigation and manipulation tasks (Lynch and Sermanet, 2020; Hill et al., 2020; Hao et al., 2020; Majumdar et al., 2020; Nair et al., 2022; Jang et al., 2022; Ahn et al., 2022; Huang et al., 2022; Khandelwal et al., 2022; Shridhar et al., 2022; Guhur et al., 2022; Shah et al., 2022), which has been a key challenge in robotics prior to their introduction (Zhu et al., 2018). Using vision and language task specifiers to prompt for desirable agent behaviors requires additional data such as text descriptions or goal images of a given task (see challenges in Section 6.1). Moreover, prompting for desirable outcomes from a large language model has significant potential but is also an open problem in itself (Liu et al., 2023), whose complexity is exacerbated in decision making scenarios with external entities and world dynamics (see Section 6.4).

Figure 4. Illustrations of different representation learning objectives specifically devised for sequential decision making, such as model-based representations (Nachum and Yang, 2021), temporal contrastive learning (Oord et al., 2018), masked autoencoders (Devlin et al., 2018), and offline RL (Kumar et al., 2022), on a trajectory \(\tau\sim\mathcal{D}_{\text{RL}}\).

### Learning Representations for Sequential Decision Making

Unlike vision-language foundation models that can learn from a broad data collection \(\mathcal{D}\) but lack the notion of decision making, foundation model techniques and architectures (as opposed to the pretrained models themselves) can be used to optimize objectives uniquely devised for sequential decision making on the basis of task-specific interactive data \(\mathcal{D}_{\text{RL}}\). Figure 4 visually illustrates these representation learning objectives.
**Model-based representations.** Traditionally, representation learning for sequential decision making has been framed as learning a latent state or action space of an environment by "clustering" states and actions that yield similar transition dynamics [5, 12, 13, 14, 15, 16]. Similar to how foundation models can serve as generative models of world dynamics by maximizing \(p(\tau)\) in Equation 18, foundation models can also serve as representation learners of world dynamics under the following objective: \[p(\tau_{s,r})=\Pi_{t=0}^{H}p(s_{t+1},r_{t}|\tau_{<t},s_{t},a_{t})=\Pi_{t=0}^{H} \mathcal{T}(s_{t+1}|\tau_{<t},\phi(s_{t}),a_{t})\cdot\mathcal{R}(r_{t}|\tau_{< t},\phi(s_{t}),a_{t}). \tag{22}\] Using this factorization for maximum likelihood estimation of \(p(\tau_{s,r})\) using \(\mathcal{D}_{\text{RL}}\) naturally leads to learning state representations \(\phi(s)\) that "cluster" states with similar rewards and next state probabilities. One could also choose to maximize the likelihood of the next state _representations_ as opposed to the next raw state, i.e., \(\mathcal{T}(\phi(s_{t+1})|\tau_{<t},\phi(s_{t}),a_{t})\) resulting in a latent dynamics model [14]. Alternative learning objectives for \(\phi(s)\) can be derived depending on how \(\mathcal{T}(s_{t+1}|\tau_{<t},\phi(s_{t}),a_{t})\) is defined. For instance, \(\mathcal{T}\) may be defined as an energy-based model: \[\mathcal{T}(s_{t+1}|\tau_{<t},\phi(s_{t}),a_{t})\propto\exp\{\phi(s_{t+1})^{ \top}f(\phi(s_{t}),a_{t},\tau_{<t})\}, \tag{23}\] where \(f\) is a trainable function that maps \(\phi(s_{t}),a_{t},\tau_{<t}\) to the same embedding space as \(\phi\). While Equation 22 learns state representations by modeling the forward dynamics, one can also learn state representations based on an _inverse_ dynamics model [14, 15] by predicting \(a_{t}\) from \(\tau_{<t},s_{t},s_{t+1}\), thereby maximizing \[p(\tau_{a})=\Pi_{t=0}^{H}p(a_{t}|\tau_{<t},\phi(s_{t}),\phi(s_{t+1})). 
\tag{24}\] In addition to forward and inverse dynamics based representations, it is also possible to learn state representations derived from predicted value functions [13], curiosity metrics [13], or other MDP-based similarity metrics such as bisimulation properties deduced from Bellman backups [17, 14, 15]. The above representation learning objectives have mostly been considered under the Markovian setting, hence the dependence on \(\tau_{<t}\) is often dropped. Though the Markovian assumption makes large sequence models seem less relevant, these representation learning objectives benefit from sequence modeling architectures in image-based domains that are generally non-Markovian. **Temporal contrastive learning.** The model-based representation objectives above require strictly interleaved state-action-reward tuples in the training data \(\mathcal{D}_{\text{RL}}\), which can preclude more flexible representation learning techniques that consider broader data sources, \(\mathcal{D}\), such as YouTube videos (which can be thought of as state-only trajectories \(\tau_{s}\)). Temporal contrastive learning such as CPC [13], on the other hand, can model more flexible sequence-level representations, and has been applied to playing games by watching YouTube videos [1]. Specifically, in temporal contrastive learning, observations that are closer temporally (e.g., observations that belong to the same trajectory) are encouraged to have similar representations. Given a sub-trajectory \(\tau_{t:t+h}\), one can learn \(\phi(s)\) by minimizing a contrastive loss between \(\phi(s_{t})\) and \(\phi(s_{t+i})\): \[-\phi(s_{t+i})^{\top}W_{i}\phi(s_{t})+\log\mathbb{E}_{\rho}\left[\exp\{\phi( \tilde{s})^{\top}W_{i}\phi(s_{t})\}\right]. \tag{25}\] where \(i=1,\ldots,h\), \(W_{i}\) is a learnable weight matrix, and \(\rho\) is some non-trainable prior distribution. 
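The contrastive objective in Equation 25 can be sketched in a few lines of NumPy, where the expectation over the prior \(\rho\) is approximated with sampled negatives. All names here are illustrative, not taken from any specific implementation:

```python
import numpy as np

def temporal_contrastive_loss(phi_t, phi_tpi, W_i, phi_neg):
    """One term of Equation 25 for temporal offset i.

    phi_t:   (d,) representation of s_t
    phi_tpi: (d,) representation of s_{t+i} (the positive pair)
    W_i:     (d, d) learnable weight matrix for offset i
    phi_neg: (n, d) representations of negatives sampled from the prior rho
    """
    pos = phi_tpi @ W_i @ phi_t            # positive logit phi(s_{t+i})^T W_i phi(s_t)
    neg = phi_neg @ W_i @ phi_t            # (n,) logits for the sampled negatives
    # -pos + log E_rho[exp(...)], with the expectation estimated by the n samples
    return -pos + np.log(np.mean(np.exp(neg)))
```

In practice the negatives are often other states from the same minibatch, and the loss is summed over offsets \(i=1,\ldots,h\).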
Note that the temporal contrastive learning in Equation 25 bears resemblance to learning an energy-based dynamics model in Equation 23, as established in prior work (Nachum and Yang, 2021; Nguyen et al., 2021). **Masked autoencoders.** When a trajectory \(\tau=(s_{0},a_{0},r_{0},...,s_{H},a_{H},r_{H})\) from \(\mathcal{D}_{\text{RL}}\) is treated as a flattened sequence, BERT-style denoising autoencoding objectives can be applied to the sequence to learn representations of states, actions, rewards, and dynamics through specific choices of masking patterns (Yang and Nachum, 2021; Liu et al., 2022; Carroll et al., 2022; Seo et al., 2022). These methods learn representations \(\phi(s)\) by first randomly masking a subset of tokens in \(\tau\) to obtain \(\hat{\tau}\), then passing the masked sequence \(\hat{\tau}\) to a transformer, and finally reconstructing the masked portions \(\tilde{\tau}\) of the original input from the transformer output \(F(\hat{\tau})\). The training objective, for instance, can be characterized as maximizing \[p(\tilde{\tau}|\hat{\tau})=\Pi_{t=0}^{H}m_{t}p(\tau_{t}|\hat{\tau})=\Pi_{t=0}^{H}m_{t}\frac{\exp\{F(\hat{\tau})_{t}^{T}\phi(s_{t})\}}{\sum_{s}\exp\{F(\hat{\tau})_{t}^{T}\phi(s)\}}, \tag{26}\] where for each masked input state \(s_{t}\), a contrastive loss between its representation \(\phi(s_{t})\) and the transformer output at its sequential position \(F(\hat{\tau})_{t}\) is applied. Unlike model-based representation learning approaches that explicitly model state transition probabilities, masked autoencoders can learn representations from a broader dataset that potentially has missing actions and rewards, while still being able to incorporate dynamics-based information in the learned representations.
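The masked reconstruction term in Equation 26 amounts to a softmax over candidate state embeddings at each masked position. A minimal NumPy sketch (with illustrative names; the candidate set \(S\) is assumed to be finite here for clarity):

```python
import numpy as np

def masked_state_loglik(F_out, Phi, targets, mask):
    """Log-likelihood of Equation 26 restricted to masked positions.

    F_out:   (H, d) transformer outputs F(tau_hat)_t at each position
    Phi:     (S, d) embeddings phi(s) of all candidate states
    targets: (H,) index of the true state s_t at each position
    mask:    (H,) boolean, True where the input token was masked (m_t = 1)
    """
    logits = F_out @ Phi.T                                  # (H, S) scores F(tau_hat)_t^T phi(s)
    logZ = np.log(np.exp(logits).sum(axis=1))               # per-position partition function
    logp = logits[np.arange(len(targets)), targets] - logZ  # log-softmax at the true state
    return (logp * mask).sum()                              # only masked positions contribute
```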
**Offline RL pretraining.** When the downstream decision making tasks are to be trained with RL objectives, it might seem natural to apply similar RL objectives during pretraining when acquiring value-based representations (Mazoure et al., 2022; Ball et al., 2023). At a high level, value-based pretraining encompasses any offline RL algorithms that have been pretrained on logged experience from one or more tasks relevant to the downstream interactive task of interest. Value-based pretraining has exhibited scaling capability in multi-task settings where state action spaces are similar (e.g., all of Atari games (Kumar et al., 2022)). #### 4.3.1. Post Representation Learning: BC and RL Finetuning Unlike generative foundation models that can directly produce action or next state samples, as in Section 3, foundation models as representation learners are only directed to extract representations of states, actions, and dynamics; hence they require additional finetuning or model-based policy optimization to achieve strong decision making performance. On the theoretical side, various works have focused on developing representation learning objectives that ensure downstream BC or policy/value-based RL finetuning using the pretrained representations are provably efficient (Lin et al., 2020; Nachum and Yang, 2021; Zhang et al., 2022; Pacchiano et al., 2022; Ren et al., 2022). These analyses are generally based on properties of linear MDPs. For instance, one such assumption states that the state-action value function \(Q^{\pi}(s,a)\) can be represented as a linear combination of features \(\phi(s,a)\) under the linear MDP factorization \(\mathcal{T}(s^{\prime}|s,a)=\langle\phi(s,a),\theta(s^{\prime})\rangle\) and \(\mathcal{R}(s,a)=\langle\phi(s,a),\theta_{r}\rangle\), which ensures that standard policy and value based RL training can take place in the more compact representation space \(\phi(s,a)\) as opposed to the original state-action space. 
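Under the linear-MDP assumption above, value learning reduces to regression in the pretrained feature space \(\phi(s,a)\). A minimal ridge-regression sketch of one such fit (illustrative names, not the cited papers' implementation):

```python
import numpy as np

def fit_linear_q(features, targets, reg=1e-6):
    """Least-squares fit of Q(s, a) ~= phi(s, a)^T w in a pretrained feature space.

    features: (n, d) pretrained representations phi(s_i, a_i)
    targets:  (n,) regression targets, e.g. Bellman backups r + gamma * max_a' Q(s', a')
    """
    d = features.shape[1]
    A = features.T @ features + reg * np.eye(d)  # small ridge term for numerical stability
    return np.linalg.solve(A, features.T @ targets)  # weight vector w
```

Iterating this fit with re-computed Bellman targets would give a fitted Q-iteration in the compact representation space rather than the raw state-action space.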
Beyond providing compact state action spaces for policy and value-based model-free RL methods, pretrained representations can also simplify model learning and policy rollouts of model-based policy optimization (Silver et al., 2014; Oh et al., 2017; Hafner et al., 2019) as described in Section 3.3. While representation learning objectives specifically devised for sequential decision making have theoretical benefits, it is less clear how these objectives can effectively incorporate broader and multi-task data when the underlying dynamics differ from that of the target task of interest. The recurring challenge of bridging learning from broad data \(\mathcal{D}\) and task-specific data \(\mathcal{D}_{\text{RL}}\) will be further discussed in Section 6.1. ## 5. Large language models as agents and environments We have seen that foundation models can characterize different components of a decision making process (\(\mathcal{M}\)), such as agent behaviors (\(A\)), world dynamics (\(\mathcal{T}\)), task specifiers (\(\mathcal{R}\)), and state (\(S\)) and action representations. In this section, we further consider a special case where pretrained large language models can serve as agents or environments. Treating language models as agents, on one hand, enables learning from environment feedback produced by humans, tools, or the real world, and on the other hand enables new applications such as information retrieval and web navigation to be considered under a sequential decision making framework. Language models can also be thought of as computational environments that take text as input and produce text as output, effectively supporting interactions with external prompts. 
### Interacting with Humans **Dialogue as an MDP.** A piece of dialogue can be viewed as an alternating interaction between a dialogue agent \(\pi\) and a human environment \(\mathcal{M}=\mathcal{E}\), where a conversation \(\tau_{<t}=\{e_{0},a_{1},e_{1},...,a_{t}\}\) consists of sentences \(a_{i}\) and \(e_{i}\) produced by \(\pi\) and \(\mathcal{E}\) respectively. On the \(t\)-th turn, a state \(s_{t}\in S\) captures the conversation history \(s_{t}=\{\tau_{<t},e_{t}\}\), an action \(a_{t}\in A\) is an agent's response given this context, a next state \(s_{t+1}\in S\) concatenates \(s_{t}\) with \(a_{t}\) and \(e_{t+1}\), and a reward \(r_{t}=\mathcal{R}(s_{t},a_{t})\) is produced. An agent \(\pi\) aims to maximize \(\mathbb{E}_{e_{0}\sim\mu,\pi,\tau}[\sum_{t=0}^{H}\gamma^{t}\mathcal{R}(s_{t},a_{t})]\). **Optimizing dialogue agents.** The application of large language models to dialogue generation is a natural one, as both the broad pretraining data \(\mathcal{D}\) and the task-specific dialogue data \(\mathcal{D}_{\text{RL}}\) are of the same text modality, which allows for task-specific finetuning using the same self-supervised loss as pretraining (Adiwardana et al., 2020; Roller et al., 2021; Nakano et al., 2021; Thoppilan et al., 2022). Such an approach has achieved impressive performance as assessed by humans, under metrics including safety, sensibleness, interestingness, truthfulness, and helpfulness (Thoppilan et al., 2022; Bai et al., 2022). Although human feedback was initially used to evaluate dialogue systems (Jiang et al., 2021), it was soon incorporated as a reward signal for optimizing dialogue agents under the _reinforcement learning with human feedback_ (RLHF) framework (Ouyang et al., 2022; OpenAI, 2022; Bai et al., 2022, _inter alia_).
In practice, RLHF involves several stages: first, a pretrained language model is finetuned on dialogue data to provide an initial policy \(\pi\); second, output from this model is ranked by human raters, which is then used to train a preference (reward) model \(\mathcal{R}\); finally, the language model is finetuned using policy gradient in Equation 4 to maximize the reward given by the preference model. Other RL objectives such as Q-learning (Equation 5) and actor-critic (Equation 6) have also been used to enable dialogue agents to perform specific tasks, such as booking flights and selling items on Craigslist (Jaques et al., 2017; Verma et al., 2022; Snell et al., 2022; Jang et al., 2022; Snell et al., 2022). **Limitations of dialogue agents.** While using human feedback is a natural way to turn broad data \(\mathcal{D}\) into task-specific data \(\mathcal{D}_{\text{RL}}\), solely relying on human feedback to finetune a language model agent has a number of limitations. For instance, language models have been criticized for failing to access up-to-date information (Komeili et al., 2021), hallucinating facts (Maynez et al., 2020; Ji et al., 2022), and struggling to perform complex reasoning and mathematical calculations (Patel et al., 2021). Such failure modes are unsurprising if these desired properties were never a part of the feedback the language model received. While one approach to mitigate such failure modes is to collect human feedback on each of the desired properties, leveraging tools and external entities that can automatically provide feedback is likely to be a more scalable and reliable approach. ### Interacting with Tools Language model agents that generate API calls (to invoke external tools and receive responses as feedback to support subsequent interaction) can be formulated as a sequential decision making problem analogous to the dialogue formulation in the previous section.
Several tools such as search engines (Komeili et al., 2021; Thoppilan et al., 2022; Lazaridou et al., 2022; Shuster et al., 2022; Yao et al., 2022), calculators (Cobbe et al., 2021; Thoppilan et al., 2022), translators (Thoppilan et al., 2022), MuJoCo simulators (Liu et al., 2022), scratch pads (Nye et al., 2021), computer memory (Schuurmans, 2023), and program interpreters (Gao et al., 2022) have been used to augment language models in a supervised finetuning or prompting setting, where responses from tools are used as additional inputs to the language model. **Limitations of tool use agents.** Unlike dialogue systems, where the agent and environment take turns, tool-using agents need to additionally decide when to call external tools, which tools to use, and how to use these tools (e.g., reformulating the query if results are not helpful), all of which pose additional challenges. Consequently, the supervised finetuning of tool-use agents requires significant human supervision through API call annotations. While prompting-based tool-use requires fewer examples, the specific prompts typically need to be hand-crafted for each tool (Schick et al., 2023). Moreover, language models are known to be sensitive to the prompt formats in both the zero and few-shot settings (Jiang et al., 2020; Schick and Schutze, 2021). As a result, the communication between language models and external tools typically needs to be cleaned-up by a rule-based parser, which further complicates the prompting setup. Recently, Parisi et al. (2022) and Schick et al. (2023) have made progress on self-supervised learning of tool use with language models, training the language model to call an external tool only if this leads to an improved response over the outcome predicted by the language model alone. Nevertheless, none of the existing work considers tool use in an interactive setting where an agent can _iterate_ on its behavior according to tool feedback to improve its tool-use ability.
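The interactive tool-use setting just described can be sketched as a minimal agent-environment loop, where the agent alternates between querying a tool and deciding whether to answer or reformulate. Both `policy` and `tool` are hypothetical callables, not any specific library API:

```python
def tool_use_episode(policy, tool, query, max_turns=3):
    """Sketch of tool use as sequential decision making.

    The agent observes tool feedback and either commits to an answer
    (action prefixed with "ANSWER:") or issues a refined query.
    """
    history = [query]
    for _ in range(max_turns):
        response = tool(history[-1])   # environment step: tool feedback
        history.append(response)
        action = policy(history)       # agent step: answer or reformulate
        history.append(action)
        if action.startswith("ANSWER:"):
            return action              # agent decides to stop
    return history[-1]
```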
**Tools as interactive environments.** It is challenging to scale supervised finetuning and prompting to a large number of tools with different uses and tools that return large amounts of feedback (e.g., hundreds of search results). One sensible way of tackling this challenge is to treat tools like web browsers as interactive environments, from which experience can be sampled by executing search queries (Nakano et al., 2021; Gur et al., 2022), and optimizing such queries via RL techniques such as policy gradient. Treating tools as interactive environments enables methods that require massive and efficient online simulator access (e.g., Monte Carlo Tree Search for AlphaGo) to be applied to a broader set of real-world problems, such as web navigation and information retrieval. Additionally, situating language models in true knowledge obtained from the environment better grounds the model, avoiding the Dichotomy of Control problem (e.g., sequence models generating next states without respecting environment transitions) (Yang et al., 2022). ### Language Models as Environments **Prompting as an MDP.** Iterative prompting can be characterized as an MDP that captures the interaction between a prompt provider \(\pi\) and a language model environment \(\mathcal{E}\), where a prompt history \(\tau_{<t}=\{e_{0},a_{1},e_{1},...,a_{t}\}\) consists of prompts \(a_{i}\) and language model outputs \(e_{i}\) produced by \(\pi\) and \(\mathcal{E}\) respectively. Here, \(e_{0}\) is the initial context to the language model. In the \(t\)-th turn, a state \(s_{t}\in S\) captures the prompting history and the \(t\)-th language model response \(s_{t}=\{\tau_{<t},e_{t}\}\), an action \(a_{t}\in A\) is given by the prompt provider, a next state \(s_{t+1}\in S\) is produced by concatenating \(s_{t}\) with \(a_{t}\) and the next response of the language model \(e_{t+1}\), and a reward \(r_{t}=\mathcal{R}(s_{t},a_{t})\) is emitted.
An agent \(\pi\) aims to maximize \(\mathbb{E}_{e_{0}\sim\mu,\pi}[\sum_{t=0}^{H}\gamma^{t}\mathcal{R}(s_{t},a_{t})]\). In language model reasoning, for instance, \(\mathcal{R}(s_{t},a_{t})=1\) if the language model's output successfully reaches a goal answer \(s_{t}\) (i.e., correct reasoning), and \(\mathcal{R}(s_{t},a_{t})=0\) otherwise. Under this formulation, various schemes for language model prompting can be characterized by high-level actions that map input strings to desired output strings using the language model. For instance, such high-level actions include DECOMPOSE (Press et al., 2022), RANK (Kumar and Talukdar, 2021), DENOISE (Shi et al., 2023), and PARAPHRASE (Jiang et al., 2021). These high-level actions can also be recursively composed to achieve more sophisticated iterative prompting schemes (Zhou et al., 2022). Other prompting schemes such as SUMMARIZE, PRUNE, SEARCH can be considered for handling challenges such as overcoming long context lengths. Given that language models with auxiliary memory have been shown to emulate universal Turing machines (Schuurmans, 2023), language models could ultimately serve as "computers" that also operate on human language with prompting as a flexible new form of programming language. ## 6. Open Problems, Challenges, and Opportunities ### How to Leverage or Collect Datasets One key challenge in applying foundation models to decision making lies in the dataset gap: the broad datasets from vision and language \(\mathcal{D}\) and the task specific interactive datasets \(\mathcal{D}_{\text{RL}}\) can be of distinct modalities and structures. For instance, when \(\mathcal{D}\) consists of videos, it generally does not contain explicit action labels indicating the cause-effect relationship between different frames, nor does it contain explicit reward labels indicating which videos are better than others, whereas actions and rewards are key components of \(\mathcal{D}_{\text{RL}}\).
Despite this gap, broad video and text data can be made more task specific through post-processing (\(\mathcal{D}\rightarrow\mathcal{D}_{\text{RL}}\)), leveraging hindsight relabeling of actions and rewards (e.g., using human feedback). Meanwhile, decision making datasets can be made more broad and general (\(\mathcal{D}_{\text{RL}}\rightarrow\mathcal{D}\)) by combining a wide range of task-specific datasets (e.g., Gato). Below we provide a list of examples of \(\mathcal{D}\) and \(\mathcal{D}_{\text{RL}}\) that can be used for research in foundation models for decision making, and propose additional approaches for bridging the gap between \(\mathcal{D}\) and \(\mathcal{D}_{\text{RL}}\). Existing vision and language datasets (\(\mathcal{D}\)). Vision and language datasets can be useful for decision making if they contain multiple modalities (e.g., aligned image and text pairs), (implicit) actions, movements, instructions, and notions of tasks. For instance: * LAION-5B (Schuhmann et al., 2022) contains 5.85 billion CLIP-filtered text-image pairs. * Egocentric 4D Perception (EGO4D) (Grauman et al., 2022) contains over 30k hours of time-aligned video and inertial measurement unit (IMU) data of people's activities, such as cooking, eating, and working at a computer, in 4D (3D spatial and time). * Something-Something V2 Dataset (Goyal et al., 2017) contains 220k short videos of people performing various tasks with everyday objects, such as putting on a hat and opening a bottle. These videos are annotated with action labels at the level of verb and noun phrases. * HowTo100M (Miech et al., 2019) contains over 100 million video clips and descriptive captions, covering topics such as cooking, home improvement, and beauty. * BigBench (Srivastava et al., 2022) is a dataset consisting of NLP tasks such as question answering, summarization, and conversation modeling. It also contains text-based games such as text navigation, Sudoku, and Taboo.
Existing decision making datasets (\(\mathcal{D}_{\text{RL}}\)).Foundation models are currently relevant to decision making datasets that are larger-scale, multi-task, multi-modal, real-world based, and video or text based. For example: * BabyAI (Chevalier-Boisvert et al., 2018) contains data in text-based games that require an agent to navigate in a 2D gridworld virtual environment and perform a variety of tasks. * VirtualHome (Puig et al., 2018) contains over 15k simulated images and videos of indoor scenes, along with detailed information of the scenes and objects such as object shape, size, and material properties. * RoboNet (Dasari et al., 2019) contains over 100k video clips of 7 robots over 100 camera viewpoints performing a variety of tasks in different environments. * RL Unplugged (Gulcehre et al., 2020) is an offline RL dataset consisting of simulated locomotion, manipulation, and Atari games. * Bridge Data (Ebert et al., 2021) contains 7,200 text-video demonstrations of a 6-dof WidowX250s robot arm performing 71 tasks across 10 kitchen-themed environments. * MineDojo (Fan et al., 2022) contains 640k text-video pairs (16s in length), 7k Wiki pages, and 340k Reddit posts on Minecraft. * RT-1 (Brohan et al., 2022) Robotics Transformer for Real-World Control at Scale (to be released). * CACTI (Mandi et al., 2022): A Framework for Scalable Multi-Task Multi-Scene Visual Imitation Learning (to be released). * VIMA (Jiang et al., 2022) contains 650K successful trajectories of 17 simulated robotic manipulation tasks with interleaved language and image/video frames. **Bridging \(\mathcal{D}\) and \(\mathcal{D}_{\text{RL}}\).** To enable better datasets tailored for decision making, one can either increase the scale of \(\mathcal{D}_{\text{RL}}\) by large-scale logging and merging task-specific sets of interactive data, or by relabeling \(\mathcal{D}\) with action and reward information. 
One could also consider augmenting \(\mathcal{D}_{\text{RL}}\) with meta data, such as informational and instructional texts and videos. * Large-scale logging of interactions. Since many automatable tasks are currently conducted by humans (driving, navigating the web, writing code), it is possible to collect large amounts of data for sequential decision making by logging human behaviors. Similar to logged human conversations that are used to train dialogue agents, one can log "actions" such as keystrokes and mouse movements for training web navigating agents. * Hindsight relabelling of existing data. Since many videos are already available on YouTube, it is possible to relabel the videos in hindsight with task descriptions and action information similar to Behbahani et al. (2019); Shaw et al. (2022). * Incorporating descriptions, instructions, and other task information. Since training a DQN Atari agent from scratch requires 7 GPU days, it is natural to consider whether information about an Atari game on the Internet (e.g., the Gameplay section of a game's Wikipedia page) could improve an agent's learning speed and sample efficiency. ### How to Structure Environments and Tasks Foundation models in vision and language can often solve a diverse set of tasks and generalize to new tasks in a few-shot or zero-shot manner (Radford et al., 2021; Alayrac et al., 2022; Brown et al., 2020; Chowdhery et al., 2022; Hoffmann et al., 2022). Unlike vision and language where images or texts can serve as a universal task interface, decision making faces environment diversity where different environments operate under distinct state action spaces (e.g., the joint space and continuous controls in MuJoCo are fundamentally different from the image space and discrete actions in Atari), thereby preventing knowledge sharing and generalization. 
Below are some recent approaches to structuring environments and tasks so that foundation model architectures (e.g., Transformers) and large pretrained models (e.g., video diffusion) can be applied to decision making. * **Universal encoding.** Similar to Reed et al. (2022) and Janner et al. (2021), all states, actions, and rewards across different environments and tasks can be encoded into universal tokens in a sequence modeling framework. However, such tokenization might not be able to preserve the rich knowledge and generalization abilities of pretrained vision and language models. * **Text as environment.** Alternatively, one can convert environments with different state action spaces into text descriptions and use text as a universal interface to learn generalizable policies. For instance, when an observation is an image, one may use a caption model to convert the observation to text, or directly use ASCII characters to represent the observation as text. Text-as-environment and LM-as-policy have been evaluated on a variety of simple interactive games such as Spelling Bee, Sudoku, Chess, and Taboo (Srivastava et al., 2022), though there is still a substantial gap between large language models and state-of-the-art task-specific game-solving systems (e.g., AlphaGo) in these tasks. Text as environment also seems unnatural in visual perception based applications such as self-driving. Instead of using text as states and actions, one can also use text descriptions to specify tasks (rewards) (Ahn et al., 2022; Huang et al., 2022; Brohan et al., 2022; Du et al., 2023), avoiding the difficulties around reward shaping. Using text as a task specifier requires additional data to be collected, and still faces the challenge of incongruent state action spaces across tasks. * **Video as policy and world model.** Finally, one can use image frames as a universal interface to represent state action spaces, and use videos to represent policies (Du et al., 2023). 
This allows policy learning to leverage web-scale pretrained text-to-video models. However, the mapping from videos to joint actions of individual agents still requires further training. This approach is further complicated by the computational difficulty of effective video generative modeling. ### Improving Foundation Models #### Long-context and External Memory Effective decision making often requires long context of the prior history of observations and actions. In contrast, existing approaches typically rely on transformers that have a bounded context length. To emulate general-purpose computations and decision making, properly incorporating interactions with external memory is important. One approach is to leverage prompting of intermediate computations (Schuurmans, 2023; Giannou et al., 2023) to extend computational context, but this approach is difficult to implement in practice due to the sensitivity of language models on prompt selection and ways of parsing the output. Another interesting direction for future exploration is to incorporate retrieval of past observations to enable effective decision making (Borgeaud et al., 2021). #### Combining multiple foundation models. Different foundation models capture different data modalities, such as visual, textual, and cross-modal representations of data. To effectively execute decision making across different environments, it is desirable to jointly leverage information across different models. One approach to compose models across different modalities is to graft them (Alayrac et al., 2022) on top of a single large language model. Alternatively, language can be used as a ubiquitous interface in which separate foundation models can communicate (Zeng et al., 2022). Different foundation models can further communicate through iterative optimization (Li et al., 2022). 
A limitation of existing works is that they either require finetuning (Alayrac et al., 2022) or defined interfaces within which models can communicate (Zeng et al., 2022; Li et al., 2022), which prevents novel combinations of foundation models from being easily composed at test-time in a free-form manner. #### Grounding foundation models in the world. Foundation models are typically trained on Internet-scale data without knowledge of the physical world. To effectively execute actions produced by foundation models in the real world, it is important to ground these models in both the underlying geometry and physics of the world. One existing approach uses intermediate outputs from simulators as context for action generation (Liu et al., 2022). Alternatively, foundation model outputs could be scored and optimized using feedback from simulators (Li et al., 2022). Existing works assume access to a simulator of the operating environment, which is not available in the physical world. Constructing systems that more accurately ground predictions in the physical world is therefore an interesting area for future research. ### Improving Decision Making **How to extract desirable behavior.** One key aspect of foundation models for decision making lies in effectively adapting task-agnostic models into task-specific agents. Various approaches can be seen as ways to "control" foundation models to produce desirable behaviors for specific tasks. For instance, large-pretrained language models can be specialized to output desired sentences through instruction finetuning (Wei et al., 2021) or few-shot prompting (Brown et al., 2020). 
For conditional generative modeling of behavior, language goals (Du et al., 2023), image goals (Brohan et al., 2022), returns (Lee et al., 2022), environment constraints (Ajay et al., 2022), and expert demonstrations (Reed et al., 2022) have all been explored as a conditioning factor for finetuning or prompting schemes, so that the models can be "controlled". Aside from goal or instruction conditioned finetuning or prompting, two types of "iterative" approaches have also been applied to elicit expert behavior. The first approach iterates through a set of chain-of-thought reasoning or computation steps (Nye et al., 2021; Wei et al., 2022; Yang et al., 2022), with the hope that a sequence model supervised to emit similar chain-of-thought steps will achieve better generalization. The second approach iterates through a set of improvement steps from less to more desirable behaviors, with the hope that a sequence model supervised on the improvement sequence can continue to regress on the improvement trend (Laskin et al., 2022; Liu et al., 2023). Both of these approaches, together with goal conditioned supervised learning, can help extract desirable behavior without requiring explicit finetuning with RL objectives. **Offline to online.** While conditional generative modeling can elicit expert behavior as discussed above, directly finetuning foundation model agents using RL objectives such as policy gradient is another approach. One major challenge that has prevented wide real-world adoption of RL finetuning is the need for large online samples to ensure learning progress (Li, 2019). Nevertheless, in game settings where massive online access is available (e.g., Go, Chess, Shogi, Dota, Atari), RL methods have surpassed human performance.
Instead of avoiding online access altogether through offline RL or conditional generative modeling, language models as interactive agents enable massive online access to environments that are highly scalable and available (e.g., search engines, databases, compilers). Developing infrastructures that enable software tools as environments, remote procedure calls as interactions, and foundation models as policies can have a large impact on a wide range of real-world applications. ## 7. Discussion and Perspectives Foundation models have achieved remarkable success in emulating human intelligence at earlier stages of development: seeing, hearing, speaking, reading, and writing. To transform these basic human abilities into world-class expertise, humans spend tens of thousands of hours practicing through trial and error (Gladwell, 2008), interacting with the external world, making mistakes, and learning from them. Foundation models for decision making offer a path to transform general artificial intelligence capabilities in vision, language, and world knowledge into next-level expert capabilities. As well as achieving more sophisticated intelligence, foundation models can also characterize different components of a decision making system, such as generative models of behavior and the world (Section 3), representations of world knowledge (Section 4), and interactive agents or environments through the usage of language (Section 5). Despite the initial successes, foundation models for decision making inevitably face significant challenges, such as the gap in data modalities, ambiguities around environment and task structures, and missing components in current foundation models and decision making paradigms (Section 6). We hope that this manuscript can serve as a stepping stone toward developing autonomous agents with next-level intelligence and more sophisticated capabilities. ###### Acknowledgements. We thank Bo Dai and Douglas Eck for reviewing this manuscript.
2305.07976
Nonnegative Low-Rank Tensor Completion via Dual Formulation with Applications to Image and Video Completion
Recent approaches to the tensor completion problem have often overlooked the nonnegative structure of the data. We consider the problem of learning a nonnegative low-rank tensor, and using duality theory, we propose a novel factorization of such tensors. The factorization decouples the nonnegative constraints from the low-rank constraints. The resulting problem is an optimization problem on manifolds, and we propose a variant of Riemannian conjugate gradients to solve it. We test the proposed algorithm across various tasks such as colour image inpainting, video completion, and hyperspectral image completion. Experimental results show that the proposed method outperforms many state-of-the-art tensor completion algorithms.
Tanmay Kumar Sinha, Jayadev Naram, Pawan Kumar
2023-05-13T17:51:00Z
http://arxiv.org/abs/2305.07976v1
Nonnegative Low-Rank Tensor Completion via Dual Formulation with Applications to Image and Video Completion ###### Abstract Recent approaches to the tensor completion problem have often overlooked the nonnegative structure of the data. We consider the problem of learning a nonnegative low-rank tensor, and using duality theory, we propose a novel factorization of such tensors. The factorization decouples the nonnegative constraints from the low-rank constraints. The resulting problem is an optimization problem on manifolds, and we propose a variant of Riemannian conjugate gradients to solve it. We test the proposed algorithm across various tasks such as colour image inpainting, video completion, and hyperspectral image completion. Experimental results show that the proposed method outperforms many state-of-the-art tensor completion algorithms. ## 1 Introduction Recent years have seen an increase in the quantity of multidimensional data available, such as colour images, video sequences, and 3D images. Flattening multidimensional data to matrices usually leads to loss of information, as matrices cannot capture the inherent structures present in most multidimensional data. This has led to increased research on tensor-based techniques for handling such data. The low-rank tensor completion problem aims to recover an original tensor from partial observations. A well-known formulation [6] for such problems is \[\min_{\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}}C\,L(\mathcal{W},\mathcal{Y}_{\Omega})+R(\mathcal{W}), \tag{1}\] where \(\mathcal{Y}_{\Omega}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) is a partially observed tensor for indices given in the set \(\Omega\), \(L:\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\rightarrow\mathbb{R}\) is a loss function, \(C>0\) denotes the cost parameter, and \(R\) is a regularizer enforcing a low-rank constraint. 
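In completion problems of this form, the data-fit term \(L(\mathcal{W},\mathcal{Y}_{\Omega})\) is typically a squared loss evaluated only on the observed entries. A minimal sketch of such a masked loss (the boolean-mask encoding of \(\Omega\) and the 10% sampling rate are illustrative choices, not prescribed by the paper):

```python
import numpy as np

# Illustrative sketch: the observed index set Omega is encoded as a
# boolean mask, and the loss only counts observed entries.
rng = np.random.default_rng(0)
Y = rng.random((4, 5, 6))
Omega = rng.random(Y.shape) < 0.1   # ~10% of entries observed

def masked_loss(W, Y, Omega, C=1.0):
    # C * ||W_Omega - Y_Omega||^2: unobserved entries contribute nothing.
    return C * np.sum(((W - Y) * Omega) ** 2)
```

Any candidate tensor agreeing with the data on `Omega` has zero loss, regardless of its values elsewhere; that is exactly why the regularizer \(R(\mathcal{W})\) is needed to pin down the unobserved entries.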
In many applications of tensor reconstruction such as color image recovery, video completion, recommendation systems, and link prediction, the data is nonnegative. Problem (1) does not enforce this structural constraint, and as such, the recovered tensors might contain negative entries. To incorporate these constraints, we consider the nonnegative low-rank tensor learning problem of the form: \[\min_{\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}} C\|\mathcal{W}_{\Omega}-\mathcal{Y}_{\Omega}\|^{2}+R(\mathcal{W}) \tag{2}\] \[\text{subject to} \mathcal{W}\geq 0,\] where \((\mathcal{W}_{\Omega})_{i_{1},\ldots,i_{K}}=\mathcal{W}_{i_{1},\ldots,i_{K}}\) if \((i_{1},\ldots,i_{K})\in\Omega\), and \(0\) otherwise. We convert the problem (2) into a minimax problem by constructing a partial dual similar to [15]. This leads to a factorization of the tensor \(\mathcal{W}\) in a form with separate factors for the nonnegative and low-rank constraints. The minimax problem has a rich geometric structure. We employ a Riemannian conjugate gradient algorithm to exploit this structure and develop an efficient solution. The main contributions of the paper are listed below. * We propose a novel factorization for modeling nonnegative low-rank tensors. * We develop an algorithm exploiting the inherent geometric structure of this factorization. * Experiments carried out on several real-world datasets show that the proposed algorithm outperforms state-of-the-art tensor completion algorithms. The rest of the paper is organized as follows. In Section 2, we introduce the notation used in the paper. In Section 3, we review previous work related to the tensor completion problem. In Sections 4 and 5, we develop the dual framework and present our algorithm. Section 6 details experiments carried out to compare our algorithm with several state-of-the-art algorithms. In Section 7, we end with concluding remarks. ## 2 Notation For a full treatment of tensors, we refer to [2]. 
Here, we outline the basic notation we use for tensors. We denote tensors by calligraphic capital letters and matrices by capital letters. For a matrix \(X\in\mathbb{R}^{m\times n}\), the nuclear norm of \(X\), denoted by \(\|X\|_{*}\), is the \(l_{1}\)-norm of the singular values of \(X\). The inner product of two same-sized tensors \(\mathcal{W},\mathcal{U}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) is the sum of the products of their entries \[\langle\mathcal{W},\mathcal{U}\rangle=\sum_{i_{1}=1}^{n_{1}}\sum_{i_{2}=1}^{n_{2}}\cdots\sum_{i_{K}=1}^{n_{K}}\mathcal{W}_{i_{1},\ldots,i_{K}}\mathcal{U}_{i_{1},\ldots,i_{K}}.\] A mode-\(k\) fiber of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\), denoted by \(\mathcal{W}_{i_{1},\ldots,i_{k-1},:,i_{k+1},\ldots,i_{K}}\), is a vector obtained by fixing all but the \(k\)-th index of \(\mathcal{W}\). The mode-\(k\) unfolding of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) is a matrix \(W_{k}\in\mathbb{R}^{n_{k}\times n_{1}\ldots n_{k-1}n_{k+1}\ldots n_{K}}\) formed by arranging the mode-\(k\) fibers to be the columns of the resulting matrix, i.e., \[W_{k}=[\mathcal{W}_{i_{1},\ldots,i_{k-1},:,i_{k+1},\ldots,i_{K}}]\;\forall i_{j},j\neq k.\] The reverse of the unfolding operation is called the folding operation, which converts a given matrix back to a tensor of a specific order. We also represent the mode-\(k\) unfolding by the map \(\textit{unfold}_{k}:\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\to\mathbb{R}^{n_{k}\times n_{1}\ldots n_{k-1}n_{k+1}\ldots n_{K}}\) such that \(\textit{unfold}_{k}(\mathcal{W})=W_{k}\), and the mode-\(k\) folding by the map \(\textit{fold}_{k}:\mathbb{R}^{n_{k}\times n_{1}\ldots n_{k-1}n_{k+1}\ldots n_{K}}\to\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\). 
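As a concrete illustration, the unfold/fold maps above can be sketched in NumPy. The ordering of the remaining modes in the columns is a convention; the sketch below uses Fortran-order flattening, which is one consistent choice rather than necessarily the exact ordering used by the paper's implementation:

```python
import numpy as np

def unfold(W, k):
    # Mode-k unfolding: bring mode k to the front, then lay out the
    # remaining modes column-wise (Fortran order), so the columns of the
    # result are the mode-k fibers of W.
    return np.moveaxis(W, k, 0).reshape(W.shape[k], -1, order="F")

def fold(Wk, k, shape):
    # Inverse map fold_k: reshape back and restore the original axis order.
    rest = [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(Wk.reshape([shape[k]] + rest, order="F"), 0, k)

W = np.arange(24.0).reshape(2, 3, 4)
W1 = unfold(W, 1)   # shape (3, 8): each column is a mode-1 fiber of W
```

Folding the unfolding recovers the original tensor, which is the defining property of the pair of maps.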
The \(k\)-mode product of a tensor \(\mathcal{W}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}\) with a matrix \(X\in\mathbb{R}^{m\times n_{k}}\) is denoted by \(\mathcal{W}\times_{k}X\in\mathbb{R}^{n_{1}\times\cdots\times n_{k-1}\times m\times n_{k+1}\times\cdots\times n_{K}}\), defined element-wise as follows: \[(\mathcal{W}\times_{k}X)_{i_{1},\ldots,i_{k-1},j,i_{k+1},\ldots,i_{K}}=\sum_{i_{k}=1}^{n_{k}}\mathcal{W}_{i_{1},\ldots,i_{K}}X_{j,i_{k}}.\] Then we have \[\mathcal{U}=\mathcal{W}\times_{k}X\Longleftrightarrow U_{k}=XW_{k}.\] ## 3 Previous Work Tensor completion for visual data recovery was introduced in [8], building on the framework for low-rank matrix completion using the matrix trace norm regularizer. The trace norm for tensors can be defined in several ways, and as such there exist multiple formulations for trace norm regularized tensor completion. [8], [13], and [12] use the regularizer \(R(\mathcal{W})=\sum_{k=1}^{K}\|W_{k}\|_{*}\), known as the overlapped trace norm, which promotes a lower Tucker (multilinear) rank in the recovered tensors. Another popular choice is the latent trace norm regularizer. Methods that use this formulation model the tensor as a sum of \(K\) individual tensors, and the latent trace norm amounts to an \(l_{1}\) norm regularizer that promotes sparsity. A few examples of such methods are [16], which uses a Frank-Wolfe algorithm for optimization, and [14], which uses a scaled variant of the latent trace norm. The paper [6] uses a formulation that models the recovered tensor as a sum of non-sparse tensors and proposes a regularizer that uses an \(l_{2}\) norm as opposed to an \(l_{1}\) norm. This allows for the development of a dual framework for tensor completion, which is solved using methods from Riemannian optimization. Another class of tensor completion algorithms attempts to exploit the smoothness properties present in real-world tensor data like hyperspectral images and 3D images. 
The paper [18] integrates smooth PARAFAC decompositions for partially observed tensors and develops two variants using the total variation and quadratic variation. [17] adopts a total variation (TV) regularizer to formulate the model, and [19] uses smooth matrix factorizations to incorporate tensor smoothness constraints. Tensor decomposition methods form another class of algorithms. Tensor decompositions like the Tucker and CP decompositions act as generalizations of the familiar singular value decomposition of matrices. [9] and [10] exploit the Riemannian geometry of the set of fixed multilinear rank tensors to efficiently learn the Tucker decomposition. [28] employs a Bayesian probabilistic CP decomposition model to recover the incomplete tensors. Other methods include [22], which uses another form of tensor singular value decomposition to define a tensor rank known as the tubal rank. [20] enforces low-rank structure by factorizing the unfoldings of the tensor as low-rank matrices. In [11], an algorithm is proposed that uses a block coordinate descent method for nonnegative tensor completion, utilizing the CP decomposition. [23] performs nonnegative tensor completion based on low-rank Tucker decomposition. Most of the research considering nonnegative tensors is devoted to learning nonnegative tensor decompositions. A few examples are [24], [25], [27], [26], etc. These methods cannot perform the tensor completion task on incomplete data. ## 4 Dual Framework Problem (2) models nonnegative tensor completion using a regularizer that promotes low-rank solutions. We seek to learn \(\mathcal{W}\) as the sum \(\sum\mathcal{W}^{(k)}\) of \(K\) tensors, as detailed in [6]. For our formulation, we use the regularizer \[R(\mathcal{W})=\sum_{k=1}^{K}\frac{1}{\lambda_{k}}\|W_{k}^{(k)}\|_{*}^{2}.\] Following [6], we develop a dual formulation for problem (2), incorporating the structural constraint of nonnegativity into the formulation. 
We do this following a similar approach developed in [15] for nonnegative matrix completion. A key lemma [7] used in the development of the formulation is given below. **Lemma 1**.: _For a matrix \(X\in\mathbb{R}^{d\times T}\), the nuclear norm of \(X\) satisfies the following relation:_ \[\|X\|_{*}^{2}=\min_{\begin{subarray}{c}\Theta\in\mathcal{P}^{d}\\ \text{range}(X)\subseteq\text{range}(\Theta)\end{subarray}}\langle\Theta^{\dagger}X,X\rangle\] _where \(\mathcal{P}^{d}=\{S\in\mathbb{R}^{d\times d}:\,S\succeq 0,tr(S)=1\}\), \(\text{range}(\Theta)=\{\Theta z:\,z\in\mathbb{R}^{d}\}\), and \(\Theta^{\dagger}\) denotes the pseudo-inverse of \(\Theta\). For a given \(X\), the optimal \(\Theta\) is \(\bar{\Theta}=\sqrt{XX^{T}}/\text{tr}(\sqrt{XX^{T}})\)._ Using the above lemma, we can write (2) as \[\min_{\begin{subarray}{c}\Theta_{k}\in\mathcal{P}^{n_{k}},\mathcal{W}^{(k)}\\ k\in\{1,\cdots,K\}\end{subarray}}C\left\|\mathcal{W}_{\Omega}-\mathcal{Y}_{\Omega}\right\|^{2}+\sum_{k=1}^{K}\frac{1}{2\lambda_{k}}\langle\Theta_{k}^{\dagger}W_{k}^{(k)},W_{k}^{(k)}\rangle\] \[\text{subject to }\qquad\qquad\mathcal{W}\geq 0. \tag{3}\] The following theorem provides the dual framework for the nonnegative low-rank tensor completion problem. It is a direct generalization of Theorem 1 in [6] to the case with nonnegative constraints. **Theorem 2**.: _An equivalent partial dual formulation of the problem (2) is_ \[\min_{\begin{subarray}{c}\Theta_{k}\in\mathcal{P}^{n_{k}},\\ k\in\{1,\ldots,K\}\end{subarray}}\max_{\begin{subarray}{c}\mathcal{Z}\in\mathcal{C}\\ \mathcal{S}\in\mathbb{R}_{+}^{n_{1}\times\cdots\times n_{K}}\end{subarray}}\langle\mathcal{Z},\mathcal{Y}_{\Omega}\rangle-\frac{1}{4C}\|\mathcal{Z}\|^{2}\] \[-\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\langle(Z_{k}+S_{k}),\Theta_{k}(Z_{k}+S_{k})\rangle, \tag{4}\] _where \(\mathcal{C}=\{\mathcal{Z}\in\mathbb{R}^{n_{1}\times\cdots\times n_{K}}:\,\mathcal{Z}=\mathcal{Z}_{\Omega}\}\). 
\(\mathcal{Z}\) is the dual tensor variable corresponding to the primal problem (3), and \(\mathcal{S}\) is the dual tensor variable corresponding to the nonnegative constraints._ Proof.: Consider the inner problem of (3) over \(\mathcal{W}^{(k)}\). We introduce auxiliary variables \(U_{k}\) with the associated constraints \(U_{k}=W_{k}^{(k)}\). The Lagrangian of this problem will be \[\mathcal{L}(\mathcal{W}^{(1)},\ldots,\mathcal{W}^{(K)},U_{1},\ldots,U_{K},\Lambda_{1},\ldots,\Lambda_{K},\mathcal{S})=\] \[C\left\|\bigg{(}\sum_{k=1}^{K}\mathcal{W}^{(k)}\bigg{)}_{\Omega}-\mathcal{Y}_{\Omega}\right\|^{2}+\sum_{k=1}^{K}\frac{1}{2\lambda_{k}}\langle\Theta_{k}^{\dagger}U_{k},U_{k}\rangle\] \[+\sum_{k=1}^{K}\langle\Lambda_{k},W_{k}^{(k)}-U_{k}\rangle-\langle\mathcal{S},\mathcal{W}\rangle \tag{5}\] The dual function of the above will be given by \[\mathcal{Q}(\Theta_{1},\ldots,\Theta_{K},\Lambda_{1},\ldots,\Lambda_{K},\mathcal{S})=\min_{\begin{subarray}{c}U_{k},\mathcal{W}^{(k)}\\ k\in\{1,\ldots,K\}\end{subarray}}\mathcal{L} \tag{6}\] Applying the first-order KKT conditions, we get the following equations: \[\text{{fold}}_{k}(\Lambda_{k}) =\mathcal{Z}+\mathcal{S}, \tag{7a}\] \[U_{k} =\lambda_{k}\Theta_{k}\Lambda_{k}. \tag{7b}\] where \(\mathcal{Z}/(2C)=\mathcal{Y}_{\Omega}-\Big{(}\sum_{k=1}^{K}\mathcal{W}^{(k)}\Big{)}_{\Omega}\). It can be seen from the definition of \(\mathcal{Z}\) that \(\mathcal{Z}=\mathcal{Z}_{\Omega}\). 
Using (7a) and (7b), we compute each term of (5) to be \[C\left\|\bigg{(}\sum_{k=1}^{K}\mathcal{W}^{(k)}\bigg{)}_{\Omega}-\mathcal{Y}_{\Omega}\right\|^{2}=C\bigg{(}\frac{\|\mathcal{Z}\|^{2}}{4C^{2}}\bigg{)}=\frac{1}{4C}\|\mathcal{Z}\|^{2},\] \[\sum_{k=1}^{K}\frac{1}{2\lambda_{k}}\langle\Theta_{k}^{\dagger}U_{k},U_{k}\rangle-\langle\Lambda_{k},U_{k}\rangle\] \[=-\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\langle(Z_{k}+S_{k}),\Theta_{k}(Z_{k}+S_{k})\rangle,\] \[\sum_{k=1}^{K}\langle\Lambda_{k},W_{k}^{(k)}\rangle-\langle\mathcal{S},\mathcal{W}\rangle=\langle\mathcal{Z},\mathcal{Y}_{\Omega}\rangle-\frac{1}{2C}\|\mathcal{Z}\|^{2}.\] Summing the terms, we obtain the expression for the dual function as \[\mathcal{Q}=\langle\mathcal{Z},\mathcal{Y}_{\Omega}\rangle-\frac{\|\mathcal{Z}\|^{2}}{4C}-\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\langle(Z_{k}+S_{k}),\Theta_{k}(Z_{k}+S_{k})\rangle.\] This gives the minimax problem (4). From (7a) and (7b), we can deduce the relation between optimal points of the primal and minimax problems. If \(\{\bar{\Theta}_{1},\ldots,\bar{\Theta}_{K},\bar{\mathcal{Z}},\bar{\mathcal{S}}\}\) is the optimal solution of (4), then the reconstructed tensor is given by \(\bar{\mathcal{W}}=\sum_{k=1}^{K}\bar{\mathcal{W}}^{(k)}\) where \(\bar{\mathcal{W}}^{(k)}=\lambda_{k}(\bar{\mathcal{Z}}+\bar{\mathcal{S}})\times_{k}\bar{\Theta}_{k}\) for all \(k\). This factorization gives us a decoupling of the low-rank and nonnegative constraints enforced on \(\mathcal{W}\) in (3): the low-rank constraint is enforced by \(\Theta_{k}\), the nonnegative constraints are encoded in \(\mathcal{S}\), and \(\mathcal{Z}\) corresponds to the dual variables of the primal problem. ## 5 Proposed Algorithm Since \(\Theta_{k}\in\mathcal{P}^{n_{k}}\), we can enforce the rank constraint explicitly by factorizing \(\Theta_{k}\) as \(\Theta_{k}=U_{k}U_{k}^{T}\), \(U_{k}\in\mathcal{S}_{r_{k}}^{n_{k}}\) where \(\mathcal{S}_{r}^{n}=\{U\in\mathbb{R}^{n\times r}:\|U\|_{F}=1\}\). 
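Both ingredients here are easy to sanity-check numerically: the variational form of the squared nuclear norm in Lemma 1 at its stated optimizer \(\bar{\Theta}\), and the fact that the factorization \(\Theta=UU^{T}\) with \(\|U\|_{F}=1\) stays in \(\mathcal{P}^{n}\) with rank at most \(r\). A small sketch (a sanity check on random data, not part of the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Lemma 1: ||X||_*^2 = <Theta_bar^† X, X> at Theta_bar = sqrt(XX^T)/tr(.)
X = rng.standard_normal((4, 6))
nuc_sq = np.linalg.norm(X, "nuc") ** 2
w, V = np.linalg.eigh(X @ X.T)
S = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T   # matrix square root of XX^T
Theta_bar = S / np.trace(S)
val = np.trace(np.linalg.pinv(Theta_bar) @ X @ X.T)   # <Theta_bar^† X, X>

# --- Factorization Theta = U U^T, ||U||_F = 1: unit trace, PSD, rank <= r
n, r = 6, 2
U = rng.standard_normal((n, r))
U /= np.linalg.norm(U)
Theta = U @ U.T
eigs = np.linalg.eigvalsh(Theta)
```

The first check works because \(\langle\bar{\Theta}^{\dagger}X,X\rangle=\operatorname{tr}(S)\operatorname{tr}(S^{\dagger}S^{2})=\operatorname{tr}(S)^{2}=(\sum_{i}\sigma_{i})^{2}\); the second because \(\operatorname{tr}(UU^{T})=\|U\|_{F}^{2}=1\).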
We rewrite (4) as \[\min_{U\in\mathcal{S}_{r_{1}}^{n_{1}\times\cdots\times\mathcal{S}_{r_{K}}^{n_{K} }}}g(U), \tag{8}\] where \(U=(U_{1},\ldots,U_{K})\), and \(g(U)\) is the optimal value of the problem \[g(U)= \max_{\begin{subarray}{c}\mathcal{Z}\in\mathcal{C}\\ \mathcal{S}\in\mathbb{R}_{+}^{n_{1}\times\cdots\times n_{K}}\end{subarray}} \langle\mathcal{Z},\mathcal{Y}_{\Omega}\rangle-\frac{\|\mathcal{Z}\|^{2}}{4C}\] \[-\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\left\|U_{k}^{T}(Z_{k}+S_{k} )\right\|^{2}. \tag{9}\] ### Convex Optimization Problem The optimization problem (9) is a convex optimization problem over the variables \(\mathcal{Z}\) and \(\mathcal{S}\), for a given \(U\), hence it has a unique solution. The problem (9) is solved separately for \(\mathcal{Z}\) and \(\mathcal{S}\) using an alternating minimization method. Equating the gradient of objective with respect to \(\mathcal{Z}\) to zero, we get \[\frac{\mathcal{Z}_{\Omega}}{2C}+\sum_{k=1}^{K}\lambda_{k}( \mathcal{Z}\times_{k} U_{k}U_{k}^{T})_{\Omega}=\mathcal{Y}_{\Omega}\] \[-\sum_{k=1}^{K}\lambda_{k}(\mathcal{S}\times_{k}U_{k}U_{k}^{T})_ {\Omega}. \tag{10}\] This is a sparse linear system in \(\mathcal{Z}\), which can be solved using linear conjugate gradient method. For various preconditioned CG approaches, see [53, 33, 34, 35, 36, 37, 38, 42, 48, 52, 41, 43, 44, 45, 46, 47, 49, 50, 51]. Problem (9) has only one term involving \(\mathcal{S}\). Hence, the optimization problem over \(\mathcal{S}\) reduces to \[\min_{\mathcal{S}\in\mathbb{R}_{+}^{n_{1}\times\cdots\times n_{K}}}\sum_{k=1} ^{K}\frac{\lambda_{k}}{2}\left\|U_{k}^{T}Z_{k}+U_{k}^{T}S_{k}\right\|^{2}. \tag{11}\] This is a nonnegative least squares (NNLS) problem. We use the method detailed in [21], modified to suit our objective. 
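Problem (11) is an NNLS-type problem. As a hedged illustration of how such a subproblem can be solved, the sketch below runs a generic projected-gradient NNLS solver on a small dense instance; it is not the exact method of [21], and the instance is synthetic:

```python
import numpy as np

def nnls_pgd(A, b, steps=2000):
    # Projected gradient descent for min_{s >= 0} ||A s - b||^2:
    # gradient step with constant step size 1/L (L = ||A||_2^2 is the
    # Lipschitz constant of the gradient), then projection onto s >= 0.
    L = np.linalg.norm(A, 2) ** 2
    s = np.zeros(A.shape[1])
    for _ in range(steps):
        s = np.maximum(s - (A.T @ (A @ s - b)) / L, 0.0)
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
s_true = np.abs(rng.standard_normal(5))   # nonnegative ground truth
b = A @ s_true
s = nnls_pgd(A, b)
```

Because the unconstrained least-squares minimizer is itself nonnegative here, the projected iterates converge to `s_true`; in general the projection is what keeps the iterates feasible.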
### Riemannian Optimization Problem Given the optimizer \((\hat{\mathcal{Z}},\hat{\mathcal{S}})\) of (9), we compute \(g\) at a point \(U\) as \[g(U)=\langle\hat{\mathcal{Z}},\mathcal{Y}_{\Omega}\rangle-\frac{\|\hat{\mathcal{Z}}\|^{2}}{4C}-\sum_{k=1}^{K}\frac{\lambda_{k}}{2}\left\|U_{k}^{T}(\hat{Z}_{k}+\hat{S}_{k})\right\|^{2}. \tag{12}\] The set \(\mathcal{S}_{r}^{n}\) is a Riemannian manifold, known as the spectrahedron manifold. The constraint set \(\mathcal{S}_{r_{1}}^{n_{1}}\times\cdots\times\mathcal{S}_{r_{K}}^{n_{K}}\) therefore forms a product manifold, and problem (8) is an optimization problem on a manifold. To develop optimization algorithms on manifolds [54, 55], we need a few geometric tools. We defer the development of the specific tools to [5] and [6]. For an introduction to optimization on general manifolds, we refer to [4] and [1]. For our case, the Euclidean gradient for \(g\) can be computed as \[\nabla g(U)=-(\lambda_{1}A_{1},\ldots,\lambda_{K}A_{K}),\] where \(A_{k}=(\hat{Z}_{k}+\hat{S}_{k})(\hat{Z}_{k}+\hat{S}_{k})^{T}U_{k}\), for \(1\leq k\leq K\). We use a generalization of the non-linear conjugate gradient algorithm to Riemannian manifolds [29] to solve problem (8). The proposed algorithm is detailed in Algorithm 1. 
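For the unit-Frobenius-norm constraint set \(\{U:\|U\|_{F}=1\}\), the basic geometric tools used by a Riemannian method are simple. Below is an illustrative sketch (our own minimal version, not the MANOPT implementation) of a tangent-space projection and a retraction for this constraint:

```python
import numpy as np

def project(U, G):
    # Tangent-space projection at U on {U : ||U||_F = 1}:
    # remove the component of the Euclidean gradient G along U.
    return G - np.sum(U * G) * U

def retract(U, xi):
    # Retraction: take the step in the ambient space, then renormalize
    # back onto the unit-Frobenius-norm set.
    V = U + xi
    return V / np.linalg.norm(V)

rng = np.random.default_rng(2)
U = rng.standard_normal((6, 2))
U /= np.linalg.norm(U)                          # feasible starting point
xi = project(U, rng.standard_normal((6, 2)))    # a tangent vector at U
U_next = retract(U, 0.1 * xi)                   # one feasible update
```

A Riemannian CG iteration chains exactly these pieces: project the Euclidean gradient \(\nabla g(U)\), combine it with the transported previous direction, and retract the step back onto the manifold.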
The reconstructed tensor is given by \[\hat{\mathcal{W}}=\sum_{k=1}^{K}\lambda_{k}(\hat{\mathcal{Z}}+\hat{\mathcal{S}})\times_{k}(U_{k}U_{k}^{T}).\] ``` 0:\(\mathcal{Y}_{\Omega}\), rank=\((r_{1},\ldots,r_{K})\), \(\tau\), \((\lambda_{1},\ldots,\lambda_{K})\)\(\triangleright\) Input parameters 1:for\(t=1,2,\cdots\)do 2: Check Termination: if \(\|\nabla g(U^{(t)})\|\leq\tau\) then break 3: Compute \(\hat{\mathcal{Z}}^{(t)}\) in (10) using the conjugate gradient algorithm 4: Compute \(\hat{\mathcal{S}}^{(t)}\) in (11) using the NNLS solver 5: Compute cost \(g(U^{(t)})\) and gradient \(\nabla g(U^{(t)})\) 6: Update \(U\): \(U^{(t+1)}\) = RiemannianCG-update(\(U^{(t)}\)) 7:endfor 8:Output:\(\hat{\mathcal{W}}=\sum_{k=1}^{K}\lambda_{k}(\hat{\mathcal{Z}}+\hat{\mathcal{S}})\times_{k}(U_{k}U_{k}^{T})\) ``` **Algorithm 1** Proposed Algorithm for Nonnegative Tensor Completion ### Complexity * Step 3 (Computing \(\hat{\mathcal{Z}}\)): We use the linear conjugate gradient algorithm to solve the linear system (10). The major cost in this step is to compute the matrix products \(U_{k}^{T}Z_{k}\) and \(U_{k}^{T}S_{k}\), for \(k\in\{1,\ldots,K\}\). We can exploit the sparse structure of the problem to compute the products in \(O(|\Omega|r_{k})\) steps, and hence, if the linear solver takes \(T_{cg}\) iterations, the total cost of this step is \(O\left(\sum_{k=1}^{K}T_{cg}|\Omega|r_{k}\right)\). * Step 4 (Computing \(\hat{\mathcal{S}}\)): For each iteration of the NNLS algorithm, we need to compute the cost function in (11) and its gradient with respect to \(\mathcal{S}\). Both of these operations can be computed in a similar manner as done for \(\mathcal{Z}\), and the total cost of this step is \(O\left(\sum_{k=1}^{K}T_{nnls}|\Omega|r_{k}\right)\), where \(T_{nnls}\) is the number of iterations of the NNLS algorithm. * Step 5 (Computing cost and gradient): We can compute \(g(U)\) from (12) given \(\hat{\mathcal{Z}}\) and \(\hat{\mathcal{S}}\) computed in previous steps. 
This can be done in \(O(K|\Omega|)\). The gradient requires computing the matrix products \((Z_{k}+S_{k})(Z_{k}+S_{k})^{T}U_{k}\), and we can do this in \(O(|\Omega|r_{k})\). Hence, total cost for computing the gradient is \(O(\sum_{k=1}^{K}|\Omega|r_{k})\). * Step 6 (Riemannian Conjugate Gradient): Search direction and step length are computed in this step. Then the current solution \(U^{(t)}\) is updated to \(U^{(t+1)}\) by performing retraction at the \(U^{(t)}\) along the search direction. This step ensures that the update remains on the product manifold. These operations can be done in \(O(\sum_{k=1}^{K}n_{k}r_{k}^{2}+\sum_{k=1}^{K}r_{k}^{3})\). Therefore, the overall per-iteration complexity of the proposed algorithm is \[O\bigg{(}(T_{cg}+T_{nnls})|\Omega|\sum_{k=1}^{K}r_{k}+\sum_{k=1}^{K}n_{k}r_{k} ^{2}+\sum_{k=1}^{K}r_{k}^{3}\bigg{)}.\] We store all the tensors in the sparse format, and perform operations accordingly. Hence, the overall space complexity of the proposed algorithm is \[O\bigg{(}|\Omega|+\sum_{k=1}^{K}n_{k}r_{k}\bigg{)}.\] ## 6 Numerical Experiments ### Experimental setup We have performed experiments on several publicly available datasets (see Table 1). We compare the performance of our algorithm to other state-of-the-art tensor completion algorithms. The baseline algorithms used for comparison are given below. Note that, with the exception of NCPC, all the other baseline algorithms do not enforce nonnegativity in the completed tensors. 1. Dual [6]: A dual framework for low-rank tensor completion using a variant of the latent trace norm regularizer. 2. RPrecon [10]: A low-rank tensor completion algorithm with a multi-linear rank constraint using Riemannian preconditioning. 3. geomCG [9]: An algorithm for tensor completion using optimization on the manifold of fixed multi-linear rank tensors. 4. NCPC [11]: A nonnegative tensor completion method using the CP decomposition. 5. 
TMac [20]: An alternating minimization algorithm that uses parallel matrix factorization. 6. LRTC-TV [17]: An ADMM-based algorithm that uses total variation regularization to enforce smoothness. 7. SMF-LRTC [19]: An algorithm that enforces a smoothness constraint on factor matrices. 8. FFW [16]: An algorithm with the scaled latent nuclear norm using the Frank-Wolfe algorithm. We randomly sample 10% of the tensor entries and use it as training data. The metric we use for evaluation is the RMSE between the reconstructed and original tensors \[\texttt{RMSE}=\sqrt{\frac{\|\mathcal{W}-\mathcal{W}_{true}\|_{F}^{2}}{n_{1}n_{2}n_{3}}}.\] The proposed method is implemented based on the Dual code. It uses the MANOPT library [3] for implementing the outer problem (8) on manifolds. For the nonnegative least squares problem in (9), we use the NNLS code [21] modified to work with our objective. ### Hyperparameters The ranks \((r_{1},r_{2},r_{3})\) are chosen as \((10,10,5)\) for all datasets, except color images where we chose \((10,10,3)\) since the dimension in mode-3 is less than \(5\). We chose the regularization constants \(\lambda_{k}\) according to [6]. \begin{table} \begin{tabular}{c|c|c} \hline \hline Type & Dataset & Dimensions \\ \hline Hyperspectral & Ribeira & \(203\times 268\times 33\) \\ Hyperspectral & Braga & \(203\times 268\times 33\) \\ Hyperspectral & Ruivaes & \(203\times 268\times 33\) \\ Video & Tomato & \(320\times 242\times 167\) \\ Video & Container & \(144\times 176\times 150\) \\ Video & Hall & \(144\times 176\times 150\) \\ Video & Highway & \(144\times 176\times 150\) \\ Color Image & Baboon & \(256\times 256\times 3\) \\ Color Image & Splash & \(512\times 512\times 3\) \\ \hline \hline \end{tabular} \end{table} Table 1: Description of datasets. Figure 1: Variation of RMSE with iterations and rank. In the iterations plot, RMSE is in log scale. In the rank plot, the rank is taken to be the value of the X-label times \([1,1,1]\). 
The maximum number of iterations for the outer optimization problem (8) was set to \(200\), as no improvement in RMSE is seen after \(100\) iterations on most of the datasets. For the baseline algorithms using rank as a hyperparameter, we have chosen the same rank as in our case, since it is sufficient for a variety of datasets (see [6]). Additional hyperparameters for each baseline were set as indicated in the code provided by the authors. We consider the effect of varying the hyperparameters on RMSE. Fig. 1(a) shows the variation of RMSE over iterations of the proposed algorithm. We see that the RMSE decreases as the algorithm proceeds, and the decrease is rapid in the initial iterations. The RMSE decreases monotonically, so we have chosen \(200\) iterations as the threshold to guarantee good solutions. Fig. 1(b) shows the variation of RMSE with the rank. Increasing the rank decreases the RMSE, but it quickly saturates beyond rank \(10\), which may be due to the inherent rank of the dataset. This justifies our choice of hyperparameters. ### Image Completion The task of image completion is to reconstruct the original image tensor, given only partial observations. As mentioned earlier, we have randomly sampled \(10\%\) of observations for training. We have experimented with several hyperspectral images (see Table 1, [30]) where each data tensor contains a stack of images measured at different wavelengths. Following [6], we resized these datasets to \(203\times 268\times 33\) using bilinear interpolation. We have also considered two color images (see Table 1, [31]) which are naturally represented as third-order tensors. We report the RMSE in Table 2 and some of the reconstructed images in Fig. 3. Our proposed algorithm outperforms the baseline algorithms on all hyperspectral datasets considered. The reconstructed images are of good quality, given only \(10\%\) of data for training. On the color image datasets, LRTC-TV performs best. 
We expect that this is because the original images have the local smoothness property, which is exploited by LRTC-TV through the smoothness constraints it enforces. However, the low-rank and nonnegative structure does not preserve such smoothness, which explains the performance of the other algorithms. Nevertheless, the proposed algorithm achieves the best RMSE next to LRTC-TV. We believe this indicates the usefulness of nonnegative constraints. For hyperspectral images, LRTC-TV performs poorly. On Braga, the RMSE of LRTC-TV is 10 times that of the proposed algorithm. By comparing the reconstructed images, it can be seen that the smooth image produced by LRTC-TV is an imperfect reconstruction, suggesting the lack of the local smoothness property in this dataset. The effect of nonnegativity is more pronounced in color images, where the proposed algorithm achieves 3 times lower RMSE on Baboon and 5 times lower RMSE on Splash compared to Dual. We see this effect in the reconstructed images of Baboon and Splash, where, perhaps due to negative entries, the reconstructed images appear darker. ### Video Completion The video completion task is the reconstruction of the frames of a video from the given partial observations. We considered several gray-scale videos (see Table 1, [32]) which form third-order tensors. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Dataset & Prop & Dual & RPrecon & geomCG & NCPC & TMac & LRTC-TV & SMF-LRTC & FFW \\ \hline Ribeira & **0.03090** & 0.03093 & 0.0454 & 0.06593 & 0.16465 & 0.1644 & 0.04984 & 0.04159 & 0.0696 \\ \hline Braga & **0.02817** & 0.03054 & 0.0939 & 0.07348 & 0.07348 & 0.20226 & 0.20227 & 0.06445 & 0.06691 \\ \hline Ruivaes & **0.029211** & 0.04969 & 0.0352 & 0.07146 & 0.14059 & 0.14073 & 0.02955 & 0.040223 & 0.05437 \\ \hline Tomato & **0.04282** & 0.04286 & 0.0589 & 0.05895 & 0.44175 & 0.44203 & 0.04638 & 0.053002 & 0.11463 \\ \hline Container & **0.044908** & 0.04693 & 0.0645 & 0.06452 & 0.5433 & 0.54413 & 0.09773 & 0.05894 & 0.15116 \\ \hline Hall & **0.03696** & 0.03702 & 0.0687 & 0.06879 & 0.53629 & 0.53717 & 0.09409 & 0.06327 & 0.06327 \\ \hline Highway & **0.03255** & 0.03652 & 0.0405 & 0.04055 & 0.6416 & 0.64261 & 0.04171 & 0.03723 & 0.11204 \\ \hline Baboon & 0.11943 & 0.33258 & 0.1563 & 2.7629 & 0.53315 & 0.5210 & **0.08729** & 0.13456 & 0.14222 \\ \hline Splash & 0.06371 & 0.33475 & 0.3291 & 1.6897 & 0.50346 & 0.50014 & **0.05262** & 0.09331 & 0.07779 \\ \hline \end{tabular} \end{table} Table 2: RMSE of various methods. The best result among all methods is in bold and the second best is underlined. Figure 2: Original frame, reconstructed frame and components of reconstructed frame from Hall and Highway videos. Fig. 2 shows the component frames, \(\mathcal{W}^{(k)}\)'s, of the reconstructed frames, \(\mathcal{W}\), of the proposed algorithm. For the video data, most of the information varies along the frames (i.e., along mode-\(3\) rather than the other modes). Consequently, we see that the frames of \(\mathcal{W}^{(1)}\) and \(\mathcal{W}^{(2)}\) have less information, whereas the frame of \(\mathcal{W}^{(3)}\) is close to the original frame. As we enforce the low-rank constraint on mode-\(k\) of \(\mathcal{W}^{(k)}\), each component has a compact representation that captures the original scene very well. 
The proposed algorithm achieves the lowest RMSE compared to the baselines (see Table 2). In Hall, the RMSE scores of all baseline algorithms, except Dual, are at least two times that of the proposed algorithm. Despite the increase in dimensions of the tensor compared to hyperspectral images, choosing the same rank \((10,10,5)\) gives the best RMSE scores. The reconstructed images shown in Fig. 3 are significantly clearer, as indicated by the RMSE scores. As mentioned earlier, in Tomato and Hall, we believe that the lack of the local smoothness property leads to the failure of the LRTC-TV algorithm. ## 7 Conclusion We have proposed a novel factorization for nonnegative low-rank tensor completion, \(\mathcal{W}=\sum_{k=1}^{K}\lambda_{k}(\mathcal{Z}+\mathcal{S})\times_{k}U_{k}U_{k}^{T}\). The factorization decouples the nonnegative constraint and the low-rank constraint onto \(\mathcal{S}\) and \(U_{k}U_{k}^{T}\), respectively. The resultant problem has a geometric structure in the constraints. We exploit this structure to propose a Riemannian optimization algorithm to solve the problem. On several real-world datasets, our proposed algorithm outperforms the state-of-the-art tensor completion algorithms. Figure 3: Original and reconstructed images for different algorithms, given 10% of the entries as training data. The datasets shown from top to bottom are Container, Hall, Tomato, Baboon, Splash, Braga, Ribeira respectively. For videos, a random frame was chosen. From left to right: Original, Proposed, Dual [6], RPrecon [10], geomCG [9], NCPC [11], TMac [20], LRTC-TV(T-LRTC) [17], SMF-LRTC(S-LRTC) [19] and FFW-LRTC [16]. ## Acknowledgement Tanmay Kumar Sinha was supported by an IIIT seed grant. Jayadev Naram thanks IHub-Data, IIIT Hyderabad for a research fellowship.
2308.08461
CDR: Conservative Doubly Robust Learning for Debiased Recommendation
In recommendation systems (RS), user behavior data is observational rather than experimental, resulting in widespread bias in the data. Consequently, tackling bias has emerged as a major challenge in the field of recommendation systems. Recently, Doubly Robust Learning (DR) has gained significant attention due to its remarkable performance and robust properties. However, our experimental findings indicate that existing DR methods are severely impacted by the presence of so-called Poisonous Imputation, where the imputation significantly deviates from the truth and becomes counterproductive. To address this issue, this work proposes Conservative Doubly Robust strategy (CDR) which filters imputations by scrutinizing their mean and variance. Theoretical analyses show that CDR offers reduced variance and improved tail bounds.In addition, our experimental investigations illustrate that CDR significantly enhances performance and can indeed reduce the frequency of poisonous imputation.
ZiJie Song, JiaWei Chen, Sheng Zhou, QiHao Shi, Yan Feng, Chun Chen, Can Wang
2023-08-13T08:10:56Z
http://arxiv.org/abs/2308.08461v2
# CDR: Conservative Doubly Robust Learning for Debiased Recommendation ###### Abstract. In recommendation systems (RS), user behavior data is observational rather than experimental, resulting in widespread bias in the data. Consequently, tackling bias has emerged as a major challenge in the field of recommendation systems. Recently, Doubly Robust Learning (DR) has gained significant attention due to its remarkable performance and robust properties. However, our experimental findings indicate that existing DR methods are severely impacted by the presence of so-called _Poisonous Imputation_, where the imputation significantly deviates from the truth and becomes counterproductive. To address this issue, this work proposes Conservative Doubly Robust strategy (CDR) which filters imputations by scrutinizing their mean and variance. Theoretical analyses show that CDR offers reduced variance and improved tail bounds. In addition, our experimental investigations illustrate that CDR significantly enhances performance and can indeed reduce the frequency of poisonous imputation. 
Recommender Systems, Selection Bias, Doubly Robust + Footnote †: journal: Data mining
Existing DR methods conduct imputation for all user-item pairs, potentially leading to _poisonous imputation_. In DR, imputation values rely on the imputation model, which is typically trained on a small set of observed data and extrapolated to all user-item pairs. Consequently, it is inevitable for the imputation model to produce inaccurate estimations on certain user-item pairs. Poisonous imputation arises when the imputed values significantly diverge from the truth, to such an extent that they negatively impact the debiasing process and may even compromise the model's performance. Upon examining existing DR methods on real-world datasets, we found that the ratio of poisonous imputation is notably high, often exceeding 35%. Addressing poisonous imputation is thus essential for the effectiveness of a DR method. A straightforward solution would be to directly identify and eliminate poisonous imputations. However, this is practically infeasible due to the unavailability of ground-truth labels of user preference for the majority of user-item pairs. To address this challenge, we propose a Conservative Doubly Robust strategy (CDR) that constructs a surrogate filtering protocol by scrutinizing the mean and variance of the imputation value. Theoretical analyses demonstrate that CDR achieves lower variance and a better tail bound compared to conventional DR. Remarkably, our solution is model-agnostic and can be easily plugged into existing DR methods. In our experiments, we implemented CDR in four different methods, demonstrating that CDR yields superior recommendation performance and a reduced ratio of poisonous imputation. To summarize, this work makes the following contributions: * Exposing the issue of poisonous imputation within existing Doubly Robust methods in recommendation systems.
* Proposing a Conservative Doubly Robust strategy (CDR) that mitigates the problem of poisonous imputation through examination of the mean and variance of the imputation value. * Performing rigorous theoretical analyses and conducting extensive empirical experiments to validate the effectiveness of CDR. ## 2. Analyses over Doubly Robust Learning In this section, we first formulate the task of recommendation debiasing (Sec. 2.1), then present some background on doubly robust learning (Sec. 2.2). Finally, we identify the issue of poisonous imputation in existing DR methods (Sec. 2.3). ### Task Formulation Suppose we have a recommender system composed of a user set \(\mathcal{U}\) and an item set \(\mathcal{I}\). Let \(\mathcal{D}=\mathcal{U}\times\mathcal{I}\) denote the set of all user-item pairs. Further, let \(r_{ui}\in\mathbb{R}\) be the ground-truth label (_e.g.,_ rating) for a user-item pair \((u,i)\), indicating how much the user likes the item; and let \(\hat{r}_{ui}\) be the corresponding predicted label from a recommendation model. The collected historical rating data can be denoted as a set \(\mathbf{R}^{0}=\{r_{ui}|o_{ui}=1\}\), where \(o_{ui}\) denotes whether the rating of a user-item pair \((u,i)\) is observed. The goal of a RS is to accurately predict user preference and accordingly identify items that align with users' tastes. The ideal loss for training a recommendation model can be formulated as follows: \[\mathcal{L}_{ideal}=|\mathcal{D}|^{-1}\sum_{(u,i)\in\mathcal{D}}e_{ui} \tag{1}\] where \(e_{ui}\) denotes the prediction error between \(r_{ui}\) and \(\hat{r}_{ui}\), _e.g.,_ \(e_{ui}=|r_{ui}-\hat{r}_{ui}|^{2}\) with RMSE loss or \(e_{ui}=-r_{ui}\log(\hat{r}_{ui})-(1-r_{ui})\log(1-\hat{r}_{ui})\) with BCE loss. However, only a small portion of \(r_{ui}\) is observed in RS, rendering the ideal loss non-computable.
Moreover, the challenge is further accentuated by the presence of selection bias, as the observed data might not faithfully represent the entirety of user-item pairs. For instance, samples with higher ratings are more likely to be observed (Song et al., 2017). Utilizing a naive estimator that is calculated directly on the observed data, \(\mathcal{L}_{naive}=|\mathcal{D}|^{-1}\sum_{(u,i)\in\mathcal{D}}o_{ui}e_{ui}\), would yield a biased estimation (Song et al., 2017). Hence, researchers continue to explore surrogate losses that provide unbiased estimates of the ideal loss. ### Existing Estimators Now we review two typical estimators for addressing selection bias. _Inverse Propensity Score Estimator (IPS)_ (Song et al., 2017). The IPS estimator aims to adjust the training distribution by reweighting the observed instances as: \[\mathcal{L}_{IPS}=|\mathcal{D}|^{-1}\sum_{(u,i)\in\mathcal{D}}\frac{o_{ui}e_{ui}}{\hat{p}_{ui}} \tag{2}\] where \(\hat{p}_{ui}\) is an estimation of the propensity score \(p_{ui}=\mathbb{P}(o_{ui}=1)\). The bias and variance of the IPS estimator can be written as: \[\begin{split} Bias[\mathcal{L}_{IPS}]&=|\mathbb{E}_{o}[\mathcal{L}_{IPS}]-\mathcal{L}_{ideal}|\\ &=|\mathcal{D}|^{-1}|\sum_{(u,i)\in\mathcal{D}}\frac{(p_{ui}-\hat{p}_{ui})}{\hat{p}_{ui}}e_{ui}|\\ Var[\mathcal{L}_{IPS}]&=\mathbb{E}_{o}[(\mathcal{L}_{IPS}-\mathbb{E}_{o}[\mathcal{L}_{IPS}])^{2}]\\ &=|\mathcal{D}|^{-2}\sum_{(u,i)\in\mathcal{D}}\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}e_{ui}^{2}\end{split} \tag{3}\] Once \(\hat{p}_{ui}\) reaches its ideal value (_i.e.,_ \(\hat{p}_{ui}=p_{ui}\)), the IPS estimator provides an unbiased estimation of the ideal loss (_i.e.,_ \(\mathbb{E}_{o}[\mathcal{L}_{IPS}]=\mathcal{L}_{ideal}\)). _Doubly Robust Estimator (DR)_ (Song et al., 2017).
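The IPS estimator in Eq. (2) is straightforward to compute. The sketch below (the toy errors and propensities are my own) also checks its unbiasedness empirically: when \(\hat{p}_{ui}=p_{ui}\), averaging \(\mathcal{L}_{IPS}\) over simulated observation masks \(o_{ui}\sim\mathrm{Bernoulli}(p_{ui})\) recovers the ideal loss.

```python
import random

def ips_loss(errors, observed, p_hat):
    # L_IPS = |D|^{-1} * sum_{(u,i)} o_ui * e_ui / p_hat_ui   (Eq. 2)
    return sum(o * e / p for e, o, p in zip(errors, observed, p_hat)) / len(errors)

# Empirical unbiasedness check with the true propensities plugged in.
rng = random.Random(0)
errors = [0.2, 0.5, 1.0, 0.1]
p_true = [0.8, 0.5, 0.3, 0.9]
ideal = sum(errors) / len(errors)          # L_ideal
est = sum(ips_loss(errors, [rng.random() < q for q in p_true], p_true)
          for _ in range(5000)) / 5000     # close to ideal
```

With a misspecified \(\hat{p}_{ui}\), the same simulation exhibits exactly the bias term of Eq. (3).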
DR augments IPS by introducing error imputation with the following loss: \[\mathcal{L}_{DR}=|\mathcal{D}|^{-1}\sum_{(u,i)\in\mathcal{D}}\left(\hat{e}_{ui}+\frac{o_{ui}(e_{ui}-\hat{e}_{ui})}{\hat{p}_{ui}}\right) \tag{4}\] where \(\hat{e}_{ui}\) represents the imputed error, derived from a specific imputation model that strives to fit the prediction error. Recent work (Song et al., 2017) has established the bias and variance of DR as follows: \[\begin{split} Bias[\mathcal{L}_{DR}]&=|\mathcal{D}|^{-1}|\sum_{(u,i)\in\mathcal{D}}\frac{(p_{ui}-\hat{p}_{ui})}{\hat{p}_{ui}}(e_{ui}-\hat{e}_{ui})|\\ Var[\mathcal{L}_{DR}]&=|\mathcal{D}|^{-2}\sum_{(u,i)\in\mathcal{D}}\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}(\hat{e}_{ui}-e_{ui})^{2}\end{split} \tag{5}\] As can be seen, DR changes the bias term for each \((u,i)\) from \(\frac{(p_{ui}-\hat{p}_{ui})}{\hat{p}_{ui}}e_{ui}\) in IPS to \(\frac{(p_{ui}-\hat{p}_{ui})}{\hat{p}_{ui}}(e_{ui}-\hat{e}_{ui})\), and the variance from \(\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}e_{ui}^{2}\) to \(\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}(\hat{e}_{ui}-e_{ui})^{2}\). DR enjoys the doubly robust property: if either \(\hat{p}_{ui}=p_{ui}\) or \(e_{ui}=\hat{e}_{ui}\) holds, \(\mathcal{L}_{DR}\) is an unbiased estimator (_i.e.,_ \(Bias[\mathcal{L}_{DR}]=0\)). This advantageous property typically results in DR being less biased than IPS in practice, empirically leading to superior performance. ### Limitation of DR From eq. (5), we can conclude that the accuracy of the imputation \(\hat{e}_{ui}\) is of high importance: both the bias and variance terms depend on \(|\hat{e}_{ui}-e_{ui}|\). Indeed, if the imputed error \(\hat{e}_{ui}\) diverges significantly from the prediction error \(e_{ui}\) such that \(|\hat{e}_{ui}-e_{ui}|>e_{ui}\), the imputation \(\hat{e}_{ui}\) becomes counterproductive.
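Eq. (4) differs from IPS only in the imputation term. A minimal sketch (toy values are my own) showing the doubly robust property on the imputation side: when \(\hat{e}_{ui}=e_{ui}\), the estimator equals the ideal loss for any observation pattern.

```python
def dr_loss(errors, errors_hat, observed, p_hat):
    # L_DR = |D|^{-1} * sum ( e_hat + o * (e - e_hat) / p_hat )   (Eq. 4)
    return sum(eh + o * (e - eh) / p
               for e, eh, o, p in zip(errors, errors_hat, observed, p_hat)) / len(errors)
```

Note how the propensity only multiplies the residual \(e_{ui}-\hat{e}_{ui}\), which is why accurate imputation shrinks both bias and variance in Eq. (5).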
Particularly, imputing \(\hat{e}_{ui}\) for the user-item pair \((u,i)\) results in increased bias and variance, rather than reduced. We denote this phenomenon as poisonous imputation: Definition 2.1 (Poisonous Imputation).: For any user-item pair \((u,i)\), the imputation \(\hat{e}_{ui}\) is considered as a poisonous imputation if \(|\hat{e}_{ui}-e_{ui}|>e_{ui}\). In practical RS, given that the imputation model is typically trained on a limited set of observed data and generalized to the entire user-item pairs, poisonous imputation is frequently encountered. To provide empirical evidence for this point, we conducted an empirical analysis on four representative DR methods (DR-JL (Yang et al., 2018), MRDR (Yang et al., 2018), DR-BIAS (Yang et al., 2018), and TDR (Yang et al., 2018)) across three real-world debiasing datasets (YahooR3, Coat, and KuaiRand). These DR methods were finely trained on the biased training data, after which \(e_{ui}\) and \(\hat{e}_{ui}\) were calculated for the user-item pairs in the test data where ground-truth ratings are accessible. The proportion of poisonous imputation is reported in Table 1. **Surprisingly, the ratio of poisonous imputation is considerably high, often exceeding 35% across all datasets and baseline models.** It is noteworthy that even though DR generally exhibits superior performance over IPS, a substantial amount of poisonous imputation still exists. The issue of poisonous imputation is particularly severe, thereby warranting attention and resolution. ## 3. Methodology In this section, we first introduce the proposed conservative doubly robust strategy, and then conduct theoretical analyses to validate its merits. ### Conservative Doubly Robust Learning Considering the widespread occurrence of poisonous imputation, we contend that performing imputation blindly on all user-item pairs, as is customary with current methods, may not be the optimal strategy. 
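Definition 2.1 is directly checkable wherever ground-truth errors are available, as in the test-set analysis behind Table 1. A minimal sketch with hypothetical values:

```python
def poisonous_ratio(errors, errors_hat):
    # Fraction of pairs with |e_hat - e| > e   (Definition 2.1)
    flags = [abs(eh - e) > e for e, eh in zip(errors, errors_hat)]
    return sum(flags) / len(flags)
```

For example, with true errors `[1.0, 0.5, 0.2]` and imputations `[2.5, 0.6, 0.1]`, only the first imputation misses its target by more than the error itself, giving a ratio of 1/3.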
Instead, it would be more effective to adopt a conservative and adaptive imputation approach that focuses on user-item pairs which confer benefits while excluding those leading to poisonous imputation. As previously discussed, the ideal filtering protocol involves comparing \(|\hat{e}_{ui}-e_{ui}|\) with \(e_{ui}\). If \(|\hat{e}_{ui}-e_{ui}|<e_{ui}\), the imputation should be retained as it could potentially reduce both variance and bias; if not, it implies a poisonous imputation which should be discarded. However, this approach is impractical as the ground-truth labels are typically inaccessible in real-world scenarios and \(e_{ui}\) cannot be calculated. As such, an alternative filtering protocol is necessary. Towards this end, we propose a Conservative Doubly Robust (CDR) strategy in this work that filters imputations by examining the mean and variance of \(\hat{e}_{ui}\). The foundation of CDR is based on the following important lemma: Lemma 1 ().: _Given that \(\hat{e}_{ui}\) and \(e_{ui}\) are independently drawn from two Gaussian distributions \(\mathcal{N}(\hat{\mu}_{ui},\hat{\sigma}_{ui}^{2})\) and \(\mathcal{N}(\mu_{ui},\sigma_{ui}^{2})\), where \(\hat{\mu}_{ui}\), \(\mu_{ui}\), \(\hat{\sigma}_{ui}\) are bounded with \(|\hat{\mu}_{ui}-\mu_{ui}|\leq\varepsilon_{\mu}\), \(|\hat{\sigma}_{ui}^{2}-\sigma_{ui}^{2}|\leq\varepsilon_{\sigma}^{2}\), \(2\varepsilon_{\mu}\leq\hat{\mu}_{ui}\), \(m_{\mu}\leq\hat{\mu}_{ui}\leq M_{\mu}\) and \(m_{\sigma}\leq\hat{\sigma}_{ui}\leq M_{\sigma}\), for any confidence level \(\rho\) (\(0\leq\rho\leq 1\)), the condition \(\mathbb{P}(|\hat{e}_{ui}-e_{ui}|<e_{ui})\geq\rho\) holds if_ \[\frac{\hat{\sigma}_{ui}}{\hat{\mu}_{ui}}<\left(\sqrt{5}\Phi^{-1}(\rho)+\frac{2M_{\mu}\varepsilon_{\sigma}}{m_{\sigma}(\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma})}+\frac{2\sqrt{5}\varepsilon_{\mu}}{\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma}}\right)^{-1} \tag{6}\] _where \(\Phi^{-1}(\cdot)\) denotes the inverse CDF of the standard normal
distribution._ The proof of the lemma is included in Appendix A. This lemma indicates that, through the formulation of a distribution hypothesis for \(\hat{e}_{ui}\) and \(e_{ui}\), the evaluation of poisonous imputation can be reframed as a scrutiny of the mean and variance of \(\hat{e}_{ui}\). The hypothesis presented in the lemma is practical. On one hand, we hypothesize that the distribution of \(\hat{e}_{ui}\) approximates that of \(e_{ui}\) (_i.e.,_ \(|\hat{\mu}_{ui}-\mu_{ui}|\leq\varepsilon_{\mu}\), \(|\hat{\sigma}_{ui}^{2}-\sigma_{ui}^{2}|\leq\varepsilon_{\sigma}^{2}\), \(2\varepsilon_{\mu}\leq\hat{\mu}_{ui}\)), a supposition that naturally follows since the imputation model endeavors to fit \(e_{ui}\). On the other hand, we opt to employ the Gaussian distribution for analysis. This choice is informed by its widespread usage in statistical inference, as well as its standing as a second-order Taylor approximation of any distribution. While more complex distributions might yield more precise results, _e.g._, by considering higher-order moments, the analytical complexity and computational burden would increase significantly. Our empirical findings indicate that the Gaussian distribution suffices to deliver superior performance. \begin{table} \begin{tabular}{c c c c} \hline \hline & Coat & Yahoo & KuaiRand \\ \hline DR-JL & 45.9\% & 41.9\% & 38.8\% \\ \hline MRDR & 48.1\% & 43.1\% & 41.2\% \\ \hline DR-BIAS & 44.1\% & 40.4\% & 39.2\% \\ \hline TDR & 42.3\% & 36.2\% & 36.3\% \\ \hline \hline \end{tabular} \end{table} Table 1. The proportion of poisonous imputation in three different datasets using four typical DR methods. Figure 1. Illustration of how our CDR improves the traditional DR methods: leveraging a filter protocol to remove the poisonous imputation that may hurt debiasing.
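The right-hand side of inequality (6) is computable once the bound parameters are fixed. A minimal sketch (the function name and toy parameter values are my own; as described later, the paper collapses this whole expression into a single tuned threshold \(\eta\) in practice):

```python
from math import sqrt
from statistics import NormalDist

def cdr_threshold(rho, eps_mu, eps_sigma, M_mu, m_sigma):
    # Right-hand side of inequality (6): retain the imputation when
    # sigma_hat / mu_hat falls below this value.
    phi_inv = NormalDist().inv_cdf(rho)  # inverse CDF of the standard normal
    s5 = sqrt(5)
    denom = (s5 * phi_inv
             + 2 * M_mu * eps_sigma / (m_sigma * (s5 * m_sigma + 2 * eps_sigma))
             + 2 * s5 * eps_mu / (s5 * m_sigma + 2 * eps_sigma))
    return 1.0 / denom
```

Tighter distributional gaps (smaller \(\varepsilon_{\mu},\varepsilon_{\sigma}\)) loosen the threshold; when both vanish, the condition degenerates to \(\hat{\sigma}_{ui}/\hat{\mu}_{ui}<1/(\sqrt{5}\,\Phi^{-1}(\rho))\).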
In fact, our proposed filtering protocol (inequality (6)) is intuitively appealing due to three observations: 1) A larger value of \(\hat{\sigma}_{ui}\) makes the preservation of the imputation less likely. This is consistent with the understanding that a higher variance implies a less reliable prediction, thus making it more susceptible to discarding. 2) A larger value of \(\hat{\mu}_{ui}\) makes the preservation of the imputation more likely. This can be rationalized by the notion that if the error \(e_{ui}\) is large, the imputation is safer as it is more difficult to exceed \(2e_{ui}\). 3) Larger values of \(\varepsilon_{\mu}\) and \(\varepsilon_{\sigma}\) increase the likelihood of filtering the imputation. Larger values for these parameters suggest a more significant distributional gap between \(e_{ui}\) and \(\hat{e}_{ui}\), thereby necessitating more conservative filtering. **Instantiation of CDR.** CDR can be incorporated into various DR methods by leveraging an additional filtering protocol. This protocol consists of two steps: 1) Estimation of \(\hat{\mu}_{ui},\hat{\sigma}_{ui}\): We utilize the Monte Carlo Dropout method (Grover and Leskovec, 2017) for estimating the mean and variance of the imputation, owing to its generality and easy implementation. Specifically, we apply dropout 10 times on the imputation model (_i.e._, randomly omitting 50% of the dimensions of the embeddings) and then calculate the mean and variance of \(\hat{e}_{ui}\) from the dropout model. To ensure a fair comparison, we note that dropout is only employed during the calculation of \(\hat{\mu}_{ui},\hat{\sigma}_{ui}\), and not during the training of the imputation model. 2) Filtering based on the condition \(\frac{\hat{\sigma}_{ui}}{\hat{\mu}_{ui}}<\eta\): Note that the right-hand side of inequality (6) involves complex computation and five parameters.
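Step 1 can be sketched as follows. This is a simplified, pure-Python stand-in for Monte Carlo Dropout applied to a dot-product imputation score rather than the paper's PyTorch imputation model; the helper name and values are my own:

```python
import random
from statistics import mean, pvariance

def mc_dropout_stats(embed_u, embed_i, n_samples=10, keep=0.5, seed=0):
    # Estimate (mu_hat, sigma_hat^2) of the imputed score by repeatedly
    # dropping embedding dimensions, with inverted-dropout scaling (1/keep).
    rng = random.Random(seed)
    draws = []
    for _ in range(n_samples):
        mask = [1.0 / keep if rng.random() < keep else 0.0 for _ in embed_u]
        draws.append(sum(m * a * b for m, a, b in zip(mask, embed_u, embed_i)))
    return mean(draws), pvariance(draws)
```

Step 2 then compares \(\sqrt{\text{variance}}/\text{mean}\) of these draws against the threshold \(\eta\).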
To simplify our implementation, we re-parameterize the right-hand side of the inequality as a hyperparameter \(\eta\). This parameter \(\eta\) can be interpreted as an adjusted threshold that directly modulates the strictness of the filtering process. With the above filtering protocol, the CDR estimator can be formulated as: \[\mathcal{L}_{CDR}=|\mathcal{D}|^{-1}\sum_{(u,i)\in\mathcal{D}}\left(\frac{o_{ui}e_{ui}}{\hat{p}_{ui}}+\gamma_{ui}\hat{e}_{ui}\left(1-\frac{o_{ui}}{\hat{p}_{ui}}\right)\right) \tag{7}\] where \(\gamma_{ui}\in\{0,1\}\) indicates whether the imputation \(\hat{e}_{ui}\) is retained. ### Theoretical Analyses In order to elucidate the advantages of the Conservative Doubly Robust (CDR) strategy, we present the following lemma: Lemma 2 ().: _Given the imputed errors \(\hat{e}_{ui}\), estimated propensity scores \(\hat{p}_{ui}\), and the retention indicators \(\gamma_{ui}\), the bias and variance of the CDR estimator can be expressed as follows:_ \[\begin{split} Bias[\mathcal{L}_{CDR}]&=|\mathcal{D}|^{-1}|\sum_{(u,i)\in\mathcal{D}}\frac{p_{ui}-\hat{p}_{ui}}{\hat{p}_{ui}}\left(\gamma_{ui}(e_{ui}-\hat{e}_{ui})+(1-\gamma_{ui})e_{ui}\right)|\\ Var[\mathcal{L}_{CDR}]&=|\mathcal{D}|^{-2}\sum_{(u,i)\in\mathcal{D}}\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}\left(\gamma_{ui}(\hat{e}_{ui}-e_{ui})^{2}+(1-\gamma_{ui})e_{ui}^{2}\right)\end{split} \tag{8}\] _With probability \(1-\kappa\), the deviation of the CDR estimator from its expectation has the following tail bound:_ \[|\mathcal{L}_{CDR}-\mathbb{E}_{o}[\mathcal{L}_{CDR}]|\leq\sqrt{\frac{\log\left(\frac{2}{\kappa}\right)}{2|\mathcal{D}|^{2}}\sum_{(u,i)\in\mathcal{D}}\left(\gamma_{ui}\frac{(e_{ui}-\hat{e}_{ui})^{2}}{\hat{p}_{ui}^{2}}+(1-\gamma_{ui})\frac{e_{ui}^{2}}{\hat{p}_{ui}^{2}}\right)} \tag{9}\] The proof is presented in Appendix B. CDR can be understood as an integration of IPS and DR. If \(|\hat{e}_{ui}-e_{ui}|>e_{ui}\), CDR will filter out the poisonous imputation and regress to IPS, as IPS demonstrates superior bias and variance properties compared to DR in this case. Otherwise, CDR will retain the imputation, benefiting from the merits of DR. Indeed, CDR has the following advantages: Corollary 3.1 ().: _Under the conditions of Lemma 1 and \(\varepsilon_{\mu}\ll\hat{\mu}_{ui},\varepsilon_{\sigma}^{2}\ll\hat{\sigma}_{ui}^{2}\), with a proper filtering threshold \(\eta\), CDR enjoys better variance and tail bound than IPS and DR._ The proof is presented in Appendix C. This corollary substantiates the superiority of CDR, thereby yielding better recommendation performance. We will empirically validate it in the following section. ## 4. Experiments In this section, we designed experiments to test the performance of the proposed method on three real-world datasets. Our aim was to answer the following four research questions: 1. Does the proposed CDR improve the debiasing performance? 2. Does CDR indeed reduce the ratio of poisonous imputation in DR? 3. How does the hyperparameter \(\eta\) (filtering threshold) affect debiasing performance? 4. Does CDR incur much more computational time? ### Experimental Setup **Datasets.** To evaluate the performance of debiasing methods on real-world datasets, ground-truth unbiased data are necessary. We closely follow previous studies (Krizhevsky et al., 2016; Zhang et al., 2017; Zhang et al., 2018; Zhang et al., 2018) and use the following three benchmark datasets: **Coat**, **Yahoo!R3** and **KuaiRand-Pure**.
All three datasets consist of a biased dataset, collected from normal user interactions, and an unbiased dataset collected via a random logging strategy. Specifically, **Coat** includes 6,960 biased ratings and 4,640 unbiased ratings from 290 users on 300 items; **Yahoo!R3** comprises 54,000 unbiased ratings and 311,704 biased ratings from 15,400 users on 1,000 items; while **KuaiRand** includes 7,583 videos and 27,285 users, containing 1,436,609 biased and 1,186,059 unbiased interactions. Following recent work (Chen et al., 2019), we regard the biased data as the training set, and utilize the unbiased data for model validation (10%) and evaluation (90%). Also, the ratings are binarized with threshold 3: observed ratings larger than 3 are labeled as positive, otherwise negative. **Baselines.** We validate the effectiveness of CDR on four baselines, including three benchmark DR methods and one classical baseline based solely on imputation: * **EIB** (Zhang et al., 2018): the classical baseline that relies on data imputation for tackling selection bias. * **DR-JL** [44]: the basic doubly robust learning strategy that employs both propensity and imputation for recommendation debiasing. In DR-JL, the imputation is learned by minimizing the error deviation on observed data. * **MRDR** [20]: the method that improves DR-JL by considering variance reduction when learning the imputation model. * **DR-BIAS** [13]: a novel strategy that learns imputation by balancing variance and bias. We also compare the methods with: * **Base Model:** the basic recommendation model without employing any debiasing strategy. * **IPS** [40]: the strategy that addresses bias by weighting the observed data with the inverse of the propensity. * **INV** [47]: the state-of-the-art debiasing method that leverages a causal graph to disentangle the invariant preference and variant factors from the observed data.
* **TDR** [27]: the state-of-the-art DR method that learns imputation with a parameterized imputation model and a non-parametric strategy. Here we do not implement CDR in TDR due to its high complexity. Nevertheless, our experiments show that even when CDR is plugged into the basic DR-JL, it can outperform TDR. Also, for a fair comparison, we closely follow recent work [5] and take the widely used Matrix Factorization (MF) [26] as the base recommendation model. **Metrics.** We employed three concurrent metrics, namely Area Under the Curve (AUC), Recall (Recall@5), and Normalized Discounted Cumulative Gain (NDCG@5), to assess debiasing performance. NDCG@K evaluates the quality of recommendations by taking into account the importance of each item's position, based on discounted gains. \[\begin{split} DCG_{u}@K&=\sum_{(u,i)\in\mathcal{D}_{test}}\frac{\mathbb{I}(\hat{z}_{u,i}\leq K)}{\log(\hat{z}_{u,i}+1)}\\ NDCG@K&=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{DCG_{u}@K}{IDCG_{u}@K}\end{split} \tag{10}\] where \(IDCG\) represents the ideal \(DCG\), \(\mathcal{D}_{test}\) denotes the test data, and \(\hat{z}_{u,i}\) represents the position of item \(i\) within the recommended rank for user \(u\). Recall@K measures the number of recommended items that are likely to be interacted with by the user within the top \(K\) items. \[\begin{split} Recall_{u}@K&=\frac{\sum_{(u,i)\in\mathcal{D}_{test}}\mathbb{I}(\hat{z}_{u,i}\leq K)}{|\mathcal{D}_{test}^{u}|}\\ Recall@K&=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}Recall_{u}@K\end{split} \tag{11}\] where \(\mathcal{D}_{test}^{u}\) denotes all ratings of user \(u\) in \(\mathcal{D}_{test}\). **Experimental details.** Our experiments were conducted in PyTorch, utilizing Adam as the optimizer.
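These metrics can be sketched per user as follows (assuming, as is common, a base-2 logarithm in the discount; `ranks` holds the 1-based positions \(\hat{z}_{u,i}\) of one user's relevant test items):

```python
from math import log2

def ndcg_at_k(ranks, k):
    # DCG_u@K with ideal normalization (Eq. 10), for a single user.
    dcg = sum(1.0 / log2(r + 1) for r in ranks if r <= k)
    idcg = sum(1.0 / log2(i + 1) for i in range(1, min(len(ranks), k) + 1))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(ranks, k):
    # Recall_u@K (Eq. 11): hits within the top K over all relevant items.
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

The dataset-level NDCG@K and Recall@K then average these per-user values over \(\mathcal{U}\).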
We fine-tuned the learning rate within {0.005, 0.01, 0.05, 0.1}, weight decay within {1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2}, the threshold parameter \(\eta\) within {0.1, 0.5, 1, 3, 5, 7, 10, 50}, and batch size within {128, 256, 512, 1024, 2048} for Coat, {1024, 2048, 4096, 8192, 16384} for Yahoo!R3, and {2048, 4096, 8192, 16384, 32768} for KuaiRand. The hyperparameters of all the baselines were finely tuned in our experiments or taken from the original papers. The code is available at \begin{table} \begin{tabular}{l|c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Coat} & \multicolumn{3}{c}{Yahoo} & \multicolumn{3}{c}{KuaiRand} \\ & AUC & NDCG@5 & Recall@5 & AUC & NDCG@5 & Recall@5 & AUC & NDCG@5 & Recall@5 \\ \hline MF & 0.7053 & 0.6025 & 0.6173 & 0.6720 & 0.6252 & 0.7155 & 0.5432 & 0.2932 & 0.2905 \\ IPS & 0.7144 & 0.6173 & 0.6267 & 0.6785 & 0.6345 & 0.7214 & 0.5446 & 0.2987 & 0.2987 \\ CVIB & 0.7230 & 0.6278 & 0.6347 & 0.6811 & 0.6482 & 0.7229 & 0.5512 & 0.3099 & 0.3027 \\ INV & 0.7416 & 0.6394 & 0.6542 & 0.6767 & 0.6443 & 0.7251 & 0.5465 & 0.3081 & 0.3013 \\ TDR & 0.7388 & 0.6378 & 0.6525 & 0.6789 & 0.6436 & 0.7269 & 0.5523 & 0.3088 & 0.3026 \\ \hline EIB & 0.7225 & 0.6288 & 0.6382 & 0.6844 & 0.6427 & 0.7241 & 0.5456 & 0.3010 & 0.2938 \\ EIB+CDR & 0.7509 & 0.6533 & 0.6608 & 0.6909 & 0.6549 & 0.7310 & 0.5510 & 0.3087 & 0.2975 \\ impv\% & +3.93\% & +3.90\% & +3.54\% & +0.95\% & +1.90\% & +0.95\% & +0.99\% & +2.56\% & +1.26\% \\ \hline DR-JL & 0.7286 & 0.6271 & 0.6355 & 0.6834 & 0.6474 & 0.7236 & 0.5485 & 0.2967 & 0.2924 \\ DR+CDR & 0.7502 & 0.6557 & 0.6658 & 0.6881 & 0.6558 & 0.7307 & 0.5540 & 0.3153 & 0.3045 \\ impv\% & +2.96\% & +4.56\% & +4.77\% & +0.69\% & +1.31\% & +0.98\% & +1.00\% & +6.27\% & +4.14\% \\ \hline MRDR & 0.7319 & 0.6317 & 0.6447 & 0.6829 & 0.6484 & 0.7243 & 0.5503 & 0.3041 & 0.2949 \\ MRDR+CDR & 0.7508 & 0.6520 & 0.6587 & 0.6879 & **0.6571** & 0.7311 & **0.5547** & **0.3167** & **0.3078** \\ impv\% &
+2.58\% & +3.21\% & +2.17\% & +0.73\% & +1.34\% & +0.94\% & +0.80\% & +4.14\% & +4.48\% \\ \hline DR-BIAS & 0.7424 & 0.6408 & 0.6578 & 0.6860 & 0.6486 & 0.7269 & 0.5478 & 0.3024 & 0.2952 \\ DR-BIAS+CDR & **0.7513** & **0.6567** & **0.6678** & **0.6912** & 0.6565 & **0.7323** & 0.5533 & 0.3098 & 0.3048 \\ impv\% & +1.20\% & +2.48\% & +1.52\% & +0.76\% & +1.22\% & +0.74\% & +1.00\% & +2.45\% & +3.25\% \\ \hline \hline \end{tabular} \end{table} Table 2. Performance comparison between our CDR and the baselines on three real-world datasets. The best result in each column is bolded and the runner-up is underlined. We incorporate CDR into four baseline models and report the relative improvements gained by employing CDR compared to the respective baseline. ### Performance Comparison (RQ1) Table 2 presents the performance comparison of our method with the baselines. We draw the following observations: 1) CDR consistently boosts the recommendation performance of the four baselines on the three benchmark datasets. Especially on KuaiRand, the improvement is impressive: achieving average improvements of 0.95%, 3.86%, and 3.28% in terms of AUC, NDCG, and Recall, respectively. This result validates that our filtering protocol is effective and can indeed filter out harmful imputations. We will further validate this point in the next experiment. 2) Comparing CDR with the other baselines, we find that the best performance is always achieved by CDR. CDR is simple but achieves SOTA performance. ### Study on the Poisonous Imputation (RQ2) To further validate the effectiveness of CDR, we conducted an empirical study on the ratio of poisonous imputation. We finely trained the compared methods on the biased training data, and then compared \(|e_{ui}-\hat{e}_{ui}|\) with \(e_{ui}\) for the user-item pairs in the test data where ground-truth ratings are accessible. The results are presented in Figure 2.
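For intuition on what the filter does to the estimator itself, here is a minimal sketch of the CDR estimator of Eq. (7) with the mean/variance filter (the helper name and toy values are my own): a threshold of 0 recovers IPS, while a very large threshold recovers DR.

```python
def cdr_loss(errors, errors_hat, observed, p_hat, mu_hat, sigma_hat, eta):
    # Eq. (7): gamma_ui = 1 (keep the imputation) only when sigma_hat/mu_hat < eta.
    total = 0.0
    for e, eh, o, p, mu, sg in zip(errors, errors_hat, observed, p_hat, mu_hat, sigma_hat):
        gamma = 1 if sg / mu < eta else 0
        total += o * e / p + gamma * eh * (1 - o / p)
    return total / len(errors)
```

This makes explicit that CDR interpolates between the IPS and DR estimators on a per-pair basis.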
As can be seen, CDR consistently has a lower ratio of poisonous imputation than its corresponding baselines over the three datasets. This result clearly validates that the proposed filter is reasonable and can remove a certain ratio of poisonous imputation. As such, CDR achieves better debiasing performance than DR. ### Effect of Hyperparameter \(\eta\) (RQ3) The hyperparameter \(\eta\) serves as an adjustable threshold that directly modulates the strictness of the filtering process. Thus, exploring model performance _w.r.t._ \(\eta\) could help us better understand the nature of CDR. In theoretical terms, when \(\eta\) approaches 0, this method is equivalent to IPS; when \(\eta\) approaches infinity, this method is equivalent to the DR approach. The performance with varying \(\eta\) is presented in Figure 3. As can be seen, with \(\eta\) increasing, the performance first improves. The reason is that a larger \(\eta\) brings in more imputations; when the threshold \(\eta\) is relatively low, the injected imputations are usually reliable, yielding performance improvement. However, when \(\eta\) surpasses a certain value, the performance becomes worse with a further increase of \(\eta\). This can be interpreted as more inaccurate imputations being injected: poisonous imputation occurs, which deteriorates model performance. Consequently, there exists a trade-off in the selection of \(\eta\). Only when \(\eta\) is set to a proper value does the model achieve optimal performance. ### Running Time Comparison (RQ4) Additionally, we conducted experiments on the efficiency of CDR compared with other baselines on three datasets: Coat, Yahoo, and KuaiRand. As shown in Table 3, although CDR introduces multiple rounds of dropout for evaluating the mean and variance of the imputation, it does not incur much additional computational burden. 
The reason can be attributed to two factors: 1) The calculation of the mean and variance only involves forward propagation, without requiring the time-consuming backward propagation; 2) CDR filters out a certain ratio of the imputations, which reduces the number of training samples and thus accelerates the training of the recommendation model. ## 5. Related Work In this section, we review the most related work from the following two perspectives. **Debiasing in Recommendation.** Bias is a critical issue in recommendation systems, as it not only hurts recommendation accuracy but can also limit the diversity of recommended items and reinforce unfairness (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019). There are various sources of bias found in RS data, such as selection bias (Chen et al., 2018; Chen et al., 2018; Chen et al., 2019), exposure bias (Chen et al., 2018; Chen et al., 2019; Chen et al., 2019), conformity bias (Chen et al., 2018; Chen et al., 2019), position bias (Chen et al., 2018; Chen et al., 2019) and popularity bias (Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). To address this issue, the academic community has probed into a multitude of methodologies to rectify the bias in recommendation systems. Given the focus of this study on selection bias, we primarily concentrate our review on the latest advancements in tackling this particular bias. For a more comprehensive understanding, we refer readers to the bird's-eye-view survey (Chen et al., 2018) for additional details. Recent work on selection bias can be mainly categorized into three types: 1) Generative Models, which resort to a causal graph to depict the generative process of observed data and infer users' true preferences accordingly. The most representative methods are (Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), which jointly model which rating value the user gives and which items the user selects to rate. 
More recently, some researchers utilize the causal graph to disentangle the invariant preference from other variant factors (Wang et al., 2019; Wang et al., 2020), thereby enabling the recommendation to depend on the reliable invariant user preferences. 2) Inverse Propensity Score, which adjusts the data distribution by reweighting the observed samples with the inverse of the propensity. Once the propensity reaches the ideal value, IPS could provide an unbiased estimation of the ideal loss. Recent studies (Wang et al., 2019; Wang et al., 2020) have introduced a range of methodologies to learn propensities, including calculating them from item popularity, fitting a model to the observations, or computing them from a limited set of unbiased data. 3) Doubly Robust Learning, which enhances IPS by incorporating error imputation for all user-item pairs. DR enjoys the doubly robust property, where unbiasedness is guaranteed if either the imputed values or the propensity scores are accurate. \begin{table} \begin{tabular}{c|c c|c c c c|c c c c} \hline \hline Datasets & MF & IPS & EIB & DR-JL & MRDR & DR-bias & EIB+CDR & DR-JL+CDR & MRDR+CDR & DR-bias+CDR \\ \hline Coat & 33.87 & 36.72 & 112.80 & 136.72 & 131.69 & 138.23 & 135.62 & 147.31 & 153.28 & 149.31 \\ \hline Yahoo & 59.34 & 68.91 & 542.35 & 632.79 & 687.34 & 678.28 & 643.21 & 732.13 & 706.39 & 714.32 \\ \hline KuaiRand & 834.13 & 1034.24 & 5018.23 & 6390.25 & 6246.36 & 6421.56 & 7124.54 & 6893.49 & 7154.83 & 7245.71 \\ \hline \hline \end{tabular} \end{table} Table 3. Empirical runtime (s) comparison on the Coat, Yahoo and KuaiRand datasets. The merit of DR relies on the accuracy of the imputation model. Thus, various learning strategies are proposed by recent work. 
For example, DR-JL [44] jointly learns the recommendation model and the imputation model from the observed data, where the imputation model is optimized to minimize the error deviation on the observed data; AutoDebias [5] leverages unbiased data to supervise the learning of the imputation via meta-learning; MRDR [20] considers variance reduction in learning the imputation model; DR-BIAS [13] learns the imputation by balancing variance and bias. More recently, some researchers have sought to further improve the stability and generalization of DR by leveraging a stability regularizer [28] and a non-parametric imputation module [27]. While these approaches offer promising solutions for debiasing recommendation, they all impute the error for all user-item pairs and may suffer from the issue of poisonous imputation. **Uncertainty Estimation.** The utilization of probabilistic models to assess and control uncertainty (_a.k.a._ variance) has found broad applications across numerous fields. This approach is usually characterized by probabilistic inference, which allows for continuous updating of beliefs about model parameters. Uncertainty estimation has found extensive use in diverse domains including machine learning [31], natural language processing, signal processing, and clustering [55]. A prevalent approach incorporates Bayesian neural networks [4; 25; 32; 37], providing a flexible and efficient framework to encapsulate uncertainty within neural network predictions. Another line for uncertainty estimation is the MC-dropout technique [15; 16; 38], which simply performs dropout multiple times and estimates the uncertainty (variance) from the different models obtained after dropout. Recent work has connected MC-dropout with Bayesian inference and shows that MC-dropout serves as a form of variational Bayesian inference leveraging a spike-and-slab variational distribution. Besides, methods like Kronecker Factored Approximation (KFAC) (Srivastava et al., 2017) and Markov Chain Monte Carlo (MCMC) (Srivastava et al., 2017; Wang et al., 2018) have been deployed to propagate uncertainties in intricate models. In this work, we simply choose MC-dropout to estimate the uncertainty of the imputation model, while it can be easily replaced by other advanced technologies.

Figure 2. The percentage (%) of "poisonous imputation" in three different datasets using the original EIB, DR-JL, MRDR and DR-BIAS methods, as well as the improved methods integrating CDR.

Figure 3. Recommendation performance of CDR with varying threshold \(\eta\) on three datasets.

## 6. Conclusion and Future Work This study identifies the issue of poisonous imputation in recent Doubly Robust (DR) methods: these methods indiscriminately perform imputation on all user-item pairs, including poisonous imputations that significantly deviate from the truth and negatively impact the debiasing performance. To counter this problem, we introduce a novel Conservative Doubly Robust (CDR) strategy that filters out poisonous imputation by examining the mean and variance of the imputation value. Both theoretical analyses and empirical experiments have been conducted to validate the superiority of our proposal. For future research, it would be compelling to explore more advanced filtering protocols. Our CDR strategy is based on the assumption of a Gaussian distribution of the imputation, which may not be highly accurate. Employing sophisticated techniques such as Dynamic Graph Neural Networks (Beng et al., 2016), Generative Adversarial Networks (GAN) (Garvin et al., 2017) or diffusion models (Garvin et al., 2017) to account for more flexible distributions could be promising. Moreover, as per Table 3, DR methods typically exhibit a much higher computational burden compared to basic models. 
Therefore, investigating methods to accelerate DR presents another promising direction for future work. ###### Acknowledgements. This work is supported by the National Key Research and Development Program of China (2021ZD0111802), the National Natural Science Foundation of China (61972372), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-001) and the advanced computing resources provided by the Supercomputing Center of Hangzhou City University. ## Appendix A Proof of Lemma 1 Note that the errors \(e_{ui}\) and \(\hat{e}_{ui}\) are often defined as positive values, _e.g._, in the context of BCE loss or RMSE loss. Consequently, we can deduce \(P(|e_{ui}-\hat{e}_{ui}|<e_{ui})=P(\hat{e}_{ui}-2e_{ui}<0)\). Moreover, even in cases where the positivity of \(e_{ui}\) and \(\hat{e}_{ui}\) is not maintained for certain losses, we still have the relation \(P(|e_{ui}-\hat{e}_{ui}|<e_{ui})\geq P(\hat{e}_{ui}-2e_{ui}<0)\). Thus, we take \(P(\hat{e}_{ui}-2e_{ui}<0)\) for the analysis. For convenience, let \(g=\hat{e}_{ui}-2e_{ui}\). Considering that \(\hat{e}_{ui}\) and \(e_{ui}\) are two independent variables subject to the Gaussian distributions \(\mathcal{N}(\hat{\mu}_{ui},\hat{\sigma}_{ui}^{2})\) and \(\mathcal{N}(\mu_{ui},\sigma_{ui}^{2})\) respectively, we can easily write the distribution of \(g\) as \(\mathcal{N}(\hat{\mu}_{ui}-2\mu_{ui},\hat{\sigma}_{ui}^{2}+4\sigma_{ui}^{2})\) (Beng et al., 2016). Let \(z\) be a variable from the standard Gaussian distribution. 
We further have: \[\begin{split} P(g<0)&=P(\frac{g-(\hat{\mu}_{ui}-2\mu_{ui})}{\sqrt{\hat{\sigma}_{ui}^{2}+4\sigma_{ui}^{2}}}<-\frac{\hat{\mu}_{ui}-2\mu_{ui}}{\sqrt{\hat{\sigma}_{ui}^{2}+4\sigma_{ui}^{2}}})\\ &\geq P(z<-\frac{\hat{\mu}_{ui}-2(\hat{\mu}_{ui}-\varepsilon_{\mu})}{\sqrt{\hat{\sigma}_{ui}^{2}+4(\hat{\sigma}_{ui}^{2}+\varepsilon_{\sigma}^{2})}})\\ &=P(z<\frac{\hat{\mu}_{ui}-2\varepsilon_{\mu}}{\sqrt{5\hat{\sigma}_{ui}^{2}+4\varepsilon_{\sigma}^{2}}})\end{split} \tag{12}\] where the inequality holds, as \(\hat{\mu}_{ui}\), \(\mu_{ui}\), \(\hat{\sigma}_{ui}\), \(\sigma_{ui}\) are bounded with \(|\hat{\mu}_{ui}-\mu_{ui}|\leq\varepsilon_{\mu}\), \(|\hat{\sigma}_{ui}^{2}-\sigma_{ui}^{2}|\leq\varepsilon_{\sigma}^{2}\), \(2\varepsilon_{\mu}\leq\hat{\mu}_{ui}\). And when \(\mu_{ui}=\hat{\mu}_{ui}-\varepsilon_{\mu},\sigma_{ui}^{2}=\hat{\sigma}_{ui}^{2}+\varepsilon_{\sigma}^{2}\), the right-hand side achieves its minimum. Eq.(12) further has the following lower bound: \[\begin{split}& P(z<\frac{\hat{\mu}_{ui}-2\varepsilon_{\mu}}{\sqrt{5\hat{\sigma}_{ui}^{2}+4\varepsilon_{\sigma}^{2}}})\\ &\geq P(z<\frac{\hat{\mu}_{ui}-2\varepsilon_{\mu}}{\sqrt{5}\hat{\sigma}_{ui}+2\varepsilon_{\sigma}})\\ &=P(z<\frac{\hat{\mu}_{ui}}{\sqrt{5}\hat{\sigma}_{ui}}-(\frac{2\hat{\mu}_{ui}\varepsilon_{\sigma}}{\sqrt{5}\hat{\sigma}_{ui}(\sqrt{5}\hat{\sigma}_{ui}+2\varepsilon_{\sigma})}+\frac{2\varepsilon_{\mu}}{\sqrt{5}\hat{\sigma}_{ui}+2\varepsilon_{\sigma}}))\\ &\geq P(z<\frac{\hat{\mu}_{ui}}{\sqrt{5}\hat{\sigma}_{ui}}-(\frac{2M_{\mu}\varepsilon_{\sigma}}{\sqrt{5}m_{\sigma}(\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma})}+\frac{2\varepsilon_{\mu}}{\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma}}))\end{split} \tag{13}\] where the first inequality holds due to the fact that \(\sqrt{5}\hat{\sigma}_{ui}+2\varepsilon_{\sigma}\geq\sqrt{5\hat{\sigma}_{ui}^{2}+4\varepsilon_{\sigma}^{2}}\), while the second inequality holds since \(\hat{\mu}_{ui}\) is upper-bounded by \(M_{\mu}\) and \(\hat{\sigma}_{ui}\) is 
lower-bounded by \(m_{\sigma}\). If we let: \[\frac{\hat{\sigma}_{ui}}{\hat{\mu}_{ui}}<\left(\sqrt{5}\Phi^{-1}(\rho)+\frac{2M_{\mu}\varepsilon_{\sigma}}{m_{\sigma}(\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma})}+\frac{2\sqrt{5}\varepsilon_{\mu}}{\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma}}\right)^{-1} \tag{14}\] we can find that the following inequality holds: \[P(z<\frac{\hat{\mu}_{ui}}{\sqrt{5}\hat{\sigma}_{ui}}-(\frac{2M_{\mu}\varepsilon_{\sigma}}{\sqrt{5}m_{\sigma}(\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma})}+\frac{2\varepsilon_{\mu}}{\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma}}))\geq\rho \tag{15}\] Thus, we have \(P(|e_{ui}-\hat{e}_{ui}|<e_{ui})\geq\rho\), which completes the proof of the lemma. ## Appendix B. Proof of Lemma 2 The bias and variance of CDR can be easily obtained based on the following equations: \[\begin{split} Bias[\mathcal{L}_{CDR}]&=|\mathbb{E}_{o}[\mathcal{L}_{CDR}]-\mathcal{L}_{ideal}|\\ &=\frac{1}{|\mathcal{D}|}\Big{|}\sum_{(u,i)\in\mathcal{D}}\frac{(p_{ui}-\hat{p}_{ui})}{\hat{p}_{ui}}(\gamma_{ui}(e_{ui}-\hat{e}_{ui})+(1-\gamma_{ui})e_{ui})\Big{|}\\ Var[\mathcal{L}_{CDR}]&=\mathbb{E}_{o}[(\mathcal{L}_{CDR}-\mathbb{E}_{o}[\mathcal{L}_{CDR}])^{2}]\\ &=\frac{1}{|\mathcal{D}|^{2}}\sum_{(u,i)\in\mathcal{D}}\frac{p_{ui}(1-p_{ui})}{\hat{p}_{ui}^{2}}(\gamma_{ui}(e_{ui}-\hat{e}_{ui})^{2}+(1-\gamma_{ui})e_{ui}^{2}).\end{split} \tag{16}\] The proof of the tail bound refers to (Wang et al., 2018) but replaces \(\mathcal{L}_{DR}\) with \(\mathcal{L}_{CDR}\). We first let \(l_{ui}=\frac{o_{ui}e_{ui}}{\hat{p}_{ui}}+\gamma_{ui}\hat{e}_{ui}(1-\frac{o_{ui}}{\hat{p}_{ui}})\). Note that \(o_{ui}\) is a Bernoulli variable and thus the variable \(l_{ui}\) takes values in the interval \([\gamma_{ui}\hat{e}_{ui},\frac{e_{ui}}{\hat{p}_{ui}}+\gamma_{ui}\hat{e}_{ui}(1-\frac{1}{\hat{p}_{ui}})]\) of size \(s_{ui}=(1-\gamma_{ui})\frac{e_{ui}}{\hat{p}_{ui}}+\gamma_{ui}\frac{e_{ui}-\hat{e}_{ui}}{\hat{p}_{ui}}\). Considering that the \(o_{ui}\) are independent for different \((u,i)\), the Hoeffding inequality (Hoeffding, 1950) can be employed: \[P(|\sum_{u,i}l_{ui}-\mathbb{E}_{o}[\sum_{u,i}l_{ui}]|\geq|\mathcal{D}|\epsilon)\leq 2\exp(\frac{-2|\mathcal{D}|^{2}\epsilon^{2}}{\sum\limits_{u,i}s_{ui}^{2}}) \tag{17}\] Setting the right-hand side of the inequality to \(\kappa\) then yields Lemma 2. ## Appendix C Proof of Corollary 3.1 Here we primarily concentrate on demonstrating that CDR outperforms IPS in terms of variance and tail bound. A similar proof process can be applied to DR. Setting \(\rho_{0}=0.6\) allows us to derive a set of effective imputations \(S=\{(u,i)|P(|e_{ui}-\hat{e}_{ui}|<e_{ui})\geq\rho_{0}\}\). If \(S=\emptyset\), then CDR regresses to IPS, at least performing equivalently to IPS. Otherwise, it is always possible to identify a user-item pair \((u^{*},i^{*})\) that has the largest \(\frac{\hat{\mu}_{ui}}{\hat{\sigma}_{ui}}\) among \(S\). We can define \(\rho=\Phi(\frac{\hat{\mu}_{ui}}{\sqrt{5}\hat{\sigma}_{ui}}-(\frac{2M_{\mu}\varepsilon_{\sigma}}{\sqrt{5}m_{\sigma}(\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma})}+\frac{2\varepsilon_{\mu}}{\sqrt{5}m_{\sigma}+2\varepsilon_{\sigma}}))-eps\), under the condition that only the imputation with the highest \(\frac{\hat{\mu}_{ui}}{\hat{\sigma}_{ui}}\) is preserved, where \(eps\) denotes a sufficiently small positive value. Taking into account the continuous values of \(\hat{\mu}_{ui}\) and \(\hat{\sigma}_{ui}\), the probability of two imputations sharing the exact same value is negligible. Hence, only the imputation for the pair \((u^{*},i^{*})\) is preserved. To compare the variance and tail bounds between CDR and IPS, we can identify that the key difference pertains to the pair \((u^{*},i^{*})\). Here CDR utilizes \((\hat{e}_{ui}-e_{ui})^{2}\) while IPS utilizes \(e_{ui}^{2}\). 
As the relation \(|e_{ui}-\hat{e}_{ui}|<e_{ui}\) holds for \((u^{*},i^{*})\) with probability at least \(\rho\), and considering \(\rho>\rho_{0}\), we can conclude that CDR achieves better variance and tail bound compared to IPS.
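To make the variance comparison behind Corollary 3.1 concrete, the following Monte Carlo sketch contrasts the IPS estimator with a CDR-style estimator in the idealized case where all imputations pass the filter (\(\gamma_{ui}=1\)) and are accurate (\(|e_{ui}-\hat{e}_{ui}|<e_{ui}\)). All values are synthetic, and the setup (known propensities, \(\hat{p}_{ui}=p_{ui}\)) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 200, 20_000

e = rng.uniform(0.5, 1.5, size=n)      # true errors e_ui (synthetic)
e_hat = 0.9 * e                         # accurate imputations: |e - e_hat| < e
p = np.full(n, 0.3)                     # propensities, assumed known: p_hat = p
ideal = e.mean()                        # ideal loss over all pairs

o = rng.random((trials, n)) < p         # observation indicators o_ui ~ Bernoulli(p)
ips = (o * e / p).mean(axis=1)                      # IPS estimator, one value per trial
cdr = (e_hat + o * (e - e_hat) / p).mean(axis=1)    # CDR-style estimator, gamma_ui = 1

print(ips.mean(), cdr.mean(), ips.var(), cdr.var())
```

With accurate propensities both estimators are unbiased, but the CDR-style estimator replaces the large \(e_{ui}^{2}\) terms in the variance with the much smaller \((e_{ui}-\hat{e}_{ui})^{2}\) terms, so its empirical variance is far lower.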
2310.12673
Oort cloud perturbations as a source of hyperbolic Earth impactors
The observation of interstellar objects 1I/'Oumuamua and 2I/Borisov suggests the existence of a larger population of smaller projectiles that impact our planet with unbound orbits. We analyze an asteroidal grazing meteor (FH1) recorded by the Finnish Fireball Network on October 23, 2022. FH1 displayed a likely hyperbolic orbit lying on the ecliptic plane with an estimated velocity excess of $\sim$0.7 km$\,$s$^{-1}$ at impact. FH1 may either be an interstellar object, indicating a high-strength bias in this population, or an Oort cloud object, which would reinforce migration-based solar system models. Furthermore, under the calculated uncertainties, FH1 could potentially be associated with the passage of Scholz's binary star system. Statistical evaluation of uncertainties in the CNEOS database and study of its hyperbolic fireballs reveals an anisotropic geocentric radiant distribution and low orbital inclinations, challenging the assumption of a randomly incoming interstellar population. Orbital integrations suggest that the event on March 9, 2017 (IM2) from CNEOS may have experienced gravitational perturbation during the Scholz fly-by, contingent upon velocity overestimation within the expected range. These findings suggest that apparent interstellar meteors may, in fact, be the result of accelerated meteoroid impacts caused by close encounters with massive objects within or passing through our solar system.
Eloy Peña-Asensio, Jaakko Visuri, Josep M. Trigo-Rodríguez, Hector Socas-Navarro, Maria Gritsevich, Markku Siljama, Albert Rimola
2023-10-19T12:03:39Z
http://arxiv.org/abs/2310.12673v3
# Oort cloud perturbations as a source of hyperbolic Earth impactors ###### Abstract The observation of interstellar objects 1I/'Oumuamua and 2I/Borisov suggests the existence of a larger population of smaller projectiles that impact our planet with unbound orbits. We analyze an asteroidal grazing meteor (FH1) recorded by the Finnish Fireball Network on October 23, 2022. FH1 displayed a likely hyperbolic orbit lying on the ecliptic plane with an estimated velocity excess of \(\sim\)0.7 km s\({}^{-1}\) at impact. FH1 may either be an interstellar object, indicating a high-strength bias in this population, or an Oort cloud object, which would reinforce migration-based solar system models. Furthermore, under the calculated uncertainties, FH1 could potentially be associated with the passage of Scholz's binary star system. Statistical evaluation of uncertainties in the CNEOS database and study of its hyperbolic fireballs reveals an anisotropic geocentric radiant distribution and low orbital inclinations, challenging the assumption of a randomly incoming interstellar population. Orbital integrations suggest that the event on March 9, 2017 (IM2) from CNEOS may have experienced gravitational perturbation during the Scholz fly-by, contingent upon velocity overestimation within the expected range. These findings suggest that apparent interstellar meteors may, in fact, be the result of accelerated meteoroid impacts caused by close encounters with massive objects within or passing through our solar system. keywords: meteorites, meteors, meteoroids, comets: general, minor planets, asteroids: general + Footnote †: journal: Journal of Geophysical Research ## 1 Introduction In 2017, the Pan-STARRS1 telescope observed for the first time the reflected sunlight from a metric (\(\sim\)100 m) interstellar interloper, 1I/'Oumuamua (Meech et al., 2017). 
Two years later, the second discovery of a large object (0.4-1 km) not gravitationally bound to the Sun, comet 2I/Borisov, was announced (Guzik et al., 2020). The discoverer of 2I/Borisov himself estimated that a spherical volume of 50 au radius may have 50 bodies of more than 50 meters in diameter (Borisov and Shustov, 2021). The Pan-STARRS survey's detection of 1I/'Oumuamua allows the calculation of a number density of 0.1 au\({}^{-3}\), corresponding to \(10^{4}\) similar objects within Neptune's orbit and an influx of 3 objects per day (Jewitt et al., 2017). By the expected power laws of object size distribution, a much more abundant population of smaller interstellar objects is expected to cross our solar system, which may eventually collide with the Earth. A review of interstellar objects and interlopers can be found in Jewitt and Seligman (2023) and Seligman and Moro-Martin (2023). When an object impacts the atmosphere at hypervelocity, friction with air particles progressively ablates the outer layers, radiating large amounts of energy (Ceplecha et al., 1998; Silber et al., 2018; Trigo-Rodriguez, 2019). This luminous phase is known as a meteor or fireball, and its detection with ground-based or satellite instruments allows both the physicochemical analysis and the determination of the heliocentric orbit (Jenniskens et al., 2009; Trigo-Rodriguez et al., 2006; Dmitriev et al., 2015; Brown et al., 2016; Devillepoix et al., 2019; Borovicka et al., 2020; Colas et al., 2020; Pena-Asensio et al., 2021). Recently, the first detections of interstellar meteors were claimed from the flashes spotted by the U.S. Department of Defense (DoD) satellite sensors and published on the NASA-JPL Center for NEOs Studies (CNEOS) website (Tagliaferri et al., 1994): the so-called IM1 occurred on 2014-01-08 (Siraj and Loeb, 2022) and IM2 on 2017-03-09 (Siraj and Loeb, 2022), the latter being first identified as an interstellar candidate by Pena-Asensio et al. (2022). 
In the early 20th century, the field of meteor science was predominantly focused on determining whether most meteors originated from interstellar or interplanetary sources (Hughes, 1982). However, it was not until the 1950s that optical observations of fireballs generated by meteoroids exhibiting hyperbolic orbits were reported (Opik, 1950; Almond et al., 1951, 1952), in addition to subsequent meteor radar echo detections of interstellar micrometeoroid impacts (Weryk and Brown, 2004; Froncisz et al., 2020) and interstellar dust incoming flux measurements (Meisel et al., 2002a,b). Multiple automated meteor networks have detected numerous hyperbolic Earth impactors, most of which are attributed to instrument and method limitations (Stohl, 1970; Hajdukova, 2008; Musci et al., 2012). Hajdukova et al. (2020) reported that, of the total number of recorded events, 12.5% for CAMS, 11.9% for SonotaCO, and 5.4% for EDMOND were apparently hyperbolic. These events are clearly associated with low-quality detection and low angular elongation, so these large datasets cannot be used to discern hyperbolic impactors properly, and truly interstellar projectiles could remain hidden within the error bars. The identification of meteors with extra-solar provenance is a significant challenge, and statements about the interstellar origin of IM1 and IM2 cannot be conclusive if the uncertainties of the data are not provided (Vaubaillon, 2022). Recent studies have even suggested that IM1 could be consistent with a common chondritic impactor assuming a lower atmospheric entry velocity (Brown and Borovicka, 2023). Pena-Asensio et al. (2022) and Brown and Borovicka (2023) identified hyperbolic fireballs recorded by the United States Government (USG) satellite sensors, representing \(\sim\)1% of total meter-sized impactors, events that are potentially meteorite-droppers. 
In contrast, there is no evidence of any recovered meteorite with a different composition from that of our solar nebula. This fact opens several hypotheses: (1) CNEOS hyperbolic fireballs are spurious data; (2) There is a viable way for nearby stellar systems to be isotopically homogeneous, so extra-solar objects do not have distinctive non-chondritic elemental and isotopic compositions. The interstellar material exchange would be enough to smooth out any differences in the initial inventory of elements; (3) Incoming interstellar objects are biased towards low-strength properties and do not survive either the interstellar medium or the ablation process during the atmospheric entry; (4) There is an efficient mechanism by which objects that belong to our solar nebula acquire hyperbolic orbits. In this work, we present evidence supporting the latter hypothesis, assuming that the former remains unverified, a matter still awaiting clarification. We show that apparent interstellar meteors may actually be the result of accelerated projectile impacts due to gravitational perturbations induced by massive objects (stars, free-floating brown dwarfs, rogue planets, sub-stellar or sub-Jovian mass perturbers, primordial black holes...) shaping or visiting the outer part of our solar system. In particular, we analyze a likely hyperbolic asteroid-like grazing meteor recorded in Finland in 2022 exhibiting no deceleration, which could be associated with the Scholz passage. Additionally, we discuss the IM2 hyperbolic fireball of the CNEOS database, which may belong either to the Oort cloud or to a hypothetical Oort-like Scholz's cloud if its velocity is overestimated by 22%. All events exhibit non-cometary compositions and probably are not of extra-solar provenance, which has profound implications for solar system formation models. If they were truly interstellar in origin, the bias towards a high-strength composition of the incoming interstellar population would be reinforced. 
## 2 Methodology For the meteor science performed in this work, we use our verified Python pipeline _3D-FireTOC_ (Pena-Asensio et al., 2021, 2021) which: performs the meteor positional reduction from the stellar astrometry accounting for asymmetric radial lens distortions (Borovicka et al., 1995) and atmospheric refraction by a revised Bennett's model (Wilson, 2018), employs the plane intersection method to reconstruct the atmospheric trajectory (Ceplecha, 1987), and computes the heliocentric orbit using the N-body orbital dynamics integrator _REBOUND_ and _REBOUNDx_ packages considering the gravitational harmonics (J2, J4) of the Earth and the Moon (Rein and Spiegel, 2015; Tamayo et al., 2020). Uncertainties are calculated by generating 1,000 clones from the astrometry error fits assuming a normal distribution. For mass estimation and event classification, it is necessary to calibrate the light curve. Using the visual magnitude of the same reference stars as in astrometry, we perform aperture photometry by subtracting the local background of each one. In this way, a logarithmic fit is conducted to relate pixel values with magnitudes. We correct for atmospheric extinction and calculate the absolute magnitude of the meteor (as observed at 100 km at the zenith). The pre-atmospheric velocity is a critical quantity for orbit estimation and cannot be directly measured by optical devices. It is necessary to derive it from the distance traveled, for which a smooth function fit of the observed points is usually performed. For this purpose, we apply the function proposed by Whipple and Jacchia (1957) that allows the velocity to be obtained straightforwardly from its derivative. However, for high-altitude grazing meteors, this model does not perform properly as it cannot represent the velocity at the end of the trajectory. What is expected for the atmospheric entry of a meteoroid with these characteristics is a non-appreciable deceleration. 
For that reason, we assume that, within the error margins of the measurements, the pre-atmospheric and terminal velocities are virtually the same as a first approximation. Nevertheless, as we do not adjust the trajectory for the influence of gravity, we opt to analyze the initial third of the observed data points, conducting a linear fit with the mean value as the most likely velocity. The standard deviation of the fit serves as a measure of velocity uncertainty. As the entire trajectory can be used for velocity estimation without applying a deceleration model, this results in a smaller margin of error than expected for regular meteor velocity estimation from optical observations (Egal et al., 2017). Assuming the energy radiated by the meteor is proportional to the loss of kinetic energy in the form of mass loss (Ceplecha, 1966), which is only theoretically valid for atmospheric flight with no deceleration (Gritsevich and Koschny, 2011), the initial meteoroid mass can be computed from \[m_{0}=\int\frac{2}{\tau(v)v^{2}}I(t)dt, \tag{1}\] where \(\tau\) is the luminous efficiency, \(v\) is the velocity, \(t\) is the time, \(I=I_{0}10^{-0.4M}\) is the radiated power, \(I_{0}=1,300\;W\) is the zero-magnitude radiant power for high-speed meteors (Weryk and Brown, 2013), and \(M\) the absolute magnitude. The luminous efficiency in percent is taken from Ceplecha and McCrosky (1976): \(\log\tau=-1.51+\log v\) when \(v\geq 27\;\mathrm{km\,s^{-1}}\). However, Borovicka et al. (2022) note that contemporary luminous efficiency models lead to \(\sim\)7 times less mass for velocities above 27 \(\mathrm{km\,s^{-1}}\), so our initial meteoroid mass may be overestimated by one order of magnitude. For example, using the Revelle and Ceplecha (2001) updated model where \(\ln\tau=-1.53+\ln v\) for \(v\geq 25.372\) km s\({}^{-1}\), larger average luminous efficiencies are achieved. 
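As an illustration of Eq. (1), the snippet below integrates a synthetic light curve numerically. The magnitudes, duration, and velocity are invented for the example (they are not the FH1 measurements); the luminous efficiency follows the Ceplecha and McCrosky (1976) expression quoted above.

```python
import numpy as np

# Synthetic light curve: absolute magnitude M(t) (illustrative values only).
t = np.linspace(0.0, 6.0, 200)                 # time, s
M = 2.0 * ((t - 3.0) / 3.0) ** 2 - 2.0         # parabolic curve, peak M = -2

I0 = 1300.0                                    # W, zero-magnitude radiant power
I = I0 * 10.0 ** (-0.4 * M)                    # radiated power I(t), W

v = 60.0e3                                     # m/s, constant (no deceleration)
tau = 10.0 ** (-1.51) * (v / 1e3) / 100.0      # log tau[%] = -1.51 + log v[km/s]

# Eq. (1): m0 = integral of 2 I(t) / (tau v^2) dt (trapezoidal rule)
y = 2.0 * I / (tau * v ** 2)
m0 = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
print(f"initial mass ~ {m0 * 1e3:.1f} g")
```

For a fast, faint event like this synthetic one, the integral yields a sub-gram to gram-scale mass; as noted above, modern luminous efficiency models would reduce it further.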
Following Ceplecha and McCrosky (1976), meteors can be classified according to the so-called \(P_{E}\) criterion: \[P_{E}=\log\rho_{e}-0.42\log m_{0}+1.49\log v_{\infty}-1.29\log\cos z_{R}, \tag{2}\] where \(\rho_{e}\) is the air density at terminal height, \(v_{\infty}\) is the pre-atmospheric velocity, and \(z_{R}\) is the apparent zenith distance of the radiant. A more physical, dimensionless form of this criterion exists (Moreno-Ibanez et al., 2020); however, as the analyzed event presents challenges in uniquely determining its atmospheric flight parameters, we turn to the original \(P_{E}\) criterion form. From the classical third-order system describing the meteor body deceleration, numerous efforts have been made to define a height-velocity relation (Kulakov and Stulov, 1992; Gritsevich and Stulov, 2006; Gritsevich, 2007, 2008, 2009; Turchak and Gritsevich, 2014; Lyytinen and Gritsevich, 2016; Sansom et al., 2019; Boaca et al., 2022; Pena-Asensio et al., 2023). Following these works, the dynamics of a meteor can be characterized from the analytical solution using two dimensionless parameters, namely the ballistic coefficient \(\alpha\) and the mass loss parameter \(\beta\). It is possible to express \(\alpha\) in terms of the meteoroid bulk density \(\rho_{m}\), the pre-atmospheric shape factor \(A_{0}\), the drag coefficient \(c_{d}\), the atmospheric density at the sea level \(\rho_{0}\), the height of the homogeneous atmosphere \(h_{0}=7.16\ km\), the meteoroid mass \(m_{0}\), and the slope of the trajectory with the horizon \(\gamma\): \[\alpha=\frac{1}{2}\frac{c_{d}A_{0}\rho_{0}h_{0}}{m_{0}^{1/3}\rho_{m}^{2/3}\sin \gamma}. \tag{3}\] Assuming that the ablation of the body due to its rotation is uniform over the entire surface of the meteoroid (Bouquet et al., 2014), the mass loss parameter can be calculated directly from the ablation coefficient \(\sigma\) and the entry velocity: \[\beta=\frac{1}{6}\sigma v_{\infty}^{2}. 
\tag{4}\] We selected a uniformly distributed range of values for the ablation coefficient between \(0.014\ s^{2}\,km^{-2}\) and \(0.042\ s^{2}\,km^{-2}\) suitable for a rocky body based on both the classical single-body ablation and contemporary mass-loss models (Ceplecha et al., 1998; Vida et al., 2018). Due to the high altitudes and velocities of the atmospheric flight with an enhanced mass loss under the condition of minimal deceleration, standard dynamical fits, such as \(\alpha\)-\(\beta\), may not perform properly as they are often organized as a function of velocity and are not primarily intended for high-height grazers, that is, for non-decelerating flights. Nevertheless, we can use the asymptotic form of the solution obtained to describe meteor trajectories at large values of the mass loss parameter given the formal fulfillment of the condition \(\ln(2\alpha\beta)<h_{e}/h_{0}<\infty\), where \(h_{e}\) is the end (terminal) height (Stulov, 1997, 1998; Gritsevich and Popelenskaya, 2008; Stulov, 2004; Moreno-Ibanez et al., 2015). To account for the possible change in velocity at the end of the luminous trajectory, we use the latest modification of this solution (Moreno-Ibanez et al., 2015; Gritsevich et al., 2016; Moreno-Ibanez et al., 2017): \[v_{e^{\prime}}=v_{\infty}\left(\frac{\ln(1-2\alpha\beta e^{-h_{e}/h_{0}})}{ \beta}+1\right)^{1/2}. \tag{5}\] Using _FireOwl_ analysis software (Visuri et al., 2020; Visuri and Gritsevich, 2021), a Finnish Fireball Network tool that performs numerical integration of the meteoroid trajectory (Moilanen et al., 2021; Kyrylenko et al., 2023), we recompute and contrast all the results. Finally, we check the dynamic association with some meteoroid stream or parent body by means of the well-known \(D_{D}\) orbital dissimilarity criterion proposed by Drummond (1981). 
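Under stated assumptions, Eqs. (2)-(5) with the FH1 values reproduce the published numbers; the air density at terminal height, and \(c_{d}=1.3\) with \(A_{0}=1.21\) (sphere), are illustrative values not given explicitly in the text:

```python
import math

# Eq. (2): PE criterion. rho_e is an assumed air density at the ~112.6 km
# terminal height, in units chosen to reproduce the published PE value.
rho_e = 6.3e-8                     # kg/m^3 (assumed)
m0_g = 1312.0                      # g, photometric initial mass
v_inf = 73.7                       # km/s
z_R = math.radians(90.0 - 3.588)   # apparent radiant zenith distance
PE = (math.log10(rho_e) - 0.42 * math.log10(m0_g)
      + 1.49 * math.log10(v_inf) - 1.29 * math.log10(math.cos(z_R)))

# Eq. (3): ballistic coefficient, with assumed c_d = 1.3 and A0 = 1.21 (sphere).
c_d, A0 = 1.3, 1.21
rho0, h0 = 1.29, 7.16              # sea-level density (kg/m^3), scale height (km)
m0, rho_m = 1.312, 3700.0          # mass (kg), bulk density (kg/m^3)
gamma = math.radians(3.588)
alpha = 0.5 * c_d * A0 * rho0 * (h0 * 1e3) / (m0**(1/3) * rho_m**(2/3) * math.sin(gamma))

# Eq. (4): mass loss parameter, midpoint of the 0.014-0.042 s^2/km^2 sigma range.
sigma = 0.028
beta = sigma * v_inf**2 / 6.0

# Eq. (5): terminal velocity of the asymptotic solution.
h_e = 112.6                        # km
v_e = v_inf * math.sqrt(math.log(1 - 2 * alpha * beta * math.exp(-h_e / h0)) / beta + 1)

print(f"PE = {PE:.2f}, alpha = {alpha:.0f}, beta = {beta:.1f}, "
      f"dv = {(v_inf - v_e) * 1e3:.1f} m/s")
```

With the larger Revelle and Ceplecha (2001) efficiency (187 g initial mass), the same expressions give \(\alpha\approx 849\) and a roughly doubled terminal velocity drop, matching the second set of values reported below.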
## 3 Results

On October 23, 2022, at 19:38:34 (UTC), a very fast grazing meteor, hereafter FH1, flew through the sky of Finland and terminated over the Gulf of Bothnia. The event was observed by 3 stations of the Finnish Fireball Network (FFN) (Gritsevich et al., 2014; Trigo-Rodriguez et al., 2015; Lyytinen and Gritsevich, 2016; Visuri and Gritsevich, 2021; Moilanen et al., 2021) and 1 image observation from the public: Nyrola (Sony IMX291; 1280x720 px; 4 mm f/0.95), Tampere (Hikvision DS-2CD2T87G2-L; 3840x2160 px; 2.8 mm f/1.0), Vaala (Watec 902H; 768x576 px; 3.8 mm f/0.8), and Sastamala (NIKON D750; 6016x4016 px; 14 mm f/4.0). These observations have been used in this study (Table 1). Figure 1 shows two blended images of FH1 from the videos recorded by the Sastamala and Nyrola stations. The luminous phase of FH1 started at an altitude of 126.55±0.03 km (24.3104±0.0008° E, 63.6677±0.0002° N), traveling a distance of 409.47±0.09 km until ablation ended at an altitude of 112.60±0.04 km (18.558±0.002° E, 61.2358±0.0005° N). The flight angle with respect to the local horizon (i.e., the slope) was 3.588±0.013°, with an azimuth of 229.915±0.007° (zero being north, positive in the clockwise direction). Figure 2 shows the 3D scaled atmospheric flight reconstruction. The geocentric radiant, namely the corrected meteor anti-apex, is calculated outside the gravitational influence of the Earth and the Moon (at 10 times the Earth's Hill sphere), with right ascension \(\alpha_{R}\) = 117.160±0.009° and declination \(\delta_{R}\) = 19.444±0.020°. The best convergence angle between the observations is ~50° (between the Sastamala and Vaala stations); plane intersections with angles smaller than 5° are excluded, which occurred only between the Sastamala and Tampere stations.
Table 5 in the Appendix shows the position vectors of the initial and final points of FH1's luminous path in the Earth-centered Earth-fixed coordinate system, as recorded by each of the four stations. Figure 3 shows the apparent point-to-point velocities, together with the fitted velocity (73.7±0.6 km s⁻¹) and the parabolic velocity threshold for this specific atmospheric trajectory (~73 km s⁻¹). For 0.2-second intervals, the Nyrola detection has 71% of all instant velocity measurements above this threshold, while for Vaala it is 64%. The Nyrola and Vaala stations record at 25 fps, Tampere at 2.5 fps, and the Sastamala observation is a single image with 5 seconds of exposure. Note that the apparent dispersion of the point-to-point velocities depends on the time interval selected: paradoxically, the smaller the interval, the greater the dispersion, but the more accurate the final result. In the Appendix, Tables 6 and 7 offer the detected positions of FH1 for each frame, represented in a horizontal coordinate system comprising azimuth and elevation. mass is expected. Nonetheless, utilizing the fundamental equations of motion, we calculate the descent of an object due to gravitational acceleration under these conditions to be 147 m.

Figure 1: Blended images of the FH1 videos recorded by the Sastamala station (top) and the Nyrola station (bottom). The green illuminated area is an aurora borealis.

From the inbound velocity, we estimate the heliocentric osculating orbital elements at impact to be the following: semi-major axis \(a=-8\pm\)5 au, eccentricity \(e=1.07\pm\)0.06, inclination \(i=177.18\pm\)0.04°, longitude of the ascending node \(\Omega=30.10390\pm\)0.00010°, argument of periapsis \(\omega=16.1\pm\)0.8°, and true anomaly \(f=343.9\pm\)0.8°. The orbit shows no close encounters with any planets. Figure 4 illustrates the obtained heliocentric hyperbolic orbit.
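The quoted 147 m descent is consistent with a simple free-fall estimate over the 5.56 s flight; the mid-flight altitude used to evaluate \(g\) is an assumption of this sketch:

```python
import math

# Back-of-the-envelope check of the ~147 m gravitational drop quoted in the text:
# free-fall over the 5.56 s flight with g evaluated at ~120 km altitude (assumed).
G, M_earth, R_earth = 6.674e-11, 5.972e24, 6.371e6
h = 119.6e3                     # m, mid-flight altitude (between 126.55 and 112.60 km)
t = 5.56                        # s, flight duration

g = G * M_earth / (R_earth + h)**2
drop = 0.5 * g * t**2
print(f"g = {g:.2f} m/s^2, drop = {drop:.0f} m")
```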
Figure 5 shows the meteoroid absolute magnitude for every frame from the Nyrola and Vaala stations, which are in good agreement with each other. The FH1 light curve has a mean luminous efficiency of \(\tau=2.278\pm 0.018\) %, a peak brightness of \(M=-3.0\pm 0.5\), and yields a photometric initial mass of \(m_{0}=1,312\pm 54\) g. Using the updated Revelle and Ceplecha (2001) model yields an average luminous efficiency of 15.96±0.13 %. Note that the meteoroid underwent a smooth and gradual ablation without any flares or catastrophic disruption, resulting in the absence of saturated pixels in all recordings. For the FH1 meteor we obtain \(P_{E}=-4.173\pm 0.009\). Ceplecha and McCrosky (1976) classified meteors as type I when \(P_{E}>\) -4.6, which Ceplecha et al. (1998) assigned to ordinary chondrites. Note that the photometric mass used for the \(P_{E}\) classification must be computed using the luminous efficiency from Ceplecha and McCrosky (1976) in Eq. 2. The obtained value is in good agreement with our estimated luminous efficiency, as Revelle and Ceplecha (2001) found that the luminous efficiency of asteroidal fireballs should be around 5.57%, and 1.35% for carbonaceous chondrite-like fireballs. This is also consistent with similar works (Subasinghe et al., 2017; Drolshagen et al., 2021). In any case, the FH1 meteoroid has a consistency equal to or greater than that of ordinary chondrites, tending towards high-strength materials. Conservatively, taking an asteroid-like bulk density of 3,700 kg m⁻³, we compute an initial meteoroid diameter of 8.75±0.12 cm, which in contrast may be 4.59±0.06 cm based on modern luminous efficiency models.

Figure 2: 3D scaled atmospheric flight reconstruction of the FH1 meteoroid using the Python software _3D-FireTOC_.
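Both quoted diameters follow from the two photometric masses under a spherical-shape approximation, which is an assumption of this sketch:

```python
import math

# Sphere-equivalent diameter from photometric mass and the assumed 3,700 kg/m^3
# bulk density; the spherical shape is an illustrative assumption.
def diameter_cm(mass_g, rho_kg_m3=3700.0):
    volume = (mass_g / 1000.0) / rho_kg_m3                      # m^3
    return 200.0 * (3.0 * volume / (4.0 * math.pi)) ** (1 / 3)  # cm

print(f"{diameter_cm(1312):.2f} cm, {diameter_cm(187):.2f} cm")  # ~8.8 and ~4.6 cm
```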
Given the inferred meteoroid size and bulk density, an asteroidal origin seems likely (Blum et al., 2006; Trigo-Rodriguez and Llorca, 2006), although it could also be compatible with rocky pebbles ejected by cometary disintegration during inner solar system trips (Trigo-Rodriguez and Blum, 2022).

Figure 3: Apparent velocity points of FH1 derived from the Nyrola and Vaala observations, computed at intervals of 0.2 seconds, the fitted velocity, and the parabolic threshold. The error bars are multiplied by a factor of 10.

From Eq. 3, we obtain a value of \(\alpha\) of 444±4 or 849±8, depending on which of the previously calculated initial photometric masses is used, and \(\beta=25\pm 7\), far from being a meteorite-dropper event (Gritsevich et al., 2012; Sansom et al., 2019; Boaca et al., 2022). Eq. 5 yields a velocity decrease of ~5 and ~10 m s⁻¹ for the two cases, which is below the resolution of the measurements and within the uncertainty margin estimated for the velocity along the flight. We corroborate with the _FireOwl_ pipeline that an asteroid-like meteoroid with the estimated characteristics and no catastrophic disruption would behave in agreement with the observations. However, differences between compositions are almost marginal, as the projectile experiences low air drag during its ~5.56 seconds of flight. Therefore, on this occasion, the dynamic models cannot provide conclusive results concerning the meteoroid density.

Figure 4: Osculating heliocentric orbit of the FH1 meteoroid (J2000). The arrow at the bottom right shows the direction to the point of the vernal equinox.

The two closest candidates are the Taurids swarm and the comet P/2015 A3, with \(D_{D}\)=0.160 and \(D_{D}\)=0.177, respectively. These values are well above the typically accepted threshold (Galligan, 2001), so this event is definitely not associated with any known parent body or meteoroid stream.
In summary, FH1 was a non-cometary centimeter-sized meteoroid on an inbound retrograde, likely hyperbolic orbit lying almost on the ecliptic plane. It exhibits no close encounter with any known planet and a velocity excess at impact of ~0.7 km s⁻¹ with respect to the barycentre of the solar system. Photometry of the meteor phase yields an asteroid-like (or higher) bulk density. FH1 is the first likely hyperbolic event detected by the FFN since the beginning of the year 2004, with over 2,000 manually analyzed meteors. All computed parameters can be found in Table 2.

Figure 5: Photometry of FH1 from the Nyrola and Vaala stations. The mean uncertainty of the magnitude is 0.44 and 0.55, respectively.

Table 2: Atmospheric flight, photometric, physical, and heliocentric orbital (J2000) computed parameters of the FH1 grazing meteor. Cells with two values, separated by a slash, correspond to the two luminous efficiency models considered: Ceplecha and McCrosky (1976) on the left and Revelle and Ceplecha (2001) on the right.

| Parameter | Symbol | Value |
| --- | --- | --- |
| Reference time (UTC) | \(t_{0}\) | 2022-10-23 19:38:34 |
| Velocity (km s⁻¹) | \(v_{\infty}\), \(v_{0}\), \(v_{e}\) | 73.7±0.6 |
| Initial latitude (°) | \(\varphi_{0}\) | 63.6677±0.0002 N |
| Initial longitude (°) | \(\lambda_{0}\) | 24.3104±0.0008 E |
| Initial height (km) | \(h_{0}\) | 126.55±0.03 |
| Final latitude (°) | \(\varphi_{e}\) | 61.2358±0.0005 N |
| Final longitude (°) | \(\lambda_{e}\) | 18.558±0.002 E |
| Final height (km) | \(h_{e}\) | 112.60±0.04 |
| Duration (s) | \(\Delta t\) | 5.56 |
| Length (km) | \(\Delta l\) | 409.47±0.09 |
| Slope (°) | \(\gamma\) | 3.588±0.013 |
| Azimuth (°) | \(A\) | 229.915±0.007 |
| Peak brightness | \(M\) | -3.0±0.5 |
| Luminous efficiency (%) | \(\tau\) | 2.278±0.018 / 15.96±0.13 |
| Ablation coef. (s² km⁻²) | \(\sigma\) | 0.014 / 0.042 |
| Initial phot. mass (g) | \(m_{0}\) | 1,312±54 / 187±7 |
| \(P_{E}\) criterion | \(P_{E}\) | -4.173±0.009 |
| Meteoroid density (kg m⁻³) | \(\rho_{m}\) | 3,700 |
| Initial diameter (cm) | \(D\) | 8.75±0.12 / 4.59±0.06 |
| Geo. velocity (km s⁻¹) | \(v_{R}\) | 72.7±0.6 |
| Geo. radiant (RA) (°) | \(\alpha_{R}\) | 117.160±0.009 |
| Geo. radiant (Dec) (°) | \(\delta_{R}\) | 19.444±0.020 |
| Hyp. excess (km s⁻¹) | \(\Delta_{v}\) | ~0.7 |
| Hel. velocity (km s⁻¹) | \(v_{H}\) | 43.0±0.6 |
| Semi-major axis (au) | \(a\) | -8±5 |
| Eccentricity | \(e\) | 1.07±0.06 |
| Inclination (°) | \(i\) | 177.18±0.04 |
| Long. of the asc. node (°) | \(\Omega\) | 30.10390±0.00010 |
| Argument of periapsis (°) | \(\omega\) | 16.1±0.8 |
| Periapsis distance (au) | \(q\) | 0.9748±0.0008 |
| True anomaly (°) | \(f\) | 343.9±0.8 |
| Ballistic coef. | \(\alpha\) | 444±4 / 849±8 |
| Mass-loss parameter | \(\beta\) | 25±7 |
| Deceleration (m s⁻²) | a | 0.87±0.01 / 1.67±0.02 |

## 4 Discussion

The distinctive attributes of FH1, primarily its remarkably high eccentricity, could conceivably prompt conjectures regarding its interstellar origin. Notably, such speculations have been posited recently in the context of certain hyperbolic fireball events cataloged in the CNEOS database.
However, a critical observation emerges when analyzing the orbital inclinations, which appear as a key indicator urging caution before leaping to interstellar suppositions. We first need to investigate more plausible scenarios, including the possibility that these intriguing projectiles are either indigenous to our solar system, subject to measurement inaccuracies, or potentially subjected to gravitational accelerations. In this section, owing to their similarity with FH1, we discuss the possible interstellar origin of some CNEOS fireballs, considering their uncertainties as constrained by events detected independently by ground-based stations. Additionally, we put forth the hypothesis that hyperbolic Earth impactors may be celestial bodies native to our solar nebula that have been perturbed by close encounters with massive objects. More precisely, we propose that IM2's trajectory aligns exceptionally well in time and direction with the Scholz system fly-by when considering an overestimated velocity.

### CNEOS 'interstellar' fireballs

As of October 2023, the CNEOS public database includes ~956 fireballs starting from 1988. Among them, 6 events have hyperbolic orbits (see Table 3). These interstellar candidates have orbital inclinations lower than 25° (with an average of 12±9°). As interstellar interlopers may originate from any part of the sky, the expected inclination distribution should be an isotropic probability density function, which follows a sinusoidal distribution (Engelhardt et al., 2017) and is therefore uniform in \(\cos i\). This implies that the random likelihood that \(n\) orbital inclinations fulfill \(\mid i\mid\leq\theta^{\circ}\), where \(i\in[-\pi/2,\pi/2]\), is \((1-\cos\theta)^{n}\). Consequently, the probability that the orbits of 1I/'Oumuamua and 2I/Borisov both had inclinations below \(\mid-58^{\circ}\mid\) (the larger of the two, that of 1I/'Oumuamua) was ~22%.
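The two probabilities quoted here and below follow directly from the \((1-\cos\theta)^{n}\) expression:

```python
import math

# Probability that n isotropically drawn inclinations (uniform in cos i,
# i in [-90, 90] deg) all satisfy |i| <= theta: (1 - cos theta)**n.
def prob_all_within(theta_deg, n):
    return (1.0 - math.cos(math.radians(theta_deg)))**n

print(f"{prob_all_within(58, 2):.2%}")    # 'Oumuamua + Borisov case, ~22%
print(f"{prob_all_within(25, 6):.7%}")    # six CNEOS hyperbolic fireballs, ~0.00007%
```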
By comparison, the likelihood of detecting six interstellar objects with orbital inclinations smaller than 25° is ~0.00007%. This is without considering that all 6 events are in prograde orbits, which would be expected for only 50% of extra-solar visitors and would further reduce the likelihood. Therefore, multiple options can be inferred: there are shortcomings in the CNEOS data, these hyperbolic fireballs belonged to our solar system, or they came from sources with a directional bias. As error bars are not provided for the USG sensor data, it is necessary to narrow down the uncertainties and determine the frequency of spurious data in the database. Devillepoix et al. (2019) reported that CNEOS fireball radiants are off for most events, sometimes by only a couple of degrees but other times by as much as 90°. They compared the radiants of 9 events recorded simultaneously by ground-based stations and found that the velocity vector of 4 of them was incorrectly measured by the USG sensors: Buzzard Coulee (2008-11-21 00:26:44), 2008 TC3 (2008-10-07 02:45:45), DN150102 - Kalabity (2015-01-02 13:39:11), and Crawford Bay (2017-09-05 05:11:27). As we calculate a different radiant for the Crawford Bay event based on CNEOS data and, as pointed out in Pena-Asensio et al. (2022), 2008 TC3 was missing a minus sign in the z velocity component, we recompute the mean radiant position and velocity deviations of CNEOS fireballs, also including the recently independently analyzed events Saricicek (2015-09-02 20:10:30) (Unsalan et al., 2019), Ozerki (2018-06-21 01:16:20) (Maksimova et al., 2020; Kartashova et al., 2020), Vinales (2019-02-01 18:17:10) (Zuluaga et al., 2019), 2019 MO (2019-06-22 21:25:48) (JPL Horizons, 2023), Flensburg (2019-09-12 12:49:48) (Borovicka et al., 2021), Novo Mesto (2020-02-28 09:30:34) (Vida et al., 2021), Adalen (2020-11-07 21:27:04) (Kyrylenko et al., 2023), and 2022 EB5 (2022-03-11 21:22:46) (JPL Horizons, 2023).
Table 3: CNEOS hyperbolic fireballs with geocentric radiant, heliocentric velocity, semi-major axis, eccentricity, and orbital inclination.

| Date (UTC) | \(\alpha_{R}\) (°) | \(\delta_{R}\) (°) | \(V_{h}\) (km s⁻¹) | a (au) | e | i (°) |
| --- | --- | --- | --- | --- | --- | --- |
| 2022-07-28 01:36:08 | 276.5 | 14.8 | 46.9 | -1.98 | 1.44 | 23.47 |
| 2021-05-06 05:54:27 | 62.5 | 12.2 | 44.1 | -2.64 | 1.15 | 6.05 |
| 2017-03-09 04:16:37 (IM2) | 170.6 | 34.1 | 50.1 | -1.22 | 1.57 | 24.03 |
| 2015-02-17 13:19:50 | 339.3 | -9.6 | 44.0 | -1.45 | 1.10 | 1.12 |
| 2014-01-08 17:05:34 (IM1) | 88.9 | 13.3 | 61.1 | -0.46 | 2.42 | 10.05 |
| 2009-04-10 18:42:45 | 107.8 | 4.5 | 45.5 | -1.91 | 1.33 | 6.52 |

Table 4: Comparison of 17 fireballs detected by USG sensors and published on the CNEOS website with independent ground-based analyses. The geocentric radiant in right ascension and declination, the entry velocity, the radiant position angle deviation, and the velocity deviation are shown. The papers used as reference are listed in the last column. Some of the geocentric parameters have been calculated from the apparent atmospheric data.

| Event | Date (UTC) | \(\alpha_{R}^{USG}\) (°) | \(\delta_{R}^{USG}\) (°) | \(v^{USG}\) (km s⁻¹) | \(\alpha_{R}^{REF}\) (°) | \(\delta_{R}^{REF}\) (°) | \(v^{REF}\) (km s⁻¹) | \(\Delta_{R}\) (°) | \(\Delta v\) (%) | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2008 TC3 | 2008-10-07 02:45:45 | 351.6 | 9.06 | 13.3 | 316.8 | 7.2 | 14.22 | 34.49 | 7.09 | Scheirich et al. (2010) |
| Buzzard Coulee | 2008-11-21 00:26:44 | 184.8 | 50.8 | 12.9 | 300 | 75 | 18.6 | 47.24 | 30.65 | Miley (2010) |
| Kosice | 2010-02-28 22:44:47 | 114.5 | 33.5 | 15.1 | 114.3 | 29 | 15 | 4.5 | 0.67 | Borovicka et al. (2013b) |
| Chelyabinsk | 2013-02-15 03:02:21 | 334.5 | -1.5 | 18.6 | 328.28 | 0.28 | 19.03 | 6.47 | 2.26 | Borovicka et al. (2013a) |
| Kalabity | 2015-01-02 13:39:11 | 53.8 | 33.5 | 18.1 | 64.3 | 51.7 | 15.4 | 19.73 | 17.53 | Devillepoix et al. (2019) |
| Romania | 2015-01-07 01:05:59 | 118.7 | 6.1 | 35.7 | 113.8 | 10.13 | 27.66 | 6.31 | 28.6 | Borovicka et al. (2017) |
| Saricicek | 2015-09-02 20:10:30 | 61.1 | 52.2 | 21.1 | 264.8 | 59.4 | 13.0 | 73.6 | 62.31 | Unsalan et al. (2019) |
| Baird Bay | 2017-06-30 12:46:45 | 273.6 | -16.1 | 15.2 | 272.14 | -12.5 | 15.1 | 3.87 | 0.66 | Devillepoix et al. (2019) |
| Crawford Bay | 2017-09-05 05:11:27 | 203.7 | 1.8 | 14.7 | 205.12 | 3.13 | 16.5 | 1.94 | 10.91 | Hildebrand et al. (2018) |
| 2018 LA | 2018-06-02 16:44:12 | 248.7 | -9.7 | 11.8 | 244.19 | -10.32 | 12.37 | 4.48 | 4.65 | Jenniskens et al. (2021) |
| Ozerki | 2018-06-21 01:16:20 | 310.6 | 44.5 | 14.4 | 307.51 | 43.11 | 14.9 | 2.63 | 3.36 | Kartashova et al. (2020) |
| Vinales | 2019-02-01 18:17:10 | 325.4 | -42.8 | 16.3 | 324.72 | -41.43 | 16.9 | 1.46 | 3.55 | Zuluaga et al. (2019) |
| 2019 MO | 2019-06-22 21:25:48 | 217.2 | -16.1 | 9.6 | 237.3 | -15.6 | 9.3 | 19.33 | 3.23 | JPL Horizons (2023) |
| Flensburg | 2019-09-12 12:49:48 | 183.1 | -21.3 | 14.6 | 183.5 | -18.55 | 14.77 | 2.78 | 1.15 | Borovicka et al. (2021) |
| Novo Mesto | 2020-02-28 09:30:34 | 332 | 2.0 | 21.5 | 330.92 | 2.32 | 22.098 | 1.13 | 2.71 | Vida et al. (2021) |
| Adalen | 2020-11-07 21:27:04 | 359.2 | 47.3 | 12.4 | 358.1 | 47.6 | 13.5 | 0.8 | 8.15 | Kyrylenko et al. (2023) |
| 2022 EB5 | 2022-03-11 21:22:46 | 157.5 | 38.1 | 13.2 | 157.0 | 37.6 | 13.0 | 0.64 | 1.54 | JPL Horizons (2023) |
From the comparison of independently analyzed events presented in Table 4, the CNEOS radiants show a mean deviation of 21.3±43.2° in right ascension and 4.8±6.8° in declination, and a mean entry velocity deviation of 11.1±15.6%. It can be deduced from the standard deviations that the errors do not follow a normal distribution, as they strongly depend on the relative geometry of the sensor and the fireball. Eliminating the fireballs that appear as outliers in radiant position and velocity errors (2008 TC3, Buzzard Coulee, Kalabity, Romania, Saricicek, 2019 MO), the deviations become Gaussian, with the radiant deviation reduced to 1.9° in right ascension and 1.7° in declination, and a 3.6% deviation in velocity, which is the scenario we assume for the study of the CNEOS fireballs. This assumption remains valid solely under the condition that the hyperbolic CNEOS events are part of the same distribution as the events characterized by elliptical orbits in Table 4; hence, if these events are indeed outliers, the results presented here should be ignored. Consequently, 65% of CNEOS events provide measurements accurate enough for a rough estimation of heliocentric orbits. However, this also implies that the results for two out of the six hyperbolic fireballs will be inaccurate, rendering the estimated errors inapplicable to them. Note that there is no significant correlation between the velocity error and the actual velocity value: a linear fit yields a coefficient of determination as low as 0.027. In principle, we should therefore not assume that the faster hyperbolic events will necessarily exhibit larger errors. The hyperbolic events 2009-04-10 18:42:45, 2014-01-08 17:05:34 (IM1), 2017-03-09 04:16:37 (IM2), and 2021-05-06 05:54:27 appear to be somewhat clustered.
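The quoted mean velocity deviations can be reproduced from the \(\Delta v\) column of Table 4:

```python
import statistics

# Velocity deviations (%) from Table 4; reproduces the quoted mean entry-velocity
# deviation before and after removing the six outlier events.
dv = {"2008 TC3": 7.09, "Buzzard Coulee": 30.65, "Kosice": 0.67,
      "Chelyabinsk": 2.26, "Kalabity": 17.53, "Romania": 28.6,
      "Saricicek": 62.31, "Baird Bay": 0.66, "Crawford Bay": 10.91,
      "2018 LA": 4.65, "Ozerki": 3.36, "Vinales": 3.55, "2019 MO": 3.23,
      "Flensburg": 1.15, "Novo Mesto": 2.71, "Adalen": 8.15, "2022 EB5": 1.54}
outliers = {"2008 TC3", "Buzzard Coulee", "Kalabity", "Romania", "Saricicek", "2019 MO"}

all_mean = statistics.mean(dv.values())
kept = [v for k, v in dv.items() if k not in outliers]
print(f"all: {all_mean:.1f} %, without outliers: {statistics.mean(kept):.1f} %")
```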
The geocentric radiants of these 4 fireballs are suspiciously distributed around the Gemini constellation, with an average radiant distance to the constellation center of 34.2°. We check the likelihood that 4 out of 6 randomly selected events from the CNEOS database (the 255 fireballs analyzed in Pena-Asensio et al. (2022)) have a lower mean distance to Gemini (10,000 draws). We find that these 4 hyperbolic events lie 1.7σ from the mean, which denotes a probability of 6.9% of having occurred by chance (see Figure 6). For a completely isotropic radiant distribution, the probability of having obtained a smaller distance for 4 of 6 events is 4.3%. However, this does not strictly imply that they are associated with each other, as the anisotropic radiant distribution could be explained by both observational bias and solar-system-induced dynamics.

Figure 6: Result of randomly selecting 6 CNEOS events and computing the mean minimum distance to the Gemini constellation center of 4 of them. The calculation is repeated 10,000 times. The draws with distances equal to or smaller than those of the 6 CNEOS hyperbolic events are shown in cyan. The fit to a Gaussian distribution is shown in red.

### The massive objects fly-by hypothesis

Not all gravitationally unbound objects from our solar system are necessarily interstellar interlopers. There are different mechanisms capable of accelerating objects native to our solar nebula into hyperbolic orbits, among them the secular perturbations induced by the Galactic disk or the impulsive interaction of a massive extra-solar body during a fly-by (Fouchard et al., 2011; Krolikowska and Dybczynski, 2017). Furthermore, close encounters with the Sun or the giant planets may result in an inbound excess velocity, although these are not frequent enough to explain the observed hyperbolic meteor orbits (Hajdukova et al., 2014, 2019).
Mercury has also been identified as a possibly efficient producer of hyperbolic projectiles to Earth (Wiegert, 2014). Other, more exotic hypotheses suggest unseen stellar companions to the Sun (Davis et al., 1984) or unknown planets (Socas-Navarro, 2023) as sources of hyperbolic Earth impactors. When the idea of the Oort cloud, a very distant region hosting long-period comets, was introduced, a mechanism to shorten their perihelia was also pointed out: inbound hyperbolic injection produced by passing stars (Oort, 1950). Indeed, recent studies suggest that close stellar encounters send accelerated bodies into the planetary zone (Dybczynski and Krolikowska, 2022). Higuchi and Kokubo (2020) showed that celestial bodies of sub-stellar mass (down to approximately 0.2 Jupiter masses) possess the ability to divert Oort cloud comets into hyperbolic trajectories characterized by small eccentricity but large perihelion distance. Other stellar systems may also have their own Oort-like clouds, which could induce an influx of extra-solar objects through the planetary region when approaching the Sun (Stern, 1987). The most recent stellar fly-by to our solar system was the low-mass binary star WISE J072003.20-084651.2, also known as Scholz's star (hereafter Scholz), which crossed the outer layers of the Oort cloud at \(52^{+23}_{-14}\) kau about \(70^{+15}_{-10}\) kya (Scholz, 2014; Burgasser et al., 2015; Mamajek et al., 2015). de la Fuente Marcos et al. (2018) analyzed hyperbolic small bodies using data provided by JPL's Solar System Dynamics Group Small-Body Database and the Minor Planet Center (MPC). They found strong anisotropies in the geocentric radiant distribution, with a statistically significant overdensity of high-speed radiants towards the constellation of Gemini, which appears to be consistent in time and location with the Scholz fly-by.
The geocentric radiant of FH1 falls precisely in this constellation, as do 4 of the 6 hyperbolic fireballs from the CNEOS database, which lie close to Gemini or to the recent Scholz motion direction. We test the compatibility of this hypothesis by integrating the FH1 grazer and the 6 CNEOS interstellar candidates backward in time for 109,000 years, to cover the estimated upper time limit of the Scholz close encounter (85,000 years). To this end, we use an orbital integrator based on a leapfrog scheme with different time steps to properly resolve the Earth-induced zenith attraction prior to the impact (Socas-Navarro, 2019). We account for the gravitational influence of the Sun, Earth, Moon, Mars, Jupiter, Saturn, Uranus, and Neptune by querying ephemerides from the JPL HORIZONS system. Figure 7 shows the apparent motion of the objects starting from their geocentric radiants over the considered time span. None of the events experienced a close encounter with a planet; only the 2021-05-06 05:54:27 fireball passed, ~5 years ago, at ~3 Hill radii from Uranus. New results definitively dismiss the possibility that Scholz may have penetrated the dynamically active inner Oort cloud region (<20 kau), but support the notion that it passed through the outer Oort cloud, where objects can have stable orbits (Dupuy et al., 2019; de la Fuente Marcos and de la Fuente Marcos, 2022). Given the apparently better constrained time (~80 kya) and distance (~68 kau) for the Scholz close encounter, and its current separation from the Sun (6.80 pc), a linear velocity with respect to the solar system barycenter of \(v^{*}=82.4\pm 0.3\,km\,s^{-1}\) can be computed, which is a valid approximation over the past 100 kyr to within 2.5% (Mamajek et al., 2015). Considering that the fly-by occurred at high velocity, and the low mass of the Scholz system (\(M_{*}=165\pm 7\,M_{Jup}\)), it appears plausible that small bodies may have been injected towards the Earth.
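A minimal kick-drift-kick leapfrog of the kind used for the backward integration might look as follows; this Sun-only sketch omits the planetary perturbations and JPL HORIZONS ephemerides of the actual pipeline:

```python
import numpy as np

# Minimal kick-drift-kick leapfrog sketch (Sun-only gravity); the real
# integrator also includes the planets and the Moon.
GM_SUN = 1.32712440018e20   # m^3 s^-2

def leapfrog(r, v, dt, n_steps):
    """Integrate a heliocentric state; a negative dt integrates backward in time."""
    a = -GM_SUN * r / np.linalg.norm(r)**3
    for _ in range(n_steps):
        v = v + 0.5 * dt * a                        # half kick
        r = r + dt * v                              # drift
        a = -GM_SUN * r / np.linalg.norm(r)**3
        v = v + 0.5 * dt * a                        # half kick
    return r, v

# Sanity check on a circular 1 au orbit: one period should close the orbit.
au = 1.496e11
r0 = np.array([au, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(GM_SUN / au), 0.0])
T = 2 * np.pi * np.sqrt(au**3 / GM_SUN)
r1, v1 = leapfrog(r0, v0, T / 5000, 5000)
print(np.linalg.norm(r1 - r0) / au)   # small closure error
```

The symplectic character of the scheme keeps the orbital energy bounded over the ~100 kyr spans involved, which is why leapfrog is a common choice for this kind of long backward integration.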
The meteoroid FH1 was at ~67 kau 36±18 kya, and the IM2 object reached the same distance 14±2 kya. The closest encounter found in the simulation of FH1 with the Scholz trajectory was at 39 kau, while IM2 passed at 131 kau. In spite of almost intersecting trajectories, the excess velocity of IM2 at impact (-8 km s⁻¹) makes it incompatible in time with the Scholz passage. Looking at Table 4, ~18% of the events have velocity errors of around 30% or more of the nominal value; if the IM2 measurement had an uncertainty of 22%, it would be perfectly compatible in time with the Scholz fly-by. Velocities proximal to the parabolic limit for FH1 are consistent, both temporally and directionally, with the passage of Scholz; this consistency necessitates only a 1% reduction in FH1's nominal velocity, a value well within the estimated range of uncertainty. Figure 8 presents the geometric configuration of the encounter involving FH1, IM2, and Scholz. Long-period objects can acquire excess velocity from relatively low gravitational perturbations, with no need for very close encounters, if they are oriented in the appropriate direction at the appropriate time. Moreover, the perturbation may not necessarily have occurred during the time of the maximum approach of Scholz, which took ~21.5 kyr to traverse the Oort cloud.

Figure 7: Apparent motion of the objects starting from their geocentric radiants during the backward orbital integration. Scholz, the FH1 grazer event, 1I/'Oumuamua (Meech et al., 2017), and 2I/Borisov (de Leon et al., 2020) are shown. The 6 dated events correspond to the hyperbolic fireballs in the CNEOS database and include the new mean deviation found for the 17 fireballs compared. All CNEOS non-hyperbolic events are also depicted, together with a center point of the Gemini constellation. Markers represent the radiant position at impact, or at the current time in the case of Scholz. The ecliptic plane is plotted in yellow.

Figure 8: Integration of clones for FH1, IM2, and Scholz. The diagram illustrates the evolution of the heliocentric cartesian coordinates during the encounter, including the respective outer clouds of material associated with both the Sun and Scholz. The arrow at the bottom right shows the direction to the point of the vernal equinox.

Considering that an object is fixed in reference to the solar system barycenter, it is possible to estimate the time-integrated impulse exerted by a passing star from the classical impulse approximation (Rickman et al., 2005): \[\Delta v=\frac{2GM_{*}}{v^{*}}\left(\frac{\hat{b}_{o}}{b_{o}}-\frac{\hat{b}_{*}}{b_{*}}\right), \tag{6}\] where \(G\) is the gravitational constant, \(b_{o}\) is the vector from the Oort cloud object to the Scholz closest approach, and \(b_{*}\) is the vector from the Sun to the Scholz closest approach. To elucidate, consider a notional object situated at a radial distance of 39 kau beyond the point of closest approach between the Scholz star system and the Sun, which occurs at 68.7 kau. According to Eq. 6, the interaction with Scholz could impart a maximum velocity change of approximately 0.136 m s⁻¹. In the defined encounter geometry within the Oort cloud, the object would need to achieve a velocity of 0.982 m s⁻¹ to attain a perihelion distance of 1 au; the velocity impulse generated by the Scholz star would therefore represent approximately 14% of this required velocity. Such a perturbation thus has the capacity to significantly modify the orbital parameters of a distantly located object, resulting in a highly eccentric inbound trajectory toward Earth. Detection of exocomets and warm and cold debris belts around stars suggests the existence of Oort-like outer clouds of material in other stellar systems (Marois et al., 2010; Kiefer et al., 2014).
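The ~0.136 m s⁻¹ figure can be recovered from Eq. (6) in the maximum-impulse geometry (anti-aligned unit vectors); treating the 39 kau and 68.7 kau distances as the two impact parameters is our reading of the worked example:

```python
import math

# Maximum |dv| from the classical impulse approximation (Eq. 6), assuming the
# two unit vectors point in opposite directions (anti-aligned geometry).
G = 6.674e-11
M_jup = 1.898e27
au = 1.496e11

M_star = 165 * M_jup          # Scholz system mass, kg
v_star = 82.4e3               # m/s, Scholz velocity w.r.t. the barycenter
b_obj = 39e3 * au             # m, impact parameter w.r.t. the notional object
b_sun = 68.7e3 * au           # m, impact parameter w.r.t. the Sun

dv_max = (2 * G * M_star / v_star) * (1 / b_obj + 1 / b_sun)
print(f"max |dv| = {dv_max:.3f} m/s")
```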
Highly eccentric evaporating comets are compatible with the metallic absorption lines observed in debris disk spectra, which may be evidence of exocomet clouds (Beust et al., 1990; Hanse et al., 2018). Hanse et al. (2018) found that 25% to 65% of the mass lost from the Oort cloud is due to objects being either injected into the planetary region or ejected into interstellar space, mainly because of stellar encounters. A hypothetical Oort-like Scholz cloud should have an outer edge less distant from its center than the Sun's due to its smaller mass. Assuming that both stellar systems have undergone similar processes, we can establish that the binding energies of their outer clouds of material are roughly the same and, therefore, their gravitational potential energies as well: \[-\frac{GM_{\odot}m_{o}}{R_{\odot}}=-\frac{GM_{*}m_{o}}{R_{*}}, \tag{7}\] where \(R\) is the distance from the objects to the star, \(m_{o}\) is the mass of the surrounding objects, the \(\odot\) subscript refers to the Sun, and \(*\) to another star. Accordingly, the outer edge of the Scholz outer cloud should be scaled down as \(R_{*}=0.16R_{\odot}\). Given that the classical outer edge of the Oort cloud is 200 kau (Dones et al., 2004), it is expected that the Scholz system has an outer cloud edge of 32 kau, close enough to the Sun for objects to be disturbed toward the planetary region during the fly-by, even more so given that the Scholz Hill sphere at the closest approach was 26 kau. In fact, the IM2 clone trajectories traversed the path of the Scholz cloud and a considerable percentage of the clones intersected its Hill sphere path, while some FH1 clones passed within 2 Scholz Hill sphere radii. These possible injections could be facilitated by the joint effect of the Galactic disc, interactions with other passing stars, and the presence of massive objects on the outskirts of the Scholz system or of our solar system.
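Eq. 7 implies that the outer-cloud edge scales linearly with the stellar mass, which can be checked directly against the values quoted in the text:

```python
# Eq. 7 equates the outer-cloud binding energies per unit mass, so the cloud
# edge scales linearly with stellar mass: R_* = (M_*/M_sun) * R_sun.
mass_ratio   = 0.16    # Scholz-to-Sun mass ratio implied in the text
R_oort_kau   = 200.0   # classical Oort cloud outer edge (Dones et al., 2004)
R_scholz_kau = mass_ratio * R_oort_kau
print(R_scholz_kau)    # 32.0, i.e. the 32 kau quoted in the text
```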
For example, the Scholz motion region passes through the zone of high probability for the putative Planet 9, and multiple hyperbolic CNEOS fireballs fall around it (Batygin and Brown, 2016; Brown and Batygin, 2021; Socas-Navarro, 2023). Note that although the maximum velocity change in a fly-by is double the relative velocity of the encounter, a very massive, nearly static object could redirect another with a zero net velocity change into a hyperbolic orbit due to the new geometry of the motion with respect to the central body. However, this situation would lead to slow unbound orbits, which might explain the modest impact velocity excess for many hyperbolic solar objects. With successive data releases from Gaia, the space observatory of the European Space Agency, the identification of new stellar close encounters has been increasing. For example, in the first Gaia data release (GDR1) 2 stars (out of 300,000) were found to come within 1 pc of the Sun (Bailer-Jones, 2018), while in GDR2 there were 26 stars (out of 7.2 million) passing within the same distance (Bailer-Jones et al., 2018) and 61 stars (out of 33 million) in GDR3 (Bailer-Jones, 2022). Bailer-Jones et al. (2018) inferred the present rate of stellar encounters to be 19.7\(\pm\)2.2 per Myr within 1 pc. This implies that the Oort cloud is expected to have experienced \(\sim\)2 interloper visits in the last 80 kya. However, only one has been identified during this period of time. We stress that as uncertainties of CNEOS data are unavailable, no definite conclusions can be established regarding IM2. Nevertheless, considering that there may be complex gravitational interactions, we claim that both FH1 and IM2 are consistent with being gravitationally accelerated impactors originating from the Oort Cloud, likely injected during Scholz's recent fly-by. Additionally, IM2 could plausibly be an object ejected from the outer cloud of the Scholz binary system.
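As an illustration (not a calculation made in the paper), treating stellar encounters as a Poisson process with the Bailer-Jones et al. (2018) rate shows that observing a single encounter when roughly two are expected is not particularly unlikely:

```python
import math

# Expected number of stellar passages within 1 pc over the last 80 kyr,
# using the 19.7 per Myr encounter rate of Bailer-Jones et al. (2018).
rate_per_myr = 19.7
window_myr   = 0.08               # 80 kyr
lam = rate_per_myr * window_myr   # expected number of encounters (~1.6)

# Modelling encounters as a Poisson process (our illustration):
# probability of observing exactly one encounter (Scholz) in that window.
p_one = lam * math.exp(-lam)
print(f"lambda = {lam:.2f}, P(exactly one encounter) = {p_one:.2f}")
```

The probability of seeing exactly one passage comes out near one third, so the identification of only Scholz is statistically unremarkable.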
The other CNEOS hyperbolic events (if they are not outliers) have possibly experienced close encounters with massive objects (such as stars, free-floating brown dwarfs, rogue planets, sub-stellar or sub-Jovian mass perturbers, primordial black holes...) when traversing the Oort cloud more recently than the Scholz passage, which in the case of a star could be supported by the current rate of stellar encounters.

## 5 Conclusions

We have analyzed an unusual grazing meteor (FH1) recorded in October 2022 by the Finnish Fireball Network. The cm-sized meteoroid exhibited a likely hyperbolic inbound orbit and an (at least) asteroid-like consistency. Considering that its orbital plane coincides with the ecliptic and its velocity is close to the parabolic limit, it seems more likely to be a perturbed Oort cloud object than an interstellar interloper. Within the estimated uncertainties, FH1 could be associated with the most recent known stellar encounter with our solar system, i.e., the Scholz system. 4 of the 6 hyperbolic CNEOS fireballs exhibit a statistical oddity in their geocentric radiant distribution around the Gemini constellation, an area with an overdensity of hyperbolic radiants that is identified as compatible in time with the Scholz fly-by. We show statistical evidence that these events cannot pertain to a randomly incoming interstellar population, as the likelihood of their low orbital inclinations is extremely small compared to the expected one (with a probability of having occurred by chance of 0.00007%). Therefore, these impactors most likely belonged to our solar nebula and have been perturbed by massive objects lying on or intersecting the plane of the ecliptic. These massive objects could also form part of the Oort cloud, although this would limit the excess velocity of the projectiles.
Given the new mean uncertainties estimated in this work for CNEOS detections, obtained by benchmarking against 17 independent ground-based observations, the 2017-03-09 04:16:37 (IM2) fireball appears to be consistent with the Scholz close encounter if its velocity was overestimated by 22%, which is within the error range for \(\sim\)18% of the events compared. In addition, IM2 showed a peak power in its light curve that corresponded to a dynamic pressure (i.e., aerodynamic strength) typical of iron meteorites, \(\sim\)75 MPa, and most likely produced a recoverable metallic meteorite (Pena-Asensio et al., 2022; Siraj and Loeb, 2022b). This would inaugurate a window of opportunity for stellar archaeology sample collection with known trajectory, beyond the tiny presolar grains embedded in meteorites. If these hyperbolic impactors were interstellar visitors, it would have significant implications both for the incoming flux of extra-solar objects and for the characterization of their physical properties, which would be biased toward high-strength compositions and low inclinations. If they were Oort cloud objects, FH1 would be the second observed cm-sized asteroid-like object after Vida et al. (2023), and IM2 the first detected iron-like body from the outermost part of our solar system or another stellar system. This would provide further evidence for the massive proto-asteroid belt and Jupiter's "Grand Tack" dynamical instability scenario (Shannon et al., 2019). It would imply that the Oort cloud could currently be populated not only by weak cometary objects but also by ice-free rocky material scattered during the giant planets' round trip to inner orbits. The absence of interstellar meteorites and the low orbital inclinations of the hyperbolic projectiles studied indicate that the population of massive objects forming and crossing the Oort cloud may be larger than previously thought, injecting large meteoroids into the planetary regions.
Our results reinforce the idea that passing stars or other massive objects represent a source of hyperbolic Earth impactors that must be examined in detail on a case-by-case basis before claiming the interstellar origin of any object with excess velocity.

## Author Contributions

EP-A performed the analysis of FH1, the statistical study of CNEOS hyperbolic events, and the investigation of the massive-object close encounter hypothesis, and wrote the manuscript. JV coordinated the Finnish Fireball Network efforts and supported the astrometry and atmospheric trajectory calculation of FH1. JMT-R oversaw the research activity and provided scientific insights. HS-N performed the N-body simulations and suggested the possible association of FH1 with Scholz's star. MG clarified the issues with the dynamic model fit and contributed to the estimation of the terminal velocity. MS first identified FH1 as an interstellar candidate. JMT-R and AR acquired financial support for the project leading to this publication and supervised the work. All authors read, edited, and approved the manuscript.

## Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 865657) for the project "Quantum Chemistry on Interstellar Grains" (QUANTUMGRAIN). JMT-R and EP-A acknowledge financial support from the project PID2021-128062NB-I00 funded by MCIN/AEI/10.13039/501100011033. AR acknowledges financial support from the FEDER/Ministerio de Ciencia e Innovacion - Agencia Estatal de Investigacion (PID2021-126427NB-I00, PI: AR). MG acknowledges the Academy of Finland project no. 325806 (PlanetS). HS-N acknowledges support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (AEI-MCINN) under the grant "Hydrated Minerals and Organic Compounds in Primitive Asteroids" (reference PID2020-120464GB-I100).
This work was also partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M. We thank all FFN station operators and photographers whose continuous dedication has allowed us to record this grazing meteor: Jarmo Moilanen, Harri Kiiskinen, Markku Lintinen, Jari Luomanen, and Kari Haila. We also thank Marc Corretge-Gilart for his support in astrometric calibrations.
2302.12462
Global Pandemics Influence on Cyber Security and Cyber Crimes
COVID-19 has caused widespread damage across many areas of life and has made humans more dependent on the internet and technology, making us realize the importance of secure remote working environments. While social separation is encouraged during moments of lockdown, online infrastructure has become the central focus for communication, commerce, working, and learning, creating a new challenge and trend for companies to adopt new methods and operating models. The number of cyber-attacks increased, and fraudsters and cybercriminals took advantage of this to intensify their illegal activities by exploiting remote workers' vulnerabilities and the public's interest in information about the coronavirus. This paper examines the different types of security threats and cyber crimes that people faced during the pandemic and the need for a safe and secure cyber infrastructure. This paper attempts to analyze the security implications of these issues.
Somya Khatri, Aswani Kumar Cherukuri, Firuz Kamalov
2023-02-24T05:26:42Z
http://arxiv.org/abs/2302.12462v1
# Global Pandemic's Influence on Cyber Security and Cyber Crimes

###### Abstract

COVID-19 has caused widespread damage across many areas of life and has made humans more dependent on the internet and technology, making us realize the importance of secure remote working environments. While social separation is encouraged during moments of lockdown, online infrastructure has become the central focus for communication, commerce, working, and learning, creating a new challenge and trend for companies to adopt new methods and operating models. The number of cyber-attacks increased, and fraudsters and cybercriminals took advantage of this to intensify their illegal activities by exploiting remote workers' vulnerabilities and the public's interest in information about the coronavirus. This paper examines the different types of security threats and cyber-crimes that people faced during the pandemic and the need for a safe and secure cyber infrastructure. This paper attempts to analyze the security implications of these issues.

Keywords: cyber-crimes, cyber-attacks, fraudsters, pandemic

## 1 Introduction

The COVID-19 pandemic has ushered in a sea of change that has affected nearly every aspect of our lives. Since March 2020, global internet usage has increased by 50-70%. Cybersecurity has not been spared from these developments, and it now faces a whole new set of challenges. The transition to remote employment, for example, has opened the door to a variety of attack vectors. Furthermore, the pandemic's fear, confusion, and disinformation have provided opportunities for cybercriminals to conduct phishing scams, ransomware assaults, and other malicious activities. We can frequently see patterns forming in cybersecurity by looking at current developments. However, because things change so rapidly, even research from 2019 doesn't accurately portray the challenges we face now.
Fortunately, much effort is being made to assess the current situation so that we may better prepare for the post-pandemic environment. Our dependency on cyber infrastructure has increased, and therefore the threats have also multiplied. With the introduction of hybrid modes of working, it becomes extremely important for us to take measures to mitigate the effects of cyber security threats and attacks and to increase the security of our cyber infrastructure. This article examines the different types of security threats and cyber-crimes that people faced during the pandemic and what their security implications were. The motivation behind this work is to highlight the need for a safe and secure cyber infrastructure for the future. We first provide the background for the study, covering the effects on organizations and security policies; we then present a detailed literature survey of existing work on the identification of threats, their mitigation, and the available cyber infrastructure. This is followed by a summary of key security issues related to COVID-19 cybersecurity and how they escalated during the pandemic. We then analyse the security implications of these issues and conclude the study.

## 2 Background

Cyberattacks have increased as a result of the growth of communications and the shift to digital operations. This has led to a number of additional dangers and vulnerabilities for information and systems. The security of organizations is at risk of being breached. For breaches that occur at both physical and digital access points, organizations need continuous surveillance and risk assessments in order to monitor the situation and take appropriate action to prevent and mitigate the ill effects. For many organizations, the transition to remote employment has put the IT department under a lot of pressure to ensure the security of the information systems.
Collaboration tools fulfill the requirement of keeping staff in synchronization; at the same time, they make the system more vulnerable, as critical information of the organization is now stored in less secure remote locations. Applying company security policies and controls to remote employees is difficult, as these controls have limited scalability and can take a long time to set up. Business continuity plans (BCPs) and incident response plans (IRPs) are almost always inadequate or nonexistent. Security officials had never before faced a disruption to business continuity on this scale. The increased digital footprint and traffic are being used by cybercriminals to scan systems for weaknesses or steal money. They are well aware that numerous businesses and workers have exposed themselves to hacking. For instance, phishing emails themed on COVID-19 are being sent out with attachments that download malware, crash computers, or steal sensitive data. Attackers build fictitious websites or hijack weak ones to host malware. The websites draw visitors, who then download malicious software onto their machines as a result. Some fraudulent websites also solicit payments from workers through email links. Numerous Covid-19 patient count-status programs and URLs have been identified to include viruses and malware that steal identities.

## 3 Literature Survey

The scientific community has begun to pay attention to the problem of COVID-19 cyberattacks. There has been a lot of research work done in the cyber security field that we can relate to the COVID-19 scenario. We have classified the literature survey into three broad categories: Table 1 for the identification of threats, Table 2 for their mitigation or prevention, and Table 3 for research on the available cyber security infrastructure.
**Table 1: Identification of threats**

| Reference | Methodology/Approach | Remarks/Observations |
| --- | --- | --- |
| [1] | Pattern identification and detection, penetration testing, and cyber-crime mapping | Automating schedule assignments to improve security processes, allowing security examiners to focus on data that requires significant programming abilities. |
| [2] | 5 types of attacks were examined and related to the Cyber Kill Chain model, as well as protection techniques against these attacks. | |
| [3] | Exploratory data analysis performed on 155k shared threats, including 10k cyber threats related to COVID-19. | |
| [4] | Fuzzy logic and data mining-based intelligence system for detecting Covid-19 themed malicious phishing attacks. | |

**Table 2: Mitigation or prevention of threats**

| Reference | Methodology/Approach | Remarks/Observations |
| --- | --- | --- |
| [5] | Approaches to overcome cyber threats (anti-virus software, firewalls, updated OS) | Skilled hackers can exploit weaknesses in network configurations and bypass them. |
| [6] | Neural network to collect data about phishing attacks | New IoT layered models with privacy and security components and layer recognition. This may be used in conjunction with security measures to mitigate cybersecurity threats at each layer: IoT, edge, and cloud. |
| [7] | | Gain insights into common network security concerns. Not enough data sources. |
| [8] | | SDN becomes a single point of failure, making it a valuable target for attackers. |
| [9] | | Implementing the correct security measures for the wireless network may be expensive. |

**Table 3: Cyber Infrastructure**

| Reference | Methodology/Approach | Remarks/Observations |
| --- | --- | --- |
| [10] | The report examines the consequences and makes recommendations to improve the marine transportation system's security and resilience in the event of future disruptions. | Such an analytical advantage might be used by MTS sub-sectors to identify how to best prioritize security enhancements, encouraging improved resilience in the industry as a whole. |
| [11] | Describes a company's evaluation of critical threats in the on-site working paradigm versus the remote working paradigm. | A ubiquitous secure technological system model that protects the company's network architecture as well as access to secret data will need to be built. |
| [12] | Examines a university's current remote access (VPN) security vulnerabilities. | Any university can use it as a broad security guideline to assess the security of its remote access and Internet border system. High cost of installing these devices and software for everyone on campus. |
| [13] | Development of a secure online learning system, which utilizes edge computing and privacy mechanisms such as trust evaluation from direct and indirect observations. | Used the Bayesian inference technique and the Dempster-Shafer theory. |
| [14] | Comprehensive analysis of CT apps. | 80% handled sensitive data without adequate protection, with weak cryptographic algorithms and embedded data trackers. |
| [15] | Identify cybersecurity problems and present alternatives to improve IS in public organizations. | Prototype of a National Blockchain-based Public Data Registration System (SNRDP-DLT) to help better manage risk from natural and man-made catastrophes. |

## 4 Security Issues

Cybercrime is defined as any criminal activity that takes place through the use of a networked device, a network, or a computer. Most cybercrimes involve stealing or damaging information to make money for the criminals, but there are also cases where the aim is to harm people. Additionally, there are cybercrimes that disseminate viruses, illicit data, photographs, and other items via computers and networks. Below is a summary of key security issues related to COVID-19 cybersecurity, how they escalated during the Covid-19 pandemic, and what to expect in the future.

### Identity theft

As a result of the COVID-19 outbreak, we were obliged to go online. When we purchase online, seek employment, or deal with crucial affairs, we give up our personal information. Attackers have the ability to do a variety of things with a victim's identity. They can take control of their account credentials, open new financial accounts, and steal their hard-earned money. The attackers' main goal is to obtain critical pieces of information about victims in order to persuade a bank that they are that person.
Clients should be careful not to provide too much personal information about themselves on social media and other platforms. The victim must also keep all financial information hidden. On any social networking platform, personal information should not be shared.

### Hacking

Hacking is gaining access to a system without the owner's or user's consent. Interlopers are mostly software engineers with an advanced understanding of computer systems who exploit this expertise for nefarious ends. Some snoopers do it casually to demonstrate their proficiency, which may range from executing secure transactions to changing computer programmes to complete projects that exceed the creator's expectations. During the current COVID-19 pandemic, the criminal-minded threat actors driving the bulk of cyberattacks have updated their attacking techniques. Since the epidemic began, fraudsters have been building new phishing tools and hacking methods and experimenting with new attack paths in order to profit from the crisis and demonstrate their cyber expertise.

### Denial of Service attack

A DoS attack is a deliberate attempt by attackers to deny services to the customers who are entitled to them. A DoS attack overloads the computer's resources by flooding it with more requests than the computer can handle, exhausting the computer's available bandwidth and overburdening its server. This causes the framework to fail, with the server temporarily failing or crashing altogether. The most notable rise in DDoS assaults was seen in March and April during the pandemic. In comparison to the first quarter of 2019, the number of assaults climbed by 542.46%, with a QoQ increase of 278.17%. DDoS assaults have traditionally been "off-season" in the first quarter.
While this is an intriguing divergence, we feel that the current COVID-19 pandemic may be a key factor [16].

### Phishing

By impersonating an authorized user, attackers may extract private user data such as passwords and phone numbers. Phishing is a sort of social engineering that is frequently carried out in the form of email spoofing. Victims are persuaded by social ads to download malware from the internet, and the cybercriminals force them to give out personal information under false pretenses. It is also a common way to propagate malware, by luring victims to download a report or click on a link that secretly installs the malicious payload; such assaults may involve trojan viruses, ransomware, or other types of harmful and bothersome attacks. According to a new study by F5 Labs, COVID-19 continues to contribute to the phishing and fraudulent activities of cybercriminals. It was shown that during the height of the outbreak, phishing incidents rose by 220% when compared to the annual average. The three primary goals of COVID-related phishing emails have been identified as credential collection, malware dissemination, and fraudulent donations to fictitious charities.

### Digital Stalking

Digital stalking is a type of cybercrime that involves online harassment, in which the victim is subjected to a barrage of tweets and emails. Typically, cyber tormentors use social networks and websites to intimidate and instil terror in their victims. The cyber tormentors are well aware of the harm their actions have caused, and the goal is to leave the victim bewildered and concerned about their safety. The pandemic has increased our dependence on digital tools and increased the incidence of cyber harassment. Therefore, in order to increase the protection of online users and to stop digital abuse, legal protection and other measures must be urgently adopted to address this new reality.
Civil society can also play an important role by increasing public awareness, building the capacity to detect and report cases of cyber harassment, and finding suitable tools and services to mitigate their impact. In addition to the efforts of the international community, governments, and policy makers, changes in the law will give men and women equal access to a secure digital environment.

### Exploiting machine learning

When a programmer attempts to impose rules on a machine learning algorithm, the system and framework become open to attack. In order to learn for themselves, machine learning algorithms require information from social or community-driven networks. Snoopers try to use surveys, web surfing, and other online activities to collect personally identifying information. Cyber snoopers interested in ML exploitation may employ malicious tests or backdoors to access both the framework and the data required for training.

## 5 Security Implications

### Cyber Infrastructure

Cyberinfrastructure is made up of computational systems, data and information management, cutting-edge instruments, visualization environments, and people. These components are connected by software and cutting-edge networks to increase productivity and facilitate discoveries and breakthroughs in knowledge that would not otherwise be possible. With the advent of technology, the Internet of Things (IoT), and home automation, many of our devices are getting connected, our dependence on them has increased, and so have the possible threats and attacks. To mitigate the effects of these threats, several new authentication mechanisms have been introduced, for example, passwordless authentication and token-based authentication.

### Government's role

Our dependency on digital technology has considerably expanded as a result of the COVID-19 pandemic.
As remote working has become crucial to our economy and medical treatment, people and organisations will only continue to become more dependent on digital technology. The risk of cyberattacks, however, increases with every new device, user, and business that connects to the internet. In order for society and economies to advance, governments must be able to deliver dependable and secure digital connectivity. Effective national cybersecurity strategies comprise a robust national cybersecurity ecosystem, a dedicated national cyber-security agency, a National Critical Infrastructure Protection programme, a national incident response and recovery strategy, and clearly defined laws pertaining to all cybercrimes.

### Hybrid Environment

Even though the pandemic has been going on for two years, there are still insufficient resources available. This presents an opportunity for malicious individuals looking for weaker organizations. However, the pandemic's disruptions have also created new possibilities for stronger security and business continuity. The increase in remote employees and clients has irreversibly altered the corporate landscape. Even businesses that are still holding onto commercial real estate realize they must deal with remote workers or hybrid workplaces for the foreseeable future. Some businesses closed their physical offices permanently. As a result, companies have increased their reliance on traditional network security measures like VPNs and firewalls installed on workplace mobile devices and have transferred more apps into the cloud. Many businesses are implementing systems to monitor and control DNS, DHCP, and IP traffic moving into and out of servers for employees using their own equipment. The emerging hybrid workforce reality is producing more problems with attacks, ransomware, and data leakage.
The cloud, employee-owned endpoints, and WiFi access points were the usual starting locations for attacks. Phishing was a popular method of gaining unauthorized access to take over credentials and steal or lock down data files. The use of Secure Access Service Edge (SASE) frameworks is growing. To protect sensitive information from the hazards associated with remote employees using a mix of personal and company devices, several organisations added smartphones to their equipment fleets, giving corporate-owned mobile devices to slightly more than half of respondents. A similar number of people used virtual private networks to encrypt internet traffic, especially when a remote worker was using an unprotected or unsecured wireless network. These and other procedures were developed to provide personnel with safe tools to perform their responsibilities under difficult circumstances without disturbing the organization. Some of the most popular security procedures are:

- cloud access security brokers (CASB)
- data loss prevention through data protection
- endpoint DNS safety
- detection and response: monitoring, detecting, and reacting to network activity for network protection
- secure provisioning and de-provisioning of trustworthy web gateways, VPNs, and other access control devices

### Cyber hygiene and awareness

Cyber hygiene is a set of consistent practices used by organisations and people to safeguard users, devices, networks, and data. A company can lower the risk of operational disruptions, data compromise, and data loss by strengthening its overall security posture and practicing good cyber hygiene. The overall effectiveness of a company's cybersecurity programme determines how well-prepared it is to counter both current and future threats; this is referred to as an organization's security posture. Proper cyber hygiene yields the best cybersecurity.
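A concrete building block behind several of the access-control measures discussed here is multi-factor authentication via time-based one-time passwords (TOTP, standardized in RFC 6238). A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP (HMAC-SHA1, 30-second time steps)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T = 59 s the expected 6-digit code is 287082.
secret = base64.b32encode(b"12345678901234567890")
print(totp(secret, for_time=59))  # 287082
```

Because the code depends on the current time step and a shared secret, a stolen password alone is not enough to authenticate, which is why MFA features prominently in cyber hygiene guidance.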
Challenges to cyber hygiene include user buy-in, predictability, and the breadth and complexity of IT systems. Users need to be aware of best cyber hygiene practices: take regular backups, learn how to prevent common attacks, use encryption to protect sensitive data, make sure firewalls and routers are properly configured, maintain good password hygiene, use multi-factor authentication (MFA), apply patch management, exercise online discretion, and use security software.

## 6 Conclusions

This study has attempted to focus on the existing cyber security threats and weaknesses in relation to the COVID-19 worldwide pandemic. The greatest internet use ever occurred as a result of this pandemic. Everyone, across all age groups and walks of life, was required to utilize the internet to maintain their communication, commerce, and education. As so many people use the internet, cybercriminals have taken advantage of this to make money. The threats to cyber security have substantially increased during this pandemic due to a lack of understanding in this area. It is necessary to have a fundamental grasp of all types of potential cybersecurity threats and cyberattacks. To avoid crucial data getting into the hands of cybercriminals, it is imperative that all workers have a basic awareness of cybersecurity. The coronavirus pandemic is only the start; such viruses may become more prevalent in coming generations. It is now appropriate to consider the future. To lessen and ameliorate the challenges brought on by the COVID-19 pandemic, we must all learn from it and get ready for the future.
2310.02460
Chain trajectories, domain shapes and terminal boundaries in block copolymers
The packing geometry of macromolecules in complex mesophases is of key importance to self-organization in synthetic and biological soft materials. While approximate or heuristic models rely on often-untested assumptions about how flexible molecules "fit in" to distinct locations of complex assemblies, physical assemblies derive from ensembles of fluctuating conformations, obscuring the connection between mesophase geometry and the underlying arrangements. Here, we present an approach to extract and analyze features of molecular packing in diblock block copolymer (BCP) melts, a prototypical soft matter system, based on the statistical description of chain conformations in self-consistent field (SCF) theory. We show how average BCP chain trajectories in ordered morphologies can be analyzed from the SCF-derived orientational order parameter of chain segments. We use these extracted trajectories to analyze the features of local packing geometry, including chain bending and tilt, as well as the terminal boundaries that delineate distinct domains in ordered BCP morphologies. We illustrate this analysis by focusing on measurable features of packing frustration in 2D (columnar) and 3D (spherical and bicontinuous) morphologies, notably establishing an explicit link between chain conformations in complex morphologies and their medial geometry.
Benjamin R. Greenvall, Michael S. Dimitriyev, Gregory M. Grason
2023-10-03T22:10:48Z
http://arxiv.org/abs/2310.02460v1
# Chain trajectories, domain shapes and terminal boundaries in block copolymers ###### Abstract The packing geometry of macromolecules in complex mesophases is of key importance to self-organization in synthetic and biological soft materials. While approximate or heuristic models rely on often-untested assumptions about how flexible molecules "fit in" to distinct locations of complex assemblies, physical assemblies derive from ensembles of fluctuating conformations, obscuring the connection between mesophase geometry and the underlying arrangements. Here, we present an approach to extract and analyze features of molecular packing in diblock block copolymer (BCP) melts, a prototypical soft matter system, based on the statistical description of chain conformations in self-consistent field (SCF) theory. We show how average BCP chain trajectories in ordered morphologies can be analyzed from the SCF-derived orientational order parameter of chain segments. We use these extracted trajectories to analyze the features of local packing geometry, including chain bending and tilt, as well as the _terminal boundaries_ that delineate distinct domains in ordered BCP morphologies. We illustrate this analysis by focusing on measurable features of packing frustration in 2D (columnar) and 3D (spherical and bicontinuous) morphologies, notably establishing an explicit link between chain conformations in complex morphologies and their _medial geometry_. ## I Introduction Supramolecular assembly of amphiphilic molecules underlies structure formation in a broad class of material systems, from synthetic surfactants [1; 2; 3], liquid crystals [4], and block copolymers [5; 6; 7] to intra-cellular assemblies in biology [8]. In these systems, molecules spontaneously organize into a set of basic motifs -- spheres, cylinders, layers, networks -- and in high concentration, or neat systems, adopt periodically-ordered arrangements of those motifs. 
A generic challenge facing supramolecular assembly, in each of the specific macromolecular contexts, is to understand how molecular degrees of freedom couple into, and ultimately select among, the many possible hierarchical morphologies. Most conceptual and theoretical frameworks rely on the notion of molecular "packing" in distinct phases, roughly referring to the set of spatial arrangements of amphiphilic building blocks in a host morphology and its likely thermodynamic costs. A well-known heuristic associates a tapered, conical shape to amphiphilic units and compares the fit of that local motif into collective packing in competing morphologies (e.g. spherical vs. cylindrical micelles) [9]. In most structurally complex, and often functionally desirable, supramolecular morphologies, packing geometry is expected to be spatially variable, which is a result of frustration between constraints of space-filling at constant density and the presumed thermodynamic preference for uniform local molecular environments [10; 11; 12; 13; 14]. Examples of these complex phases include so-called bicontinuous, or double-network, phases related to the triply-periodic Gyroid and Diamond minimal surfaces [8; 15], or complex alloy-like crystals of space-filling micelles, known as the Frank-Kasper phases [16]. In these examples, frustration is colloquially associated with molecular packing constraints of filling the nodal junctions of tubular networks and the interstitial regions between sphere-like domains, respectively [12; 13; 17; 18]. These scenarios pose a basic and broad question: How do collective configurations of flexible macromolecules "fit into" and "measure" geometrically complex supramolecular phases? In this paper, we address this question in the specific context of block copolymer (BCP) melts, based on self-consistent field (SCF) theoretical methods. 
While we restrict our analysis to the case of BCP melts, specifically linear diblocks, we consider this system as a prototype for a more general class of macromolecular amphiphiles, most of which exhibit analogs of micellar, columnar, lamellar and bicontinuous morphologies subject to similar packing considerations. In general, attempts to connect molecular conformations to complex supramolecular morphologies face several challenges. Foremost, molecular degrees of freedom are largely "invisible" to experimental methods that probe self-assembled morphology. For example, small-angle scattering as well as electron microscopy methods resolve only spatial patterns of composition -- that is, they resolve spatial "lumps" of density of distinct parts of amphiphilic units. In the context of BCP, this typically amounts to the collective density of different block chemistries, while the underlying chains themselves are not distinguished. Simulations of either coarse-grained or atomistic models of amphiphiles provide an alternative "computational microscopy" on this issue. Such approaches can be useful for generating direct snapshots of molecular conformations in ordered phases. Notwithstanding obvious limitations in accurate parametrization of molecular models and computational sampling of sufficiently large time and length scales, such approaches are generally difficult to interpret in terms of direct and spatially-resolved thermodynamic costs, which necessarily depend on ensembles of highly fluctuating conformations. As noted above, packing models can shed more direct light on the link between molecular geometry and thermodynamics. 
In the context of BCP melts, a particularly useful packing model derives from the _strong-segregation theory_ (SST) of the standard SCF model, and accounts for the local entropic and enthalpic free energies of BCPs by an approximation of microscopic structure based on locally brush-like collections of chains confined within variable wedges that tessellate a space-filling morphology. A shortcoming of such packing models is that they are based on limited, and largely untested, prior ansatz about packing patterns in a given morphology. Moreover, the thermodynamic accuracy of these models is limited to certain regimes, e.g. SST of diblock melts is strictly accurate in the \(\chi N\to\infty\) limit, where \(\chi\) is the Flory-Huggins parameter, which quantifies repulsion between unlike components, and \(N\) is the chain length. Hence, even presuming accurate chain packing ansatz for SST models, the role of finite \(\chi N\) fluctuation effects that are relevant to real experimental conditions remains less clear. Numerical implementations of the SCF model of BCP are arguably a "gold standard" for modeling equilibrium morphologies at finite segregation, at least sufficiently far from the critical point (typically for \(\chi N\gtrsim 40\)) where composition fluctuations play a small role. This approach provides a fully statistical description of chain fluctuations in competing morphologies without any _a priori_ assumptions on the molecular packing. However, while the SCF theory is built upon the statistics of BCP chain conformations, traditional SCF implementations, like in the case of current experimental methods, are cast only in terms of the scalar composition fields of block components, leaving the locations and arrangements of underlying chains unresolved. In this article we present and illustrate an approach to map the geometry of chain conformations in BCP melts based on the SCF theory. 
We exploit the fact that ordered solutions of SCF, even in the standard Gaussian chain model, are described by orientational order parameters describing local chain "trajectories" in the structure. We show how these mean trajectories can be computed from finite \(\chi N\) SCF solutions of ordered phases, and argue that they extract the key "chain packing" degrees of freedom from an ensemble of fluctuating chain conformations in a spatially resolved manner. We apply this approach to consider distinct motifs of chain packing in complex morphologies and compare to prior heuristic notions of frustration in micellar and bicontinuous network phases, particularly in the large \(\chi N\) regime. We show that trajectories extracted from SCF calculations can be used to analyze specific geometric features of the morphology, including the tilting and kinking at the intermaterial dividing surface (IMDS) as well as the so-called _terminal boundaries_ that represent the contacting "ends" of brush-like domains. We exploit this approach to analyze how these geometric signatures of chain packing vary with structural features of the morphology as well as physical parameters of the diblocks themselves, including composition and conformational asymmetry. The remainder of this article is organized as follows. In Sec. II we present our method of extracting chain trajectories from SCF solutions of diblock melts, as well as what we call the _association map_ that relates spatial regions in the solution to a particular point on the IMDS. In Sec. III we apply these methods to analyze the variation of tilting and bending of chains at the IMDS in columnar phases of different symmetries. We consider the shapes of terminal boundaries in the packing as a function of the anisotropy of the columnar domain cross-section. In Sec. 
IV, we turn to three dimensional frustrated morphologies, illustrating and analyzing chain packing in a complex Frank-Kasper (A15) phase as well as a bicontinuous (double-gyroid) phase. This latter analysis provides direct evidence from a fluctuating chain description of a recently proposed "medial packing" picture in complex BCP assemblies. ## II Methods Here, we outline an approach to reconstruct chain trajectories and packing geometry from numerical solutions of the SCF equations for BCP melts. In this article, we illustrate the approach for linear AB diblock copolymers, although the approach may be generalized to other architectures and multi-chain mixtures. We consider chains of \(N\) total segments, where \(N_{\rm A}=fN\) (\(N_{\rm B}=(1-f)N\)) are A-type (B-type). Segments are defined to have equal volume \(\rho_{0}^{-1}\), but may have unequal statistical lengths, \(a_{\rm A}\) and \(a_{\rm B}\), corresponding to a _conformational asymmetry_ \(\epsilon=a_{\rm A}/a_{\rm B}\). Our approach applies to the "standard" Gaussian chain model SCF of melts [17], where interactions between A and B-type segments are parameterized by the Flory-Huggins parameter, \(\chi\). Figure 1: (A) Depiction of data obtained from SCF calculations, with red and blue regions depicting domains of majority A-block and B-block components, respectively. Magenta arrows show the polar order parameter field \({\bf P}({\bf r})\) and yellow curves are stream lines of \({\bf P}({\bf r})\). (B) Spatial variations in polar order represent averaged deflections in polymer conformations. (C) Mean polar order arises from microscopic measures of chain flux, the ensemble average over all chain conformations (such as the one depicted) of the orientation \(\delta{\bf r}\) joining segment \(n\) to segment \(n+1\) along a chain oriented with respect to ends at \(n=0\) and \(n=N\). 
The analysis that follows relies on mean-field solutions to the SCF equations for the chain-end distribution functions, as is achieved through several well-known approaches [17; 19; 20], although results in the present article are derived from the Polymer Self-Consistent Field (PSCF) code [21] ([https://pscf.cems.umn.edu/](https://pscf.cems.umn.edu/)). Supporting codes for extracting (polar) orientational order parameters (discussed in Appendix A), reconstructing trajectories and analyzing packing geometry of PSCF solutions are provided ([https://doi.org/10.7275/1b2p-q547](https://doi.org/10.7275/1b2p-q547)). ### Chain Trajectories and local packing geometry Our approach to reconstruct chain trajectories, shown schematically in Fig. 1, is based on the (mean field) polar orientational order parameter computed from SCF introduced in ref. [22]. This parameter derives from chain end distribution functions \(q_{\pm}({\bf r},n)\), which describe the statistical weights of chain conformations that "diffuse" from their free ends at \(n=0\) (\(+\)) and \(n=N\) (-) to the \(n^{\rm th}\) segment at position \({\bf r}\) in the melt. The probability that the \(n^{\rm th}\) segment of a chain in the melt is at \({\bf r}\) is proportional to the joint probability that both ends reach this point \(q_{+}({\bf r},n)q_{-}({\bf r},n)\), such that the mean-field local volume fractions of the \(n^{\rm th}\) segment at \({\bf r}\) is \[\varphi({\bf r},n)=\frac{\rho_{0}^{-1}}{\mathcal{Q}}q_{+}({\bf r},n)q_{-}({\bf r },n) \tag{1}\] and the composition fields (i.e. scalar order parameters) of \(\alpha\) = A or B segments are \[\phi_{\alpha}({\bf r})=\int_{n\in\alpha}{\rm d}n\ \varphi({\bf r},n), \tag{2}\] where \(\mathcal{Q}=V^{-1}\int{\rm d}^{3}{\bf r}\ q_{+}({\bf r},n)q_{-}({\bf r},n)\) is the normalized single-chain partition function for a total volume \(V\). 
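As a concrete illustration of Eqs. (1)-(2), the sketch below builds the segment distribution and block composition fields on a 1D grid. The propagators here are toy, field-free placeholders (in practice they come from a converged PSCF solution), and all array names are illustrative:

```python
import numpy as np

# Sketch of Eqs. (1)-(2): composition fields from the chain-end propagators
# q_plus[n, i] and q_minus[n, i] on a spatial grid.  Toy, field-free
# propagators (q_± ≡ 1) stand in for actual SCF output; rho_0^{-1} is
# absorbed into the normalization.

Nn, Nx = 100, 64               # contour steps and spatial grid points
f = 0.3                        # A-block fraction: segments with n/N < f are A-type
n = np.linspace(0.0, 1.0, Nn)  # scaled contour variable n/N in [0, 1]
dn = n[1] - n[0]

q_plus = np.ones((Nn, Nx))     # homogeneous melt: q_± = 1 everywhere
q_minus = np.ones((Nn, Nx))

# single-chain partition function Q = V^{-1} ∫ d^3r q_+ q_-  (n-independent)
Q = np.mean(q_plus[0] * q_minus[0])

# Eq. (1): local segment distribution phi(r, n)
phi_n = q_plus * q_minus / Q   # shape (Nn, Nx)

# Eq. (2): block composition fields, integrating over A- and B-segments
is_A = n < f
phi_A = phi_n[is_A].sum(axis=0) * dn
phi_B = phi_n[~is_A].sum(axis=0) * dn

print(phi_A[0])   # homogeneous melt: phi_A ≈ f at every grid point
```

For the field-free toy input the result reduces to the expected sanity check \(\phi_{\rm A}\approx f\) and \(\phi_{\rm A}+\phi_{\rm B}\approx 1\); with real PSCF propagators the same two lines of integration recover the spatially varying composition fields.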
To model the _trajectories_, we consider the orientational distribution of chain steps from the \(n^{\rm th}\) to the \((n+1)^{\rm th}\) segments, described by the vector \(\delta{\bf r}\), that is oriented from the \(n=0\) (A-block) end toward the \(n=N\) (B-block) end, as shown schematically in Fig. 1C. The mean orientation \(\langle\delta{\bf r}\rangle/a\) of random-walk steps at point \({\bf r}\) from \(n\) to \(n+1\) is proportional to the _chain flux_ operator \[{\bf J}({\bf r},n)=\frac{\rho_{0}^{-1}}{6\mathcal{Q}}\Big{[}q_{+}({\bf r},n) \nabla q_{-}({\bf r},n)-q_{-}({\bf r},n)\nabla q_{+}({\bf r},n)\Big{]}\,. \tag{3}\] Specifically, the relation \[{\bf J}({\bf r},n)=\Big{\langle}\frac{\delta{\bf r}}{a_{\alpha}}\Big{\rangle} _{({\bf r},n)}\varphi({\bf r},n) \tag{4}\] follows from the average of \(\delta{\bf r}\) weighted by the chain-end probabilities \(q_{+}({\bf r}-\frac{\delta{\bf r}}{2},n)q_{-}({\bf r}+\frac{\delta{\bf r}}{2},n+1)\) times the probability of a random-walk step from \({\bf r}-\frac{\delta{\bf r}}{2}\) to \({\bf r}+\frac{\delta{\bf r}}{2}\). Notably, the same differential form follows from both the "Gaussian thread" model as well as the continuum (\(N\gg 1\)) limit of a freely jointed chain. Given the relation in Eq. (4), it is straightforward to construct the _mean paths_ of chains where the \(n_{0}^{\rm th}\) segment passes through \({\bf r}_{0}\), described by the function \({\bf R}_{({\bf r}_{0},n_{0})}(n)\), by identifying the path tangent \(\partial_{n}{\bf R}_{({\bf r}_{0},n_{0})}\) as proportional to the local chain flux, i.e., \[\partial_{n}{\bf R}_{({\bf r}_{0},n_{0})}={\bf J}\big{(}{\bf R}_{({\bf r}_{0},n _{0})}(n),n\big{)}/\varphi\big{(}{\bf R}_{({\bf r}_{0},n_{0})}(n),n\big{)}, \tag{5}\] which can be integrated subject to the initial condition \({\bf R}_{({\bf r}_{0},n_{0})}(n_{0})={\bf r}_{0}\). 
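A minimal numerical sketch of the chain flux of Eq. (3), in 1D with \(\rho_0^{-1}=1\) and toy analytic propagators chosen so the exact answer is known (the function name and inputs are illustrative, not part of the PSCF codebase):

```python
import numpy as np

def chain_flux(q_plus, q_minus, dx, Q=1.0):
    """Eq. (3) on a 1D grid: J = [q_+ ∂x q_- − q_- ∂x q_+] / (6 Q),
    with rho_0^{-1} = 1.  Generalizes to 3D by applying np.gradient
    along each spatial axis."""
    dqm = np.gradient(q_minus, dx)
    dqp = np.gradient(q_plus, dx)
    return (q_plus * dqm - q_minus * dqp) / (6.0 * Q)

# analytic check: q_+ = exp(-a x), q_- = exp(+a x) gives
# q_+ ∂x q_- − q_- ∂x q_+ = 2a, so J = a/(3Q), constant in x
a, dx = 0.5, 0.01
x = np.arange(0.0, 1.0, dx)
J = chain_flux(np.exp(-a * x), np.exp(a * x), dx)
```

Away from the grid endpoints the central-difference result matches the analytic value \(J = a/3\) to high accuracy, which is a convenient correctness check before feeding in real propagator data.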
In equilibrium states of BCP melts, a given point is intersected by an ensemble of chain paths, leading to a distribution of segment numbers (i.e. a distribution of \(n\)) at a given point. As our interest is in the statistical average of conformations at distinct spatial points, we consider the average over all chain conformations with segments at a given point, information that is encoded in the _polar order parameters_ \[{\bf p}_{\alpha}({\bf r})=\int_{n\in\alpha}{\rm d}n\ {\bf J}({\bf r},n), \tag{6}\] which give the local "flux" of trajectories of all \(\alpha\)-type segments at a point \({\bf r}\). We define the mean _trajectories_ of chains in terms of the total polar order parameter \({\bf P}\), the sum of averages over local densities of both segment types, \[{\bf P}({\bf r})={\bf p}_{\rm A}({\bf r})+{\bf p}_{\rm B}({\bf r}). \tag{7}\] Figure 2: Relationships between trajectories and domain morphology. (A) Depiction of \(p4mm\) columnar phase (\(f=0.3\), \(\chi N=100\), \(\epsilon=5.0\)) with slightly faceted IMDS. (B) Distribution of signed curvature \(\kappa\) (here, \(\kappa>0\) is left-handed and \(\kappa<0\) is right-handed) for a variety of trajectories, showing maxima and minima near the IMDS and near the outer terminal boundary. (C) Measure of chain kinking, given by the angle \(\beta\) between \(\hat{\bf p}_{\rm A}\) and \(\hat{\bf p}_{\rm B}\), evaluated at the IMDS. (D) Measure of chain tilt, given by the angle \(\psi\) between \(\hat{\bf P}\) and the local IMDS normal \({\bf N}_{\rm IMDS}\). In effect, mean trajectories are simply the _integral curves_ of the vector field \(\mathbf{P}(\mathbf{r})\). 
Defining \(\mathbf{R}_{\mathbf{r}_{0}}(t)\) as the trajectory that passes through point \(\mathbf{r}_{0}\) at \(t=0\), the "flow" of trajectories along \(\mathbf{P}(\mathbf{r})\) satisfies \[\partial_{t}\mathbf{R}_{\mathbf{r}_{0}}(t)=\mathbf{P}\big{(}\mathbf{R}_{ \mathbf{r}_{0}}(t)\big{)}, \tag{8}\] subject to the initial condition \(\mathbf{R}_{\mathbf{r}_{0}}(0)=\mathbf{r}_{0}\). Note that \(t\), which parameterizes the flow along a trajectory from the A- to B-end of chains, has no specific relation to the distance between segments along the paths. An example of the relationship between the polar order parameter (magenta vectors) and reconstructed trajectories (yellow stream lines) is shown in Fig. 1A-B. In what follows, we analyze the geometry of chain trajectories as illustrated schematically in Fig. 2. First we can analyze the _bend_ of trajectories \(\mathbf{b}(\mathbf{r})\) from the unit vector of polar orientation \(\hat{\mathbf{p}}(\mathbf{r})\), \[\mathbf{b}(\mathbf{r})=(\hat{\mathbf{p}}\cdot\nabla)\hat{\mathbf{p}}\equiv \kappa(\mathbf{r})\hat{\mathbf{n}}(\mathbf{r}), \tag{9}\] where \(\kappa(\mathbf{r})\) and \(\hat{\mathbf{n}}(\mathbf{r})\) are the curvature and normal to the trajectory at \(\mathbf{r}\). As shown in the example of Fig. 2B, trajectories are largely straight, with the exception of two regions. First are the portions of trajectories near the _outer terminal boundaries_, where trajectories from one domain meet trajectories flowing in from another domain/region. In general, this leads to localized bending of trajectory orientation parallel to those boundaries. We show in Appendix B, however, that such "high deflections" in the distal ends of trajectories correspond to overlap between opposing brushes where the chains lose orientational order, that is, regions where \(|\mathbf{p}(\mathbf{r})|\to 0\). Hence, for the purposes of focusing on the strong-segregation features of chain packing, deflections in this distal zone can be neglected. 
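The flow of Eq. (8) can be integrated with any standard ODE scheme. The sketch below uses a plain fourth-order Runge-Kutta step with a toy radial field whose integral curves are known exactly; in practice \(\mathbf{P}\) would be interpolated from the SCF grid (names and the example field are illustrative):

```python
import numpy as np

def integrate_trajectory(P, r0, dt=0.01, n_steps=200):
    """Eq. (8): integral curve dR/dt = P(R) of the vector field P,
    via 4th-order Runge-Kutta.  P is a callable r -> vector; for SCF
    data it would wrap an interpolator over the polar order field."""
    r = np.asarray(r0, dtype=float)
    path = [r.copy()]
    for _ in range(n_steps):
        k1 = P(r)
        k2 = P(r + 0.5 * dt * k1)
        k3 = P(r + 0.5 * dt * k2)
        k4 = P(r + dt * k3)
        r = r + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(r.copy())
    return np.array(path)

# toy "radial" field P(r) = r: trajectories are rays R(t) = r0 * e^t,
# so after t = 1 the starting point [0.1, 0] flows to [0.1 e, 0]
path = integrate_trajectory(lambda r: r, [0.1, 0.0], dt=0.01, n_steps=100)
```

Since \(t\) carries no metric meaning (as noted above), only the geometric locus of the path matters; the analytic toy field confirms the integrator reproduces it.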
Additionally, some chain configurations show localized bends at the IMDS, which is defined at the points where \(\phi_{\mathrm{A}}(\mathbf{r})=\phi_{\mathrm{B}}(\mathbf{r})=1/2\). These sharp bends, or _kinks_, are anticipated in certain SST models of BCP melts as one means to negotiate the conflicting demands of chain packing [23; 24; 25]. We analyze the kink angle \(\beta\) from SCF solutions, which we take to be the difference between the polar orientation on the A and B side of the IMDS. Since the polar order parameter transforms from all A-type to B-type segments over the interfacial width, in practice it is most convenient to do this by comparing the values of \(\hat{\mathbf{p}}_{\mathrm{A}}\) and \(\hat{\mathbf{p}}_{\mathrm{B}}\) at the IMDS, or \[\cos\beta=\hat{\mathbf{p}}_{\mathrm{A}}(\mathbf{r})\cdot\hat{\mathbf{p}}_{ \mathrm{B}}(\mathbf{r}),\qquad\text{for }\mathbf{r}\in\text{IMDS}. \tag{10}\] In Appendix C, we compare this measure of kink to the angle between orientations \(\hat{\mathbf{P}}\) along the same trajectory, but at points just "up-/down-stream" of the composition gradient at the IMDS, and find that both measures capture at least the same qualitative features of the packing and its dependence on BCP parameters. A related feature of local chain packing geometry is the _tilt_ of chains relative to the IMDS. While the simplest models of packing assume that the mean trajectories of chains extend _normal_ to the IMDS, such a pattern may come into conflict with constraints of filling space at constant density. This feature of smectic-\(C\)-like packing is well appreciated in packing models of lyotropic phases of amphiphiles [26; 27], particularly in complex, bicontinuous phases. More recently, a SST model of network phases based on the so-called _medial packing_ suggested that tilt is a generic feature of BCP melt packing as well [28; 29]. 
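The kink measure of Eq. (10) reduces to a dot product between unit orientations sampled at IMDS points. A small sketch (with made-up orientation vectors; the helper name is illustrative):

```python
import numpy as np

def kink_angle(p_A, p_B):
    """Eq. (10): kink angle β (in degrees) between the A- and B-block
    polar orientations evaluated at IMDS points.
    p_A, p_B: arrays of shape (n_points, dim)."""
    pA = p_A / np.linalg.norm(p_A, axis=1, keepdims=True)
    pB = p_B / np.linalg.norm(p_B, axis=1, keepdims=True)
    cosb = np.clip(np.sum(pA * pB, axis=1), -1.0, 1.0)  # guard rounding
    return np.degrees(np.arccos(cosb))

# example: a straight-through chain (β = 0) and a 30° kink
p_A = np.array([[1.0, 0.0], [1.0, 0.0]])
p_B = np.array([[1.0, 0.0], [np.cos(np.pi / 6), np.sin(np.pi / 6)]])
beta = kink_angle(p_A, p_B)   # ≈ [0., 30.]
```

The tilt angle \(\psi\) of Eq. (11) follows from the same construction with \(\hat{\mathbf{p}}_{\rm B}\) replaced by the IMDS normal. Note that the dot product yields an unsigned angle, so the sign (direction) of kinking discussed in Sec. III must be assigned separately from the local geometry of the fundamental domain.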
To assess the degree of tilt, we measure the angle \(\psi\) between the mean chain orientation \(\hat{\mathbf{P}}\) and the IMDS normal \(\hat{\mathbf{N}}_{\text{IMDS}}\), \[\cos\psi=\hat{\mathbf{P}}(\mathbf{r})\cdot\hat{\mathbf{N}}_{\text{IMDS}}( \mathbf{r})\qquad\text{for }\mathbf{r}\in\text{IMDS}, \tag{11}\] where \(\hat{\mathbf{N}}_{\text{IMDS}}\equiv-\nabla\phi_{\mathrm{A}}/|\nabla\phi_{ \mathrm{A}}|\) defines the normal to the IMDS, where \(\phi_{\mathrm{A}}(\mathbf{r})=1/2\). ### Association map, domains and terminal boundaries Beyond the local geometry of chain trajectories, packing is generally concerned with the shapes of _domains_ and the distributions of conformations of the chains that compose those domains. Following ref. [30], we define a _domain_ as the set of points in space occupied by BCP chains whose junctions belong to a particular IMDS, a property we denote as _association_. Here, we construct association in terms of the mean trajectories extracted from SCF calculations. Since every point on the IMDS is associated with a distinct trajectory, all points in a domain (except at terminal boundaries) can be mapped to single, corresponding points on an IMDS; this defines the _association map_. The association map \(\boldsymbol{\alpha}(\mathbf{r})\) is constructed by selecting an arbitrary point \(\mathbf{r}\) and then propagating from that point using Eq. (8) until the IMDS is reached. In terms of the notation defined above, \[\begin{split}\mathbf{\alpha}(\mathbf{r})&=\mathbf{R}_{\mathbf{ r}}(t_{\text{IMDS}}),\\ &\text{such that }\phi_{\text{A}}\big{(}\mathbf{R}_{\mathbf{r}}(t_{ \text{IMDS}})\big{)}=1/2\,.\end{split} \tag{12}\] Note that this introduces a map from the points on the IMDS to all points within a domain, corresponding to the trajectories that pass through the IMDS. Figure 3: Global structure of the association map and terminal boundaries in a columnar morphology. (A) focuses on a domain, with the inner terminal boundary (“Term. A”) highlighted in red, as well as example points \(\{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\}\) with their associations \(\{\boldsymbol{\alpha}(\mathbf{r}_{1}),\boldsymbol{\alpha}(\mathbf{r}_{2}), \boldsymbol{\alpha}(\mathbf{r}_{3})\}\) onto a single IMDS. (B) shows the association maps onto distinct IMDSs with separate outer terminal boundaries (“Term. B”) of distinct domains highlighted in different colors. Example points \(\{\mathbf{r}_{4},\mathbf{r}_{5},\mathbf{r}_{6}\}\) associate to distinct IMDSs, despite their relatively close proximity in space. 
As shown in Fig. 3A, the preimage of the association map (i.e., \(\mathbf{\alpha}^{-1}\)) consists of trajectories passing through the IMDS that extend through the A and B portions of the domain. The ends of these trajectories mark the _terminal boundaries_ of the domains. The association map \(\boldsymbol{\alpha}\) can, however, be multi-valued for a subset of points, namely those that lie on the terminal boundaries of a domain. This derives from the fact that at the contact point between two locally opposing brush regions, the segments at that point are equally likely to be anchored (i.e. their junctions are located) at different IMDS points. We thus determine the locations of the terminal boundaries by searching for regions where the association map fails to be continuous. Operationally, we search for regions where the Jacobian matrix, \[\Lambda_{ij}(\mathbf{r})=\frac{\partial\alpha_{i}}{\partial r_{j}} \tag{13}\] becomes singular, i.e. where, numerically, the principal eigenvalue \(\Lambda\) of the matrix \(\mathbf{\Lambda}\) tends to diverge (\(|\Lambda|\to\infty\)). As outlined in Appendix D, to implement this numerically from SCF solutions, we remesh our solutions with triangular (2D) or tetrahedral (3D) elements, and interpolate \(\boldsymbol{\alpha}(\mathbf{r})\) onto vertices. This remeshing allows us to increase resolution near singular points in the association map as needed. 
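The singularity criterion of Eq. (13) can be sketched on a uniform grid, a simplified stand-in for the adaptive finite-element version described in Appendix D. The toy association map and threshold below are illustrative only: the left half of a cell associates to one IMDS point and the right half to another, mimicking two brushes meeting at a terminal boundary:

```python
import numpy as np

def max_stretch(alpha, dx):
    """Eq. (13): largest singular value of the Jacobian Λ_ij = ∂α_i/∂r_j
    of a 2D association map alpha[i, j, :], via finite differences.
    Divergent stretch flags candidate terminal-boundary points."""
    dadx = np.gradient(alpha, dx, axis=0)   # ∂α/∂x for both components
    dady = np.gradient(alpha, dx, axis=1)   # ∂α/∂y for both components
    Lam = np.stack([dadx, dady], axis=-1)   # shape (..., 2, 2)
    return np.linalg.svd(Lam, compute_uv=False)[..., 0]

# toy map: points with x < 0 associate to x = -1, points with x > 0 to x = +1,
# so the map is discontinuous across the "terminal boundary" at x = 0
dx = 0.05
x = np.arange(-1.0, 1.0, dx)
X, Y = np.meshgrid(x, x, indexing="ij")
alpha = np.stack([np.sign(X), Y], axis=-1)

s = max_stretch(alpha, dx)
boundary = s > 5.0   # illustrative threshold Λ_thresh; flags x ≈ 0
```

In smooth regions the stretch stays of order one, while the finite-difference Jacobian blows up (limited only by grid resolution, as in the meshed version) exactly at the discontinuity, so thresholding isolates the boundary.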
The discrete approximation of \(\mathbf{\Lambda}\) can be computed using these finite elements, by noting that under the action of the association map \(\mathbf{\alpha}\), each element is stretched and compressed in different directions, characterized by the affine matrix \(\Lambda_{ij}\). We search for regions where this deformation diverges in terms of the maximal stretch of this transformation. The maximal stretch is determined by the maximal eigenvalue of the matrix product \(\mathbf{\Lambda}^{T}\mathbf{\Lambda}\); we can then identify terminal boundaries as passing through facets whose maximal eigenvalue is larger than some threshold value \(\Lambda_{\text{thresh}}\). As noted in Appendix D, the maximal distortion is intrinsically limited by mesh resolution, and in practice some analysis of the distribution of element distortions is used to numerically delineate the singular from the non-singular regions. Finally, in order to generate meshes of these terminal boundaries, we start with a mesh of the IMDS (i.e. the isocontour \(\phi_{\text{A}}(\mathbf{r})=1/2\)) and flow vertices along trajectories until the threshold stretch eigenvalue is reached. Examples of the terminal boundaries for a 2D columnar morphology are highlighted in Fig. 3. The terminal boundary of the A region (highlighted as red in Fig. 3A) corresponds to points where chains are associating to distant points on the _same_ convex IMDS, which we call an _inner terminal boundary_. As we describe below, the fact that this inner terminal boundary is in general not point-like indicates that the chains are not focusing to the centroid of the convex domain, but instead their terminal ends spread over a finite 1D region of the cross-section. The terminal boundaries of the B regions (highlighted as different colors according to distinct domains in Fig. 3B) correspond to points where chains are associating to _distinct_ IMDSs. 
Denoting these as _outer terminal boundaries_, they clearly split the melt into distinct domains and function similarly to Voronoi (or Wigner-Seitz) cells of the assembly. Crucially, unlike Voronoi cells, which are defined in terms of distances from central points, terminal boundaries are defined by the actual underlying molecular conformations. In other words, the geometric features of these terminal boundaries are _selected by the molecules_ as means to minimize the system's free energy. As we show below, this distinction means that shapes of terminal boundaries vary with BCP parameters controlling segregation, composition and chain stiffness, unlike Voronoi cells, which are fixed for a given space group. ## III Columnar morphologies We first illustrate the analysis of chain packing geometry and the terminal boundaries by considering 2D columnar mesophases. Columnar phases have long provided a testing ground for ideas related to packing frustration and how it alters domain morphology and thermodynamics in BCP melts [31, 32, 33, 11, 12, 13, 34]. The conflict between the thermodynamically favored uniform cylindrical geometry and the need to "fill the empty corners" that would be created by close packing of cylinders is generally considered to be a source of frustration [13], which is resolved by some combination of variable IMDS curvature and chain trajectory deflection towards the interstices in the packing. In SST models, Olmsted and Milner formulated two variants of chain trajectories that satisfy packing constraints [34, 25]. On one hand, if the IMDS remains perfectly round, chains can kink from radial orientation towards the interstices to satisfy local volume constraints. On the other hand, if the IMDS perfectly copies the shapes of the (outer terminal boundary) unit cell, chains can retain straight trajectories, but will clearly incline, or tilt, with respect to the IMDS. 
Known respectively as the _kinked-_ and _straight_-path ansatzes, these represent two extremes for how BCP chains resolve packing frustration, and of course, variants that interpolate between these extremes suggest that SST packing [31, 12], even in columnar morphologies where frustration is relatively weaker than in other morphologies, can vary significantly with chain parameters. In what follows, we consider variable A-block volume fraction \(f\) as well as variable conformational asymmetry \(\epsilon\), which controls the elastic asymmetry between A- and B-brush regions. In the context of these prior motifs for chain packing in columnar morphologies, we explore the chain trajectories at finite, but generally large, values of \(\chi N\), based on SCF solutions which impose no assumptions of the packing motif and therefore reflect at least some degree of conformational fluctuations absent from SST. We first consider the classical hexagonal cylinder phase, as well as the more frustrated square phase, to illustrate how inter-domain packing alters the subdomain geometry of chain packing. Next, we consider a lower symmetry family of cylinder packings to explore the link between domain anisotropy, terminal boundary geometry, and chain packing. ### Trajectories near the IMDS: Packing in Hexagonal (\(p6mm\)) and Square (\(p4mm\)) Cylinders We analyze the chain trajectories of square and hexagonal lattice columnar phases extracted from SCF solutions for a range of chain parameters. Examples in Fig. 4 highlight trajectories within the fundamental domain (or, asymmetric wedge) of each morphology, at variable \(f\) and \(\epsilon\) with fixed \(\chi N=100\). Of the ordered columnar phases, the hexagonal cylinder (with space group \(p6mm\), shown in Fig. 4E) is most generically an equilibrium phase for linear diblocks. 
Heuristically, this is often attributed to the fact that hexagonal close packing has the lowest void density (\(<10\%\)), so that the distortions of chain packing away from cylindrical symmetry needed to fill the gaps are minimal compared to other packings [12, 35, 3]. Notably, the square columnar phase (space group \(p4mm\), shown in Fig. 4A) has been observed in some block copolymer architectures [36, 37, 38], as well as under template-directed assembly conditions [39], and is a morphology that is expected to be relatively challenging for chains to occupy, requiring larger deflections from radial trajectories. We quantify these distinctions by comparing trajectories extracted from SCF solutions of square and hexagonal cylinder packings. The reconstructed chain trajectories in Fig. 4B (\(p4mm\)) and Fig. 4F (\(p6mm\)) clearly illustrate the frustrated nature of packing in columnar phases, where trajectories bend away from the direction of the nearest cylindrical neighbor (\(\theta=0\)) towards the next nearest neighbor (\(\theta=45^{\circ}\) or \(30^{\circ}\) for \(p4mm\) and \(p6mm\), respectively). This deflection of the trajectories towards the diagonal is visibly smaller for the case of hexagonal cylinders, where chains are deflected by a smaller angle due to the higher coordination (or symmetry). For both square and hexagonal phases, the cylindrical domain outlined by the IMDS becomes increasingly warped away from a circular shape with increasing \(f\) or \(\epsilon\) (Fig. 4C, G). Increasing both parameters is expected to increase the relative importance of the stretching free energy of matrix (B) blocks relative to the core (A) blocks and the IMDS surface energy, and has been argued to lead to "quasi-faceted" IMDS shapes akin to the straight-path SST assumptions [12, 31]. 
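The "\(<10\%\)" void-density figure for hexagonal packing can be checked directly from ideal circle-packing geometry (a quick back-of-the-envelope calculation, separate from the SCF analysis):

```python
import numpy as np

# Void fractions for ideal circles of radius R = 1 touching their nearest
# neighbors (lattice constant 2R), i.e. the interstitial area a matrix
# domain must fill beyond the circular cross-section.
area_circle = np.pi
hex_cell = 2.0 * np.sqrt(3.0)   # area of hexagonal Wigner-Seitz cell
sq_cell = 4.0                   # area of square unit cell

void_hex = 1.0 - area_circle / hex_cell
void_sq = 1.0 - area_circle / sq_cell
print(f"hexagonal void fraction: {void_hex:.3f}, square: {void_sq:.3f}")
```

The hexagonal lattice leaves about 9.3% of the cell unfilled by the ideal circular domain, versus about 21.5% for the square lattice, consistent with the heuristic that square packing demands substantially larger deflections from radial trajectories.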
For a given set of parameters, this IMDS faceting is more obvious for the square packing, presumably due to the larger variation of IMDS-to-outer terminal distance traversed, which is mitigated by deforming the IMDS such that the matrix domain approaches a more uniform thickness. Underlying the more obvious changes of IMDS shape with increasing \(\epsilon\) or \(f\) are more subtle changes in trajectories. We first analyze the kink angles \(\beta\) around the IMDS in Fig. 5. Fig. 5A and E compare the kinking as a function of angular position at the IMDS, which reverses sign across the mirror planes separating asymmetric wedges (at \(\theta_{\text{max}}=45^{\circ}\) and \(30^{\circ}\), respectively). Interestingly, the sign of the kink angle \(\beta\) within a fundamental domain (\(0\leq\theta<\theta_{\text{max}}\)) depends on \(\epsilon\), with relatively small negative values at low \(\epsilon\) and large positive values at high \(\epsilon\).

Figure 4: Domain structure of square cylinders (\(p4mm\)) with cell edge length \(d\) at fixed \(\chi N=100\) and for parameters (A) \(f=0.20\), \(\epsilon=0.5\) and (C) \(f=0.55\), \(\epsilon=5.0\), with corresponding trajectories in (B) and (D), respectively. Selected trajectories pass through the IMDS at angle \(\theta\) with respect to the \(x\)-axis. The fundamental domain \(0\leq\theta<45^{\circ}\) is highlighted. Domain structure of hexagonal cylinders (\(p6mm\)) with cell edge length \(d\) at fixed \(\chi N=100\) and for parameters (E) \(f=0.20\), \(\epsilon=0.5\) and (G) \(f=0.55\), \(\epsilon=5.0\), with corresponding trajectories in (F) and (H), respectively. The fundamental domain \(0\leq\theta<30^{\circ}\) is highlighted. Chosen parameters approximately follow the \(p6mm\)-lamella equilibrium phase boundary as reported in [31].
As such, both morphologies exhibit an inversion in the sign (_or direction_) of kinking, during which the kink angle vanishes on average (at \(\epsilon\simeq 1.45\) and \(\epsilon\simeq 1.75\) for \(p4mm\) and \(p6mm\), respectively). Curiously, both morphologies exhibit kink angles that alternate in sign _within a fundamental domain_ for intermediate values of \(\epsilon\) around the inversion point, as shown for \(p4mm\) in Fig. 5A. Importantly, the general variation of mean kink angle with \(\epsilon\) is robust, confirmed for broad ranges of \(\chi N\) and \(f\) (see Fig. 5C,D and G,H). Perhaps intuitively, we observe a much larger degree of kinking in square over hexagonal cylinders, reflecting the better close-packing geometry of cylinders in hexagonal over square lattices. The negative kink angles at small \(\epsilon\) are seemingly consistent with the kinked-path ansatz [25], where stiffer A blocks prefer a uniformly radial packing, and deflect towards an interstitial corner (next nearest neighbor direction) at the IMDS in order to fill the B domain at uniform density. Surprisingly, this degree of negative tilt is relatively small compared to a much more prominent positive kink at large \(\epsilon\). This effect, which might be called a "counter-kinking" arrangement, can be rationalized heuristically by the tendency of the stiffer block to intersect the IMDS in a nearly-normal orientation in order to minimize the cost of stretching (i.e., associating with the shortest possible path to the IMDS in the stiffer domain).

Figure 5: (A) Kink angle as a function of polar angle \(0\leq\theta\leq 90^{\circ}\) for the square cylinder phase at \(\epsilon=0.5-5.0\) (fixed \(\chi N=100\), \(f=0.3\)). Intermediate values of \(\epsilon\sim 1.4-1.5\) show an inflection of the kink angle within the fundamental domain \(0\leq\theta\leq 45^{\circ}\).
(B) Average of the kink angle, \(\langle\beta\rangle\), taken over a fundamental domain as a function of \(\epsilon\) for \(\chi N=60-150\) (fixed \(f=0.3\)), showing that the kink angle tends towards negative values for low \(\epsilon\), positive values for high \(\epsilon\), and passes through \(0\) at \(\epsilon\simeq 1.45\pm 0.05\). Inset shows the variation in the kink angle, where the shaded region extends for a single standard deviation on either side of the mean. Finally, bending angle statistics for extreme values of conformational asymmetry (\(\epsilon=0.5\) and \(5.0\)) are shown as a function of (C) \(\chi N\) and (D) \(f\), confirming the consistency of the sign of the kink angle. (E)-(H) show these data for the hexagonal cylinder phase (\(p6mm\)), where, notably, the kink inversion occurs at slightly higher \(\epsilon\), \(\epsilon\simeq 1.75\pm 0.05\).

There is a smaller penalty for the less-stiff block to meet the IMDS off-normal, along a longer, more distant path that presumably has to absorb the cost of packing frustration, resulting in a kinked trajectory that becomes increasingly pronounced for more extreme values of conformational asymmetry \(\epsilon\). Since the cylinder and matrix domains invert in stiffness as \(\epsilon\) is tuned through 1, this argument predicts an inversion of the kink angle; a similar tendency has been seen in the triply-periodic network phases [29]. Relative to low \(\epsilon\), the degree of positive counter-kink for large \(\epsilon\) is enhanced due to simultaneous faceting of the IMDS, which becomes more parallel to the outer terminal boundary (i.e. the square and hexagonal Voronoi cells), requiring even greater deflection of the nominally radial inner block trajectories (see Fig. 4D, H). This scenario is consistent with the trends observed for the tilt angle \(\psi\) of mean trajectories with respect to the IMDS, shown in Fig. 6.
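To make the two angle diagnostics concrete, the following sketch computes a signed kink angle \(\beta\) between the A-side and B-side trajectory tangents at an IMDS crossing and a tilt angle \(\psi\) between a tangent and the local IMDS normal, then averages over samples spanning a fundamental domain, as for \(\langle\beta\rangle\) and \(\langle\psi\rangle\). This is illustrative only; the function names, 2D restriction, and sign convention are our assumptions, not the actual analysis code.

```python
import numpy as np

def _unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def signed_angle_2d(a, b):
    """Signed angle (degrees) rotating from vector a to vector b in the
    cross-sectional plane."""
    a, b = _unit(a), _unit(b)
    return np.degrees(np.arctan2(a[0] * b[1] - a[1] * b[0], a @ b))

def kink_angle(tangent_A, tangent_B):
    """Kink beta: change of direction between the A-side and B-side
    tangents of a mean trajectory at its IMDS crossing."""
    return signed_angle_2d(tangent_A, tangent_B)

def tilt_angle(imds_normal, tangent):
    """Tilt psi: deviation of the trajectory tangent from the local
    IMDS normal."""
    return signed_angle_2d(imds_normal, tangent)

def domain_average(angles):
    """Mean and standard deviation over samples spanning a fundamental
    domain, as for <beta> and <psi>."""
    angles = np.asarray(angles, float)
    return angles.mean(), angles.std()
```

A purely radial trajectory meeting a circular IMDS would give both angles equal to zero; the inversion of \(\langle\beta\rangle\) with \(\epsilon\) corresponds to its sign flipping under this convention.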
Here, we observe, for both symmetries and all conditions, that the sign of \(\psi\) is always the same (negative), and the inclination increases with _both_ \(\epsilon\) and \(f\). This trend is consistent with the increasing IMDS faceting, which varies from relatively round (low \(\epsilon\), \(f\)) to relatively polyhedral (high \(\epsilon\), \(f\)), driven by the prerogatives of the stiffer block. Hence, even while the chain trajectories remain within a few degrees of radial, deformation of the IMDS away from circular implies they meet it at tilt with increasing quasi-polyhedral warping, as shown in Fig. 4C,D and G,H. Taken together, comparison of the similar ranges of kink and tilt angles in Figs. 5 and 6 suggests that, to a large extent, the trajectory at the IMDS is determined by the warping of its shape from circular towards polyhedral, combined with the effect of elastic asymmetry to orient the stiffer block trajectories normal to the IMDS. Additionally, the degree of kinking, while measurable, remains fairly small for the most stable columnar state of hexagonal cylinders. For both morphologies, we find evidence that the degree of tilt and kink tends towards asymptotic saturation in the \(\chi N\rightarrow\infty\) limit, suggesting the features and trends analyzed at finite segregation bear hallmarks of a well-defined SST packing limit.

### Trajectories near their termini: Packing in Stretched (\(c2mm\)) Cylinders

Next, we turn to the shape of domain edges, i.e. the _terminal boundaries_, of columnar phases. For BCP and other amphiphile assemblies, the ends of core (A) and matrix (B) domains of columnar phases are often assumed to take simplified forms. The contact surface between opposing brushes in the outer boundary is most often approximated by the 2D Voronoi cell of the packing, while inner chain trajectories are assumed to extend to a central line along the centroid of the 2D cross-section of that cell.
As discussed previously [28; 30], these seemingly intuitive assumptions are likely to fail to describe the domain geometry in many conditions, specifically because they do not reflect how changes in IMDS shapes impact the association of BCP chains to the IMDSs. That is, Voronoi cells describe only the set of points closest to a given generating point (usually a Wyckoff position of a 2D crystal), not necessarily the points where chains are most likely to associate to one IMDS or another. Previously it was argued that the _medial map_ [30], which maps all points in the morphology onto the closest point on the IMDSs [40; 15], would therefore provide a better (albeit purely geometric) proxy for an association map for a given IMDS shape. Here we analyze the association maps and their terminal boundaries directly from SCF solutions for a set of columnar morphologies with non-trivial symmetry. In particular, we compute a family of solutions for fixed parameters -- \(f=0.3\), \(\epsilon=1\), and \(\chi N=100\) -- for the \(c2mm\) space group. These solutions correspond to centered rectangular unit cells, including two columns per cell, in general parameterized by unit cell parameters \(\ell_{x}\) and \(\ell_{y}\). We define the aspect ratio \(\lambda\equiv\sqrt{3}\ell_{x}/\ell_{y}\) in terms of the distortion away from \(p6mm\)-like packing at \(\ell_{x}/\ell_{y}=1/\sqrt{3}\), which has higher, sixfold symmetry. For each \(\lambda\) we consider SCF solutions with the minimal free-energy cross-sectional area at fixed aspect ratio. We consider the range \(0.66\leq\lambda\leq 3\), spanning a range where the nearest neighbor direction in hexagonal packing is relatively compressed (\(\lambda<1\)) or stretched (\(\lambda>1\)). For small distortions from \(\lambda=1\), the cross-sectional IMDS shapes become evidently eccentric and approximately elliptical, with major axes along the stretch direction. Fig.
7 shows a sequence of SCF solutions with corresponding association maps and terminal boundaries. Notably, for \(\lambda<1\) the domain cross-section becomes increasingly stretched, far beyond elliptical, for aspect ratios much lower than 1 (Fig. 7A). In contrast, increasing \(\lambda>1\) leads to a more complex evolution of domain anisotropy. This is because \(\lambda=\sqrt{3}\) (Fig. 7D) corresponds to equal unit cell dimensions, commensurate with a square (\(p4mm\)) packing. Stretching even further past the square packing, to \(\lambda=3\), the structure returns to hexagonal (\(p6mm\)) packing, but with neighbor directions rotated by \(30^{\circ}\) relative to \(\lambda=1\) (Fig. 7F). Hence, at these special aspect ratios, the IMDS shapes are fairly circular (or at least, consistent with 6- or 4-fold symmetry), while at intermediate points (Fig. 7C, E) the domains exhibit evidently anisotropic shapes. Domain anisotropy has an obvious effect on the underlying chain packing, most evident in the geometry of the inner (A) terminal boundary. Crudely speaking, the terminal boundaries of _isotropic domains_ appear point-like, within our ability to resolve them via discrete meshes (see Appendix D), while for anisotropic shapes, these boundaries _spread_ along the longer axis of the domain shape. The spreading of the terminal boundary reflects the fact that the chain trajectories do not focus to a central point in the cross-section, and to some extent pack in quasi-lamellar fashion at the core of the domain. The extent of this line-like inner terminal boundary provides a measure of quasi-lamellar packing in the anisotropic columnar phase. To characterize the shape of the inner (A) terminal boundary, we compute its r.m.s. deviation from the cell centroid in both directions, \(\langle(\Delta x)^{2}\rangle^{1/2}\) and \(\langle(\Delta y)^{2}\rangle^{1/2}\), and normalize those by \(R_{0}\), defined as the radius of a circle of equal area to the A-subdomain (i.e.
\(\pi R_{0}^{2}=f\ell_{x}\ell_{y}/2\)). Results for the r.m.s. extent of the A terminal boundary in both directions are plotted in Fig. 8 as a function of \(\lambda\). These show that for the cases of hexagonal (\(\lambda=1\), \(3\)) or square (\(\lambda=\sqrt{3}\)) packing, the \(x\) and \(y\) dimensions of the terminal boundary are equal (and of small magnitude), consistent with a point-like shape, although the finite mesh limits the ability to resolve any potentially finer-scale features. Away from these points, the two dimensions become unequal, by amounts that are consistent with the visibly anisotropic IMDS shapes. Note that while the terminal boundary is likely line-like for most of these anisotropic shapes, the minimal value of the scaled r.m.s. dimensions of the terminal boundary is never lower than \(\sim 10^{-2}\), reflecting the inherent limits imposed by finite grid resolution (as discussed in Appendix D). The observed changes in these dimensions are at least an order of magnitude larger than this scale, indicating that, at least for sufficiently anisotropic domains, analysis of the singularities of the association map reconstructed from SCF is nevertheless able to illuminate these otherwise "hidden" features of chain packing and their dependence on anisotropic domain shapes.

Figure 6: (A) Tilt angle as a function of polar angle \(0\leq\theta\leq 90^{\circ}\) for the square cylinder phase at \(\epsilon=0.5-5.0\) (fixed \(\chi N=100\), \(f=0.3\)). (B) Average of the tilt angle, \(\langle\psi\rangle\), taken over a fundamental domain as a function of \(\epsilon\) for \(\chi N=60-150\) (fixed \(f=0.3\)), showing that the tilt angle is negative for all \(\epsilon\). Inset shows the variation in the tilt angle, where the shaded region extends for a single standard deviation on either side of the mean. Finally, tilting angle statistics for extreme values of conformational asymmetry (\(\epsilon=0.5\) and \(5.0\)) are shown as a function of (C) \(\chi N\) and (D) \(f\), confirming the consistency of the sign of the tilt angle. (E)-(H) show these data for the hexagonal cylinder phase (\(p6mm\)).

## IV Three-dimensional morphologies: network and micellar crystals

We now turn to 3D complex morphologies, which are often considered to be subject to even larger degrees of packing frustration than their 2D columnar counterparts [13]. Having shown that the anisotropy of 2D domains has a prominent impact on underlying chain trajectories, most notably the spreading of the terminal boundaries, here we focus on the terminal boundaries of two classes of 3D morphologies: bicontinuous networks and 3D crystalline arrangements of quasi-spherical (micelle-like) domains.
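Both for the 2D columnar cross-sections above and for the 3D analyses that follow, terminal-boundary shape is quantified by the per-axis r.m.s. spread of boundary points about the domain center, scaled by an effective radius \(R_0\) (a circle of equal A-block cross-sectional area in 2D; a sphere of equal core volume in 3D, per the Fig. 12 caption). The following is a minimal sketch of this bookkeeping, with illustrative function names that are not from the study's analysis code:

```python
import numpy as np

def rms_spread(points):
    """Per-axis r.m.s. spread of terminal-boundary sample points about
    their centroid; accepts (N, 2) or (N, 3) point arrays."""
    pts = np.asarray(points, float)
    d = pts - pts.mean(axis=0)          # deviations from the centroid
    return np.sqrt((d ** 2).mean(axis=0))

def effective_radius_2d(area):
    """R0 for a circle whose area equals the A-subdomain cross-section."""
    return np.sqrt(area / np.pi)

def effective_radius_3d(volume):
    """R0 for a sphere whose volume equals the A-block domain core."""
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
```

A point-like boundary gives equal, small spreads along every axis; a line-like or disk-like boundary gives a strongly unequal spread, which is the signature used to detect quasi-lamellar packing.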
In both cases, the nature and shapes of the terminal boundaries and their role in describing the thermodynamics of packing frustration have been the subject of considerable debate and speculation for decades. Here, we analyze the terminal boundaries directly from the SCF description of BCP conformations, and compare them to the previously invoked proxies for the shape of domain edges, including Voronoi and medial geometry.

### Bicontinuous double-gyroid networks

Terminal geometry plays a crucial role in stabilizing triply-periodic network phases. In particular, the stability of the double gyroid (DG) with respect to lamellae and columnar morphologies has long been thought to be due to its ability to relieve "packing frustration" [11]. However, the prevailing picture relied on a simplification of terminal geometry wherein the inner terminal boundary consists of the 1D skeletal graph, with the outer terminal boundary resembling the gyroid minimal surface [41].

Figure 8: Structure of the family of continuously stretched cylinders, parameterized by stretch factor \(\lambda\). Inner terminal boundary shape, quantified by the r.m.s. spread of points around the domain center along the x-direction (\(\sqrt{\langle\Delta x^{2}\rangle}\), purple) and the y-direction (\(\sqrt{\langle\Delta y^{2}\rangle}\), orange), in units of the effective IMDS radius \(R_{0}\). Inset relates the aspect ratio \(\ell_{x}/\ell_{y}\) of the \(c2mm\) unit cell to \(\lambda\).

Figure 7: Domain morphologies, chain trajectories, outer terminal boundaries (blue) and inner terminal boundaries (red) for stretched cylinders along the deformation path (steps A - F). The hexagonal cylinder structure in step B is transformed to an equivalent hexagonal cylinder structure at step F (rotated by \(90^{\circ}\)) by passing through a square phase at step D. Purple and orange arrows delineate the \(x\)- and \(y\)-edges, (\(\ell_{x}\), \(\ell_{y}\)) respectively, of the unit cell at each step.
The skeletal graph approximation for the inner terminal boundary proved problematic for investigations of packing frustration based on strong-segregation theory (SST), as it places extreme constraints on A-block chain packing [42; 43; 25; 44; 23]. Recently [28; 29], it was shown via a "medial surface" construction of SST (so-called medial-SST) that the inner terminal boundaries are better approximated by twisting ribbon-like surfaces that contain the skeletal graph, yet relax constraints on chain packing without violating the space-filling constraints required of a polymer melt. Crucially, it was shown in the SST limit, that entropic relaxation due to "spreading" of terminal ends of the tubular block (A-block) away from the confines of the 1D skeleton onto a web-like terminal surface is necessary to account for the equilibrium stability of DG intermediate to (hexagonal) columnar and lamellar morphologies for diblock melts. Here, we test this _medial ansatz_ for packing in DG morphologies by direct analysis of the terminal boundaries from SCF solutions. Using the terminal map \(\mathbf{\alpha}\) extracted from SCF trajectories, we find a set of terminal boundaries for a DG morphology with \(f=0.29\) and \(\chi N=50\), shown in Fig. 9A. Here, we focus on a fundamental unit of the double gyroid centered about the network nodal regions (i.e. the 16b Wyckoff positions) as an analog of the Voronoi cells in columnar and spherical phases. As they are bounded by a terminal boundary that wraps a single domain, these have been dubbed the "mesoatoms" of the DG assembly, and are taken to represent a basic unit of self-assembly [45]. The inner terminal boundary consists of a nearly-flat surface with trihedral coordination that twists by \(70.5^{\circ}\) about edges connecting neighboring nodes, similar to the inner medial web shown for comparison in Fig. 9B. 
The extended flat sections of the inner terminal web indicate regions of quasi-lamellar chain packing, with chain trajectories in the vicinity oriented roughly parallel to each other. About the thin strut region and along the boundary of the inner terminal web, trajectories extend in a radial pattern, indicating quasi-cylindrical chain packing. This hybrid of lamellar and cylindrical packing is thought to rationalize the stability of the double gyroid phase intermediate to lamellar and columnar phases, which has been supported by the medial model of terminal boundaries for the double gyroid and other network phases [29]. As demonstrated by the projection of the terminal webs along the [111]-direction in the bottom panels of Fig. 9, the SCF-computed inner terminal boundary is similar in gross shape to the medial surface generated by the same IMDS, but is slightly reduced in dimensions relative to the medial set: the projected area of the surface shown in the inset of Fig. 9A is roughly 50% less than that of the corresponding medial surface in the inset of panel B. We attribute the reduction in size of the SCF inner terminal boundary compared with the inner medial surface to the fact that trajectories are in general not normal to the IMDS, and exhibit at least a modest degree of tilt and kinking (see [29]), which is also evident in Fig. 9A.

Figure 10: (A) Plot of A-block density field \(\phi_{\rm A}(\mathbf{r})\) in a unit cell of the BCC sphere phase with a single mesoatomic cell bounded by an outer terminal boundary (blue), with \(f=0.29\) and \(\chi N=50\). (B) Cutaway of the outer terminal boundary shows an IMDS (gray). (C) Cross-section of the IMDS shows the point-like inner terminal boundary (red), along with a selection of trajectories.

Figure 9: "Mesoatom" unit of the double gyroid network phase with inner and outer terminal boundaries shown in red and blue; the IMDS is shown in gray.
There is an additional inner terminal "web" shown outside of the mesoatom that twists between neighboring nodes. (A) shows SCF-computed (\(f=0.29\) and \(\chi N=50\)) terminal boundaries and a selection of trajectories. (B) shows the medial set model of terminal boundaries. Comparisons of the inner terminal boundary geometry along the [111]-direction are shown below.

### Classical and Frank-Kasper sphere phases

In contrast to the network morphologies, sphere morphologies share compact domain structures that are 3D analogs of the 2D cross-sections of the columnar morphologies already discussed. These sphere phases are effectively crystalline packings of micelle-like domains, warped in shape by the lower-symmetry constraints of their inter-domain arrangement [12; 13; 46]. Here, we analyze SCF solutions at \(\chi N=50\) and \(f=0.29\), which is above the core composition window for sphere phases of elastically-symmetric chains, but within the window where stiff matrices stabilize them at high \(\epsilon\), notably the Frank-Kasper phases [16]. The most generically stable of the simple sphere phases, the BCC packing, is shown in Fig. 10, with rendered chain trajectories and terminal boundaries. Owing to the single mesoatom of the BCC crystal along with symmetry constraints, the outer terminal boundary, which we refer to as its _terminal cell_, appears to conform closely to its Voronoi cell, a truncated octahedron (note, we do not attempt to resolve the possible curvature of the terminal cells). As shown in Fig. 10B, the IMDS enclosed by the outer terminal boundary is nonetheless highly spherical (here, shown for \(f=0.29\) and \(\epsilon=1.0\) at \(\chi N=50\)). The association of points between the faceted outer terminal boundary and the round IMDS requires curved chain trajectories, as shown in Fig. 10C, terminating in a small, roughly point-like inner terminal boundary, consistent with the conventional assumption of radial packing.
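The truncated-octahedral Voronoi cell invoked above has a simple analytic description. As a self-contained illustration (standard crystallographic geometry, not code from this study), membership in the Voronoi cell of a BCC site reduces to two inequalities: the six square faces bisect the second-neighbor (cube-edge) directions, and the eight hexagonal faces bisect the body-diagonal nearest-neighbor directions.

```python
import numpy as np

def in_bcc_voronoi_cell(p, a=1.0):
    """True if point p lies inside the Voronoi cell (a truncated
    octahedron) of the BCC lattice site at the origin, for cubic lattice
    constant a. Square faces: |x_i| <= a/2 (second-neighbor bisectors);
    hexagonal faces: |x| + |y| + |z| <= 3a/4 (nearest-neighbor
    bisectors along the body diagonals)."""
    x, y, z = np.abs(np.asarray(p, float))
    return bool(max(x, y, z) <= a / 2 and x + y + z <= 3 * a / 4)
```

The faceted outer terminal boundary of the BCC mesoatom closely follows this cell, while the enclosed IMDS remains nearly spherical, which is why the connecting trajectories must curve.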
The complex Frank-Kasper sphere phases require multiple, distinct mesoatom units that exhibit different symmetries according to the symmetry-inequivalent Wyckoff positions [16]. Of these, the A15 phase (shown in Fig. 11A-B, extracted from chain trajectories for \(\epsilon=1\)) is arguably the simplest example, with only two inequivalent mesoatomic units. It also plays a particularly interesting role, as the cell boundaries resemble the Weaire-Phelan foam [47], which has a smaller area per volume than the BCC lattice, leading to windows of stability despite its relatively complex structure [48; 49; 50; 31]. It consists of two mesoatoms, labeled by coordination: the Z12, centered at the 2a Wyckoff sites, and the Z14, centered at the 6c Wyckoff sites of the \(Pm\bar{3}n\) space group. These coordination numbers also describe the number of faces in the terminal polyhedra that form the boundaries of the cells housing each of the mesoatoms. As shown in Fig. 11B, the Z12 cells have dodecahedral outer terminal boundaries, consisting of identical pentagonal facets, whereas the Z14 cells have boundaries consisting of a pair of hexagonal facets and twelve pentagonal facets. It is generally observed for Frank-Kasper phases of soft matter assemblies that the shapes of the mesoatomic units can be quite distinct. For A15, Z12 cells enclose nearly-spherical IMDSs, while the Z14 cells enclose fairly oblate, quasi-ellipsoidal IMDS shapes, visibly "squashed" along the stacking directions of neighboring Z12 cells [51; 30; 46]. This difference in symmetry is reflected in the underlying packing in the core, where the Z12 is generically observed to maintain a point-like inner terminal boundary, while the discoidal Z14 domain exhibits a disk-like terminal surface that spreads along the wider dimension of the domain.
Much like the quasi-lamellar packing seen in the double gyroid, a bundle of nearly-parallel trajectories emerges from the Z14's terminal disk, resulting in similar quasi-lamellar packing along the stacking direction, with radial packing along the edges. This difference in packing motifs within the same structure leads to asymmetry in the shape of the outer terminal boundary, namely the polygonal facets that separate neighboring cells. In Fig. 11C we compare the detailed shapes of outer terminal cells to two other models of boundaries between quasi-spherical domains: medial and Voronoi cells [30]. In particular, we directly compare the shapes of the distinct polygonal faces (all of which belong to the Z14 cell) for two values of conformational asymmetry, \(\epsilon=1\) and \(3\). The hexagonal faces (here referred to as Z14-Z14(6), shown in green) separate neighboring Z14s along the stacking direction; a collection of four pentagonal faces (Z14-Z12(5), purple) separate Z14 and Z12 cells; the remaining eight pentagonal faces (Z14-Z14(5), orange) separate Z14 cells off of the stacking direction. Due to the slight curvature of the terminal faces, for a given face we determine an approximate tangent plane (by averaging over all of the normal vectors of that face) and then project the face onto that tangent plane; we then fit the average shape of the face to a polygonal boundary. Through this procedure, we find that the SCF-computed terminal cell, while resembling the Voronoi cell, differs in size and shape, with smaller Z14-Z14(6) and Z14-Z12(5) faces and larger Z14-Z14(5) faces. Compared to the Voronoi cell, the terminal cell is closer in shape to the outer medial surface generated by the IMDS, which is consistent with the notion that the medial map is a close approximation of the association map, as it minimizes the cost of chain stretching [30].
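The face-flattening step described above (averaging per-face normals to define a tangent plane, then projecting the face onto it) can be sketched as follows. This is an illustrative reimplementation under the assumption that each face is supplied as sampled 3D points with per-point normals; it omits the subsequent fit of the projected shape to a polygonal boundary, and the names are ours, not the study's pipeline.

```python
import numpy as np

def project_face(points, normals):
    """Project the 3D points of one terminal-cell face onto its
    approximate tangent plane, defined by the averaged per-point
    normals. Returns 2D in-plane coordinates about the face centroid."""
    pts = np.asarray(points, float)
    n = np.asarray(normals, float).mean(axis=0)
    n = n / np.linalg.norm(n)                   # averaged unit normal
    d = pts - pts.mean(axis=0)                  # center on the centroid
    d = d - np.outer(d @ n, n)                  # remove out-of-plane part
    # build an orthonormal in-plane basis (u, v)
    u = np.cross(n, np.array([0.0, 0.0, 1.0]))
    if np.linalg.norm(u) < 1e-8:                # normal along z: pick x
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return np.column_stack([d @ u, d @ v])
```

Comparing the projected outlines of corresponding faces is then a purely 2D problem, which is how the Z14-Z14(6), Z14-Z12(5), and Z14-Z14(5) facets of the terminal, medial, and Voronoi cells can be overlaid as in Fig. 11C.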
Interestingly, as the matrix phase is made stiffer (\(\epsilon=3\) shown), the SCF-computed terminal cell approaches the shape and size of the outer medial surface even more closely. This suggests that the prerogatives of the stiffer matrix block restructure the chain packing in a way that brings it somewhat closer to "medial packing," an effect which has been predicted for DG networks [28; 29]. The variations in outer terminal boundary shape with changing conformational asymmetry \(\epsilon\) accompany changes in the geometry of the inner terminal boundary. As shown in Fig. 12A, the discoidal inner terminal boundary of the Z14 exhibits a significant anisotropy in its shape, which is narrow (surface-like) along the local stacking direction (here, the \(z\)-axis) and spread out along the transverse directions (here, the \(x\)- and \(y\)-axes). As \(\epsilon\) increases, the spread of points along the disk narrows, resulting in a decrease in \(\sqrt{\langle\Delta x^{2}\rangle}\) (equivalent to \(\sqrt{\langle\Delta y^{2}\rangle}\), which is not shown), while the disk thickens, indicated by an increase in \(\sqrt{\langle\Delta z^{2}\rangle}\). This marks a trend wherein the terminal disk becomes more isotropic and point-like, approaching the dimensions of the Z12 inner terminal boundary, shown in Fig. 12B, which has a radius of gyration that is \(\sim 5-7\%\) of the effective radius of the IMDS. By comparison, the inner medial surfaces for the Z14 and Z12 cells, while similar in size and showing similar trends near \(\epsilon\simeq 1\), attain significantly different dimensions for larger values of \(\epsilon\). Notably, this transition from quasi-discoidal to radial chain packing from low- to high-\(\epsilon\) in the squashed FK domains was previously suggested based on observations of IMDS shape variation with \(\epsilon\) [16; 46], but here we resolve it directly from the changes in trajectories encoded in the statistical descriptions of BCP conformations available in SCF.
As the matrix block becomes stiffer and the morphology is increasingly dominated by the need to minimize the stretching cost of the matrix, the stretching cost of chains in the spherical core has a smaller relative impact on the total free energy. Thus, for large enough \(\epsilon\), the cost of maintaining an anisotropic cell outweighs the benefit of forming quasi-lamellar regions and a discoidal inner terminal boundary of the Z14. We anticipate that similar transitions in sub-domain chain packing underlie the structure of a broader family of Frank-Kasper phases, as well as their related dodecagonal quasicrystalline cousins. We expect, further, that differences between medial and terminal cell shapes are likely to be even more pronounced in phases like C14 and C15, where the disparity in volume between distinct mesoatoms is much greater than for A15, here found to exhibit \(\lesssim 20\%\) difference between Z14 and Z12.

Figure 11: (A) Plot of A-block density field \(\phi_{\rm A}(\mathbf{r})\) in a unit cell of the A15 sphere phase with Z12 and Z14 mesoatomic cells bounded by outer terminal boundaries (blue), with \(f=0.29\) and \(\chi N=50\). (B) Individual Z12 cells (top) and Z14 cells (bottom) with cutaways showing the IMDSs (gray) and inner terminal boundaries (red), along with a selection of trajectories. (C) Planar projections of polygonal facets from different models of outer terminal boundaries, with Voronoi cell polygons shown in gray and medial boundary polygons shown as blue, dashed lines. Faces correspond to those labeled on the stereographic projection of the Z14 cell, with the hexagonal Z14-Z14(6) boundary in green, the pentagonal Z14-Z12(5) boundary shown in purple, and the pentagonal Z14-Z14(5) boundary shown in orange. Comparisons between \(\epsilon=1\) and \(3\) are shown.
Figure 12: Comparisons of A15 inner terminal boundary geometry computed with SCF (solid lines and symbols) against medial set approximations (dashed lines, open symbols) for (A) Z14 cells and (B) Z12 cells as a function of \(\epsilon\). Variations in the distribution of points about the nodal center, \(\langle\Delta r_{i}^{2}\rangle\), are plotted relative to the effective IMDS radius \(R_{0}\), with the \(x\)-component of the variation, \(\sqrt{\langle\Delta x^{2}\rangle}/R_{0}\), shown in purple and the \(z\)-component of the variation, \(\sqrt{\langle\Delta z^{2}\rangle}/R_{0}\), shown in orange. Here, \(R_{0}\) is the radius of a sphere with equal volume to the (A-block) domain core.

## V Discussion and concluding remarks

In this study, we employed the statistical description of fluctuations of BCP chains via SCF to extract average chain trajectories and analyze detailed features of the subdomain packing, features which are often considered to be "invisible" in this formalism. From analysis of the chain trajectories, we found that chain tilt and kinking are generic features of packing within curved morphologies. These behaviors bear some characteristics of previously-untested ansatzes and also show some surprising new trends. We additionally extracted terminal boundaries from these SCF-computed chain trajectories, features of block copolymer domains that have been recognized as playing a key role in the formation of complex and frustrated morphologies, but, again, have been otherwise "invisible" to direct study. Although such features are typically modeled using oversimplified proxies for domain boundaries (e.g. Voronoi tessellations, skeletal graphs, and more recently, medial surfaces), here we demonstrate how they can be rigorously extracted directly from the statistical description of chain degrees of freedom in SCF.
Finally, for two examples of 3D morphologies, we tested and broadly confirmed some recent conjectures about the connections between chain packing and the medial geometry of domains [28; 29; 30]. We note that this analysis of packing features at finite \(\chi N\) raises a number of open questions, even for the restricted case of AB diblock BCP melts, for example, whether terminal boundaries of other complex morphologies, like the double-diamond and double-primitive networks, are indeed composed of multiple "leaves" joining at finite angles, as suggested by the corresponding medial surfaces [29]. Moreover, a more mechanistic understanding of the apparently complex interplay between distinct, multiple "modes" of responding to packing frustration (e.g., chain tilt, kinking, and the combined shapes of the IMDS _and_ terminal boundaries) and how these vary with BCP parameters remains to be explored. Access to average chain trajectories within SCF gives a direct microscopic view of chain packing that has previously been accessible only to molecular simulation methods, such as molecular dynamics and Monte Carlo simulations [52; 53; 54; 22], while ensuring that equilibrium conditions are maintained. While such simulation techniques can reveal rich subdomain structures [55], general and robust approaches to quantify spatially resolved mean trajectories and terminal boundaries from ensembles of fluctuating chain configurations remain an open challenge. Related to this, the inclusion of homopolymer has important consequences for the thermodynamics of a BCP phase, and the distribution of homopolymer has been widely studied, in particular for its purported effect of relieving the costs of packing frustration [53; 56; 57; 58; 11]. Notably, the packing of homopolymer chains amongst BCPs can be addressed via suitable extension of the approach introduced above for diblock melts. 
Given the resolution limits of microscopy, resolving information about chain trajectories and terminal boundaries remains an outstanding experimental problem. However, advancements in sophisticated 3D microscopy (namely "Slice-and-View" SEM) and reconstruction techniques have already shed light on subdomain geometry [59; 30], and refinements of such methods may make this feasible. Beyond linear diblocks, the approach presented in this manuscript may be readily applied to more complex block copolymer architectures, such as branched polymers and even bottlebrush polymers and dendrimeric copolymers [60; 61]. These complex architectures involve block-specific orientational order parameter fields, leading to rich multiplexed trajectory information, which may give rise to exotic packing structures, such as nested terminal boundaries. Finally, trajectory information obtained from SCF calculations of polymer brushes can help address outstanding questions about brush organization and intrabrush segregation [62] and even interpenetration between brush-coated nanoparticles [63]. ## Acknowledgements The authors gratefully acknowledge valuable discussions with A. Reddy and E. Thomas. This research was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under award DE-SC0022229. SCF computations and chain trajectory analysis were carried out on the Unity Cluster at the Massachusetts Green High Performance Computing Center. ## Conflicts of interest The authors declare no conflicts of interest. ## Data availability The analysis code that was used for this study is openly available in UMass Amherst ScholarWorks at [https://doi.org/10.7275/1b2p-q547](https://doi.org/10.7275/1b2p-q547). 
## Appendix A Order parameter calculations Calculation of the polar order parameter field \(\mathbf{p}_{\alpha}(\mathbf{r})\) requires prior calculation of the chain end distribution functions \(q_{\pm}(\mathbf{r},n)\), which can be exported from the open-source PSCF software used for our SCF calculations [21] (available at [https://pscf.cems.umn.edu/](https://pscf.cems.umn.edu/)). The export feature writes a pair of files (one for \(q_{-}\) and the other for \(q_{+}\)) for each of the two blocks considered here; in general, for multi-block architectures, a separate pair of files is exported for each block. We have written a Python script (available at [https://doi.org/10.7275/1b2p-q547](https://doi.org/10.7275/1b2p-q547)) that performs the finite difference and numerical integration necessary for the calculation of \(\mathbf{p}_{\alpha}(\mathbf{r})\). To perform the calculation for a given block \(\alpha\), the user supplies the pair of chain end distribution files, along with simulation parameters such as unit cell dimensions and statistical segment lengths. Finally, the terminal end distribution function, either \(q_{+}(\mathbf{r},N)\) or \(q_{-}(\mathbf{r},0)\), is needed to calculate the chain conformation partition function, which appears as a normalization in the order parameter calculations. Note that the modular construction of this procedure for calculating the polar order parameter field means that it can be used for arbitrary block compositions and branched architectures, outside the scope of this paper. ## Appendix B Distal bending versus interpenetration Here, we briefly comment on the characteristic deflection of chain trajectories as their distal ends approach the outer terminal boundary (see for example Fig. 2B). We note that the magnitude of \(\mathbf{P}\) tends to zero in this region [22], as chain trajectories become "disorientated" in the contact region between brushes from opposing domains. 
Indeed, we observe that the relative size of this "deflected zone" compared to the domain size decreases with segregation strength, consistent with the size of the interpenetration zone in BCP domains in the \(\chi N\rightarrow\infty\) limit, predicted to decrease as \((\chi N)^{-2/9}\)[64]. We characterize the edge of this deflected zone by points of maximal curvature near the distal ends of each trajectory, as highlighted in Fig. 13A for the \(p4mm\) (square) columnar phase. The decreasing size of this high-bending, distal zone is evident from the sequence of moderate \((\chi N=25)\) to strong (\(\chi N=500\)) segregation SCF trajectories. Fig. 13B plots the thickness of this distal zone \(\Delta\) relative to the domain size as a function of segregation strength, showing that it vanishes in proportion to the degree of interpenetration. Hence, we expect that in the \(\chi N\rightarrow\infty\) limit trajectories tend to a well-defined, limiting configuration that abruptly meets the terminal boundary at a finite angle of incidence. ## Appendix C Measures of kink at the IMDS In the main text, the kink angle \(\beta\) is defined through Eq. (10) as the angle between the A- and B-block polar order parameter fields, respectively \(\mathbf{p}_{\text{A}}\) and \(\mathbf{p}_{\text{B}}\), evaluated at the IMDS. This definition of kink angle is perhaps the simplest and most easily generalizable; it is a _strictly local_ measure defined at the IMDS. Alternatively, we can consider a broader estimation of the turning angle of the trajectories as they pass through the IMDS. To do this, we calculate the orientation of each trajectory \(\mathbf{R}_{\mathbf{r}_{0}}(t)\) at two endpoints \(t_{-}\) and \(t_{+}\) on either side of the IMDS. Call these two orientations \(\hat{\mathbf{P}}_{-}\equiv\hat{\mathbf{P}}(\mathbf{R}_{\mathbf{r}_{0}}(t_{-}))\) and \(\hat{\mathbf{P}}_{+}\equiv\hat{\mathbf{P}}(\mathbf{R}_{\mathbf{r}_{0}}(t_{+}))\). 
This alternative kink angle \(\beta\) is then taken to be the angle between \(\hat{\mathbf{P}}_{-}\) and \(\hat{\mathbf{P}}_{+}\); as in the main text, averages are then taken over the collection of trajectories within a given fundamental domain. The two endpoints \(t_{\pm}\) are selected via choice of level sets of the A-block density field \(\phi_{\text{A}}\). Since \(\phi_{\text{A}}=1/2\) corresponds to the IMDS, we choose \(t_{\pm}\) such that \[\phi_{\text{A}}(\mathbf{R}_{\mathbf{r}_{0}}(t_{\pm}))=\frac{1}{2}\pm\delta\phi \tag{12}\] for fixed values of \(\delta\phi\). The resulting kink angle statistics are shown in Fig. 14 for \(p4mm\). As shown in Fig. 14A, this window-thresholded mean kink angle exhibits similar trends as the at-IMDS kink angle from the main text as a function of \(\epsilon\) as well as \(\chi N\). Moreover, as shown in Fig. 14B, these two definitions generally approach each other as \(\delta\phi\) is decreased (i.e. the window about the IMDS is taken to be narrower). In other words, as the IMDS window is increased (with the exception of the largest window), the magnitude of kinking generally increases as more of the curved trajectory is taken into account. Figure 13: (A) Demarcation of the distal bending region, where interactions between chains from different domains result in pronounced bending, using the maximum curvature of each trajectory. \(\Delta\) quantifies the maximum distance of this envelope from the outer terminal boundary. Examples show the distal bending region for \(p4mm\) at (i) \(\chi N=25\), (ii) 100, (iii) 300, and (iv) 500. (B) Log-log plot of the distal bending region width \(\Delta\) normalized by unit cell parameter \(d\) as a function of \(\chi N\), with a \(-2/9\) slope indicating the predicted SST scaling of the size of the interpenetration zones between contacting brushes [64]. 
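Both kink-angle definitions reduce to computing the angle between a pair of orientation vectors. Below is a minimal numerical sketch of that step (not the authors' analysis code, which averages over full trajectory ensembles within each fundamental domain):

```python
import numpy as np

# Kink angle between two orientation vectors: used for the at-IMDS
# definition (between p_hat_A and p_hat_B, Eq. (10)) and for the
# window-thresholded variant (between P_hat_minus and P_hat_plus).

def kink_angle(p1, p2):
    """Angle (radians) between orientation vectors p1 and p2."""
    p1 = np.asarray(p1, float) / np.linalg.norm(p1)
    p2 = np.asarray(p2, float) / np.linalg.norm(p2)
    # clip guards against round-off pushing the dot product past +/-1
    return np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0))

# example: a trajectory turning by 30 degrees across the IMDS
beta = kink_angle([1.0, 0.0], [np.cos(np.pi / 6), np.sin(np.pi / 6)])
print(np.degrees(beta))  # approximately 30.0
```

The mean kink angle \(\langle\beta\rangle\) reported in Fig. 14 would then be the average of such angles over all trajectories in the fundamental domain.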
Note that in taking \(\delta\phi\to 0\), the window-thresholded kink angle will additionally tend to 0, since \(\hat{\mathbf{P}}\) is continuous through the IMDS. However, for non-zero \(\delta\phi\), the value of \(\hat{\mathbf{P}}_{\pm}\) will be dominated by the orientation of the majority block within a given domain, meaning that to a good approximation, \(\hat{\mathbf{P}}_{+}\simeq\hat{\mathbf{p}}_{\mathrm{A}}\) and \(\hat{\mathbf{P}}_{-}\simeq\hat{\mathbf{p}}_{\mathrm{B}}\); this approximation is expected to improve as \(\chi N\rightarrow\infty\), since the width of the IMDS decreases as \((\chi N)^{-1/2}\)[6]. As \(\delta\phi\) is increased maximally (\(\delta\phi=0.49\)), the width of the IMDS window is extended such that the \(\hat{\mathbf{P}}\) field becomes increasingly dominated by single-block contributions; the agreement with the at-IMDS definition then significantly decreases, which we attribute to "far-field" effects of the chain orientation. ## Appendix D Process for locating terminal boundaries To calculate the Jacobian of the association map, \(\Lambda_{ij}(\mathbf{r})=\frac{\partial\alpha_{i}}{\partial r_{j}}\), we generate a triangular (tetrahedral in 3D) mesh that is refined in the vicinity of the terminal boundaries. Each mesh facet consists of \(d+1\) vertices, where \(d\) is the spatial dimension, which we shall label \(\mathbf{r}^{(0)}\), \(\mathbf{r}^{(1)}\), \(\dots\), \(\mathbf{r}^{(d)}\). Using these vertices, we define \(d\) edge vectors \(\delta\mathbf{r}^{(1)}\equiv\mathbf{r}^{(1)}-\mathbf{r}^{(0)}\), \(\dots\), \(\delta\mathbf{r}^{(d)}\equiv\mathbf{r}^{(d)}-\mathbf{r}^{(0)}\). The association map \(\mathbf{\alpha}(\mathbf{r})\) maps these vertices to \(d+1\) image points \(\mathbf{\alpha}^{(n)}\equiv\mathbf{\alpha}(\mathbf{r}^{(n)})\); similarly, we can define \(d\) image edge vectors \(\delta\mathbf{\alpha}^{(n)}\equiv\mathbf{\alpha}^{(n)}-\mathbf{\alpha}^{(0)}\). 
The facet undergoes an affine transformation under the association map, with edge vectors related by the affine matrix \(\Lambda\) via \[\delta\mathbf{\alpha}^{(n)}=\mathbf{\Lambda}\,\delta\mathbf{r}^{(n)}\,, \tag{14}\] where \(\mathbf{\Lambda}\) takes on a different value for each facet. Expressing the collection of edge vectors as a matrix of columns \(\left[\delta\mathbf{r}^{(1)}\dots\,\delta\mathbf{r}^{(d)}\right]\), Eq. (14) can be written as \(\left[\delta\mathbf{\alpha}^{(1)}\,\dots\,\delta\mathbf{\alpha}^{(d)}\right]=\mathbf{ \Lambda}\,\left[\delta\mathbf{r}^{(1)}\,\dots\,\delta\mathbf{r}^{(d)}\right]\). This form can then be inverted to solve for \(\mathbf{\Lambda}\), \[\mathbf{\Lambda}=\left[\delta\mathbf{\alpha}^{(1)}\,\dots\,\delta\mathbf{\alpha}^{(d)} \right]\,\left[\delta\mathbf{r}^{(1)}\,\dots\,\delta\mathbf{r}^{(d)}\right]^{ -1}\,, \tag{15}\] which approximates the Jacobian, becoming exact in the limit \(\delta\mathbf{r}^{(n)}\to 0\). Figure 14: Mean kink angle \(\langle\beta\rangle\) calculated from the total polar order parameter \(\mathbf{P}\) evaluated on a thresholding window about the IMDS for \(p4mm\) for \(f=0.3\). (A) Variation in \(\langle\beta\rangle\) calculated at level sets \(\phi=0.1\) and \(\phi=0.9\) (\(\delta\phi=0.4\)) with conformational asymmetry \(\epsilon\) for \(\chi N=60\) - \(150\). Inset shows variation in kink angle, with shading representing a single standard deviation about the mean. (B) Variation in \(\langle\beta\rangle\) with respect to variation in thresholding window \(\delta\phi\) at \(\chi N=150\). For comparison, the at-IMDS kink angle (labeled \(\hat{\mathbf{p}}_{\mathrm{A}}\cdot\hat{\mathbf{p}}_{\mathrm{B}}\)) employed in the main text is additionally shown. The principal eigenvalue of the Jacobian matrix, \(\Lambda\), is used for determining the locations of the terminal boundaries. 
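Per facet, Eq. (15) and the subsequent principal-eigenvalue evaluation amount to a small linear-algebra computation. A sketch, assuming the facet vertices and their images under the association map are already known (the mesh generation and refinement of the actual pipeline are omitted):

```python
import numpy as np

# Per-facet approximation of the Jacobian of the association map, Eq. (15):
# Lambda = [d_alpha^(1) ... d_alpha^(d)] [d_r^(1) ... d_r^(d)]^(-1),
# followed by its principal eigenvalue, used to flag terminal boundaries.

def facet_jacobian(r_verts, a_verts):
    """r_verts, a_verts: (d+1, d) arrays of facet vertices and their images."""
    dr = (r_verts[1:] - r_verts[0]).T  # columns are edge vectors dr^(n)
    da = (a_verts[1:] - a_verts[0]).T  # columns are image edges d_alpha^(n)
    return da @ np.linalg.inv(dr)

def principal_stretch(J):
    """Largest-magnitude eigenvalue of the facet Jacobian."""
    return np.max(np.abs(np.linalg.eigvals(J)))

# 2D triangle mapped by a pure stretch diag(1, 5): principal eigenvalue 5
r = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
alpha = r * np.array([1.0, 5.0])
J = facet_jacobian(r, alpha)
print(principal_stretch(J))  # 5.0
```

Facets whose principal stretch exceeds \(\Lambda_{\rm thresh}\) would then be flagged as lying on a terminal boundary, as described next.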
We find that the distribution of \(\Lambda\) over mesh elements is region-dependent, generally sharply peaked around \(\Lambda\simeq 1\), but acquiring a second local maximum near the outer terminal boundary, which we attribute to the outer terminal boundary's role in partitioning space into disconnected domains. To determine the location of the outer terminal boundary, we choose a threshold value \(\Lambda_{\rm thresh}\) to include this second local maximum in the \(\Lambda\) distribution; for most computations analyzed here, \(\Lambda_{\rm thresh}\simeq 5\) seems to be a reasonable value for demarcation of the outer terminal boundary. Since the inner terminal boundaries do not partition space into disconnected domains, the distribution of stretching values in the vicinity of these boundaries remains unimodal, requiring a separate heuristic for determining \(\Lambda_{\rm thresh}\). As a first approximation, we calculate the stretching factor that is required to re-scale the SCF grid spacing, \((\delta x,\delta y,\delta z)\), to a characteristic length scale of the IMDS, namely the radius of a sphere with volume equal to the volume \(V_{\rm A}\) of the A-block domain (or area \(A_{\rm A}\) in 2D), \(R_{0}=\left(3V_{\rm A}/(4\pi)\right)^{1/3}\) (or \(R_{0}=\left(A_{\rm A}/\pi\right)^{1/2}\)), so \(\Lambda_{\rm thresh}\simeq R_{0}/{\rm Max}(\delta x,\delta y,\delta z)\). Using this heuristic, we find \(\Lambda_{\rm thresh}\sim\mathcal{O}(10)\), with typical values of 10 - 30 for the inner terminal boundary, depending on the geometry of the IMDS and the spatial grid resolution of the SCF calculation. For highly non-spherical IMDSs, we find that smaller values of \(\Lambda_{\rm thresh}\) are required to resolve finer features of the inner termini. With regard to our heuristic, this is because the IMDS develops finer-scale features, such as variable radii of curvature that are \(\lesssim R_{0}\), leading to a depressed \(\Lambda_{\rm thresh}\). 
Moreover, since the heuristic for \(\Lambda_{\rm thresh}\) estimates the limit based on numerical resolution, the resulting terminal boundaries are sensitive to discretization error. As a result, for the \(c2mm\) calculations, we use a lower threshold, choosing \(\Lambda_{\rm thresh}\simeq 10\) rather than \(\simeq 30\) based on the heuristic. As shown in Fig. 15, the shape and size of the \(c2mm\) inner terminal boundary depends on the choice of \(\Lambda_{\rm thresh}\), but importantly the qualitative features are retained even at the limits of numerical resolution.
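The \(\Lambda_{\rm thresh}\) heuristic for the inner terminal boundary is a one-line estimate; in the sketch below, the core volume and grid spacings are hypothetical values chosen only to land in the \(\mathcal{O}(10)\) range quoted above:

```python
import math

# Heuristic threshold for the inner terminal boundary (Appendix D):
# Lambda_thresh ~ R0 / max(dx, dy, dz), where R0 is the radius of a
# sphere with the same volume as the A-block core.

def lambda_thresh(V_A, dx, dy, dz):
    R0 = (3.0 * V_A / (4.0 * math.pi)) ** (1.0 / 3.0)
    return R0 / max(dx, dy, dz)

# hypothetical core volume and grid spacing (in the same reduced units)
val = lambda_thresh(V_A=1.0, dx=0.02, dy=0.02, dz=0.02)
print(round(val, 1))  # 31.0, i.e. in the O(10) range
```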
2306.01857
Knowledge of cultural moral norms in large language models
Moral norms vary across cultures. A recent line of work suggests that English large language models contain human-like moral biases, but these studies typically do not examine moral variation in a diverse cultural setting. We investigate the extent to which monolingual English language models contain knowledge about moral norms in different countries. We consider two levels of analysis: 1) whether language models capture fine-grained moral variation across countries over a variety of topics such as ``homosexuality'' and ``divorce''; 2) whether language models capture cultural diversity and shared tendencies in which topics people around the globe tend to diverge or agree on in their moral judgment. We perform our analyses with two public datasets from the World Values Survey (across 55 countries) and PEW global surveys (across 40 countries) on morality. We find that pre-trained English language models predict empirical moral norms across countries worse than the English moral norms reported previously. However, fine-tuning language models on the survey data improves inference across countries at the expense of a less accurate estimate of the English moral norms. We discuss the relevance and challenges of incorporating cultural knowledge into the automated inference of moral norms.
Aida Ramezani, Yang Xu
2023-06-02T18:23:35Z
http://arxiv.org/abs/2306.01857v1
# Knowledge of cultural moral norms in large language models ###### Abstract Moral norms vary across cultures. A recent line of work suggests that English large language models contain human-like moral biases, but these studies typically do not examine moral variation in a diverse cultural setting. We investigate the extent to which monolingual English language models contain knowledge about moral norms in different countries. We consider two levels of analysis: 1) whether language models capture fine-grained moral variation across countries over a variety of topics such as "homosexuality" and "divorce"; 2) whether language models capture cultural diversity and shared tendencies in which topics people around the globe tend to diverge or agree on in their moral judgment. We perform our analyses with two public datasets from the World Values Survey (across 55 countries) and PEW global surveys (across 40 countries) on morality. We find that pre-trained English language models predict empirical moral norms across countries worse than the English moral norms reported previously. However, fine-tuning language models on the survey data improves inference across countries at the expense of a less accurate estimate of the English moral norms. We discuss the relevance and challenges of incorporating cultural knowledge into the automated inference of moral norms. ## 1 Introduction Moral norms vary from culture to culture (Haidt et al., 1993; Bicchieri, 2005; Atari et al., 2022; Iurino and Saucier, 2020). Understanding the cultural variation in moral norms has become critically relevant to the development of machine intelligence. For instance, recent work has shown that cultures vary substantially in their judgment toward moral dilemmas regarding autonomous driving (Awad et al., 2018, 2020). Work in Natural Language Processing (NLP) also shows that language models capture some knowledge of social or moral norms and values. 
For example, with no supervision, English pre-trained language models (EPLMs) have been shown to capture people's moral biases and distinguish between morally right and wrong actions (Schramowski et al., 2022). Here we investigate whether EPLMs encode knowledge about moral norms across cultures, an open issue that has not been examined comprehensively. Multilingual pre-trained language models (mPLMs) have been probed for their ability to identify cultural norms and biases in a restricted setting (Yin et al., 2022; Arora et al., 2022; Hammerl et al., 2022; Touileb et al., 2022). For instance, Hammerl et al. (2022) show that mPLMs capture moral norms in a handful of cultures that speak different languages. However, it remains unclear whether monolingual EPLMs encode cultural knowledge about moral norms. Prior studies have only used EPLMs to assess how they encode undesirable biases toward different communities (Ousidhoum et al., 2021; Abid et al., 2021; Sap et al., 2020; Nozza et al., 2021, 2022). For instance, Abid et al. (2021) show that GPT3 can generate toxic comments against Muslims, and Nozza et al. (2022) explore harmful text generation toward LGBTQIA+ groups in BERT models (Devlin et al., 2018; Liu et al., 2019). Extending these lines of work, we assess whether monolingual EPLMs can accurately infer moral norms across many cultures. Our focus on EPLMs is due partly to the fact that English as a lingua franca has widespread uses for communication in-person and through online media. Given that EPLMs may be applied to multicultural settings, it is important to understand whether these models encode basic knowledge about cultural diversity. Such knowledge has both relevance and applications for NLP such as automated toxicity reduction and content moderation (Schramowski et al., 2022). 
Another motivation for our focus is that while it is expected that EPLMs should encode western and English-based moral knowledge, such knowledge might entail potential (implicit) biases toward non-English speaking cultures. For example, an EPLM might infer a situation to be morally justifiable (e.g., "political violence") in a non-English speaking culture (because these events tend to associate with non-English speaking cultures in corpora) and thus generate misleading representations of that community. Here we probe state-of-the-art EPLMs trained on large English-based datasets. Using EPLMs also supports a scalable analysis of 55 countries, which goes beyond existing work focusing on a small set of high-resource languages from mPLMs and monolingual PLMs. We take the moral norms reported in different countries to be a proxy of cultural moral norms and consider two main levels of analysis to address the following questions: * Level 1: Do EPLMs encode moral knowledge that mirrors the moral norms in different countries? For example, "getting a divorce" can be a morally frowned-upon topic in country \(i\), but morally acceptable in country \(j\). * Level 2: Can EPLMs infer the cultural diversity and shared tendencies in moral judgment of different topics? For example, people across nations might agree that doing \(X\) is morally wrong while disagreeing in their moral judgment toward \(Y\). We probe EPLMs using two publicly available global surveys of morality, the World Values Survey wave 7 Haerpfer et al. (2021)1 (WVS) and the PEW Global Attitudes survey (PEW; Pew Research Center, 2014)2. For example, according to the WVS survey (illustrated in Figure 1), people in different cultures hold disparate views on whether "having casual sex" is morally acceptable. In contrast, they tend to agree more about the immorality of "violence against other people". 
Our level 1 analysis allows us to probe the fine-grained cultural moral knowledge in EPLMs, and our level 2 analysis investigates the EPLMs' knowledge about shared "universals" and variability across cultures in moral judgment. Following previous work Arora et al. (2022) and considering the current scale of global moral surveys, we use country as a proxy for culture, although this approach is not fully representative of all the different cultures within a country. Footnote 1: [https://www.worldvaluessurvey.org/WVSContents.jsp](https://www.worldvaluessurvey.org/WVSContents.jsp) Footnote 2: [https://www.pewresearch.org/global/interactives/global-morality/](https://www.pewresearch.org/global/interactives/global-morality/) We also explore the utility-bias trade-off in encoding the knowledge of cultural moral norms in EPLMs through a fine-tuning approach. With this approach it may be possible to enhance the moral knowledge of EPLMs in a multicultural setting. We examine how this approach might reduce the ability of EPLMs to infer English-based moral norms and discuss how it might induce cultural biases. Figure 1: Comparison of human-rated and machine-scored moral norms across cultures. Left: Boxplots of human ratings of moral norms across countries in the World Values Survey (WVS) Haerpfer et al. (2021). Each dot represents the empirical average of participants' ratings for a morally relevant topic (e.g., "abortion") within a country. Right: Corresponding moral scores estimated by a language model (Sentence-BERT) Reimers and Gurevych (2019). Each dot represents the moral score obtained by probing the language model in a given country. ## 2 Related work ### Automated moral inference in NLP Large language models have been utilized to make automated moral inference from text. Trager et al. (2022) used an annotated dataset to fine-tune language models to predict the moral foundations Graham et al. (2013) expressed in Reddit comments. 
Many other textual datasets and methods have been proposed for fine-tuning LMs for moral norm generation, reasoning, and adaptation Forbes et al. (2020); Emelin et al. (2021); Hendrycks et al. (2021); Ammanabrolu et al. (2022); Liu et al. (2022); Lourie et al. (2021); Jiang et al. (2021). Schramowski et al. (2022) proposed a method to estimate moral values and found EPLMs to capture human-like moral judgment even without fine-tuning. They identified a MoralDirection using the semantic space of Sentence-BERT Reimers and Gurevych (2019) (SBERT) that corresponds to values of right and wrong. The semantic representations of different actions (e.g., _killing people_) would then be projected in this direction for moral judgment estimation. However, this method assumed a homogeneous set of moral norms, so it did not examine cultural diversity in moral norms. ### Language model probing Probing has been used to study knowledge captured in language models. Petroni et al. (2019) proposed a methodology to explore the factual information that language models store in their weights. Similar probing techniques have been proposed to identify harmful biases captured by PLMs. Ousidhoum et al. (2021) probed PLMs to identify toxic contents that they generate toward people of different communities. Nadeem et al. (2021) took a similar approach and introduced Context Association Tests to measure the stereotypical biases in PLMs, Yin et al. (2022) used probing to evaluate mPLMs on geo-diverse commonsense knowledge, and Touileb et al. (2022) developed probing templates to investigate the occupational gender biases in multilingual and Norwegian language models. Related to our work, Arora et al. (2022) used cross-cultural surveys to generate prompts for evaluating mPLMs in 13 languages. 
For each country and category (e.g., Ethical Values) in the surveys, they take an average of participants' responses to different questions in the category and show that mPLMs do not correlate with the cultural values of the countries speaking these languages. Differing from that study, we assess finer-grained prediction of EPLMs on people's responses to individual survey questions. More recently, Dillion et al. (2023) prompted GPT-3.5 Brown et al. (2020) with human judgments in different moral scenarios and found a striking correlation between the model outputs and the human judgments. Similar to Schramowski et al. (2022), this work also used a homogeneous set of moral ratings which represented English-based and Western cultures. ## 3 Methodology for inferring cultural moral norms We develop a method for fine-grained moral norm inference across cultures. This method allows us to probe EPLMs with topic-country pairs, such as "getting a divorce in [Country]".3 We build this method from the baseline method proposed by Schramowski et al. (2022) for homogeneous moral inference, where we probe an EPLM's moral knowledge about a topic without incorporating the cultural factor (i.e., the country names). Similar to that work, we use SBERT through the bert-large-nli-mean-tokens sentence transformer model and use topic and topic-country pairs as our prompts.4 This model is built on top of the BERT model, which is pre-trained on BooksCorpus Zhu et al. (2015) and Wikipedia. Footnote 3: We replace [Country] with a country's name. ### Autoregressive EPLMs Since the MoralDirection is constructed from the semantic space of the BERT-based EPLMs Schramowski et al. (2022), we develop a novel approach to probe autoregressive state-of-the-art EPLMs, GPT2 Radford et al. (2019) and GPT3 Brown et al. (2020). For each topic or topic-country pair, we construct the input \(s\) as "In [Country] [Topic]". 
We then append a pair of opposing moral judgments to \(s\) and represent them formally as \((s^{+},s^{-})\). For example, for \(s=\)"In [Country] getting a divorce", and (_always justifiable_, _never justifiable_) as the moral judgment pair, \(s^{+}\) and \(s^{-}\) would be "In [Country] getting a divorce is always justifiable" and "In [Country] getting a divorce is never justifiable" respectively.5 To make our probing robust to the choice of moral judgments, we use a set of \(K=5\) prompt pairs (i.e., {_(always justifiable, never justifiable), (morally good, morally bad), (right, wrong), (ethically right, ethically wrong), (ethical, unethical)_}), and refer to appended input pairs as \((s_{i}^{+},s_{i}^{-})\) where \(i\in[K]\). Since GPT2 and GPT3 are composed of decoder blocks in the transformer architecture Vaswani et al. (2017), we use the probabilities of the last token in \(s_{i}^{+}\) and \(s_{i}^{-}\) as a moral score for each. The moral score of the pair \((s_{i}^{+},s_{i}^{-})\) is the difference between the log probabilities of its positive and negative statements. Footnote 5: We also try probing with the template \(s=\) “People in [Country] believe [Topic]”, but the results do not improve, so we report the best-performing prompts in the main text, and the rest are shown in Appendix C. \[MS(s_{i}^{+},s_{i}^{-})=\log\frac{P(s_{iT}^{+}|s_{i<T}^{+})}{P(s_{iT}^{-}|s_{i< T}^{-})} \tag{1}\] Here \(s_{iT}^{+}\) and \(s_{iT}^{-}\) are the last tokens in \(s_{i}^{+}\) and \(s_{i}^{-}\) respectively, and their probabilities can be estimated by the softmax layer in autoregressive EPLMs. We take an average of the estimated moral scores for all \(K\) pair statements to compute the moral score of the input. \[MS(s)=\frac{1}{K}\sum_{i=1}^{K}MS(s_{i}^{+},s_{i}^{-}) \tag{2}\] To construct the baseline, we compute the homogeneous moral score of a topic without specifying the country in the prompts. 
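Given last-token log-probabilities from any autoregressive language model, Eqs. (1) and (2) reduce to an average of log-probability differences over the \(K\) judgment pairs. A minimal sketch, with placeholder log-probabilities standing in for actual GPT2/GPT3 softmax outputs:

```python
# Moral score of Eqs. (1)-(2): for each of the K judgment pairs, take the
# difference of last-token log-probabilities of the positive and negative
# statements, then average over the K pairs.  The values below are
# PLACEHOLDERS for real model outputs.

def moral_score(pair_logprobs):
    """pair_logprobs: list of (logP_pos, logP_neg) for the K prompt pairs."""
    return sum(lp - ln for lp, ln in pair_logprobs) / len(pair_logprobs)

# e.g. "...is always justifiable" vs "...is never justifiable", etc.
pairs = [(-2.1, -3.0), (-1.8, -2.2), (-2.5, -2.4), (-2.0, -2.9), (-1.9, -2.6)]
ms = moral_score(pairs)
print(ms > 0)  # a positive score favors the positive moral judgment
```

In practice, each pair of log-probabilities would come from scoring the final token of \(s_i^+\) and \(s_i^-\) under the model's softmax layer.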
Using prompt pairs allows us to operationalize moral polarity: a positive moral score indicates that on average the EPLM is more likely to generate positive moral judgment for input \(s\), compared to negative moral judgment. We use GPT2 (117M parameters), GPT2-MEDIUM (345M parameters), GPT2-LARGE (774M parameters), and GPT3 (denoted as GPT3-PROBS, 175B parameters)6. GPT2 is trained on WebText, which is a dataset of webpages and contains very few non-English samples. Around \(82\%\) of the pre-training data for GPT3 comes from Common Crawl data and WebText2 Kaplan et al. (2020), an extended version of WebText Radford et al. (2019). Around \(7\%\) of the training corpus of GPT3 is non-English text. Considering such a data shift, from the books and articles used for BERT to webpages of astronomical size for GPT2 and GPT3, it is interesting to observe how cultural moral norms would be captured by EPLMs trained on webpages, which cover a more heterogeneous set of contents and authors. Footnote 6: We access GPT2 through the transformers package provided by Hugging Face. We access GPT3 through the OpenAI API with the text-davinci-002 engine and a temperature of \(0.6\) for text generation. We also design multiple-choice question prompts to leverage the question-answering capabilities of GPT3 (denoted as GPT3-QA). Similar to the wording used in our ground-truth survey datasets, questions are followed by three options each describing a degree of moral acceptability. We repeat this question-answering process \(5\) times for each topic-country pair and take the average of the model responses. Table 2 in the Appendix shows our prompts for all models. ## 4 Datasets We describe two open survey datasets that record moral norms across cultures over a variety of topics. ### World Values Survey The Ethical Values section in World Values Survey Wave 7 (WVS for short) is our primary dataset. This wave covers the span of 2017-2021 and is publicly available Haerpfer et al. (2021). 
In the Ethical Values section, participants from 55 countries were surveyed regarding their opinions on 19 morally-related topics. The questionnaire was translated into the first languages spoken in each country and had multiple options. We normalized the options to range from \(-1\) to \(1\), with \(-1\) representing "never justifiable" and \(1\) "always justifiable". The moral rating of each country on each topic (i.e., topic-country pair) is then the average of the participants' responses.

### PEW 2013 global attitude survey

We use a secondary dataset from the Pew Research Center (Pew Research Center, 2014) based on a public survey in 2013 that studied global moral attitudes in 40 countries toward eight morally-related topics (PEW for short). 100 people from each country participated in the survey. The questions were asked in English and had three options representing "morally acceptable", "not a moral issue", and "morally unacceptable". We normalized these ratings to be in the range of \(-1\) to \(1\) and represented each topic-country pair by the expected value of all the responses.

### Homogeneous moral norms

We also use the data from the global user study in Schramowski et al. (2022), which were collected via Amazon MTurk from English speakers. This dataset contains \(234\) participants' aggregated ratings of moral norms used for identifying the MoralDirection. Around half of the participants are from North America and Europe. We refer to this dataset as "Homogeneous norms" since it does not contain information about moral norms across cultures.

## 5 Evaluation and results

We evaluate EPLMs' moral knowledge with respect to 1) homogeneous moral norms, 2) fine-grained moral norms across cultures, and 3) cultural diversities and shared tendencies in moral judgment of different topics.
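The survey preprocessing described in Section 4 can be sketched as follows; the 10-point option scale is illustrative of a "never justifiable" to "always justifiable" scale, and the function names are ours:

```python
def normalize_rating(rating, lo=1, hi=10):
    """Map a survey option on the scale [lo, hi] linearly onto [-1, 1],
    so lo -> -1 ("never justifiable") and hi -> 1 ("always justifiable")."""
    return 2 * (rating - lo) / (hi - lo) - 1

def topic_country_score(ratings, lo=1, hi=10):
    """Moral rating of one topic-country pair: the average of the
    participants' normalized responses."""
    normalized = [normalize_rating(r, lo, hi) for r in ratings]
    return sum(normalized) / len(normalized)

print(normalize_rating(1))               # → -1.0
print(normalize_rating(10))              # → 1.0
print(round(topic_country_score([1, 10, 10]), 3))  # → 0.333
```

The same mapping handles the three-option PEW scale by setting `lo=1, hi=3`, which sends the middle option to a morally neutral 0.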
### Homogeneous moral norm inference

For homogeneous moral norm inference, we compute the Pearson correlation between 1) the empirical homogeneous moral ratings, obtained by aggregating the human moral ratings toward a topic from all countries, and 2) language-model-inferred moral scores, estimated from our homogeneous probing method (i.e., without specifying a country in the prompts). Figure 2 shows the results on the World Values Survey (\(n=1,028\)), PEW survey (\(n=312\)), and Homogeneous norms (\(n=100\)) datasets. The high correlation of GPT2 and GPT3 moral scores with the Homogeneous norms dataset indicates that our methodology does indeed capture the embedded moral biases in these models, with performance similar to the method proposed by Schramowski et al. (2022) for SBERT (\(r=0.79\)), and higher for GPT3-PROBS (\(r=0.85\)). The moral norms in this dataset are typically more globally agreeable (e.g., _You should not kill people_) than topics in WVS and PEW. As expected, EPLMs are less correlated with WVS and PEW, since their moral biases are derived from pre-training on English and westernized data. Aggregated ratings in WVS and PEW, however, capture a more global view toward moral issues, which are also morally contentious (e.g., "getting a divorce"). Table 3 in the Appendix includes the values for this experiment.

### Fine-grained cultural variation of moral norms toward different topics

Going beyond probing EPLMs for their general knowledge of moral norms, we assess whether they can accurately identify the moral norms of different cultures (level 1 analysis). Using our fine-grained probing approach described in Section 3, we compute the Pearson correlation between EPLMs' moral scores and the fine-grained moral ratings from the ground truth.
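The correlation tests throughout this section reduce to the Pearson correlation over paired lists of model scores and empirical ratings. A minimal self-contained version is shown below (a library routine such as scipy.stats.pearsonr would additionally return the p-value):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between model-inferred moral scores (xs) and
    empirical moral ratings (ys), paired by topic(-country)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related scores give r = 1:
print(round(pearson_r([0.1, 0.2, 0.4], [1.2, 2.2, 4.2]), 6))  # → 1.0
```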
Each sample pair in the correlation test corresponds to 1) the moral norms estimated by EPLMs for a country \(c\) and a topic \(t\), and 2) the empirical average of moral ratings toward topic \(t\) from all the participants in the country \(c\). Figure 3 summarizes the results for SBERT, GPT2-LARGE, and GPT3-PROBS models, and the rest of the models are shown in Figure 7 in the Appendix. To facilitate direct comparison, the estimated moral scores are normalized to a range of \(-1\) to \(1\), where \(-1\), 0, and 1 indicate morally negative, morally neutral, and morally positive norms, respectively. GPT3-QA and GPT3-PROBS both show a relatively high correlation with the cultural variations of moral norms (\(r=0.352\), \(r=0.411\), \(p<0.001\), for both), and GPT2-LARGE achieves a correlation of \(r=0.207\) (\(p<0.001\)) in WVS where \(n=1,028\). The correlations are relatively better for PEW (\(n=312\)) with \(r=0.657\), \(r=0.503\), and \(r=0.468\) for GPT3-QA, GPT3-PROBS and GPT2-LARGE respectively. These results show that EPLMs have captured some knowledge about the moral norms of different cultures, but with much less accuracy (especially for GPT2 and SBERT) compared to their inference of English moral norms shown in the previous analysis.

Figure 2: Performance of EPLMs (without cultural prompts) on inferring 1) English moral norms, and 2) culturally diverse moral norms recorded in World Values Survey and PEW survey data. The asterisks indicate the significance levels (“*”, “**”, “***” for \(p<0.05,0.01,0.001\) respectively).

In addition, we check whether GPT3's high correlation with PEW is because it has seen and memorized the empirical data. Our investigation shows that GPT3 has seen the data during pre-training, as it can generate the sentences used on the survey website. However, the scores suggested by GPT3 text generation and the countries' rankings based on their ratings are different from the ground truth data.
### Culture clustering through fine-grained moral inference

EPLMs' fine-grained knowledge of moral norms, inspected in the previous experiment, might be more accurate for western cultures than for other cultures. We investigate this claim by clustering countries based on 1) their Western-Eastern economic status (i.e., Rich West grouping)7, and 2) their continent (i.e., geographical grouping). We repeat the experiments in the previous section for the different country groups. The results are shown in Figure 4. We also try sampling the same number of countries in each group; the results remain robust and are illustrated in Appendix F.

Footnote 7: [https://worldpopulationreview.com/country-rankings/western-countries](https://worldpopulationreview.com/country-rankings/western-countries)

Our findings indicate that EPLMs contain more knowledge about the moral norms of the Rich West countries as opposed to non-western and non-rich countries. Similarly, EPLMs have captured a more accurate estimation of the moral norms in countries located in Oceania, North America, and Europe, as opposed to African, Asian, and South American countries. The empirical moral norm ratings from European countries in WVS are highly aligned with North American countries (\(r=0.938\)), which explains why their moral norms are inferred more accurately than those of non-English-speaking countries. Next, for each topic, we compare the z-scores of the empirical moral ratings with the z-scores of the GPT3-PROBS inferred moral scores, using the Mann-Whitney U rank test. The results reveal that "abortion", "suicide", "euthanasia", "for a man to beat his wife", "parents beating children", "having casual sex", "political violence", and "death penalty" in non-western and non-rich countries are all encoded as more morally appropriate than in the actual data. Such misrepresentations of moral norms in these countries could lead to stereotypical content generation.
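The per-topic comparison above can be sketched as follows; the helper names are ours, and in practice a library routine (e.g., scipy.stats.mannwhitneyu) would also supply the p-value:

```python
def zscores(xs):
    """Standardize scores so empirical ratings and model scores are
    comparable on the same scale (population standard deviation)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def mann_whitney_u(a, b):
    """U statistic for sample `a` against sample `b`; ties count 0.5.
    A large U means values in `a` tend to exceed those in `b`."""
    u = 0.0
    for x in a:
        for y in b:
            u += 1.0 if x > y else (0.5 if x == y else 0.0)
    return u

print(mann_whitney_u([3, 4, 5], [1, 2, 3]))  # → 8.5
```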
We also find that, for Rich West countries, "homosexuality", "divorce", and "sex before marriage" are encoded as more morally inappropriate than in the ground truth (\(p<0.001\) for all, Bonferroni corrected). Such underlying moral biases, specifically toward "homosexuality", might stimulate the generation of harmful content and stigmatization of members of LGBTQ+ communities, which has been reported in BERT-based EPLMs (Nozza et al., 2022). The results for the rest of the models are similar and are shown in Table 6 in the Appendix. Our method of clustering countries is simplistic and may overlook factors such as the significant diversity in religious beliefs within the Non-Rich-West category, and thus it does not reflect the nuanced biases that models may possess when it comes to moral norms influenced by different religious traditions. Nonetheless, our approach still serves as a valuable starting point for studying EPLMs' moral biases toward more fine-grained religious and ethnic communities.

### Cultural diversities and shared tendencies over the morality of different topics

We next investigate whether EPLMs have captured the cultural diversities and shared tendencies over the morality of different topics (level 2 analysis). For example, people across cultures tend to disagree more about "divorce" than about "violence against other people", as depicted in Figure 1. Such cultural diversity for each topic can be measured by taking the standard deviation of the empirical moral ratings across different countries. The EPLMs' inferred cultural diversities can similarly be measured by taking the standard deviation of the estimated fine-grained moral scores across different countries. We then quantify the alignment between the two using the Pearson correlation. Figure 5 shows the results for SBERT, GPT2-LARGE, and GPT3-PROBS; the rest are shown in Figure 8 in the Appendix. None of the correlations with the PEW survey were significant.
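The per-topic diversity measure used here (standard deviation of moral scores across countries) can be sketched as follows; the country labels and ratings are hypothetical:

```python
from statistics import pstdev

def topic_diversity(ratings_by_country):
    """Cultural diversity of one topic: the standard deviation of
    per-country moral scores. Low values indicate a shared tendency."""
    return pstdev(ratings_by_country.values())

# Hypothetical per-country ratings on a [-1, 1] scale for two topics:
divorce  = {"A": 0.6, "B": -0.4, "C": 0.1}    # contested across countries
violence = {"A": -0.9, "B": -0.8, "C": -1.0}  # broadly condemned
print(topic_diversity(divorce) > topic_diversity(violence))  # → True
```

The alignment reported in the text is then the Pearson correlation between these per-topic diversities computed on the empirical ratings and on the model-inferred scores.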
For WVS, SBERT, GPT2, and GPT2-MEDIUM exhibited a significant correlation (\(p<0.001\)) with \(r=0.618\), \(r=0.579\), and \(r=0.734\) respectively. The results for GPT3 are insignificant, suggesting that it is more challenging for GPT3 to correctly estimate the cultural controversy of topics. For example, _stealing property_ is incorrectly estimated to be more controversial than _abortion_.

## 6 Fine-tuning language models on global surveys

Finally, we explore the utility-bias trade-off in encoding cultural moral knowledge into EPLMs by fine-tuning them on cross-cultural surveys. The utility comes from increasing the cultural moral knowledge in these models, and the bias denotes their decreased ability to infer English moral norms, in addition to the cultural moral biases introduced to the model. We run our experiments on GPT2, which our results suggest has captured the least information about cultural moral norms among the autoregressive models. To fine-tune the model, for each participant from [Country] with [Moral rating] toward [Topic], we designed a prompt with the structure "A person in [Country] believes [Topic] is [Moral rating]." We used the surveys' wordings for [Moral rating]. Table 8 in the Appendix shows our prompts for WVS and PEW. These prompts constitute our fine-tuning data, on which we maximize the probability of the next token. The fine-tuned models were evaluated on the same correlation tests introduced in Sections 5.2, 5.3, and 5.4. The fine-tuning data was partitioned into training and evaluation sets using different strategies (i.e., Random, Country-based, and Topic-based). For the Random strategy, we randomly selected \(80\%\) of the fine-tuning data for training the model. The topic-country pairs not seen in the training data composed the evaluation set.
For our Country-based and Topic-based strategies, we randomly removed \(20\%\) of the countries (\(n=11\) for WVS, \(n=8\) for PEW) and topics (\(n=4\) for WVS, \(n=2\) for PEW) from the training data to compose the evaluation set. See Appendix G for the total number of samples.

Figure 4: Correlation between language-model inferred moral scores and empirical moral ratings from World Values Survey, analyzed in different clusters of countries in Rich West grouping (left) and continent grouping (right). The asterisks indicate the significance levels (“*”, “**”, “***” for \(p<0.05,0.01,0.001\) respectively).

Figure 3: Degree of alignment between the moral scores from EPLMs and fine-grained empirical moral ratings for different topics across countries taken from the World Values Survey (top) and PEW survey (bottom). Each dot represents a topic-country pair. The x-axis shows the fine-grained moral ratings from the ground truth and the y-axis shows the corresponding inferred moral scores. The legends display the moral topics in the surveys. Similar topics in the World Values Survey are shown with the same color.

Table 1 shows the gained utilities, that is, the correlation test results between the fine-grained moral scores inferred by the fine-tuned models and the empirical fine-grained moral ratings. All fine-tuned models align better with the ground truth than the pre-trained-only models (i.e., the values in parentheses). For both WVS and PEW, the Random strategy is indeed the best, as each country and topic are seen in the training data at least once (though they may not appear together as a pair). The fine-tuned models can also generalize their moral scores to unseen countries and topics. Repeating the experiment in Section 5.4 also shows substantial improvement in identifying cultural diversities of different topics by all fine-tuned models. For example, the WVS- and PEW-trained models with the Random strategy gain Pearson's \(r\) values of \(0.893\) and \(0.944\), respectively.
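The prompt construction and the Random partition strategy described above can be sketched as follows; the function names and example topic-country pairs are illustrative:

```python
import random

def make_prompt(country, topic, rating_text):
    """Fine-tuning prompt in the paper's template; rating_text uses the
    survey's own wording (e.g. "never justifiable")."""
    return f"A person in {country} believes {topic} is {rating_text}."

def random_split(pairs, train_frac=0.8, seed=0):
    """Random strategy: a random 80% of topic-country pairs for training;
    the held-out pairs compose the evaluation set."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(train_frac * len(pairs))
    return pairs[:cut], pairs[cut:]

pairs = [("Canada", "getting a divorce"), ("Japan", "euthanasia"),
         ("Brazil", "abortion"), ("Kenya", "suicide"), ("India", "stealing property")]
train, evaluation = random_split(pairs)
print(make_prompt("Canada", "getting a divorce", "always justifiable"))
print(len(train), len(evaluation))  # 4 training pairs, 1 held-out pair
```

The Country-based and Topic-based strategies differ only in what is held out: entire countries or entire topics are removed from `pairs` before training, so the evaluation probes generalization to unseen countries or topics.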
The results for the rest of the models are shown in Table 7 in the Appendix. Nevertheless, the bias introduced during the fine-tuning decreases the performance on the Homogeneous norms dataset. This observation displays a trade-off between cultural and homogeneous moral representations in language models. Moreover, injecting the cross-cultural surveys into EPLMs might introduce additional social biases to the model that are captured through these surveys (Joseph and Morgan, 2020). In addition, we probe the best fine-tuned model (i.e., WVS with Random strategy) on its ability to capture the moral norms of non-western cultures by repeating the experiment in Section 5.3. The results in Figure 4 show that the fine-tuned GPT2 performs the best for all country groups. There is still a gap between western and non-western countries. However, basic fine-tuning proves to be effective in adapting EPLMs to the ground truth.

\begin{table} \begin{tabular}{l|l|l l|l} Train data & Data partition strategy & \multicolumn{2}{l|}{Evaluation} & Performance on the Homogeneous norms \\ \hline \multirow{3}{*}{WVS} & Random & \(\mathbf{0.832^{***}}\uparrow\) & \((0.271^{***})\) & \(0.71^{***}\downarrow\) \\ & Country-based & \(0.759^{***}\uparrow\) & \((0.225^{**})\) & \(0.72^{***}\downarrow\) \\ & Topic-based & \(0.508^{***}\uparrow\) & \((0.286^{***})\) & \(0.70^{***}\downarrow\) \\ \hline \multirow{3}{*}{PEW} & Random & \(\mathbf{0.818^{***}}\uparrow\) & \((0.204\), n.s.) & \(0.64^{***}\downarrow\) \\ & Country-based & \(0.764^{***}\uparrow\) & \((0.055\), n.s.) & \(0.67^{***}\downarrow\) \\ & Topic-based & \(0.733^{***}\uparrow\) & \((-0.146\), n.s.) & \(0.61^{***}\downarrow\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of fine-tuned GPT2 language model performance on inferring moral norms across cultures and the degradation of its performance on inferring Homogeneous moral norms. Values in parentheses show the performance before fine-tuning. The arrows and colors show performance increase (blue, \(\uparrow\)) and decrease (red, \(\downarrow\)) after fine-tuning. The asterisks indicate the significance levels (“*”, “**”, “***” for \(p<0.05,0.01,0.001\)).

Figure 5: Comparison between the degrees of cultural diversities and shared tendencies in the empirical moral ratings and language-model inferred moral scores. Each dot corresponds to a moral topic. The numerical indices are consistent with the legend indices in Table 5. The x-axis shows the empirical standard deviations in moral ratings across countries and the y-axis shows the standard deviations from the model-inferred moral scores.

## 7 Discussion and conclusion

We investigated whether English pre-trained language models contain knowledge about moral norms across many different cultures. Our analyses show that large EPLMs capture moral norm variation to a certain degree, with the inferred norms being predominantly more accurate in western cultures than non-western cultures. Our fine-tuning analysis further suggests that EPLMs' cultural moral knowledge can be improved using global surveys of moral norms, although this strategy reduces the capacity to estimate the English moral norms and potentially introduces new biases into the model. Given the increasing use of EPLMs in multicultural environments, our work highlights the importance of cultural diversity in automated inference of moral norms. Even when an action such as "political violence" is assessed by an EPLM as morally inappropriate in a homogeneous setting, the same issue may be inferred as morally appropriate for underrepresented cultures in these large language models.
Future work can explore alternative and richer representations of cultural moral norms that go beyond the point estimation we presented here and investigate how those representations might better capture culturally diverse moral views.

### Limitations

Although our datasets are publicly available and gathered from participants in different countries, they cannot entirely represent the moral norms of all individuals in different cultures over the world or predict how moral norms might change in the future (Bloom, 2010; Bicchieri, 2005). Additionally, we examine a limited set of moral issues for each country; therefore, the current experiments should not be regarded as comprehensive of the space of moral issues that people might encounter in different countries. Moreover, taking the average of moral ratings for each culture is a limitation of our work, as it reduces the natural distribution of moral values in a culture to a single point (Talat et al., 2021). Implementing a framework that incorporates both within-country variation and temporal moral variation (Xie et al., 2019) is a potential future research direction. Currently, it is not clear whether the difference between EPLMs' estimated moral norms and the empirical moral ratings is due to the lack of cultural moral norms in the pre-training data, or whether the cultural moral norms mentioned in the pre-training data represent the perspective of an English-speaking person from another country. For example, a person from the United States could write about the moral norms in another country from a western perspective. A person from a non-western country could also write about their own moral views using English. These two cases have different implications and introduce different moral biases into the system.

### Potential risks

We believe that language models should not be used to prescribe ethics, and here we approach the moral norm inference problem from a descriptive perspective.
However, we acknowledge that modifying prompts could lead language models to generate ethical prescriptions for different cultures. Additionally, our fine-tuning approach could be exploited to implant cultural stereotypical biases into these models. Many topics shown in this work might be sensitive to some people yet more tolerable to others. Throughout the paper, we have tried to emphasize that none of the moral norms, coming from either the models' estimation or the empirical data, should be regarded as definitive values of right and wrong, and the moral judgments analyzed in this work do not reflect the opinions of the authors.

## Acknowledgements

This work was supported by a SSHRC Insight Grant 435190272.